May 02, 2016

OpenStack in Production

Resource management at CERN



As part of the recent OpenStack summit in Austin, the Scientific Working Group was established to look into how scientific organisations can best make use of OpenStack clouds.

During our discussions with more than 70 people (etherpad), we settled on four top areas to investigate and started to analyse the approaches and common needs. The areas were:
  1. Parallel file system support in Manila. There are a number of file systems supported by Manila but many High Performance Computing sites (HPC) use Lustre which is focussed on the needs of the HPC user community.
  2. Bare metal management looking at how to deploy bare metal for the maximum performance within the OpenStack frameworks for identity, quota and networking. This team will work on understanding additional needs with the OpenStack Ironic project.
  3. Accounting covering the wide range of needs to track usage of resources and showback/chargeback to the appropriate user communities.
  4. Stories addresses how we collect requirements from scientific use cases and work with OpenStack community teams, such as the Product Working Group, to include these in development roadmaps, along with defining reference architectures covering common use cases such as high performance or high throughput computing clouds in the scientific domain.
Most of the applications run at CERN are high throughput, embarrassingly parallel applications. Simulation and analysis of collisions, such as those in the LHC, can be farmed out to different compute resources, with each event handled independently and no need for fast interconnects. While the working group will cover all the areas (and some outside this list), our focus is on accounting (3).

Given limited time available, it was not possible for each of the interested members of the accounting team to explain their environment. This blog is intended to provide the details of the CERN cloud usage, the approach to resource management and some areas where OpenStack could provide additional function to improve the way we manage the accounting process. Within the Scientific Working group, these stories will be refined and reviewed to produce specifications and identify the potential communities who could start on the development.

CERN Pledges

The CERN cloud provides computing resources for the Large Hadron Collider and other experiments. The cloud is currently around 160,000 cores in total, spread across two data centres in Geneva and Budapest. Resources are managed worldwide through the Worldwide LHC Computing Grid (WLCG), which executes over 2 million jobs per day. Compute resources in the WLCG are allocated via a pledge model. Rather than providing direct funds, the sites commit to provide compute capacity and storage for a period of time as a pledge, and these are recorded in the REBUS system. They are then made available using a variety of middleware technologies.

Given the allocation of resources across hundreds of sites, the experiments then select the appropriate models to place their workloads at each site according to compute/storage/networking capabilities. Some sites will be suitable for simulating collisions (high CPU, low storage and network). Others would provide archival storage and significant storage IOPS for more data-intensive applications. For storage, the pledges are made in capacity on disk and tape. The compute capacity is pledged in kilo-HEP-SPEC06 units, abbreviated to kHS06 (based on a subset of the SPEC 2006 benchmark), which allows faster processors to be given a higher weight in the pledge than slower ones (as High Energy Physics computing is an embarrassingly parallel, high throughput computing problem).
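To make the pledge arithmetic concrete, here is a minimal sketch of how a heterogeneous fleet might be rolled up into a kHS06 pledge. The server counts and per-core HS06 ratings below are invented for illustration, not real benchmark results.

```python
# Roll up a heterogeneous fleet into a kHS06 pledge.
# Per-core HS06 ratings are illustrative, not measured values.
fleet = [
    # (servers, cores per server, HS06 per core)
    (500, 32, 10.5),   # recent delivery, faster cores
    (800, 24, 8.0),    # mid-life hardware
    (400, 16, 6.5),    # approaching retirement
]

total_cores = sum(n * cores for n, cores, _ in fleet)
total_hs06 = sum(n * cores * hs06 for n, cores, hs06 in fleet)

print(f"Total cores : {total_cores}")
print(f"Pledge      : {total_hs06 / 1000:.1f} kHS06")
```

The point of the weighting is visible here: the same core count pledges very different capacity depending on the processor generation.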

The pledges are reviewed on a regular basis to check the requests are consistent with the experiments’ computing models, the allocated resources are being used efficiently and the pledges are compatible with the requests.

Within the WLCG, CERN provides the Tier-0 resources for the safe keeping of the raw data and performs the first pass at reconstructing the raw data into meaningful information. The Tier-0 distributes the raw data and the reconstructed output to Tier 1s, and reprocesses data when the LHC is not running.

Procurement Process

The purchases for the Tier-0 compute pledge are translated into a formal procurement process. Since the annual orders exceed 750 kCHF, a formal procedure is required:
  • A market survey to determine which companies in the CERN member states could reply to requests in general areas such as compute servers or disk storage. Typical criteria would be the size of the company, the level of certification with component vendors and offering products in the relevant area (such as industry standard servers) 
  • A tender which specifies the technical specifications and quantity for which an offer is required. These are adjudicated on the lowest cost compliant with the specifications. Cost in this case is defined as the cost of the material over 3 years, including warranty, power, rack and network infrastructure. The quantity is specified in kHS06 with 2GB of memory and 20GB of storage per core, which means suppliers are free to try different combinations: top-bin processors, which may be a little more expensive, or lower-performing ones, which would then require more total memory and storage. Equally, there is significant flexibility in the choice of motherboard components within the required features, such as 19” rack compatibility and enterprise-quality drives. The typical winning configurations come from white-box manufacturers.
  • Following testing of the proposed systems to ensure compliance, an order is placed with several suppliers; the machines are manufactured, delivered, racked up and burnt in using a set of high-load stress tests to identify issues such as cooling or firmware problems.
Typical volumes are around 2,000 servers a year in one or two rounds of procurement. The process from start to delivered capacity takes around 280 days so bulk purchases are needed followed by allocation to users rather than ordering on request. If there are issues found, this process can take significantly longer.
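As a rough illustration of the "lowest cost compliant with specifications" adjudication described above, the sketch below compares two hypothetical bids for the same kHS06 quantity. All prices and per-core ratings are invented, and a real tender also prices warranty, power, rack and network infrastructure over the 3-year period.

```python
import math

# Illustrative adjudication: which compliant bid delivers the
# requested kHS06 at the lowest cost? All figures are invented.
SPEC_KHS06 = 100.0        # capacity requested, in kHS06
GB_RAM_PER_CORE = 2       # memory required by the specification

bids = {
    # bid: (HS06 per core, CHF per server, cores per server)
    "top-bin":   (11.0, 6500, 32),
    "lower-bin": (8.0, 4200, 32),
}

costs = {}
for name, (hs06_per_core, chf, cores) in bids.items():
    # Number of servers needed to reach the pledged throughput.
    servers = math.ceil(SPEC_KHS06 * 1000 / (hs06_per_core * cores))
    costs[name] = servers * chf
    print(f"{name}: {servers} servers, {costs[name]} CHF, "
          f"{servers * cores * GB_RAM_PER_CORE} GB RAM in total")

winner = min(costs, key=costs.get)
print(f"lowest compliant bid: {winner}")
```

Note that with these made-up numbers the slower processors win on cost despite needing more servers (and therefore more total memory and storage), which is exactly the trade-off the tender leaves open to suppliers.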

Step                                 Time (days)            Elapsed (days)
User expresses requirement           0                      0
Market survey prepared               15                     15
Market survey for possible vendors   30                     45
Specifications prepared              15                     60
Vendor responses                     30                     90
Test systems evaluated               30                     120
Offers adjudicated                   10                     130
Finance committee                    30                     160
Hardware delivered                   90                     250
Burn in and acceptance               30 (380 worst case)    280
Total                                                       280+ days

Given the time the process takes, only one or two procurement rounds are run per year. This means a continuous delivery model cannot be used; capacity planning is needed on an annual basis, along with approaches to use the resources before they are allocated to their final purpose.

Physical Infrastructure

CERN manages two data centres in Meyrin, Geneva and Wigner, Budapest. The full data is available at the CERN data centre overview page. When hardware is procured, the final destination is defined as part of the order according to rack space, cooling and electrical availability. 




While the installations in Budapest are new installations, some of the Geneva installations involve replacing old hardware. We typically retire hardware at between 4 and 5 years old, when new purchases deliver significantly better CPU performance per watt and the repair costs of new equipment are more predictable and sustainable.

Within the Geneva centre, there are two significant areas: physics and redundant power. Physics power has a single power source, which is expected to fail in the event of an electricity cut lasting beyond the few minutes covered by the battery units. The redundant power area is backed by diesel generators. The Wigner centre is entirely on redundant power.

Lifecycle

With an annual procurement cycle and 2-3 vendors per cycle, each with their own optimisations to arrive at the lowest cost for the specifications, the hardware is highly heterogeneous. This has a significant benefit when there are issues, such as disk firmware or BMC controller problems, that delay acceptance of one delivery: the remaining hardware can still be made available to the experiments.

However, since we run the machines for the 3-year warranty and then some additional years on minimal repairs (i.e. simple parts are replaced with components from servers of the same series), we have around 15-20 different hardware configurations for compute servers active in the centre at any time. There are variations between specifications (as technologies such as SSDs and 10Gb Ethernet became commodity, new tenders required them) and between vendor responses to the same specification (e.g. slower memory or different processor models).

These combinations do mean that offering standard flavors for each hardware configuration would be very confusing for users, given that there is no easy way for a user to know whether resources are available in a particular flavor other than trying to create a VM with it.

Given new hardware deliveries and limited space, there are equivalent retirement campaigns. The aim is to replace the older hardware with more efficient newer boxes that can deliver more HS06 within the same power/cooling envelope. The process of emptying machines depends on the workloads running on them. Batch workloads generally finish within a couple of weeks, so setting the servers to no longer accept new work just before the retirements is sufficient. For servers and personal build/test machines, we aim to migrate the workloads to capacity on new servers. This operation is increasingly performed using live migration, with MPLS used to extend the broadcast domains to the new capacity.

Projects and Quota

All new users are allocated a project, “Personal XXXX” where XXXX is their CERN account, when they subscribe to the CERN cloud service through the CERN resource portal. The CERN resource portal is the entry point to subscribe to the many services available from the central IT department and for users to list their currently active subscriptions and allocations. The personal projects have a minimal quota of a few cores and a few GB of block storage so that users can easily follow the tutorial steps on using the cloud and create simple VMs for their own needs. The default image set is available on personal projects along with the standard ‘m’ flavors, which are similar to the ones on AWS.

Shared projects can also be requested for activities which are related to an experiment or department. For these resources, a list of people can be defined as administrators (through CERN’s group management system e-groups) and a quota for cores, memory and disk space requested. Additional flavors can also be asked for according to particular needs such as the CERNVM flavors with a small system disk and a large ephemeral one.

The project requests go through a manual approval process, being reviewed by the IT resource management to check the request against the pledge and the available resources. An adjustment of the share of the central batch farm is also made so that the sum of resources for an experiment continues to be within the pledge.

Once the resource request has been manually approved, the ticket is then passed to the cloud team for execution. Rundeck provides us with a simple tool for performing high privilege operations with good logging and recovery. This is used in many of our workflows such as hardware repair. The Rundeck procedure reads the quota settings from the ticket and executes the appropriate project creation and role allocation requests to Keystone, Nova and Cinder.

Need #1 : CPU performance based allocation and scheduling

(As with all requests, there is a mixture of requirements and implementation. The Needs are stated in an absolute fashion according to our current understanding. There may be alternative approaches or compromises which would address these needs in common with other user requirements).

The resource manager for the cloud in a high throughput/performance computing environment allocates resources based on performance rather than pure core count.

A user wants to request a processor of a particular minimum performance and is willing to accept a larger reduction in their remaining quota.

A user wants to make a request for resources and then iterate according to how much performance related quota they have left to fill the available quota, e.g. give me a VM with less than a certain performance rating.

A resource manager would like to encourage the use of older resources which are often idle.

The quota for slower and faster cores is currently the same (thus users create a VM and delete it if it lands on one of the slower types), so there is no incentive to use the slower cores.

As a resource manager preparing an accounting report, the faster cores should have a higher weight against the pledge to ensure continued treatment of slower cores.

The proposal is therefore to have an additional, optional quota on CPU units so that resource managers can allocate out total throughput rather than per core capacities.
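A minimal sketch of what such a CPU-unit quota could look like, assuming cores consume quota in proportion to an HS06-style rating. The class and its names are ours for illustration; this is not an existing Nova API.

```python
# Sketch of a performance-weighted ("CPU unit") quota: a project is
# allocated throughput, not a raw core count, so slow cores cost less.
class ThroughputQuota:
    def __init__(self, limit_hs06):
        self.limit = limit_hs06
        self.used = 0.0

    def can_allocate(self, cores, hs06_per_core):
        return self.used + cores * hs06_per_core <= self.limit

    def allocate(self, cores, hs06_per_core):
        if not self.can_allocate(cores, hs06_per_core):
            raise ValueError("throughput quota exceeded")
        self.used += cores * hs06_per_core

q = ThroughputQuota(limit_hs06=1000)
q.allocate(8, 11.0)   # 8 fast cores consume 88 units
q.allocate(8, 6.5)    # 8 slow cores consume only 52 units
print(f"{q.used} / {q.limit} units used")
```

Under such a scheme, the incentive problem above disappears: a VM on slow cores leaves the user more remaining quota than the same VM on fast cores.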

An alternative approach would be to define a flavor, with an associated quota, for each of the hardware types. However, there are a number of drawbacks with this approach:
  • The number of flavors to define would be significant (in CERN’s case, around 15-20 different hardware configurations multiplied by 4-5 sizes for small, medium, large, xlarge, ...)
  • The user experience impact would be significant as the user would have to iterate over the available flavors to find free capacity. For example, trying out m4.large first and finding the capacity was all used, then trying m3.large etc.
There is, as far as I know, no per-flavor quota. The specs have been discussed in some operator feedback sessions but the specification did not reach consensus.

Extensible resource tracking, with ‘compute units’ as defined in the specification (such as here), seems to be heading in this direction. Many parts of this are not yet implemented, so it is not easy to see whether it addresses the requirements.

Need #2 : Nested Quotas

As an experiment resource co-ordinator, it should be possible to re-allocate resources according to the priorities of the experiment without needing action by the administrators. Thus, moving quotas between projects which are managed by the experiment resource co-ordinators within the pledge allocated by the WLCG.

Nested Keystone projects have been in production releases since Kilo. This gives the possibility of role definitions within the nested project structure.

The implementation of the nested quota function has been discussed within various summits for the past 3 years. The first implementation proposal, Boson, was for a dedicated service for quota management. However, there were concerns raised by the PTLs on the impacts for performance and maintainability of this approach. The alternative of enhancing the quotas in each of the projects has been followed (such as Nova). These implementations though have not advanced due to other concerns with quota management which are leading towards a common library, delimiter, which is being discussed for Newton.
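The invariant behind nested quotas can be sketched as follows: a child project's quota can only be raised if the parent's allocation still covers the sum of its children. The classes and names are ours; they do not reflect the Boson or delimiter implementations.

```python
# Sketch of the nested-quota invariant: children's quotas must fit
# within the parent's, so a resource co-ordinator can shuffle quota
# between child projects without administrator action.
class Project:
    def __init__(self, name, quota_cores):
        self.name = name
        self.quota = quota_cores
        self.children = []

def can_set_quota(parent, child, new_quota):
    # Sum the siblings' quotas and check the parent still covers everything.
    others = sum(c.quota for c in parent.children if c is not child)
    return others + new_quota <= parent.quota

atlas = Project("atlas", 1000)          # pledge allocated to the experiment
prod = Project("atlas-prod", 700)
dev = Project("atlas-dev", 200)
atlas.children = [prod, dev]

print(can_set_quota(atlas, dev, 300))   # fits: 700 + 300 <= 1000
print(can_set_quota(atlas, dev, 400))   # rejected: would exceed the parent
```

The operational win is that the check is purely local to the experiment's subtree, so re-prioritisation stays within the pledge without a ticket to the cloud team.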

Need #3 : Spot Market

As a cloud provider, uncommitted resources should be made available at a lower cost but at a lower service level, such as pre-emption and termination at short notice. This mirrors the AWS spot market or the Google Pre-emptible instances. The benefits would be higher utilization of the resources and ability to provide elastic capacity for reserved instances by reducing the spot resources.

A draft specification for this functionality has been submitted along with the proposal which is currently being reviewed. An initial implementation of this functionality (called OpenStack Preemptible Instances Extension, or opie) will be made available soon on github.

Need #4 : Reducing quota below utilization

As an experiment resource co-ordinator, quotas are under regular adjustment to meet the chosen priorities. Where a project has a lower priority but high current utilization, further resource usage should be blocked but existing resources not deleted. The resource co-ordinator can then contact the user to encourage the appropriate resources to be deleted.

To achieve this function, one approach would be to allow the quota to be set below the current utilization in order to give the project administrator the time to identify the resources which would be best to be deleted in view of the reduced capacity.
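The behaviour described above can be sketched as follows, assuming the quota setter deliberately skips the usage check. The class and names are ours for illustration.

```python
# Sketch of "quota below utilization": lowering the limit beneath
# current usage blocks new requests but deletes nothing.
class SoftQuota:
    def __init__(self, limit, used):
        self.limit = limit
        self.used = used

    def set_limit(self, new_limit):
        # Deliberately no check against self.used: an over-committed
        # project simply cannot allocate anything further.
        self.limit = new_limit

    def request(self, amount):
        return self.used + amount <= self.limit

q = SoftQuota(limit=100, used=80)
q.set_limit(50)          # set below current utilization
print(q.request(1))      # new allocations are blocked
print(q.used)            # existing resources remain untouched
```

The project administrator then has time to decide which resources to delete; only once usage drops back under the new limit do requests succeed again.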

Need #5 : Components without quota


As a cloud provider, we see some inventive users storing significant quantities of data in the Glance image service. Since there is only a per-image maximum size limit, with no limit on accumulated capacity, the service is open to unplanned storage.

It is proposed to add quota functionality inside Glance for
  • The total capacity of images stored in Glance
  • The total capacity of snapshots stored in Glance
The number of images and snapshots would be a lower-priority enhancement request, since the service risk comes from the total capacity, although the numbers could potentially also be abused.

Given that this is new functionality, it could also be a candidate for the first usage of the new delimiter library.
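A sketch of the proposed capacity quota, assuming per-project byte counters for images and snapshots. This is our illustration of the proposal, not existing Glance functionality, and all names are hypothetical.

```python
# Hypothetical per-project Glance capacity quota: track accumulated
# bytes per resource kind and reject uploads that would exceed the cap.
GiB = 1024 ** 3

class GlanceCapacityQuota:
    def __init__(self, image_limit, snapshot_limit):
        self.limits = {"image": image_limit, "snapshot": snapshot_limit}
        self.used = {"image": 0, "snapshot": 0}

    def reserve(self, kind, size_bytes):
        if self.used[kind] + size_bytes > self.limits[kind]:
            raise ValueError(f"{kind} capacity quota exceeded")
        self.used[kind] += size_bytes

q = GlanceCapacityQuota(image_limit=50 * GiB, snapshot_limit=100 * GiB)
q.reserve("image", 20 * GiB)
q.reserve("image", 20 * GiB)
try:
    q.reserve("image", 20 * GiB)   # would reach 60 GiB, over the 50 GiB cap
except ValueError as e:
    print(e)
```

Unlike the existing per-image size limit, the accumulated counter is what closes the "many medium-sized images" loophole described above.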

Acknowledgements

There are too many people who have been involved in the resource management activities to list here. The teams contributing to the description above are:
  • CERN IT teams supporting cloud, batch, accounting and quota
  • BARC, Mumbai for collaborating around the implementation of nested quota
  • Indigo Datacloud team for work on the spot market in OpenStack
  • Other labs in the WLCG and the Scientific Working Group










by Tim Bell (noreply@blogger.com) at May 02, 2016 07:57 AM

Opensource.com

A look back at the Austin OpenStack Summit

Catch up on the latest OpenStack happenings in this special post-Summit edition of our weekly OpenStack news.

by Jason Baker at May 02, 2016 06:59 AM

OpenStack Superuser

Use cases identified for building applications on OpenStack

[Embedded video: https://www.youtube.com/embed/TvnJr2JYFpY]

David Flanders, community wrangler at the OpenStack Foundation, and Craig Peters took a break at the Austin Summit last week to sit down with Superuser TV and discuss the different ways applications can be built on OpenStack, where containers fit into the picture and how different ecosystem companies like Mirantis, CoreOS and Google are getting involved.

From web-native apps to the mobile apps bringing OpenStack into the world of the Internet of things (IoT) and containers, Flanders and Peters discuss the evolution of the use cases and what kinds of applications benefit the most from OpenStack.

by Superuser at May 02, 2016 03:31 AM

May 01, 2016

RDO

Technical definition of done

Before releasing Mitaka, we agreed on a technical definition of done. This can always evolve to add more coverage, but this is what we currently test when deciding whether a release is ready from a technical perspective:

  • Packstack all-in-one deployments testing the 3 upstream scenarios
  • TripleO single controller and single compute deployment validated with Tempest smoke tests
  • TripleO three controller and single compute deployment with pacemaker validated using a Heat template based ping test

These same tests are used to validate our trunk repos.

by John Trowbridge at May 01, 2016 07:34 PM

Elizabeth K. Joseph

OpenStack Summit Days 1-2

This past week I attended my sixth OpenStack Summit. This one took us to Austin, Texas. I was last in Austin in 2014 when I quickly stopped by to give a talk at the Texas LinuxFest, but I wasn’t able to stay long during that trip. This trip gave me a chance (well, several) to finally have some local BBQ!

I arrived Sunday afternoon and took the opportunity to meet up with Chris Aedo and Paul Belanger, who I’d be on the stage with on Monday morning. We were able to do our first meetup together and do a final once-through of our slides to make sure they had all the updates we wanted and we were clear on where the transitions were. Gathering at the convention center also allowed us to pick up our badges before the mad rush that would come with the opening of the conference itself on Monday morning.

With Austin being the Live Music Capital of the World, we were greeted in the morning by live music from the band Soul Track Mind. I really enjoyed the vibe it brought to the morning, and we had a show to watch as we settled in and waited for the keynotes.

Jonathan Bryce and Lauren Sell of the OpenStack Foundation opened the conference and gave us a tour of numbers. The first OpenStack summit was held in Austin just under six years ago with 75 people and they were proud to announce that this summit had over 7,500. It’s been quite the ride that I’m proud to have been part of since the beginning of 2013. In Jonathan’s keynote we were able to get a glimpse into the real users of OpenStack, with highlights including the fact that 65% of respondents to the recent OpenStack User Survey are using OpenStack in production and that half of the Fortune 100 companies are using OpenStack in some capacity. It was also interesting to learn how important the standard APIs for interacting with clouds was for companies, a fact that I always hoped would shine through as this open source cloud was being adopted. The video from his keynote is here: Embracing Datacenter Diversity.

As the keynotes continued, the ones that really stood out for me were by AT&T (video: AT&T’s Cloud Journey with OpenStack) and Volkswagen Group (video: Driving the Future of IT Infrastructure at Volkswagen Group).

The AT&T keynote was interesting from a technical perspective. It’s clear that the rise of mobile devices and the internet of things has put pressure on telecoms to grow much more quickly than they have in the past to handle this new mobile infrastructure. Their keynote shared that they expect this to grow an additional ten times by 2020. To meet this need, the networking aspects of technologies like OpenStack are important to their strategy as they move away from “black box” hardware from networking vendors and toward more software-driven infrastructure that can grow more quickly to fit their needs. We learned that they’re currently using 10 OpenStack projects in their infrastructure, with plans to add 3 more in the near future, and learned about their in-house AT&T Integrated Cloud (AIC) tooling for managing OpenStack. When the morning concluded, all their work was rewarded with a Super User award, which they wrote about here.

The Volkswagen Group keynote was a lot of fun. As the world of electric and automated cars quickly approaches they have recognized the need to innovate more quickly and use technology to get there. They still seem to be in the early days of OpenStack deployments, but have committed a portion of one of their new data centers to just OpenStack. Ultimately they see a hybrid cloud future, leveraging both public and private hosting.

The keynote sessions concluded with the announcement of the 2017 OpenStack Summit locations: Boston and Sydney!

Directly after the keynote I had to meet Paul and Chris for our talk on OpenStack Infrastructure for Beginners (video, slides). We had a packed room. I led off the presentation by covering an overview of our work and by giving a high level tour of the OpenStack project infrastructure. Chris picked up by speaking to how things worked from a developer perspective, tying that back into how and why we set things up the way we did. Paul rounded out the presentation by diving into more of the specifics around Zuul and Jenkins, including how our testing jobs are defined and run. I think the talk went well, and we certainly had a lot of fun as we went into lunch chatting with folks about specific components that they were looking either to get involved with or replicate in their own continuous integration systems.


Chris Aedo presenting, photo by Donnie Ham (source)

After a delicious lunch at Cooper’s BBQ, I went over to a talk on “OpenStack Stable: What It Actually Means to Maintain Stable Branches” by Matt Riedemann, Matthew Treinish and Ihar Hrachyshka in the Upstream Development track of the conference. This was a new track for this summit, and it was great to see how well-attended the sessions ended up being. The goal of this talk was to inform members of the community what exactly is involved in management of stable releases, which has a lot more moving pieces than most people tend to expect. Video from the session is up here. It was then over to “From Upstream Documentation To Downstream Product Knowledge Base” by Stefano Maffulli and Caleb Boylan of DreamHost. They’ve been taking OpenStack documentation and adjusting it for easier, more targeted consumption by their customers. They talked about their toolchain that gets it from raw source from the OpenStack upstream into the proprietary knowledge base at DreamHost. It’ll be interesting to see how this scales long term through releases and documentation changes, video here.

My day concluded by participating in a series of Lightning Talks. My talk was first: I spent 5 minutes giving a tour of status.openstack.org. I was inspired to give this talk after realizing that even though the links are right there, most people are completely unaware of what things like Reviewday (“Reviews” link) are. It also gave me the opportunity to take a closer, current look at OpenStack Health prior to my presentation; I had intended to go to “OpenStack-Health Dashboard and Dealing with Data from the Gate” (video) but it conflicted with the talk we were giving in the morning. The lightning talks continued with talks by Paul Belanger on Grafyaml, James E. Blair on Gertty and Andreas Jaeger on the steps for adding a project to OpenStack. The lightning talks from there drifted away from Infrastructure and into more general upstream development. Video of all the lightning talks here.

Day two of the summit began with live music again! It was nice to see that it wasn’t a single day event. This time Mark Collier of the OpenStack Foundation kicked things off by talking about the explosion of growth in infrastructure needed to support the growing Internet of Things. Of particular interest was learning about how operators are particularly seeking seamless integration of virtual machines, containers and bare metal, and how OpenStack meets that need today as a sort of integration engine, video here.

The highlights of the morning for me included a presentation from tcp cloud in the Czech Republic. They’re developing a Smart City in the small Czech city of Písek. Their presenter gave an overview of the devices they were using and presented a diagram demonstrating how all the data they collect from around the city gets piped into an OpenStack cloud that they run. He concluded his presentation by revealing that they’d turned the summit itself into a mini city by placing devices around the venue to track temperature and CO2 levels throughout the rooms, very cool. Video of the presentation here.


tcp cloud presentation

I also enjoyed seeing Dean Troyer on stage to talk about improving user experience (UX) with OpenStackClient (OSC). As someone who has put a lot of work into converting documented commands in my book in an effort to use OSC rather than the individual project clients I certainly appreciate his dedication to this project. The video from the talk is here. It was also great to hear from OVH, an ISP and cloud hosting provider who currently donates OpenStack instances to our infrastructure team for running CI testing.

Tuesday also marked the beginning of the Design Summit. This is when I split off from the user conference and then spend the rest of my time in development space. This time the Design Summit was held across the street from the convention center in the Hilton where I was staying. This area of the summit takes us away from presentation-style sessions and into discussions and work sessions. This first day focused on cross-project sessions.

This was the lightest day of the week for me, having a much stronger commitment to the infrastructure sessions happening later in the week. Still, I went to several sessions, starting off with one led by Doug Hellmann on how to improve the situation around global requirements. The session actually seemed to be an attempt to define the issues around requirements, get more contributors to help with requirements project review, and chat about improvements to tests. We’d really like to see requirements changes have a lower chance of breaking things, so finding folks to sign up for this test-writing work is really important.

I had lunch with my book writing co-conspirator Matt Fischer to chat about some of the final touches we’re working on before it’s all turned in. Ended up with a meaty lunch again at Moonshine Grill just across the street from the convention center, after which I went into a “Stable Branch End of Life Policy” session led by Thierry Carrez and Matt Riedemann. The stable situation is a tough one. Many operators want stable releases with longer lifespans, but the commitment from companies to put engineers on it is extremely limited. This session explored the resources required to continue supporting releases for longer (infra, QA, etc) and there were musings around extending the support period for projects meeting certain requirements for up to 24 months (from 18). Ultimately by the end of the summit it does seem that 18 months continues to be the release lifespan of them all.

I then went over to the Textile building across from the conference center where my employer, HPE, had set up their headquarters. I had a great on-camera chat with Stephen Spector about how open source has evolved from hobbyist to corporate since I became involved in 2001. I then followed some of the marketing folks outside to shoot some snippets for video later.

The day of sessions continued with a “Brainstorm format for design summit split event” session that talked a lot about dates. As a starting point, Thierry Carrez wrote a couple blog posts about the proposal to split the design summit from the user summit:

With these insightful blog posts in mind, the discussion moved forward on the assumption that the events would be split and how to handle that timing-wise. When in the cycle would each event happen for maximum benefit for our entire community? In the first blog post he had a graphic that had a proposed timeline, which the discussions mostly stuck to, but dove deeper into discussing what is going on during each release cycle week and what the best time would be for developers to gather together to start planning the next release. While there was good discussion on the topic, it was clear that there continues to be apprehension around travel for some contributors. There are fears that they would struggle to attend multiple events funding-wise, especially when questions arose around whether mid-cycle events would still be needed. Change is tough, but I’m on board with the plan to split out these events. Even as I write this blog post, I notice the themes and feel for the different parts of our current summit are very different.

My session day concluded with a session about cross-project specifications, led by Shamail Tahir and Carol Barrett from the Product Working Group. I didn’t know much about OpenStack user stories, so this session was informative for seeing how those should be used in specs. In general, planning work in a collaborative way, especially across different projects that have diverse communities, is tricky. Having some standards in place for these specs, so teams are on the same page and have the same expectations for format, seems like a good idea.

Tuesday evening meant it was time for the StackCity Community Party. Instead of individual companies throwing big, expensive parties, a street was rented out and companies were able to sponsor the bars and eateries in order to throw their branded events in them. Given my dietary restrictions this week, I wasn’t able to partake in much of the food being offered, so I only spent about an hour there before joining a similarly restricted diet friend over at Iron Works BBQ. But not before I picked up a dinosaur with a succulent in it from Canonical.

I called it an early night after dinner, and I’m glad I did. Wednesday through Friday were some busy days! But those days are for another post.

More photos from the summit here: https://www.flickr.com/photos/pleia2/albums/72157667572682751

by pleia2 at May 01, 2016 04:28 PM

Matt Dorn

An Exploration of the OpenStack Metadata Service

Here's a video of my workshop on the OpenStack Metadata Service at the Austin OpenStack Summit.

Although it was a live hands-on workshop, the first 20 minutes provide the historical context for the metadata services of all current public and private clouds.

Correction: I mention that the OpenStack config-drive is turned on by default in Nova. This is not true. One must ensure the following flag is set in /etc/nova/nova.conf on the nova-api server:

force_config_drive = True

If config-drive is set to false or not set at all, one can force it by using the --config-drive flag on boot:

nova boot --config-drive true --image my-image-name --flavor my-flavor myinstance
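Once an instance boots with a config drive attached, the guest can read its own metadata from the drive. Here's a minimal sketch of parsing the `meta_data.json` that lives on the config drive (on a real instance the file is found under `openstack/latest/meta_data.json` after mounting the volume labeled `config-2`; the mount point and sample values below are assumptions for illustration):

```python
import json

def parse_instance_metadata(raw):
    """Parse the contents of the config drive's meta_data.json and
    return a few commonly used fields."""
    md = json.loads(raw)
    return {"uuid": md.get("uuid"), "hostname": md.get("hostname")}

# On a real instance you would read this from something like
# /mnt/config/openstack/latest/meta_data.json (hypothetical mount point);
# here we use an inline sample for demonstration.
sample = '{"uuid": "abc-123", "hostname": "myinstance"}'
print(parse_instance_metadata(sample))
```

This is the same data cloud-init consumes via its ConfigDrive datasource, so in practice you rarely need to parse it by hand.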

I will be posting all my cloud-init content from this workshop in some upcoming posts.

<iframe allowfullscreen="allowfullscreen" frameborder="0" height="315" src="https://www.youtube.com/embed/YGUG8vU5KuQ" width="560"></iframe>

by Matt Dorn at May 01, 2016 03:08 AM

April 30, 2016

Red Hat Stack

OpenStack Summit Austin: Day 4

 

Hello again from Austin, Texas where the fourth day of the main OpenStack Summit has come to a close. While there are quite a few working sessions and contributor meet-ups on Friday, Thursday marks the last official day of the main summit event. The exhibition hall closed its doors around lunch time, and the last of the vendor sessions occurred later in the afternoon. As the day concluded, many attendees were already discussing travel plans for OpenStack Summit Barcelona in October!

Before we get ahead of ourselves, however, day 4 was still jam-packed with a busy agenda. As on the first three days of the event, Red Hat speakers led quite a few interesting and well-attended sessions.

To start, Al Kari, Kambiz Aghaiepour, and Will Foster combined to give a talk entitled Deploying Microservices Architecture on OpenStack Using Kubernetes, Docker, Flannel and etcd. The hands-on lab provided a step by step demonstration of how to deploy these services in a variety of environments.

Lars Herrmann, General Manager of Red Hat’s Integrated Solutions Business Unit then led a talk called Orchestrated Containerization with OpenStack. In the session, Lars explored how to leverage container standards, like Kubernetes, in implementing hybrid containerization strategies. He also discussed a variety of architectural designs for hybrid containerization and revealed how to use Ansible in these scenarios.

Ihar Hrachyshka then teamed with Kevin Benton and Sean Collins from Mirantis, as well as Matthew Kassawara from IBM in a presentation entitled The Notorious M.T.U. (Maximum Transmission Unit). The presenters examined impacts of improper MTU parameters on both physical and virtual networks, neutron MTU problems, and how to properly configure neutron MTU in various environments.

Just before lunch, in his presentation on CephFS, Greg Farnum, a long-standing member of the core Ceph development group, detailed why CephFS is more stable and feature-rich than ever. He then summarized which key new functions were introduced in the recent Jewel release, and also provided a glimpse of what’s to come in future iterations.

Later, Sridhar Gaddam joined with Bin Hu from AT&T and Prakash Ramchandran from Huawei Technology to discuss IPv6 capabilities in Telco environments. Among other things, the trio examined scenarios enabled by the IPv6 platform, its current state, and future expectations.

And finally, Miguel Angel Ajo, a Red Hat developer focused on Neutron, collaborated with Victor Howard from Comcast and Sławek Kapłoński from OVH in a presentation called Neutron Quality of Service, New Features, and Future Roadmap. The presenters detailed the Quality of Service (QoS) framework introduced in the Liberty release, and how it serves to provide QoS settings on the Neutron networking API.  They also covered DSCP rules, role based access control (RBAC) for QoS policies, and much more.

As you probably can imagine, it was a busy final day at OpenStack Summit. Like all OpenStack Summits, it was an extremely informative event, and also lots of fun! If you missed our previous daily recaps, we encourage you to read our blog posts from Day 1, Day 2, and Day 3. And for those who were present, we hope you enjoyed the event and found time to visit the Red Hat booth, as well as network with friends and colleagues from around the world. Like you, we’re already counting down the days until the next OpenStack Summit in Barcelona!

For continued Red Hat and industry cloud news, we invite you to follow us on Twitter at @RedHatCloud or @RedHatNews.

by Gordon Tillmore, Sr. Principal Product Marketing Manager at April 30, 2016 05:18 PM

April 29, 2016

IBM OpenTech Team

z/VM’s Cloud Manager Appliance for OpenStack Enablement – Vendor options

This is a guest post by Steve Shultz (shultzss@us.ibm.com)

The new z/VM Cloud Manager Appliance offers vendors several different choices for how to support OpenStack on z/VM:

  1. Get z/VM OpenStack plug-ins for Nova, Neutron and Ceilometer at:
    • https://github.com/openstack/nova-zvm-virt-driver
    • https://github.com/openstack/networking-zvm
    • https://github.com/openstack/ceilometer-zvm

    These are continually kept current with the OpenStack community and, as a result, can be built into any OpenStack-based tooling. They allow z/VM compute and networking resources to be leveraged directly from the aforementioned tooling. In this case, no OpenStack code runs within the z/VM product itself; all the OpenStack code runs within the OpenStack-based tooling. This is the way that SUSE OpenStack Cloud 6 is enabled to manage z/VM resources.
    Use z/VM OpenStack Plugins

  2. Drive the OpenStack Controller such that the Nova compute node code runs within the z/VM product. All other OpenStack code runs within the OpenStack-based tooling. This enables z/VM to participate in a hybrid environment, where multiple compute nodes on potentially multiple architectures run within a single OpenStack cloud or region.

    The Nova code within z/VM is at the Liberty level of OpenStack and supports all the Nova API that can leverage z/VM resources. In addition, the Nova code within z/VM transparently provides enterprise market compute functionality that cannot be accessed via the z/VM OpenStack plug-ins. This allows customers with existing z/VM infrastructures to leverage that infrastructure within an OpenStack context.
    Use Nova code on z/VM

  3. Drive the OpenStack API for all of the following OpenStack projects such that all code for these projects runs within the z/VM product:
    • Ceilometer
    • Cinder
    • Glance
    • Horizon
    • Heat
    • Keystone
    • Nova
    • Neutron

    Use all OpenStack code on z/VM

    All of these projects are at the Liberty level of OpenStack and support all the API that can leverage z/VM resources. Utilizing z/VM OpenStack enablement in this way transparently maximizes the amount of enterprise market functionality that can be exploited within an OpenStack context. This is the way that VMware’s vRealize Automation is enabled to manage z/VM resources.

  4. Drive Remote Keystone Support. This enables z/VM to participate in a multi-region configuration, where each CMA (or other fully enabled OpenStack core) constitutes a single cloud or region. In this case, OpenStack-based tooling includes and runs the OpenStack Keystone/Horizon code, while driving the OpenStack API for all the other OpenStack projects listed above. All the code for the OpenStack projects except for Keystone/Horizon runs within the z/VM product. Utilizing z/VM OpenStack enablement in this way allows the OpenStack-based tooling to maintain control of identity/security, while allowing z/VM to otherwise maximally leverage enterprise market functionality transparently within an OpenStack context. This is the way that IBM Cloud Orchestrator is enabled to manage z/VM resources.

    Use all OpenStack code on z/VM except Keystone and Horizon

For further information on all these options (and lots more information about z/VM’s OpenStack enablement!) see this page.

by Emily Hugenbruch at April 29, 2016 06:45 PM

OpenStack Superuser

OpenStack Community Contributor Awards recognize unsung contributors

AUSTIN, Texas-- On the last day of the Austin Summit, the Community Contributor Awards gave a special tip of the hat to those who might not be aware that they are valued.

These awards are a little informal and quirky but still recognize the extremely valuable work that everyone does to make OpenStack excel. These behind-the-scenes heroes were nominated by other community members.

There were three main categories: those who might not be aware that they are valued, those who are the active glue that binds the community together and those who share their knowledge with others. OpenStack community manager Tom Fifield handed out the honors after the Summit feedback session.


You are our heroes! Shiny, shiny OpenStack Contributor medals.

The Infinite Rebase Shield

This trophy goes to individuals who put in monumental amounts of effort working on what's necessary, rather than what's necessarily cool, always doing right by the community process: gaining consensus, then completing the technical work.

Winner: Victor Stinner
"In the face of endless rebases, Stinner has tirelessly pushed OpenStack toward Python 3. I get the impression that it has been hard at times to get the necessary reviews for his Python 3 porting work; especially for the projects he's not well known on, and I hate to see him discouraged. We're lucky to have such a prolific expert leading the Python 3 porting effort for OpenStack, and I'd like him to know that his work is appreciated."

The Duct Tape Medal

Recipients of this medal (and its mandatory accompanying duct tape and MacGyver wig) appear to be able to fix anything, somehow.

Joint Winners: Gauvain Pocentek, Andreas Jaeger, Christian Berendt, Atsushi Sakai, Kato Tomoyuki.

"These few work really, really hard in an area that is seriously needed, but isn't really high profile or very enjoyable: the tool chain that powers the documentation and its automation. Thanks to them we have complete config option references, consistent templates, CLI references, etc. Tireless, tireless contributions over a long period."


The Don't Stop Believin' Cup

Reserved for some of our newest users, who've made an impact already, but might benefit from the encouragement of this some-time OpenStack anthem.

Joint Winners: Jaivish Kothari, Pawel Koniszewski, Vikram Choudhary

"Kothari has been an active contributor in OpenStack and has been contributing to different projects over the past year, and I think he deserves some motivation and maybe some appreciation."

"Live migration testing in our continuous integration (CI) system is woefully lacking (non-voting, only tests some basic happy paths with libvirt). Koniszewski discovered and fixed at least three major live migration regressions in Mitaka around release candidate time. Also, Pawel is in UTC+1 so was putting in long hours working with the Nova core review team in the US."

"Choudhary is not a Neutron core, but he has been the second most prolific reviewer in Neutron for the last six months. He reads every single line and often catches issues that other reviewers have not caught. His reviews demonstrate a holistic knowledge of how Neutron works combined with a focus on detail that rivals anyone's. I and others have joked that our code isn't ready until it passes the 'Vikram test' - that +1 is in my experience the most difficult to reach, the most satisfying to achieve, and the most important indicator that my code is really ready. I am sure there are many who find his ability to find issues frustrating, but I find that I have learned more from his -1s than anyone else's, and I have grown more as a coder because of them."

The Simple-to-Implement Prize

For the person who has gone above-and-beyond to reduce complexity.

Winner: Dean Troyer
"Troyer has solved one of the most cited problems of OpenStack, inconsistency in the user experience, by creating the OpenStack Client. It presents an easy-to-use, consistent, and powerful interface to replace the soup of models and options presented by the other clients. He worked against odds and popular opinion, took a long-term view of a problem, and delivered a tool that will impact nearly every OpenStack user in a positive way. Every person I've spoken to about the OpenStack Client feels the same way, and is grateful for his efforts."

The Nathanial Perez Prize for Behind-the-Scenes efforts

Is their hair blue, or pink? We don't know exactly, but they work in the shadows fixing problems we didn't even know existed.

Joint Winners: Tony Breeds, Tristan de Cacqueray, Rocky Grober, Carol Barrett

"Even the most cursory of glances at openstack-dev will show that Breeds consistently volunteers for just about anything that needs doing. Perhaps a little euphemistically I've heard him described as being willing to be thrown under any bus the community might require of him. He is equally adept at wrangling deep OpenStack code as he is wrestling with gate issues or engaging in discussions about the community's well-being."

"De Cacqueray works behind the scenes to keep all OpenStack users secure, by leading the Vulnerability Management Team. It is a lot of coordination work, and by its very nature it is very invisible. As if it wasn't enough, he is also involved in organizing our governance elections. This is also a thankless job -- nobody congratulates you when everything works but everyone jumps the gun if there are any glitches in the process."

"Grober's been accurately described as a Swiss-army knife - technical but community-oriented, she's active in DefCore, Refstack, the Product Work Group, Logging Work Group, Women of OpenStack, Diversity Work Group etc. She's able to talk bare metal and fast cars with the same ease and always on point, always friendly and a great connector in the community."

"Barrett has been involved in the OpenStack working group communities for several years and has been instrumental in helping different needs/segments in the community gain a shared voice. She has given significant contributions to several initiatives including the Enterprise WG, Diversity WG, and Product WG."

The When-do-you-sleep? Trophy

For the person we're pretty sure sends emails every hour of a twenty-four hour period. Don't you ever sleep?

Winner: Akihiro Motoki
"Motoki is simultaneously a core reviewer on Neutron and Horizon, and seriously works hard on both. But he also finds time to be a leading figure in the internationalization team, fixing its tooling and translating a lot of OpenStack into Japanese and he contributes actively to the documentation project!"

The Bug Czar Award

For the individual who does the most to deal with the bugs.

Winner: Markus Zoeller
"Zoeller has been the Nova bug czar for several months now (at least all of Mitaka). He runs an (under-attended) weekly bug meeting in different time zones, writes tooling for working with Nova bugs (and Launchpad in general), has created a dashboard for Nova bugs, mentors others (since he's trying to rotate the triager duty), and has driven positive process change back to the community (he's working on changing the Nova 'wishlist' bug / RFE process with the operators community). It should be noted that Nova frequently has over 1,000 unresolved bugs, and people who take on the bug czar lead generally burn out quickly (within weeks). Markus has shown great commitment in this critical but under-appreciated area. Besides bugs, he's also driven the effort since Mitaka to clean up the glut of configuration options in Nova to actually make operators aware of what each option does, how it's used, what the valid values are, and risks/inter-dependencies. This is another mostly thankless job but greatly improves the usability of Nova."

The "Does anyone actually use this?" Trophy

It turns out that not only do people use OpenStack, some of those who do put in effort to assist with the sanity of the collective.

Joint Winners: Kris Lindgren, JJ Asghar, Matt Fischer, Matt Jarvis

"Asghar cares about building communities of invested stakeholders. I have not seen any OpenStack operator run with an effort like getting OSOps off the ground this far or this fast. He's an asset to the community for sure! Great colleague."

"Lindgren is actively involved in operator discussions and working groups. His input is valuable both in helping others with problems and in giving feedback on where features need improvement."

"Jarvis is an active contributor to the operators community. He's also actively involved in testing upcoming features and reporting feedback to both operators and developers. He's done a great job bringing together not just the OpenStack community across the North West of England (specifically Manchester), but Europe in general by stepping up and helping to organize the first EU Operators Meetup. He's a fantastic ambassador for OpenStack in general, and a great engineer to boot."

Mentor of Mentors

For serious efforts in sharing knowledge with others.

Joint Winners: Victoria Martinez de la Cruz and Emily Hugenbruch
"Martinez de la Cruz is coordinating the Outreachy program. She put lots of energy into it and she's very good at it. She knows how to welcome and mentor newcomers, that thanks to her feel at ease. I really think she deserves an award."

"For months, Hugenbruch has been leading community members to build the OpenStack Mentoring program from the ground up. She has been working tirelessly to set up the initial processes for the program, pairing technical mentors and mentees together, as well as setting up events at the Austin summit for people to learn how to give back to the community."

Bonsai Caretaker

These people keep pressing the button to feed the Tamagotchi.

Joint Winners: Vilobh Meshram, Ihar Hrachyshka and Miguel Angel Ajo

"Meshram is leading the cross-project quota effort to solve the quota enforcement challenge. It's a challenge because quotas are a key piece of OpenStack and are poorly implemented in every project that makes use of them. Apart from that, Vilobh is working on the DLM (Distributed Lock Manager) effort, approved at the Tokyo summit, by introducing a Consul driver in Tooz. Vilobh was the first to introduce nested quotas in OpenStack. He is working on many important projects, like utilization-based scheduling, solving scheduling problems at the scale of Yahoo!, empowering people to contribute to OpenStack, and spreading knowledge by giving talks such as at the OS Bug Smash in New York. Vilobh is a core contributor for Magnum. Within a short span of two years he has contributed to and helped the community by providing a new perspective and solving important real-world problems across various OpenStack projects like Nova, Cinder, Magnum, Oslo, Keystone, and Glance."

"Ajo and Hrachyshka have been the unheralded leaders of the QoS section of the Neutron project. While the Neutron project overall has a high knowledge barrier to entry and can be difficult to start with, they have made sure that the QoS mini-community is very welcoming and encouraging for programmers new to OpenStack. In addition to guiding and encouraging, they have helped ensure success for initiatives in the QoS space, thinking ahead about the dependencies that will be needed long before they are, and setting those supporting initiatives in place in advance. I feel personally indebted to them, and I think they exemplify the ideals of contributors to OpenStack."

Stay tuned for news on when the next nominations open!

by Superuser at April 29, 2016 06:28 PM

Creating a smart conference with OpenStack and containers

<iframe allowfullscreen="allowfullscreen" frameborder="0" height="" src="https://www.youtube.com/embed/GaqfeCiUUUk" width=""></iframe>

Jakub Pavlik's team created a Smart City in Pisek, using sensors in the cobblestone streets in the town to measure metrics like temperature. Pavlik and his team brought the project all the way to the Austin Summit, using it to measure CO2 flows of the 7,500 conference goers.

Superuser TV sat down with Pavlik to hear how the project got off the ground, what was done for the Summit and what this could mean for the future of the Internet of things.

You can also read more about it in the print edition of Superuser magazine, available at the Summit.

by Superuser at April 29, 2016 04:27 PM

Red Hat Stack

OpenStack Summit Austin: Day 3

 

Hello again from Austin, Texas where the third day of OpenStack Summit has come to a close. As with the first two days of the event, there was plenty of news, interesting sessions, great discussions on the showfloor, and more. All would likely agree that the 13th OpenStack Summit has been a Texas-sized success so far!

Similar to day 1 and day 2 of the event, Red Hat had several exciting announcements to pass along. The first press release to hit the wire detailed additional customer traction with Red Hat OpenStack Platform. Yesterday, it was announced that Verizon, NASA’s Jet Propulsion Laboratory (JPL), and Cambridge University had all selected Red Hat OpenStack Platform as the backbone of their cloud initiatives. Today, we shared the news that several large organizations across Europe, including Fastweb, Paddy Power Betfair, and Produban, have deployed the technology as well and are experiencing great results for their businesses.  

In addition, Interactive Intelligence has moved its all-in-one omnichannel customer engagement cloud service, Communications as a Service (CaaS), to an OpenStack and Ceph-based private cloud from Red Hat. Based on its internal testing, Interactive Intelligence reports that the Red Hat OpenStack Platform and Red Hat Ceph Storage solution has cut its deployment time from a few weeks to only minutes.

Alex Smelik, a Chief Architect at Interactive Intelligence, summed it up best:  

“We love the fact that OpenStack has become a more mature architecture. With Red Hat, OpenStack is stable and it works for our deployment. We now have a reliable CaaS architecture. When you put all the pieces together and look at the big picture—the price to keep computer, storage and networking together— the combination of Ceph and OpenStack technologies exemplifies the efficiency and cost-effectiveness of the Red Hat solution.”

In addition, nine of Red Hat’s thirty-five sessions at the conference occurred today.  Kicking everything off this morning, Ryan Brown and Victoria Martinez de la Cruz teamed to discuss Zaqar, OpenStack’s Messaging and Notification service. The two demonstrated how to leverage the technology to develop a cloud-based, distributed microservice architecture.

Will Foster and Kambiz Aghaiepour then gave a thorough overview of TryStack.org — a free way to experience and learn more about OpenStack and its capabilities in a large, public sandbox. The speakers provided a history of the technology as well as its future roadmap. They also gave insight into its deployment architecture and underlying components.

Later, Tim Rozet, a Red Hat Senior Software Engineer, and Bin Hu, a Senior Product Manager from AT&T, took a deep dive into virtual network functions (VNFs) and explained how enterprises can leverage a complex service that previously required multiple physical appliances. They also provided an analysis of both BGPVPN and NSH, which can help enable many SFC use cases.

A short time later, Miguel Angel Ajo and Gorka Eguileor, Red Hat software engineers, detailed how Ansible and Oslogmerger can reproduce production issues, gather specific logs, and augment them with system probes that will be mixed into the logs themselves.

Jakub Libosvar, a Red Hat Product Manager, and Rodolfo Alonso, an engineer from Intel, then co-presented on Open vSwitch. In their presentation, entitled Tired of Iptables Based Security Groups? Here’s how to Gain Tremendous Speed with Open vSwitch Instead!, the pair provided an overview of current implementations of Neutron-Open vSwitch firewall drivers. They then demonstrated two approaches using OpenFlow: a security group firewall and a driver based on conntrack.

In another joint Red Hat and Intel session, Rimma Iontel from Red Hat and Eoin Walsh from Intel explored how Network Functions Virtualization (NFV) provides elasticity, cost savings, and increased efficiency. In their talk, Achieving Fine-Nine VNF Reliability in a Telco-Grade OpenStack Cloud, the presenters shared their experiences in deploying OpenStack-based Telco-grade services. They also examined the differences between VNFs and PNFs (Physical Network Functions) and what is required in an OpenStack platform to support the former.

In one of the more interesting sessions of the day, Radhesh Balakrishnan, Red Hat’s GM of OpenStack, presented with Kyle Forrester from Big Switch Networks and Chris Emmons from Dell in a talk called Designing for NFV: Lessons Learned from Deploying at Verizon. As detailed in Verizon’s press release, the three discussed the OpenStack pod design for NFV that emerged after multiple iterations with onsite stakeholders.  They also covered a series of technical challenges that were overcome, and best practices that we learned by working together across traditional orchestration networking silos.

Finally, last but not least, Sean Cohen, Frederico Lucifredi, and Sebastian Han teamed together in a popular session entitled Protecting the Galaxy – Multi-Region Disaster Recovery with OpenStack and Ceph. The presenters explored the various architectural options and levels of maturity in OpenStack services for building multi-site configurations using the Mitaka release. They also discussed the latest capabilities for volume, image and object storage with Ceph as a backend storage solution. The three also covered future developments that the OpenStack and Ceph communities are pushing forward.

As you can tell, it was a busy day at the conference today. Although tomorrow will be the last day of the main conference, we’re looking forward to our continued interactions with the community and partners.

For continued Red Hat and industry cloud news, we invite you to follow us on Twitter at @RedHatCloud or @RedHatNews.

by Gordon Tillmore, Sr. Principal Product Marketing Manager at April 29, 2016 01:41 PM

Opensource.com

Master OpenStack with 5 new tutorials

Keep your OpenStack skills fresh and master the art of cloud building with this new collection of guides and tutorials.

by Jason Baker at April 29, 2016 07:01 AM

Aptira

OpenStack Australia Day – 1 week to go!

This time next week, we will be halfway through Australia’s first OpenStack Day. With almost 300 registered attendees, there’s now only a handful of tickets left. Be sure to secure your spot at: http://australiaday.openstack.org.au/ before it sells out!

Presentations will be split into two tracks:

The Main Track will be business oriented, including keynotes, business presentations and a panel. The Technical Track will cover more in depth technical subjects, workshops and demonstrations. A brief summary of the day is as follows, with the full agenda detailed on the website.

  • 8am – Attendee registration
  • 9am – Sessions
  • 10.30am – Morning Tea
  • 11am – Sessions
  • 1pm – Lunch
  • 2pm – Sessions
  • 3.30pm – Afternoon Tea
  • 4pm – Sessions
  • 6pm – Networking Drinks
  • 7pm – Close

There will be prizes and giveaways, as well as opportunities to network with industry leading figures from Australia and worldwide. For more information on this event, including full speaker bios, and presentation information, head to: http://australiaday.openstack.org.au/

See you all next week.

The post OpenStack Australia Day – 1 week to go! appeared first on Aptira OpenStack Services in Australia Asia Europe.

by Tristan at April 29, 2016 01:21 AM

April 28, 2016

OpenStack Superuser

Rackspace talks running an OpenStack public cloud

<iframe allowfullscreen="allowfullscreen" frameborder="0" height="" src="https://www.youtube.com/embed/vM6bgUmpWik" width=""></iframe>

Matt Van Winkle and Joel Preas of Rackspace talk about what they've learned building a public cloud offering on OpenStack, how they've overcome challenges, and which OpenStack developments they're looking forward to.

by Superuser at April 28, 2016 08:10 PM

Funneling operator feedback to the OpenStack developer community

<iframe allowfullscreen="allowfullscreen" frameborder="0" height="" src="https://www.youtube.com/embed/6qVQzsGfwco" width=""></iframe>

With experience as both an OpenStack developer and operator, Edgar Magana, cloud operations architect at Workday and member of the User Committee shares how OpenStack operators can provide feedback to developers working upstream. New to the community? Magana says that subscribing to the mailing list and attending the ops mid-cycle are the best ways to get started.

by Superuser at April 28, 2016 08:09 PM

Solinea

Taking Care of Business: Breaking Down Business Barriers to DevOps Transformation (third of three)

Author’s note: This is the third in a three-part series about why DevOps transformation can be so hard for an enterprise. Be sure to read Part 1: Overcoming Process Barriers to DevOps Transformation: A Herculean Task and Part 2: Winning at Whac-a-Mole: Overcoming Psychological Barriers to DevOps Transformation.

At its core, DevOps is a strategy for enhancing the speed and quality of business services to drive competitive differentiation in the marketplace. This strategy unites application development and IT operations into a cohesive unit focused on delivering the results the business needs.


Ironically, in our work with global enterprises we have found that business obstacles can get in the way of the DevOps mission of “Taking Care of Business.” Here I’ll provide suggestions for overcoming three business obstacles you may face in your DevOps Transformation:


1. Seek Environmental Alignment

Every enterprise has the usual complement of environments to develop, test and deploy their code. These have names that include Dev, DevTest, Integration, Load, Prod, etc.

Each of these environments serves a specific function to meet the needs of the business—from development work to integration to production, and everything in between. Unfortunately, even if these environments were set up the same way in the beginning, the lack of central management will guarantee that there is drift in these environments over time. The resulting discrepancies add complexity to the deployment process.


These differences often affect the non-functional requirements for an application, such as load balancer software / hardware, storage, security, cloud platform, and others. During a manual installation, most of these differences are just “known,” and people will account for them based on their knowledge. However, during a DevOps transformation, these environmental differences will become very evident (almost immediately) to the teams that are automating the deployment of any application.


Remember, one of the goals of a DevOps process transition is to create infrastructure that is immutable. To support this goal, the same infrastructure automation code should be utilized across all environments, starting with the developers. If you are writing code specific to an environment, your infrastructure is no longer immutable. (Granted, your code will need to account for things like IP Address changes and the like, but these fall in the category of configuration management and do not represent environmental differences that should impact the application.)
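The principle can be shown in a few lines: the deployment logic is identical in every environment, and only configuration data (addresses, sizes) varies. This is a sketch with hypothetical names and addresses, not any particular tool's syntax.

```python
# Per-environment data lives in one place; the deploy code never branches
# on which environment it is running in. All values are illustrative.

CONFIG = {
    "dev":  {"api_vip": "10.0.1.10", "replicas": 1},
    "prod": {"api_vip": "192.0.2.10", "replicas": 5},
}

def deploy(environment: str) -> str:
    cfg = CONFIG[environment]  # only the data varies per environment...
    # ...while the steps below are the same everywhere, keeping the
    # infrastructure code immutable across dev, test, and prod.
    return f"deploying {cfg['replicas']} replicas behind {cfg['api_vip']}"

print(deploy("dev"))
print(deploy("prod"))
```

If you find yourself adding an `if environment == "prod":` branch to the deploy logic itself, that is the signal that an environmental difference has leaked out of configuration management.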


Getting these environments aligned presents a great challenge for many of our enterprise customers. This is especially true when the environments have different system owners and different budgets and may be deployed in different parts of the world. Fortunately, as we progress further into a world where infrastructure is becoming code—with API-driven infrastructures like AWS, OpenStack, Docker, SDN and others—standardization becomes easier to achieve.


2. Enhance Software Development Skills

If you look at the skill-set requirements of enterprise infrastructure engineering, networking, and operations teams, you’ll notice that software development skills have not been a priority. Members of these teams certainly tend to have some scripting ability and basic Bash, but a deeper software development skill set is needed in the modern DevOps support model.

These teams need to learn basic git workflows to succeed in an environment where infrastructure is maintained as code. I have seen a lot of resistance to hiring for these skills and to providing training to develop these skills, and that’s a huge mistake. Software development skills are crucial to the success of enterprise IT now and will only become more critical in the future.


3. Manage Applications Not Servers

Years ago when the concept of virtualization was just being introduced, many IT operators did not want to give up on bare metal. We called these folks “server huggers.” Most people have since moved past server hugging, but, now that we are transitioning to a cloud world, we have a new generation of IT operators resistant to moving beyond their comfort zone, and we call them “VM huggers.” In most large enterprises today, you will find remnants of both server huggers and VM huggers, still clinging on.

For DevOps transformation to succeed, we have to release those old mindsets and think differently about what our job really is:

The goal should be to transition from managing servers and virtual machines to managing applications.


Let me give you an example. We tend to focus all of our measurements on the host level. While this will give you insight into how the server is running, it is only part of the picture. Understanding how the application is performing and reacting to those details is generally more significant and will greatly improve the business case for the application being monitored.


Consider this: When an application becomes unresponsive, it will take more time to determine that a server has filled up its disk than to simply add more VMs. Once the application is performing to meet the business requirements, details on what went wrong can be evaluated out of band from any single incident. Then you can take the troubling VMs out of service, review the incident, and provide feedback to the development team for remediation. This greatly simplifies the process for everyone and reduces the work being done “off-hours” for the entire team. Businesses care about applications, not virtual machines. The added cost of these “additional” VMs during a “failure” state is minimal compared to the downtime of the service.
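The "restore service first, triage later" flow described above can be sketched as follows. The health check, the list-based pool, and the VM names are all hypothetical stand-ins for real monitoring and cloud API calls.

```python
# Application-first incident handling: when the app health check fails,
# add capacity immediately, then quarantine the suspect VM so the root
# cause can be reviewed out of band. Illustrative sketch only.

def handle_unhealthy(app_healthy: bool, pool: list, suspect: str, quarantined: list):
    if app_healthy:
        return pool, quarantined
    # Restore capacity first: spin up a replacement VM (simulated here).
    pool.append(f"vm-{len(pool) + len(quarantined) + 1}")
    # Then pull the troubled VM out of service for later review.
    if suspect in pool:
        pool.remove(suspect)
        quarantined.append(suspect)
    return pool, quarantined

pool, quarantined = handle_unhealthy(False, ["vm-1", "vm-2"], "vm-2", [])
print(pool, quarantined)  # capacity kept up, vm-2 set aside for review
```

The business-facing metric (application responsiveness) drives the immediate action; the server-level diagnosis happens afterward, off the critical path.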


Over the course of this three-part series, I’ve identified a few of the common barriers enterprises may face in their DevOps journeys. I’ve given you some things to watch out for and some time-tested strategies for overcoming those obstacles. More importantly, I hope I’ve made three things clear:


1. All of these obstacles are surmountable.
2. The DevOps journey is well worth the effort.
3. The Solinea team is here to help.


Share your thoughts on these topics with us. We’d love to hear feedback from others walking the path to enterprise DevOps Nirvana.

Author: Seth Fox

The post Taking Care of Business: Breaking Down Business Barriers to DevOps Transformation (third of three) appeared first on Solinea.

by Solinea at April 28, 2016 03:42 PM

RDO

What did you do in Mitaka? Javier Peña

We're continuing our series of "What did you do in Mitaka?" interviews. This one is with Javier Peña.

<iframe data-link="https://www.podbean.com/media/player/82a4c-5ec74f?from=yiiadmin" data-name="pb-iframe-player" frameborder="0" height="100" scrolling="no" src="https://www.podbean.com/media/player/82a4c-5ec74f?from=yiiadmin" width="100%"></iframe>

(If the above player doesn't work for you, you can download the file HERE.)

Rich: Today I'm speaking with Javier Peña, who is another member of the Red Hat OpenStack engineering team. Thanks for making time to speak with me.

Javier: Thank you very much for inviting me, Rich.

R: What did you work on in the Mitaka cycle?

J: I've been focusing on three main topics. First one was keeping the DLRN infrastructure up and running. As you know, this is the service we use to create RPM packages for our RDO distribution, straight from the upstream master commits.

It's been evolving quite a lot during this cycle. We ended up with so much success that we're having infrastructure issues. One of the topics for the next cycle will be to improve the infrastructure, but that's something we'll talk about later.

These packages have now been consumed by the TripleO, Kolla, and Puppet OpenStack CI, so we're quite well tested. Not only are we testing them directly from the RDO trunk repositories, but we have external trials as well.

On top of that I have also been working on packaging, just like some other colleagues on the team whom you've already had a chance to talk to - Haikel, Chandan - I have been contributing packages both to upstream Fedora and to RDO directly.

And finally, I'm also one of the core maintainers of Packstack. In this cycle we've been adding support for some services such as AODH and Gnocchi. We also switched the MySQL support on the Python side: we changed libraries, and we had to do some work with the upstream Puppet community to make sure that PyMySQL, which is now the default Python library used upstream, is also used inside the Puppet core so we can use it in Packstack.

R: You mentioned briefly infrastructure changes that you'll be making in the upcoming cycle. Can you tell us more about what you have planned for Newton?

J: Right now the DLRN infrastructure is building six different lines of repositories: CentOS master (which is now Newton), Mitaka, Liberty, and Kilo, plus two branches for Fedora as well. That is quite a heavy load for the VMs we're running right now, and we were having some issues in the cloud where those instances run. So what we're doing now is migrating to the CentOS CI infrastructure, where we have a much bigger machine. We will also be publishing the resulting repositories through the CentOS CDN, which is far more reliable than anything we could build with individual VMs.

R: Thank you again for your time. And we look forward to seeing what comes in Newton.

J: Thank you very much, Rich.

by Rich Bowen at April 28, 2016 01:45 PM

RDO BoF at OpenStack Summit, Austin

On Tuesday afternoon about 60 people gathered for the RDO BoF (Birds of a Feather) session at OpenStack Summit in Austin.

By the way, the term Birds of a Feather comes from the saying "Birds of a feather flock together", which refers to like-minded people gathering.

The topics discussed can be found in the agenda.

The network was friendly, and we managed to capture the whole session in a Google Hangout. The video of the session is HERE.

And there are several pictures from the event on the RDO Google+ page.

Thank you for all who attended. You'll see follow-up discussion on a number of the topics discussed on rdo-list over the coming weeks.

by Rich Bowen at April 28, 2016 01:45 PM

Red Hat Stack

Culture and technology can drive the future of OpenStack

Written By: E.G. Nadhan, Chief Technology Strategist (Central), Red Hat

 

“OpenStack in the future is whatever we expand it to”, said Red Hat Chief Technologist Chris Wright during his keynote at the OpenStack Summit in Austin. After watching several keynotes, including those from Gartner and AT&T, I attended other sessions during the course of the day, culminating in a session by Lauren E. Nelson, Senior Analyst at Forrester Research. Wright’s statement made me wonder what lies in store for OpenStack and where the OpenStack Community — the “we” that Wright referred to — would take it in the future. Several sessions in the Analyst track called out the factors that explain the increased adoption of OpenStack as well as the technological challenges encountered. But Nelson’s session brought it all home for me — especially her last slide, a call to action for enterprises at large to take key steps, entailing a cultural shift, that would ease the adoption of OpenStack and the principles it embodies. Live from the OpenStack Summit, at the crossroads of culture and technology, let me explain how this intersection can take OpenStack to a new frontier.

Red Hat sees great potential for technological advances in OpenStack. NASA’s Jet Propulsion Laboratory (JPL) has built an OpenStack-based private cloud, saving significant time and resources spent on datacenters by modernizing its on-premise storage and server capacity and giving it the ability to support hundreds of JPL mission scientists and engineers. Red Hat has positioned OpenStack to be taken to a new frontier.

But it is not all technology.  

Culture matters — a message that came through in Nelson’s session.

Some of the steps that Nelson outlined may seem very obvious to those enterprises that have already embraced open source and the principles it comes with. After all, as someone declared recently, the State of Enterprise IT is open! However, the points Nelson made still ring true for those enterprises who may be considering making the shift or are exploring alternatives to the status quo.

Upstream your code.  Sharing your “proprietary” code is actually a good thing. Multiple experienced pairs of eyes continuously review the code and identify vulnerabilities, suggesting remedial measures as good citizens of the OpenStack Community. And enterprises get this without having to staff up projects dedicated to this effort within their four walls. At the very least, enterprises get new insight into different ways of doing things — a baby step towards innovation.  Nelson cautions that there is likely to be resistance within enterprises looking to make this shift because it is a fundamental change to the DNA of the enterprise bloodstream — or as we simply refer to it — its culture!

Don’t fight the community. “Communities to the Enterprise” was the title of one of the slides that Wright presented in his keynote. Nelson observed that it is better to “listen” and respect the underlying sentiment of the community at large rather than going against the community. After all, it is the community that is going to be the long term caretaker of the concepts that are being implemented in the code.

Don’t be alarmed by transparency of bugs. If you know something is not working, say so and share that information proactively. The OpenStack Community is all about transparency based upon its open source foundation. Sharing the knowledge about what is wrong can actually establish more credibility and trust than it would otherwise. Come to think of it, character matters too in an environment of open culture. Nelson says this may take some time to adjust especially where the “List of Known Bugs” is perceived to reflect negatively on the enterprise that originated the code.

Embrace the community. Nelson shared that the peer-to-peer interaction is one of the most important benefits that enterprises can get out of adopting OpenStack. It is a study group of sorts for developers where there is a free exchange of innovative ideas and best practices.

Go to the user events. Summits are great! There is no denying that. As a matter of fact, this blog is being written live from the Summit premises at the Austin Convention Center! However, user events are also a closed-group, informal setting where there is focus on a particular topic with varied perspectives. There is a more concentrated discussion on topics germane to the enterprise.

Developers need services. “Developer Experience matters”, says Nelson. It is not just about making the tools and technologies available to the developers. They need access to the supporting services that make their lives easier. Making such services available characterizes a culture that better appreciates developers’ needs and creates a productive environment overall.

Those then are the thoughts that Nelson shared towards the end of her session. I have taken the liberty of adding my observations as well. What I like about these bullets is that they provide a prescriptive set of actions that enterprises can take — whether they are new or current users of OpenStack.

And if more enterprises take these steps, I like where OpenStack is likely to be in the future — for a very simple reason. The Community will drive it in the right direction. I simply defer to the Community. Far be it from me, to fight the Community — just like Nelson advocated.  

The Community will take OpenStack wherever it needs to go — even beyond outer space — its latest frontier thanks to the NASA Jet Propulsion Laboratory.

Just what Chris Wright said in his keynote.

What say you?  Are there other steps that enterprises can take to ease the adoption of OpenStack?  Any thoughts you want to share from other sessions you attended at the OpenStack Summit? How would you contrast the influence of culture versus technology on the evolution of OpenStack?

Please let me know.

by Jeff Jameson, Sr. Principal Product Marketing Manager at April 28, 2016 01:39 PM

April 27, 2016

OpenStack Superuser

How Cambridge University and Monash University leverage OpenStack for high-performance computing

<iframe allowfullscreen="allowfullscreen" frameborder="0" height="" src="https://www.youtube.com/embed/neZvmr1nzFE" width=""></iframe>

Stig Telfer of Cambridge University and Blair Bethwaite of Monash University talk with Superuser TV about leveraging OpenStack for HPC and the OpenStack Scientific Research Working Group that launched at the OpenStack Summit this week.

by Superuser at April 27, 2016 10:07 PM

Red Hat Stack

OpenStack Summit Austin: Day 2

Hello again from Austin, Texas where the second busy day of OpenStack Summit has come to a close. Not surprisingly, there was plenty of news, interesting sessions, great discussions on the showfloor, and more.

Starting with some announcements, the University of Cambridge, one of the world’s oldest and most prestigious academic institutions, has announced they selected Red Hat to support its OpenStack-based high performance computing (HPC) initiative. In addition to deploying Red Hat OpenStack Platform for its HPC-as-a-Service offering, the University of Cambridge also plans to collaborate with Red Hat to bring HPC capabilities to the upstream OpenStack community. To keep the research institution at the forefront of large scale big-data science, the university turned to its longtime partners Dell and Intel to help it create one of the world’s most energy efficient datacenters. Initially, they deployed OpenStack on a community-supported Linux during the proof-of-concept phase, but found that they needed a more reliable, integrated and supported OpenStack platform for production deployment, leading them to Red Hat OpenStack Platform.

In other news, the Ceph Community, a Red Hat sponsored worldwide collection of developers working to build the popular Ceph software-defined storage project, announced the first implementation of an innovative large-scale Ceph storage cluster environment. This cost-effective, energy-efficient, four-watt converged microserver, developed by WDLabs™, a business growth incubator of storage solutions leader Western Digital Corporation, is the first demonstrated solution offering true flexibility of applications executing on a microserver.

We’re also excited to share that the Red Hat Consulting team launched the Red Hat Open Innovation Labs, a service that helps companies integrate people, methodology, and technology to solve business challenges in an accelerated fashion. As part of an Open Innovation Labs consulting engagement, customers will work collaboratively in a residency-oriented lab environment with Red Hat experts to jumpstart innovation and software development initiatives using open source technology and DevOps methods.

Like day one of the event, Red Hat speakers led or co-presented in several sessions.  Starting us off today, Martin Lopes hosted a session entitled Integrate Active Directory with OpenStack Keystone. He explored the changes needed in an Active Directory environment, including multi-domain configuration in keystone v3, high availability options, and, how to encrypt query traffic using LDAPS.

Sadique Puthen and Dustin Black then teamed up to present How to Troubleshoot OpenStack Without Losing Sleep. The pair discussed a variety of issues including troubleshooting load balancers, scaling the rabbitmq message bus, connection problems with MariaDB, and much more.

Thomas Cameron, Red Hat’s global solution architect leader, spoke to a large audience about Container Security. He covered kernel namespaces, security enhanced Linux (SELinux), Linux control groups, the Docker daemon, and more. He also demonstrated how each of these technologies affect security, while providing tips and tricks to implementing a more secure container environment.

Later in the day, Dustin Schoenbrun co-presented with Akshai Parthasarathy, a Technical Marketing Engineer from NetApp, in a session focused on Manila. The duo explained why management of file-shares is exploding with new features, use cases and deployers. They then explored Manila deployment with Red Hat OpenStack Platform Director, availability zones for deploying shares in multiple data centers, consistency groups, share replication, and more.

Adam Young, a Red Hat core developer on Keystone, and Henry Nash, an OpenStack architect from IBM, gave a talk entitled Advances in Keystone’s Role Based Access Control. The well-attended session covered a variety of RBAC topics including advanced customizations, domain and project administration capabilities as well as responsibility delegation.

In the late afternoon, Rich Bowen led the RDO Community Meet Up. RDO is a distribution of OpenStack, packaged for RPM-based Linux operating systems, such as Red Hat Enterprise Linux, CentOS, Fedora, and others. The RDO community provides packaging, testing, and community-based support of OpenStack on these platforms. In his talk, Rich covered community governance, packaging workflow, the RDO Manager deployment tool, and other topics of interest.

Following this, Dan Lambert, a Principal Software Engineer for Red Hat, and Alexander Adamov, an Engineer for Mirantis, explored OpenStack security in their presentation Using Open Source Security Architecture to Defend Against Targeted Attacks. The co-presenters explained how a network IPS can defend an OpenStack cloud against targeted attacks, after being enabled as a virtual network function (VNF). They also discussed defensive scripts, open source malware sandboxes, and general security best practices.

And finally, last but certainly not least, John Spray, a Red Hat engineer based in Edinburgh, capped today’s agenda with a talk about Ceph as a Service with OpenStack Manila. John covered a variety of topics including the new CephFS Native driver for Manila, how the driver can be extended to provide an NFS share service based on CephFS, and future developments expected with CephFS/Manila.

For continued Red Hat and industry cloud news, we invite you to follow us on Twitter at @RedHatCloud or @RedHatNews.

Looking forward to a busy day three!

by Gordon Tillmore, Sr. Principal Product Marketing Manager at April 27, 2016 09:07 PM

IBM OpenTech Team

Get your videos here – IBM Client Day session videos posted


What an amazing day we had yesterday at the OpenStack Summit in ATX. Many sessions were completely packed out; it was great to see how popular the talks were. I guess it’s easy when there are over 7,500 attendees at the conference!  You can get all the videos from our IBM Client Day Sponsor Track below. I want to give a special shout out to our guest speakers Duncan Johnston-Watt from Cloudsoft & Mark Shuttleworth from Canonical.

You can also check out IBM General Manager, Don Rippert’s session video here from Monday. Why IBM is Betting on OpenStack

If you are looking for more of the 40+ other sessions from IBM, you can find them all at their source on the OpenStack Foundation YouTube Channel.

11:15 OpenStack for Beginners Jesse Proudman • Shamail Tahir • Tyler Britten • Brad Topol
Video Link
12:05 The open cloud: A platform of possibilities – Local, Dedicated and Public use cases Jason McGee • Azmir Mohamed
Video Link
2:00 Modelling, Deploying and Managing Applications in IBM Blue Box with Cloudsoft AMP Duncan Johnston-Watt (Cloudsoft) • Hernan Alvarez
Video Link
2:50 Open without Limits — LinuxONE, Ubuntu 16.04 and OpenStack Mark Shuttleworth(Canonical) • Angel Diaz • Jessica Murillo • Kershaw Mehta
Video Link
3:40 One network to rule them all: Open, scalable, & integrated networking for Containers & VMs Kyle Mestery • Phil Estes
Video Link
4:40 Accelerating Mobile App delivery with IBM UrbanCode Deploy, Heat, and OpenStack Tyson Lawrie • Tim Pouyer • Glen Hickman
Video Link

by Nate Ziemann at April 27, 2016 04:32 PM

OpenStack Superuser

Collaborate or die: why no tech company is too big to fail

AUSTIN, Texas -- The numbers say it all: with 50 billion devices connected and 400 million servers by 2020, infrastructure must change with the times.

That's the main message from Mark Collier, OpenStack Foundation COO, on the second day of the Summit. To manage that much infrastructure, new tools and patterns will be required that can handle bare metal, virtual machines, and containers. The opportunity is massive, but it will require collaboration across many open source communities, keeping in mind what users want and need, because it’s collaborate or die in the billion-device, billion-core era.

"We can’t do it alone," Collier said. He named some of the "big dogs" of the industry that use OpenStack -- AT&T, Walmart, State Grid, China Mobile and VW -- underscoring his point that to survive and thrive they need to be part of open community.

"There’s no such thing as too big to fail in technology," Collier said. OpenStack will be the integration engine for users embracing new technologies, since, much like the Republican primary, there's no clear favorite yet.


What's next? Collier is the first to admit he doesn't know. He envisions there will be a LAMP stack for OpenStack but says the community should remove its blinders. "Whatever our LAMP stack is it will be OpenStack and__. I'm counting on you to help me fill in the blanks."

His message was echoed by Cisco's CTO extraordinaire Lew Tucker, whose talk started by taking a few giant steps backward for mankind, noting that Homo sapiens are the only animals who can cooperate at such high levels.

"Above all man is an inventor. We’re not content to accept what nature has provided us," he said. Human collaboration is helping us re-imagine the world, and at Cisco a good chunk of that is working with open communities. The historic Silicon Valley firm has been deeply involved with OpenStack since 2011 and Tucker outlined their work with communities such as Open Daylight and other software-defined networking projects.


Tucker finished on a high note - reminding participants not to take themselves "too seriously" and hinting at the return of cloud rapping duo Dope'n'Stack.


Brave new world

Two OpenStack users took the stage -- both of them called "brave" by Collier. Both are pushing the frontiers of working with OpenStack and containers.

First up was LivePerson's Koby Holzer. The company started with OpenStack way back in the Diablo release.

"We’ve come a long ways since then," Holzer said, adding that they are currently upgrading to Kilo and have a mind-warping 8,000 instances on 20,000 cores running in seven data centers worldwide. The current set-up has Kubernetes and Docker managing 150 microservices, deployed in VMs. What’s next for LivePerson’s OpenStack environment? Full-scale Kubernetes and expanding into public clouds. "It's been a long and brave journey, and OpenStack enabled it."

It takes a different kind of brave to challenge the live demo gods, but Alex Polvi CoreOS CEO was up for it.

"I need you all to free your mind, OpenStack is an application," Polvi said. He teamed up with Google's Craig McLuckie to show fully containerized OpenStack running on top of Kubernetes. The powers-that-be were on their side - and he successfully demoed self-healing and rolling upgrades. You can find out more about the Tectonic project here and check out the full demo below.

<iframe allowfullscreen="allowfullscreen" frameborder="0" height="" src="https://www.youtube.com/embed/e-j9FOO-i84" width=""></iframe>

We are OpenStack

It's always good to see different sides of the OpenStack community take center stage. The L.I.K.A. team came all the way from Taiwan to showcase the app they made over a weekend at the first OpenStack Hackathon.

There were 38 teams competing for a grand prize trip to the Austin Summit, and over 200 people participated, with backgrounds spanning private companies, government, universities and research institutions. The winning app uses OpenStack Sahara for big data and measures muscle movement for musicians--which the team showcased onstage with a live guitar player.

The L.I.K.A. team demos in Austin.

Collier announced the next Hackathon, slated for September in Mexico. Here's more on how your local user group could be next.

Smart Cities

The community gathered in a different way for the tcp cloud demo. The company is at work on a smart city project in Pisek, about 60 miles south of Prague -- a project that employs OpenStack, Kubernetes, Hadoop and OpenContrail. But Jakub Pavlik, CTO, gave an example of the project that hit closer to home, using sensors at the Austin Convention Center to measure CO2 output. "You can tell when the keynotes ended, and when everyone went to lunch," Pavlik said over waves of applause from the 7,500 attendees. "It's a geek thing, I know." The API is public and available at https://austin.tcpcloud.eu/

Breathe deeply: CO2 at the Austin Summit.

"The Internet of things isn't just a buzzword, it's not just tech pr or vendor solutions," Pavlik said. "It's about the power of the community and taking different pieces of open source and making it work."


You can also read more about the smart city project in the print edition of Superuser magazine, available at the Summit.

Smarter scientists

In a talk called "Underhyped but Changing the World: OpenStack for Universities and Research," Dan Stanzione, executive director of the Texas Advanced Computing Center (TACC), dropped the knowledge on two large-scale projects using OpenStack: Chameleon and Jetstream. Chameleon is an experimental test bed for cloud architecture and applications that, much like its namesake, adapts to user needs. There are currently 700 users working on about 150 projects, ranging from classifying cybersecurity attacks to crunching data from the Large Hadron Collider (LHC). Jetstream just rolled out to early users. It’s billed as the first user-friendly, scalable cloud environment for the Extreme Science and Engineering Discovery Environment (XSEDE), a single virtual system where scientists interactively share computing resources, data and expertise. Stanzione said that users are at work on projects that span from psychology to an app that can tell you which kind of snake just bit you. (Collier's spoiler alert: if you're in Texas, it's a rattlesnake.) There's more about TACC in the print edition of Superuser magazine, available at the Summit.

What's moving the public cloud momentum

Capping the morning's user sessions, OVH's Maxime Hurtrel spoke of his company's "love story" with OpenStack. Launched in 1999 in an attic, the Internet service provider offers dedicated servers, shared and cloud hosting, domain registration and voice over IP services. The romance started in 2012 with Swift and has grown deeper with each passing year to arrive at 75 petabytes on Swift, 70,000 instances and 430,000 production servers in March 2016.


Stay tuned for more from the Summit!

by Superuser at April 27, 2016 03:49 PM

Rob Hirschfeld

my 8 steps that would improve OpenStack Interop w/ AWS

I’ve been talking with a lot of OpenStack people about the frustrations of my attempted hybrid work on seven OpenStack clouds [OpenStack Session Wed 2:40].  This post documents the behavior Digital Rebar expects from the multiple clouds that we have integrated with so far.  At RackN, we use this pattern for both cloud and physical automation.

Sunday, I found myself back in front of the Board talking about the challenge that implementation variation creates for users.  Ultimately, the question “does this harm users?” is answered by “no, they just leave for Amazon.”

I can’t stress this enough: it’s not about APIs!  The challenge is twofold: implementation variance between OpenStack clouds and variance between OpenStack and AWS.

The obvious and simplest answer is that OpenStack implementers need to conform more closely to AWS patterns (once again, NOT the APIs).

Here are the eight Digital Rebar node allocation steps [and my notes about general availability on OpenStack clouds]:

  1. Add node specific SSH key [YES]
  2. Get Metadata on Networks, Flavors and Images [YES]
  3. Pick correct network, flavors and images [NO, each site is distinct]
  4. Request node [YES]
  5. Get node PUBLIC address for node [NO, most OpenStack clouds do not have external access by default]
  6. Login into system using node SSH key [PARTIAL, the account name varies]
  7. Add root account with Rebar SSH key(s) and remove password login [PARTIAL, does not work on some systems]
  8. Remove node specific SSH key [YES]
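For concreteness, the eight steps above can be sketched as a single, provider-neutral driver function. FakeCloud and every method name below are hypothetical stand-ins used only to show the order of operations; a real driver would make cloud API calls, and it is precisely steps 3, 5, 6, and 7 that vary between OpenStack sites.

```python
# Illustrative sketch of the eight-step node allocation sequence.

class FakeCloud:
    """Records the order of calls; a real driver would talk to a cloud API."""
    def __init__(self):
        self.calls = []

    def __getattr__(self, name):
        def record(*args, **kwargs):
            self.calls.append(name)
            return name  # dummy return value, enough to thread the sequence
        return record

def allocate_node(cloud, rebar_keys):
    key = cloud.add_ssh_key("rebar-tmp")          # 1. add node-specific SSH key
    metadata = cloud.get_metadata()               # 2. networks, flavors, images
    choice = cloud.pick(metadata)                 # 3. site-specific on OpenStack
    node = cloud.request_node(choice, key)        # 4. request the node
    addr = cloud.public_address(node)             # 5. often needs a floating IP
    session = cloud.ssh(addr, key)                # 6. login account name varies
    cloud.install_root_keys(session, rebar_keys)  # 7. root keys, disable passwords
    cloud.remove_ssh_key(key)                     # 8. remove the temporary key
    return node

cloud = FakeCloud()
allocate_node(cloud, ["ssh-rsa AAAA... rebar"])
print(cloud.calls)
```

The portable part is the sequence itself; the pain points listed in brackets above are where a given cloud's implementation forces per-site special cases into this otherwise generic flow.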

These steps work on every other cloud infrastructure that we’ve used.  And they are achievable on OpenStack – DreamHost delivered this experience on their new DreamCompute infrastructure.

I think this is very achievable for OpenStack, but we’re going to have to drive conformance and figure out an alternative to the Floating IP (FIP) pattern; IPv6, port forwarding, or adding FIPs by default could all work as part of the solution.

For Digital Rebar, the quick answer is to simply allocate a FIP for every node.  We can easily make this a configuration option; however, it feels like a pattern fail to me.  It’s certainly not a requirement from other clouds.

I hope this post provides specifics about delivering a more portable hybrid experience.  What critical items do you want as part of your cloud ops process?


by Rob H at April 27, 2016 03:30 PM

The Official Rackspace Blog

Container Day at the Rackspace Cantina [VIDEO]

Container technology has been one of the hot topics at this and recent past OpenStack Summits, so Rackspace hosted a “Container Day” of expert tech talks at the Rackspace Cantina on Tuesday. Experts from Google, Yodlr and others offered tips and tools to more easily use containers to develop and deploy applications, while Rackspace experts shared more information about Carina, our easy-to-use, instant-on, native container environment.

The post Container Day at the Rackspace Cantina [VIDEO] appeared first on The Official Rackspace Blog.

by Tracy Idell Hamilton at April 27, 2016 02:19 PM

April 26, 2016

Major Hayden

Talk Recap: Automated security hardening with OpenStack-Ansible

Today is the second day of the OpenStack Summit in Austin and I offered up a talk on host security hardening in OpenStack clouds. You can download the slides or watch the video here:

<iframe allowfullscreen="true" class="youtube-player" height="390" src="https://www.youtube.com/embed/q_uDtdpLmpg?version=3&amp;rel=1&amp;fs=1&amp;autohide=2&amp;showsearch=0&amp;showinfo=1&amp;iv_load_policy=1&amp;wmode=transparent" style="border:0;" type="text/html" width="640"></iframe>

Here’s a quick recap of the talk and the conversations afterward:

Security tug-of-war

Information security is a challenging task, mainly because it is more than just a technical problem. Technology is a big part of it, but communication, culture, and compromise are also critical.


In the end, the information security teams, the developers, and the auditors must all be happy. This can be a challenging tightrope to walk, but automating some security work allows everyone to get what they want in a scalable and repeatable way.

Meeting halfway

The openstack-ansible-security role allows information security teams to meet developers or OpenStack deployers halfway. It can easily bolt onto existing Ansible playbooks and manage host security hardening for Ubuntu 14.04 systems. The role works just as well in non-OpenStack environments. All of the documentation, configuration, and Ansible tasks are included with the role.

The role itself applies security configurations to each host in an environment. Those configurations are based on the Security Technical Implementation Guide (STIG) from the Defense Information Systems Agency (DISA), which is part of the United States Department of Defense. The role takes the configurations from the STIG and makes small tweaks to fit an OpenStack environment. All of the tasks are carefully translated from the STIG for Red Hat Enterprise Linux 6 (there is no STIG for Ubuntu currently).

The role is available now as part of OpenStack-Ansible in the Liberty, Mitaka, and Newton releases. Simply adjust apply_security_hardening from false to true and deploy. For other users, the role can easily be used in any Ansible playbook. (Be sure to review the configuration to ensure its defaults meet your requirements.)

Getting involved

We need your help! Upcoming plans include Ubuntu 16.04 and CentOS support, a rebase onto the RHEL 7 STIG (which will be finalized soon), and better reporting.

Join us later this week for the OpenStack-Ansible design summit sessions or anytime on Freenode in #openstack-ansible. We’re on the OpenStack development mailing list as well (be sure to use the [openstack-ansible][security] tags).

Hallway conversations

Lots of people came by to chat afterwards and offered to join in the development. A few people were hoping it would have been the security “silver bullet”, and I reset some expectations.

Some attendees had good ideas around making the role more generic and adding an “OpenStack switch” that would configure many variables to fit an OpenStack environment. That would allow people to use it easily with non-OpenStack environments.

Other comments were around hardening inside of Linux containers. These users had “heavy” containers where the entire OS is virtualized and multiple processes might be running at the same time. Some of the configuration changes (especially the kernel tunables) don’t make sense inside a container like that, but many of the others could be useful. For more information on securing Linux containers, watch the video from Thomas Cameron’s talk here at the summit.

Thank you

I’d like to thank everyone for coming to the talk today and sharing their feedback. It’s immensely useful and I pile all of that feedback into future talks. Also, I’d like to thank all of the people at Rackspace who helped me review the slides and improve them.

<iframe allowfullscreen="allowfullscreen" height="348" mozallowfullscreen="mozallowfullscreen" src="https://www.slideshare.net/slideshow/embed_code/61387839" webkitallowfullscreen="webkitallowfullscreen" width="425"></iframe>

The post Talk Recap: Automated security hardening with OpenStack-Ansible appeared first on major.io.

by Major Hayden at April 26, 2016 09:19 PM

Mirantis

Protecting against cloudy targeted attacks

The post Protecting against cloudy targeted attacks appeared first on Mirantis | The Pure Play OpenStack Company.

Recently, we’ve seen a sharper focus on targeted attacks as a powerful tool of numerous cyber espionage campaigns affecting high-profile victims, including government organizations such as the White House and US State Department, which were attacked by the CozyDuke APT.

The main problem with targeted attacks is that they can be difficult (if not impossible) to detect using a standard set of security solutions such as network and host Intrusion Detection/Prevention Systems, because these systems rely on well-known attack signatures, which requires a greater number of known infections than we typically see with these particular attacks. What’s more, targeted attacks can stay undetected for long periods while an attacker is building and maintaining an espionage network. That is why they are called Advanced Persistent Threats.

Social engineering techniques and 0-day exploits are usually used to bypass intrusion detection scanners and install surveillance software.

Currently, attackers appear to have three main interests:

  1. Destroying facilities using a cyber weapon. For example, Stuxnet was used to sabotage the Iranian nuclear program by compromising SCADA servers and modifying behaviors of programmable logic controllers (PLCs) to change the rotating frequency of centrifuges for uranium enrichment, effectively taking them out of operation.
  2. Cyber espionage, mostly to set up surveillance over government and military organizations.
  3. Gaining financial profit, either by executing a cyber bank robbery (Carbanak APT), or by encrypting data using cryptolockers in order to force an organization into paying a ransom.

 

The cryptolocker attack can be loosely defined as targeted, because it may cover a rather large set of targets. Cryptolockers are propagated the same way as regular targeted attacks – through phishing emails and hijacked websites visited daily by employees (otherwise known as a “watering hole” attack).

One example is the recently discovered Linux.Encode.1, which targets Web hosting providers, encrypting the “apache2”, “nginx”, “mysql”, and “www” folders on Linux servers. Cryptolockers can be equipped with sophisticated passive and active self-protection methods. For example, the latest versions of TeslaCrypt use API call obfuscation to bypass antivirus protection and terminate the monitoring and configuration tools that malware analysts and forensics experts use to diagnose an infection.

Another interesting aspect of these attacks is that clouds can become not only targets, but actual components of the attack itself. Cloud infrastructure is used by attackers to upload stolen information and download updates for backdoors. For example, CloudAtlas used CloudMe public storage, while Minidionis/CloudLook used OneDrive. Previously, Dropbox had been used to deliver new versions of malware by the NrgBot/DorkBot botnet.

If you want to learn more on how to protect your cloud against targeted attacks, please attend our OpenStack Summit talk “Using Open Source Security Architecture to Defend against Targeted Attacks” in Austin, TX. We’ll see you there!


by Alexander Adamov at April 26, 2016 08:01 PM

OpenStack Superuser

Get an insider's look into Workday's OpenStack deployment and contributions

<iframe allowfullscreen="allowfullscreen" frameborder="0" height="" src="https://www.youtube.com/embed/LZ1eW_ujNn8" width=""></iframe>

Edgar Magana, cloud operations architect at Workday, discusses the Superuser Awards finalist’s deployment and team, as well as plans to have more than 650 physical servers running OpenStack by the end of the year.

You can read more about Workday and OpenStack in their Superuser Award finalist entry here.

by Allison Price at April 26, 2016 07:47 PM

Mirantis

OpenStack Summit, Day One: It’s all about disruption

The post OpenStack Summit, Day One: It’s all about disruption appeared first on Mirantis | The Pure Play OpenStack Company.

Like most OpenStack Summits, the Austin event opened today with a host of big-name keynote speakers. Unlike previous years, however, the talk was much less about the technology, and more about the world it inhabits and the people who affect it — and are affected by it.

First up was Donna Scott of Gartner Group, who gave what may be one of the better explanations of Bimodal IT.  In this model, Mode 1 covers applications that are mission critical and therefore must be predictable and reliable, with little change and lots of governance. Mode 2, on the other hand, covers your more innovative applications, which change frequently and don’t necessarily need to be as reliable because you’re not running your business on them.

Typically, Mode 1 applications are your traditional applications, and Mode 2 applications are those on the cloud. Her point, though, was that Mode 2 applications typically need to communicate with Mode 1 applications, and as Mode 2 apps gain greater adoption, they themselves need greater predictability and reliability, moving them into Mode 1.  At any rate, it’s a model that calls for greater use of Mode 2 strategies for more than just Agile IT.

Next, Jonathan Bryce of the OpenStack Foundation talked about how OpenStack was at the center of the disruption of IT. It’s terrifying, he said, but it can also be a huge opportunity.  The important thing is for all of these technologies, and all of these priorities, to work together, and that’s where OpenStack and its ability to handle diversity come in.  He talked about the three keys to handling diversity:

  1. Embrace different technologies;
  2. Understand that new apps still need old apps to be useful;
  3. Culture is still more important than technology (This has been a big theme today, and we’ll be talking more about it in another post).

Mirantis’ Boris Renski also talked about disruption, but from a slightly different angle. He discussed the fact that in a survey Gartner did last year, 95% of private cloud implementations had problems — but only a tiny fraction of those problems had anything to do with the technology.  The rest were all about either people or processes.  To effectively use OpenStack, you must change the culture.

Next, Sorabh Saxena, SVP of Software Development and Engineering at AT&T — which won this year’s SuperUser award — discussed the ways in which the company is using OpenStack to completely virtualize all of its services.  AT&T has seen 150,000% bandwidth usage growth in the last few years (yes, you read that right!) and without OpenStack it would have been impossible to keep up.  AT&T has stood up 74 data centers in just a few months, and its goal is to virtualize and cloud-enable 75% of its entire network architecture using a software-centric approach by 2020.

Nayaki Nayyar, the General Manager and Global Head of IoT and Innovation GTM at SAP, talked about how the company runs IoT on top of OpenStack, and how their customers care about outcomes rather than technology. She gave the example of a company that sells compressed air canisters and has changed its business model so that what they’re really selling is compressed air. A tire company is selling Mileage as a Service. Under Armour is selling fitness, rather than fitness equipment.  SAP has 23 different cloud architectures inherited from acquisitions, and is in the process of standardizing on OpenStack. More than that, however, the company’s customers are using OpenStack for their IoT projects (through SAP HANA), and Nayyar gave the example of a company that uses an OpenStack-based application to keep track of thousands of sensors in an automated factory.

Chris Wright, Chief Technologist at Red Hat, discussed OpenStack’s move to production, as companies finally start using it to solve real, business-critical problems.  He also brought out Chris Simons from Verizon to explain how the company is retooling their entire infrastructure on OpenStack.

The last keynote was Mario Müller from Volkswagen Group, who discussed how VW is using OpenStack to fundamentally change how the automotive behemoth does business. In addition to streamlining workflow and reducing costs, the company will be using OpenStack in its plan to build not just connected cars, but autonomous, self-driving cars that can get you where you want to go without human intervention. You can read more about it here.

Tomorrow, the keynotes will be more focused on the growing use of OpenStack in public clouds. If you’re in Austin, be sure to visit us in booth A41 in the OpenStack Marketplace.


by Nick Chase at April 26, 2016 04:49 PM

Red Hat Stack

OpenStack Summit Austin: Day 1

We’re live from Austin, Texas, where the 13th semi-annual OpenStack Summit is officially underway! This event has come a long way from its very first gathering six years ago, where 75 people gathered to learn about OpenStack in its infancy. That’s a sharp contrast with the 7,000+ people in attendance here, in what marks Austin’s second OpenStack Summit, returning to where it all started!

The event kicked off in the morning with Jonathan Bryce, the Executive Director of the OpenStack Foundation welcoming the crowd to the largest OpenStack Summit to date! Shortly after, Red Hat’s chief technologist, Chris Wright, gave a great keynote presentation, discussing the overall success and impact OpenStack is having on real businesses and their bottom line. Mixed in there, Chris Emmons, director of network infrastructure at Verizon joined Chris Wright on stage for a quick summary of Verizon’s own success with OpenStack for network functions virtualization. Rounding out the keynotes were the Foundation’s Super User awards, with AT&T taking the winning spot.

Getting the day started, Red Hat had a few bits of news to share as well! In a press release on Monday, Verizon made it official and announced it has completed the largest known deployment of OpenStack for network functions virtualization (NFV) purposes. The deployment covers five of Verizon’s U.S. data centers, and began in 2015 based on a core and pod architecture. In addition, NASA’s Jet Propulsion Laboratory announced it is using Red Hat OpenStack Platform to process planetary exploration data, including data from all four Mars rovers. What an amazing use case!

As you might expect, there were quite a few Red Hat-led sessions as well. To start, Darrell Jordan-Smith led a panel of large telecom providers (Verizon, Telus, and AT&T) to talk about their success with OpenStack for NFV. Shortly after, Jacob Liberman hosted an informative session with enterprise customers Paddy Power Betfair and Oak Ridge National Labs to talk about the importance of an integrated stack, compatible APIs, and their successful deployments.

Jonathan Gershater discussed the pros and cons of OpenStack consumption models. Following that, Mark McLoughlin and Alexis Monville focused on ideas that influence open-source projects. In their talk Contributing to the Success of OpenStack, they discussed principles from the Agile, DevOps, Continuous Delivery, and Lean movements.

Later, in a session with partners NetApp and Tesora, Sean Cohen discussed DBaaS Workloads with OpenStack Trove and Manila. Soon thereafter, Massimo Ferrari and Erich Morisse spoke at a well-attended session entitled “Elephant in the Room: What’s the TCO for an OpenStack Cloud.” Alexis Monville then led a session called “What Science knows about happiness that could transform OpenStack.”

Paul Belanger then teamed with partners Elizabeth Joseph, from Hewlett Packard, and Christopher Aedo, from IBM, in a discussion entitled OpenStack Infrastructure for Beginners. Following lunch, Julien Danjou, Christophe Sauthier, Stéphane Albert, Gauvain Pocentek, and Maximiliano Venesio presented in a workshop called “Hands on to configure your cloud to be able to charge your users using official OpenStack components.”

Ihar Hrachyshka then teamed with Matt Riedemann, from IBM, and Matthew Treinish, from Hewlett Packard, in an interesting session entitled “OpenStack Stable: What it actually means to maintain stable branches.” Then, in his presentation “The omniscient cloud: bare metal inspection status update,” Dmitry Tantsur explored Ironic’s role in bare metal provisioning, its pluggable architecture, and how it provides smarter processing of inspection data.

Later in the afternoon, Scott McCarty focused on IaaS and PaaS convergence. His presentation, “OpenShift and OpenStack: Delivering Applications together” explored how traditional applications could be containerized and what additional workloads should also be considered.

And finally, Ihar Hrachyshka presented with partners from Intel and SUSE, in a session called “Deep Dive into the Neutron Upgrade Story.” The presenters focused on why it’s important to understand upgrade implications.

It was a packed first day at OpenStack Summit here in Austin and we’re looking forward to a full week ahead! Remember to check back here for our daily blog recaps or follow us at #redhatcloud.

by Gordon Tillmore, Sr. Principal Product Marketing Manager at April 26, 2016 04:10 PM

Meet Red Hat OpenStack Platform 8

Last week we marked the general availability of our Red Hat OpenStack Platform 8 release, the latest version of Red Hat’s highly scalable IaaS platform based on the OpenStack community “Liberty” release. A co-engineered solution that integrates the proven foundation of Red Hat Enterprise Linux with Red Hat’s OpenStack technology to form a production-ready cloud platform, Red Hat OpenStack Platform is becoming a gold standard for large production OpenStack deployments. Hundreds of global production deployments and even more proofs-of-concept are underway in the information, telecommunications, and financial sectors, and in large enterprises in general. Red Hat OpenStack Platform also benefits from a strong ecosystem of industry leaders for transformative network functions virtualization (NFV), software-defined networking (SDN), and more.

From Community Innovation to Enterprise Production

The path to delivering a production-ready cloud platform starts in the open source communities, which can typically innovate far more effectively than traditional R&D labs. At Red Hat we bring customers, partners, and developers into communities of purpose to solve shared problems together. Red Hat also contributes a lot of code to the OpenStack project to help drive community development, which generally results in the higher feature velocity that enterprise customers need and a faster time to market compared to proprietary software. When useful OpenStack technology emerges, we test it, harden it, and make it more secure and reliable.

However, enterprise grade is not limited to testing and hardening the OpenStack platform itself; it also requires that any hardware or software vendor plugins you connect to it work properly while maintaining production stability. The Red Hat OpenStack Platform certification program makes sure that our broad ecosystem of hardware and software providers is successfully tested and verified for production use.

When we look at what it takes to support OpenStack production-ready customers globally, it is not limited to the vendor’s ability to stand behind the software code and fix critical bugs or security vulnerabilities throughout the software stack. It also requires driving innovation that corresponds to our customers’ use cases, influencing the strategy and direction of the project, and enabling partner collaboration. Listening to the needs of our customers and driving open innovation in the upstream community is one of the key benefits of Red Hat’s subscription value proposition.

The good news is that this model is not new to us at Red Hat – in fact, we’ve been following this model for nearly 20 years in open source. All of these efforts allow us to move swiftly from project to product, and to create a production-ready distribution with a certified ecosystem, enterprise lifecycle, and the world-class support that customers expect from a trusted technology partner.

Our Production Standards

OpenStack-based private clouds are rapidly becoming the standard in scalable enterprise computing, but what about production standards? Can users really go ahead and deploy in production any new OpenStack service or feature that has just landed in the latest upstream as is? Does the fact that a new API passed the upstream continuous integration gates really mean it passed the “enterprise-readiness” production bar?

Not quite. For instance, some new OpenStack features take more than one cycle to complete, so a basic API may be introduced in one release and then modified in subsequent releases until it crosses the readiness line. For example, a new API was introduced in Cinder, but without cinder-client support, so the feature cannot really be used by customers, let alone properly tested as an end-to-end feature. Some features are introduced in one OpenStack service but depend on implementation in another OpenStack service before the feature can be marked as complete (such is the case for booting an instance from an encrypted volume in Nova, which may be blocked until proper support is in place in Cinder, or the ability to attach a single volume to multiple hosts, which was introduced in Cinder in the Kilo release and is still gated by Nova supporting Cinder’s multi-attach capability).

Among the big changes that occurred in the OpenStack community during the Liberty release cycle is a shift from the integrated release model to a “Big Tent” model, bringing more and more cloud projects under the OpenStack “umbrella” and providing many different types of capabilities. That said, what about the production-level standard of new big tent projects?

Thanks to the OpenStack Foundation, we now have a new tagging system to help indicate the stability and sustainability of projects. However, when we graduate new projects into our distribution, we follow a very strict process to verify that projects and features are production-grade before we add them to our products. Our process first requires introducing projects in RDO, our community-based distribution of OpenStack. Then, once integration is complete, we ensure that each project meets our maturity criteria in areas such as security compliance, proper service high availability, and upgradability, before announcing it as a technology preview in Red Hat OpenStack Platform. This way our users can help us verify our assessment before we make these projects fully supported.

Meet Red Hat OpenStack Platform 8

The latest release, Red Hat OpenStack Platform 8, is a great example of all the effort that Red Hat puts into each release. Version 8 is packed with hundreds of new features, fixes, and functionality updates.

 

Here is a taste of some of this release’s top new features:

  • Management

    • Red Hat OpenStack Platform director now includes
      • Automated upgrades and updates
        • Red Hat OpenStack Platform 8 is the first version that supports an in-place upgrade from version 7 to version 8 and, in the future, from version 8 to version 9. The new Red Hat OpenStack Platform director release also supports automated live updates, allowing users to update to new minor releases (e.g. 8.0 → 8.1). It automatically performs the necessary system-wide updates to both the core OpenStack services and the director tool itself, helping to deliver a healthy and stable OpenStack cloud while minimizing downtime.
      • SSL support for Red Hat OpenStack Platform components deployed on nodes in your cloud.
      • IPv6 support for the “undercloud” (the deployment component of director), as well as the production “overcloud” (single stack)
      • Broader network vendor support, such as Cisco N1KV, Nexus 9K and UCSM ML2 plugins, as well as the Big Switch Networks ML2 plugin, LLDP, and bonding support.
    • Includes hybrid cloud management with Red Hat CloudForms
      • Use Red Hat CloudForms for lifecycle and operational management over OpenStack infrastructure and workloads. CloudForms can manage Linux and Windows workloads running on top of OpenStack, including lifecycle management, usage monitoring and reporting, multi-node orchestration, governance and policy-based access control, and more.
  • Network

    • Network quality of service (QoS): providing an extensible API and reference implementation for dynamically defining per-port and per-network QoS policies. This enables OpenStack tenant administrators to offer different service levels based on application needs and available bandwidth.
    • Role-based access control (RBAC) for networks: provides more fine-grained permissions for sharing networks between tenants. Historically OpenStack networks were either shared between all tenants (public) or not shared at all (private). Liberty now allows a specific set of tenants to attach instances to a given network, or even to disable tenants from creating networks – instead limiting access to pre-created networks corresponding to their assigned project(s).
    • Rapid Spanning Tree Protocol Support (IEEE 802.1D-2004), allowing faster convergence after topology changes.
  • NFV

    • Version 8 adds several new and critical technology preview (unsupported) features focused on improving network functions virtualization. With this release there is more predictable latency with real-time KVM, improved network I/O performance with the DPDK-accelerated Open vSwitch v2.4.0 release, and an OpenDaylight networking plugin for customers intending to build a software-defined network.
  • Compute

    • Improved network performance: The libvirt driver has been enhanced to enable virtio-net multiqueue for instances. With this feature on, workloads are scaled across vCPUs, allowing for increased network performance.
    • Disk QoS (Quality of Service) when using Ceph RBD (RADOS block device) storage. Among other things, sequential read or write limitation, and total allowed IOPS or bandwidth for a guest can be configured.
    • Mark host down API enhancements: supports external high-availability solutions, including pacemaker, in the event of compute node failure. This new API call provides improved instance resiliency by giving external tools a faster path to notifying OpenStack Compute of a failure and initiating evacuation.
  • Storage

    • Generic volume migration: adds the ability to migrate workloads from iSCSI to non-iSCSI storage back ends, with more drivers to perform migration including Ceph RBD.
    • Generic image cache: with this new feature, back ends are able to use cached Glance images when creating volumes from images.
    • Volume Replication API: Cinder now allows block level replication between storage back ends. This simplifies OpenStack disaster recovery by allowing administrators to enable volume replication and failover.
    • Nondisruptive backups: Allows the backup of volumes while they are still attached to instances by performing the backup from a temporary attached snapshot. This eases backups for administrators and offers a less disruptive solution to end users.
    • Red Hat Ceph Storage integration: to support OpenStack scale-out infrastructure requirements, Red Hat’s massively scalable, software-defined storage solution is now included with Red Hat OpenStack Platform, providing 64 terabytes of highly flexible object and block storage. The most popular storage solution for OpenStack clouds gives users a single, efficient platform to support the demanding storage needs of an OpenStack-based cloud.
  • Security features and Identity management

    • New Image signing and encryption: helps to protect against image tampering by providing greater integrity with signing and signature validation of bootable images.
    • Better identity management: introducing simplified web single sign-on with the new ability to specify individual identity providers (IdPs), helping to prevent “man in the middle” attacks, as well as improved SAML assertions to allow identification of unique individual users.
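
The network RBAC feature listed above changes the sharing model from all-or-nothing to per-tenant. As a rough illustration of those semantics (a toy model, not Neutron’s actual implementation or API), the access decision can be sketched in a few lines:

```python
# Toy model of Liberty's network RBAC semantics: a network is
# attachable if it is public (shared), owned by the tenant, or
# explicitly shared with that tenant via an access_as_shared policy.

def can_attach(network, tenant, rbac_policies):
    if network["shared"]:           # legacy "public" network: everyone
        return True
    if network["owner"] == tenant:  # the tenant's own "private" network
        return True
    # Otherwise, look for an explicit per-tenant (or wildcard) grant.
    return any(p["object_id"] == network["id"]
               and p["action"] == "access_as_shared"
               and p["target_tenant"] in (tenant, "*")
               for p in rbac_policies)

net = {"id": "net-1", "owner": "alice", "shared": False}
policies = [{"object_id": "net-1", "action": "access_as_shared",
             "target_tenant": "bob"}]
```

Here `bob` can attach instances to `alice`’s network because of the explicit policy, while a third tenant cannot; that is the middle ground between fully public and fully private networks that the release adds.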

And of course, this is only a sampling of the key features added to version 8. Be sure to read the press release, visit the web page for product details, or check out the release notes for more information.

by Sean Cohen at April 26, 2016 02:48 PM

Mirantis

Volkswagen: Self-driving cars and the reinvention of IT infrastructure

The post Volkswagen: Self-driving cars and the reinvention of IT infrastructure appeared first on Mirantis | The Pure Play OpenStack Company.



Two weeks ago Volkswagen announced that it would be standardizing its IT infrastructure around OpenStack. Today at the OpenStack Summit in Austin Mario Müller, corporate director of IT operational services and infrastructure technologies at Volkswagen Group, gave more details of both the “what” and the “why” of this initiative. According to Müller, it’s about cost savings, agility, and self-driving cars.

According to Gigaom, any time you’re spending more than $50,000/month on public cloud, you should think about your own private cloud — even if it is evident that “private cloud would require more work,” as Müller told TechCrunch earlier this month when the deal was publicly announced.

VW’s ongoing plan is an ambitious one. First, the company plans to build a hybrid cloud using the Mirantis OpenStack distribution as the next step of its IT reinvention, as shown in the figure below.

[Figure: VW’s three-step IT reinvention roadmap]

 

VW has separated the process into three steps:

Step 1: Virtualization is already done. The idea was to build a Standard Application Platform (using x86 servers).
Step 2: OpenStack as an IaaS platform has also been implemented.
Step 3: PaaS implementation on OpenStack is in progress now, to be completed in June, 2016.

The Volkswagen Group includes such world-famous auto brands as Audi, Bentley, Bugatti, Ducati, Lamborghini, MAN, Porsche, Scania, SEAT, Škoda and Volkswagen itself. As an enterprise, the Group generated €202.5 billion in sales revenue in 2014, with €11.5 billion spent on research and development. At least 600,000 employees work in more than 167 locations. In 2014, the Group increased the number of vehicles delivered to customers to 10.137 million (from 9.731 million in 2013).

That kind of volume and complexity requires a great deal of agility, Müller explained. Agility, he said, is about improvement of work streams, but it’s also about moving the infrastructure to the next level of automation. At the extreme it means user self-service for the IT infrastructure. If workloads are managed automatically and resources are provisioned on demand at the user’s request, the result is substantial IT budget savings and a marked increase in efficiency.

He also emphasized the importance of standardization of IT infrastructure. As VW has a dozen brands and more than one hundred and sixty locations, it needs unified policies to manage infrastructure and to create and manage internal and consumer-facing applications for all of its companies.

As one of the main benefits of OpenStack is accelerating the delivery of new applications and improving IT responsiveness to the business, VW expects its renovated IT infrastructure to speed up the delivery of IT to the business and, ultimately, to push down expenses by reducing spending on proprietary technology.

But VW is looking past reducing costs, and even looking past today’s vehicles.  “As the automotive industry shifts to the service economy,” said Müller in the official press release, “Volkswagen is poised for agile software innovation.” But what does it mean in particular?

Well, consider the fact that the average person spends 37,668 hours in a vehicle in their lifetime. (That’s more than four years, if you’re keeping track.) Wouldn’t it be nice, Müller asked, to simply go to sleep and wake up at your vacation home? That’s why Volkswagen is inspired by the idea of autonomous driving. Until now, the engine was the heart of the vehicle and the human being was its soul (and mind). Now, he said, the time has come to change that, via cloud-connected cars. The company already has two completely electric, Internet-connected cars (one of which is a Porsche with some truly impressive acceleration numbers), and its future plans are for cars that are completely automated, doing the driving for you.

“The best way to predict the future,” Müller quoted Alan Kay, “is to invent it.” And Volkswagen seems bound and determined to do just that.

The post Volkswagen: Self-driving cars and the reinvention of IT infrastructure appeared first on Mirantis | The Pure Play OpenStack Company.

by Ilya Stechkin at April 26, 2016 03:06 AM

OpenStack Superuser

Women of OpenStack mentoring program launches at Austin Summit

<iframe allowfullscreen="allowfullscreen" frameborder="0" height="" src="https://www.youtube.com/embed/cdYZ83vW_Bw" width=""></iframe>

Following the first speed mentoring session this morning with 150 attendees, Emily Hugenbruch, advisory software engineer at IBM, sat down with SuperuserTV to discuss the new mentoring program that was launched at the OpenStack Summit Austin.

You can find more details on how to get involved here.

by Superuser at April 26, 2016 02:58 AM

And the Superuser Award goes to...

AUSTIN, Texas -- The OpenStack Austin Summit kicked off day one by awarding the Superuser Award to AT&T.

NTT, winners of the Tokyo edition, passed the baton onstage to the crew from AT&T.

AT&T is a legacy telco which is transforming itself by adopting virtual infrastructure and a software-defined networking focus in order to compete in the market and create value for customers in the next five years and beyond. They have almost too many OpenStack accomplishments to list--read their full application here.

Sorabh Saxena gives a snapshot of AT&T's OpenStack projects during the keynote.

The OpenStack Foundation launched the Superuser Awards to recognize, support and celebrate teams of end-users and operators that use OpenStack to meaningfully improve their businesses while contributing back to the community.

Interested in nominating a team to be recognized at the next Awards ceremony in Barcelona? Stay up-to-date at http://superuser.openstack.org/awards

by Superuser at April 26, 2016 02:49 AM

Make data center diversity work for you with OpenStack

AUSTIN, Texas -- It's been a long road home but an amazing journey.

The keynote kicked off with the OpenStack Foundation’s Todd Morey and Lauren Sell talking about the early days when some 75 people met for the first Summit in Austin back in 2010. Then they welcomed a packed house of 7,500 people to the Austin convention center today.


Many of the 2,336 developers from 345 companies who contributed to Mitaka, the 13th release, were in the house. OpenStack Foundation executive director Jonathan Bryce asked them to stand for a round of applause for making the release, which focuses on manageability, scalability and user experience. "Times of disruption are the times of greatest opportunity - capture the value to accelerate your business," said Bryce, noting that OpenStack isn't about business as usual - it's part of a larger, ongoing disruption. "If you get the culture right and the process right, being big is not a disadvantage in winning the disruption," he said.


Switching gears with Bimodal IT

Donna Scott, Gartner distinguished analyst, took the stage accompanied by “Long Shot Lady,” played by local band Soul Track Mind, who kept things lively with tunes coming out of the Stackers Barn onstage.

Scott spoke of companies working in two modes, predictable (Mode 1) and exploratory (Mode 2), and the arduous task of switching between them for growth. Bimodal is correlated with digital workplace performance: executives said that by 2020, 41 percent of revenue would come from digital business. This "bimodal" way of working is often seen as a nightmare, especially for those who find themselves caught in between, and there are misconceptions, she said, about bimodal IT being bad. "You need both," she said, adding that OpenStack is critical for Mode 2. "Where you don’t have the path cut yet, those projects are born in the cloud."

She made five recommendations:
● Invest in bimodal
● Know that it means more than agile development
● Incentivize collaboration between Mode 1 and Mode 2
● Scale Mode 2 when uncertainty diminishes and you need predictability and stability
● Start your OpenStack initiative with Mode 2. Then over time, refactor and onboard Mode 1.

Judging by the audience reaction, today's turnaround was a long time coming. As late as 2013, Gartner still said that OpenStack wasn't ready for prime time.


Embracing diversity

"OpenStack is a strategy for taking advantage of diversity in IT," says Bryce. The keys to embracing diversity include picking the right platform and remembering that new apps need old apps -- data from a system of record. As we look at the trends that are driving business value across many industries--big data analytics, containerization, continuous delivery, NFV, the Internet of Things, and more--it’s clear that our industry will continue to be an exciting, diverse and challenging environment. Bryce also outlined his thoughts in a Superuser magazine editorial, pick up a copy if you're at the Summit.

Vodka bear

If you like your keynotes shaken and stirred, Mirantis has got something for you. Instead of the usual soporific videos, they showed this clip. 

<iframe allowfullscreen="allowfullscreen" frameborder="0" height="" src="https://www.youtube.com/embed/QDJVvhS1Ayo" width=""></iframe>

Then Boris Renski, co-founder and CMO at Mirantis, took the stage with a vodka-drinking bear. This is a good time to be in the cloud, he said, and not just because Gartner has come around: Amazon Web Services is a tiny portion of the full IT market waiting to be "cloudified." He distilled a previous Gartner pie chart about why OpenStack initiatives often stumble into an easy-to-grasp sound bite: "success with OpenStack is one part tech and nine parts people and process."


Renski also spoke of reconciling data center diversity - putting it in terms of getting the private cloud DevOps ninjas to play nice with the VMware sysadmin bears - which earned him a round of applause from the audience.


A rainbow of Superusers

AT&T took the stage to talk about their ground-breaking journey with OpenStack. Sorabh Saxena started with some startling statistics, including that mobile traffic grew 150,000 percent from 2007 to 2015.

He ended with a call to action and a warning, inviting everyone present to help solve issues for large-scale operators and make OpenStack the standard for private clouds. "You either lead disruption by being in the driver's seat or you're just a passenger," he said.

AT&T also took home the fourth Superuser Award, beating out teams from Dreamhost, Betfair and Workday.

A number of users also hit the stage to talk about OpenStack, including SAP, Volkswagen and Red Hat, whose Chris Wright also underlined the importance of diversity.


If you missed the keynote, you can catch the videos at https://www.openstack.org/videos. Stay tuned for more news from Austin!

Cover photo: OpenStack Foundation, Jonathan Bryce on the keynote stage.

by Superuser at April 26, 2016 12:38 AM

Openstack Security Project

Lightweight Threat Analysis Process for OpenStack


Following on from our previous posts on Threat analysis and Applying threat analysis to Anchor, we have been working on defining a lightweight process for threat analysis which can be applied to OpenStack projects. This blog post gives a first look at the draft process, the final location of which is to be decided, but it is likely to be in a wiki page, or possibly in the security docs project.

The materials are currently formatted in RST due to their location as part of the security docs project. They can be cloned with:

git clone git://git.openstack.org/openstack/security-doc
git review -d 220712

We focus on four stages of the threat analysis process:

  • Preparing artifacts for review
  • Verifying readiness for a threat analysis review
  • Running the threat analysis review
  • Follow-up from the threat analysis review

Preparing artifacts for review

  • Complete the architecture page. The architecture page describes the purpose of the service, and captures the information that is required for an effective threat analysis review. A template for the architecture page is provided here and there is guidance on diagraming here. If further help or advice is needed, please reach out to the Security Project via the openstack-dev@lists.openstack.org mailing list, tagging your email [security].
  • The architecture page should describe a best practice deployment. If a reference architecture is available this may be a good example to use, otherwise the page should describe a best practice deployment, rather than the simplest possible deployment. Where reference architectures do not exist, it is possible that the architecture drawn for the threat analysis process can be used as a reference architecture.
  • The following information is required in the architecture page for review:

    1. A brief description of the service, its purpose and intended usage.
    2. A list of components in the architecture, their purpose, any sensitive data they persist and protocols they expose.
    3. External dependencies and security assumptions made about them.
    4. An architecture block diagram.
    5. Either a sequence diagram or a data flow diagram, describing common operations of the service.
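Pulled together, the required information above might look like the following architecture-page skeleton in RST (a sketch only; the service name, component names and all contents are hypothetical placeholders, not part of the official template):

```rst
Example Service Architecture
============================

Service purpose
---------------
Brief description of the service, its purpose and intended usage.

Components
----------
- example-api: public REST endpoint; persists no sensitive data
- example-worker: processes jobs; reads credentials from the database

External dependencies
---------------------
- MySQL (assumed to be deployed on a trusted management network)

Diagrams
--------
.. image:: architecture-block-diagram.png

.. image:: create-resource-sequence-diagram.png
```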

Before the review

  • Verify that the service’s architecture page contains all the sections listed in the Architecture Page Template.
  • The architecture page should include diagrams as specified in the Architecture diagram guidance.
  • Send an email to the openstack-dev@lists.openstack.org mailing list with a [security] tag to announce the up-coming threat analysis review.
  • Prepare a threat analysis review etherpad, using this template (TBD).
  • Print the architecture page as a PDF, to be checked in along with the review notes, as evidence of what was reviewed.

Running the threat analysis review

  • Identify the “scribe” role, who will record the discussion and any findings in the etherpad.
  • Ask the project architect to briefly describe the purpose of the service, typical use cases, who will use it and how it will be deployed. Identify the data assets that might be at risk, e.g. people’s photos, cat videos, databases. Consider assets both in flight and at rest.
  • Briefly consider potential abuse cases, what might an attacker want to use this service for? Could an attacker use this service as a stepping stone to attack other services? Do not spend too long on this section, as abuse cases will come up as the architecture is discussed.
  • Ask the project architect to summarize the architecture by stepping through the architecture block diagram.

Threat Analysis: Example Architecture Diagram

While reviewing the architecture, perform the following steps:

  1. For each interface between components, consider the confidentiality, integrity and availability requirements for that interface. Is sensitive data protected effectively to prevent information disclosure (loss of confidentiality) or tampering (loss of integrity)? Is there a requirement for availability which should be documented and added to reference deployments? In addition to considering the authenticity of the data in transit, consider how the authenticity of the sending and receiving nodes is assured.
  2. Consider the protocols used to pass data between interfaces. Is this an appropriate protocol, is it a current protocol, does it have documented vulnerabilities, is the implementation in use maintained? Is this protocol used as a security control to provide confidentiality, integrity or availability?
  3. Can this interface be used as an entry point to the system, can an attacker use it to attack a potentially vulnerable service? If so, consider what additional controls should be applied to limit the exposure.
  4. If an attacker was able to compromise a given component, what would that enable them to do? Could they use it as a stepping stone to move deeper into the OpenStack cloud?
  5. How is the service administered? Is this a secure path, with appropriate authentication and authorization controls?
  • Once the reviewers are familiar with the service, re-consider abuse cases, are there any other cases which should be considered and mitigated?
  • Step through sequence or dataflow diagrams for typical use-cases. Again consider if sensitive data is appropriately protected. Where an entry point is identified, consider how risks of malicious input data can be mitigated.
  • If any potential vulnerabilities are identified, they should be discussed with the project team. If the team agrees that an issue exists, a note should be made in the findings section of the etherpad, with a short title and summary of the issue, including a note of who found it. If the project team disagrees, the note should be made under the further investigation section.

Follow-up from the threat analysis review

  • Create a separate bug for each of the security findings listed in the TA Review notes.
  • Update the Threat Analysis Review Etherpad with each of the new Launchpad bug numbers.
  • Paste the contents of the Threat Analysis Review Etherpad into a text file in security-docs/security-ta/notes and push it to the security-review repo using Gerrit.
  • Distribute the Threat Analysis Review Notes via email to all who were present at the threat analysis. If anyone discovers errors or omissions in the notes, then make corrections.
  • On the threat analysis reviews wiki page create a new row in the reviews table, include a link to the master bug, the date of the review, the PTL and reviewers.
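The etherpad-to-repo step above can be sketched as follows (the file name and note contents are hypothetical; the real final step pushes to the openstack/security-doc Gerrit project with `git review`, which is simulated here with a purely local repository):

```shell
set -e
workdir=$(mktemp -d)
# Simulate a local clone of the security-doc repository
git init -q "$workdir/security-doc"
cd "$workdir/security-doc"
git config user.name "Example Reviewer"
git config user.email "reviewer@example.org"
mkdir -p security-ta/notes
# Paste the etherpad contents into a notes file (contents hypothetical)
cat > security-ta/notes/example-service-ta-review.txt <<'EOF'
Threat Analysis Review: example service
Date: 2016-04-26
Findings: tracked as individual Launchpad bugs (one per finding)
EOF
git add security-ta/notes
git commit -qm "Add threat analysis review notes for example service"
# Real workflow: push the change for review with `git review`
```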

April 26, 2016 12:00 AM

Cloudify Engineering

Where AT&T Leads, Cisco Cannot Follow

Preface The Innovator’s Dilemma faced by vendors of proprietary networking stacks is well documented. The blogsphere and the trade media...

April 26, 2016 12:00 AM

April 25, 2016

OpenStack Superuser

What you need to know about OpenStack and container networking

<iframe allowfullscreen="allowfullscreen" frameborder="0" height="" src="https://www.youtube.com/embed/R_8ZbrCah9w" width=""></iframe>

OpenStack Kuryr project team lead (PTL), Gal Sagie, and core contributor, Toni Segura Puimedon discuss the current state of container networking with OpenStack and how it works with container communities, like Kubernetes and Mesos.

by Superuser at April 25, 2016 09:41 PM

Hugh Blemings

Lwood-20160424

Introduction

Welcome to Last week on OpenStack Dev (“Lwood”) for the week just past. For more background on Lwood, please refer here.

Basic Stats for week 18 to 24 April 2016:

  • ~752 Messages (up about 19% relative to last week)
  • ~190 Unique threads (down about 2% relative to last week)

The overall Thread count was about the same, but a couple of quite long threads upped the Message count significantly.

Notable Discussions

Winding down the kolla-mesos project ?

It looks like the kolla-mesos project faces an uncertain future.  In a post earlier in the week Steve Dake notes that the major implementers of the project “don’t intend to continue its development within the Kolla project governance” and so development is being scaled back.

Sergey Lukanov clarifies things midweek in his post – seemingly efforts are now instead being put into a Kubernetes-based approach which will be developed with the relevant upstream projects.

Even if the specific projects involved aren’t particularly on your radar, Sergey’s post and the brief discourse that follows make an interesting read and a good example (IMHO) of pragmatic decision making.  Likewise the thread following Steve’s post provides a good summary of what standard practice is nowadays for a project being wound up.

Summit Preparations aplenty

As one might expect, there were a bunch of project-specific posts flagging the whereabouts of relevant etherpads, formal and informal meetings and the like.

I’m not sure I’ve captured all of them, so I’d encourage you do your own quick search, but posts I did find included Heat, TripleO, Puppet, OpenStack-Operators and Glance.

I also note there were many cancellation notices for standing meetings – once again double check, but whatever OpenStack project meeting you’d normally attend is probably off for this coming week at least!

Party Party Party!

Michael Krotscheck notes that HPE are looking for sponsors to continue the Core Party after Austin.

As the thread goes on to discuss, the Core Party has been a somewhat contentious event, seen by many as being at odds with the inclusionary nature of the OpenStack community – as I read it, this is no reflection on the organisation sponsoring it, but rather on the party’s existence at all.

The thread goes on to note that one desirable aspect of the parties has been the opportunity it presented for a quiet conversation.  Tom Fifield helpfully points out that there will be spaces at the Tuesday night “Street Party” that will facilitate this very thing with some quiet spaces where the music isn’t so loud.

Users Managing Users

Adrian Turjak wrote an introduction to StackTask – a project that will “allow users to self manage additional users and roles on their projects without being admin, but in future will grow to handle other normally admin restricted tasks.”

It’s all Open Source software, well tested (in use in production), uses Keystone underneath so isn’t re-inventing the wheel and, usefully, is easily pluggable/extensible.

The user management piece – the authors’ primary requirement – is up and running, but they now look to be shifting their efforts toward a richer feature set. Worth a look :)

Proposal for a Massively Distributed Cloud Working Group

Adrien Lebre penned a proposal for the creation of a new working group to provide a forum to discuss massively distributed use cases (the so-called Fog/Edge Computing paradigm). The main distinction from current large-scale deployment working groups looks to be the emphasis on geographical (WAN-wide) distribution of resources and the attendant issues this introduces.

There will be a presentation or two at the Summit this week in Austin, as well as a proposed face-to-face session on Tuesday afternoon.

L3 High Availability testing at beyond desktop scale

Ann Kamyshnikova’s email announced the results of some interesting work she’s been doing on a 49-node system (3 controllers, 46 compute nodes) looking at the performance of Neutron.

The ensuing thread discussed the results and also flagged a large (hundreds of nodes) cluster that is available to all through OSIC.

Release Hiatus until 2 May

Doug Hellmann reminds us that with the majority of the release team travelling to the Summit, or about to, there will be no further releases until May 2nd unless anything dramatic crops up.

Upcoming OpenStack Events

Events wise, really “just” lots of discussion around the Austin summit :)

Don’t forget the OpenStack Foundation’s Events Page for a comprehensive list that is frequently updated.

People and Projects

Core nominations & changes

Further Reading & Miscellanea

Don’t forget these excellent sources of OpenStack news – most recent ones linked in each case

This edition of Lwood brought to you from sunny Austin, Texas, USA, surrounded by my wonderful fellow OpenStack developers and users. No tunes other than what drifted up from 6th Street one evening when I was working on it a little earlier in the week :)

by hugh at April 25, 2016 08:06 PM

OpenStack @ NetApp

NetApp Announces Red Hat OpenStack Platform 8 on FlexPod

Hi Folks,

I’m excited to share with you the publication of NetApp Technical Report (TR) 4506: Red Hat OpenStack Platform 8 on FlexPod! If you'd like to deploy the very latest version of Red Hat OpenStack Platform easily, and in an automated and highly available manner on FlexPod, this document is for you!

Red Hat OpenStack Platform 8 on FlexPod

We're also proud to contribute customized Heat templates and shell scripting code upstream to GitHub to help you deploy this solution faster and with fewer problems! These are the very same templates we've used to deploy Red Hat OpenStack Platform 8 on FlexPod in our lab!

All of the code is freely available here:

TR-4506 GitHub Code

About this document

This document represents an enterprise-grade open hybrid cloud foundation that helps you to deploy OpenStack on an enterprise-class converged infrastructure (FlexPod) built with NetApp® FAS and E-Series storage, Cisco® UCS servers and Cisco Nexus switches, and Red Hat's OpenStack Platform.

You can find the following and more in the TR:

  1. Context and Technology Overview as to why FlexPod represents the best enterprise-class converged infrastructure platform for OpenStack deployments.
  2. Detailed implementation instructions on how to redundantly configure a fresh out-of-the-box FAS8040 two-node pair and prepare it for use with:
    • Cinder volumes and Glance images utilizing NetApp technology exposed to it through the NetApp unified Cinder driver and the NetApp Clustered Data ONTAP (cDOT) Operating System
    • Manila shares through the NetApp unified Manila driver and NetApp cDOT
    • Stateless computing via iSCSI boot LUNs to eventual Cisco UCS Service Profiles
  3. Detailed implementation instructions on how to bring up and configure an E5660 dual-controller storage system and prepare it as the backbone for a highly available and resilient OpenStack Object Storage deployment.
  4. Architectural diagrams and guidance on networking segmentation through 802.1Q VLANs to be utilized in the eventual OpenStack deployment
  5. Step-by-step instructions on deploying the Red Hat OpenStack Platform director, a lifecycle management and orchestration utility (based on the upstream TripleO project) used to:
    • discover and introspect, via the OpenStack Ironic project, the physical server blades to be used for OpenStack
    • provide an extensible framework based on Heat orchestration templates (written in Yaml) used to configure and launch an OpenStack deployment
    • deploy Red Hat Enterprise Linux 7.2 on bare-metal UCS server blades via iSCSI boot from SAN -- the root disks of all of the servers in this solution are stored on the NetApp FAS8040 to enable stateless computing, rather than using local disks in the servers themselves
    • install and customize all the necessary OpenStack related packages via Puppet in an automated and highly available deployment backed by Pacemaker for service-level HA
    • ensure that the newly created overcloud has Cinder volumes and Glance images backed by NFS on the NetApp FAS8040, Swift object and metadata information backed by the NetApp E5660, out of the box with no special user configuration or manual deployment required
  6. Post Deployment Instructions after the OpenStack deployment is finished:
    • Deploy the OpenStack Manila (File-share-as-a-service) project in a highly available, automated fashion with the NetApp Manila driver in the resulting overcloud
    • Add additional DM-multipath paths via iSCSI to the resulting overcloud deployment for additional high availability and redundancy

Comprehensive post-deployment validation instructions are provided to show the reader common operations in the resulting overcloud to do the following, all through the Horizon dashboard:

  • Create a tenant
  • Upload operating system images to the Image service (Glance)
  • Create flavors
  • Create a project and a user
  • Create a tenant network and router
  • Set a gateway for the tenant router, and create a Floating IP network
  • Create and boot a persistent instance (VM) from volume
  • Associate a floating IP address with this newly created instance
  • Verify both inbound and outbound network connectivity to the instance
  • Provision a file share via the File Share service (Manila) and have the instance mount and write to the share
  • Upload objects and verify access to them via the Object Storage service (Swift)

Finally, we also take advantage of the OpenStack Rally project (Benchmark-as-a-Service) to demonstrate the advantages and value proposition of using NetApp storage and NetApp software (our unified Cinder driver for OpenStack) together as a highly performant, space-efficient storage foundation for your OpenStack cloud.

Purpose-built appliances for OpenStack like the FlexPod converged infrastructure platform contain best-of-breed physical infrastructure and software integrations in OpenStack that help you go further and faster in rolling out OpenStack for production deployments versus traditional do-it-yourself deployments.

OpenStack Summit Austin

Austin Summit

If you’re going to be at the OpenStack Summit this week in Texas, here's a comprehensive listing of the sessions that NetApp and SolidFire are speaking at.

Also feel free to stop by the NetApp & SolidFire booth (#A38) and see me and my colleagues while you’re there at the Summit.

I hope this reference architecture and associated solution documentation is useful for you. Please feel free to leave comments in the section below!

For more info

Other FlexPod solution collateral

  • RHEL-OSP6 FlexPod CVD: Design Guide -- Design elements of deploying Red Hat OpenStack Platform 6 (based on OpenStack Juno) on FlexPod. This previously released document includes design elements integral to the overall solution, as well as why the technologies chosen in the architectural design represent the best choices for a deployment.

  • RHEL-OSP6 FlexPod CVD: Deployment Guide -- A detailed, step-by-step implementation guide for deploying Red Hat OpenStack Platform 6 on FlexPod. Also included is a subset of the Design Guide, in order to provide necessary context for the Deployment Guide.

To learn more about building a business-critical cloud for your organization, contact your NetApp, Cisco, or Red Hat sales representative or visit the following links:

April 25, 2016 03:55 PM

Mirantis

Mirantis Training and the OpenStack Foundation Certification

The post Mirantis Training and the OpenStack Foundation Certification appeared first on Mirantis | The Pure Play OpenStack Company.

During the most recent OpenStack Summit in Tokyo, the OpenStack Foundation announced that it would be offering its first certification exam, Certified OpenStack Administrator (COA), in 2016. Since then, Mirantis has been working closely with the Foundation and the OpenStack community to develop the COA certification exam. We contributed training and certification expertise and played an integral part in the Item Writing Committee and the Job Task Analysis Committee. And today, after much anticipation, the COA certification exam is finally available.

Mirantis is proud of its contributions to the OpenStack Foundation’s Certified OpenStack Administrator (COA) exam, and we also realize that its debut raises new questions around Mirantis’ OpenStack certification offerings. In this blog, we’ll outline a few key areas that we think will matter most to current and prospective OpenStack students.

                           Foundation Certification            Mirantis Certification
Maturity                   Since Q2 2016                       Since Q4 2013
Professionals certified    N/A                                 3,000+
Vendor agnostic            Yes                                 Yes (both exams)
Development                OpenStack community contributors    Mirantis product development and training curriculum team
Certification support      N/A                                 Dedicated Mirantis training team
Exams available            COA                                 Associate Level MCA100; Professional Level MCA200
Format                     Online, hands-on                    MCA100: online, written test; MCA200: classroom & online, hands-on
Proctored                  Yes                                 MCA100: no; MCA200: yes

Mirantis offers two certifications: Associate Level MCA100 and Professional Level MCA200. The MCA100 exam is a written test offered online, which allows test takers to save money on travel costs while scheduling the exam at their convenience. The MCA200 is a hands-on exam that is offered in person at our OpenStack bootcamps and now also available online, as a proctored exam. The difficulty level of both exams is high and passing them truly proves in-depth OpenStack knowledge.

Although its actual exam questions are different and its credibility is not yet proven, the COA has many similarities to Mirantis certifications. Both the COA and Mirantis’ certifications are vendor agnostic, testing knowledge of vanilla OpenStack, not a particular vendor distribution. This means that Mirantis Training bootcamps will continue to be an excellent investment for IT professionals around the globe looking to advance their skills in OpenStack. It is the highest-rated training for OpenStack, providing students with a vendor-agnostic, detailed understanding of all of the steps needed to deploy and operate OpenStack.

Both exams will also be updated as OpenStack technology matures – Mirantis with the largest vendor pool of OpenStack engineering talent in the world, and the COA by OpenStack’s excellent community contributors (some of whom work at Mirantis). And like Mirantis’ MCA200 exam, the new COA certification exam is closely monitored by proctors via a webcam and remote screen viewing, maintaining the security and integrity of the exam.

Whether you choose to become certified through COA, Mirantis, or another vendor is entirely up to you. What we can promise is that Mirantis courses and certifications are well known in the industry as a standard for proving one’s in-depth knowledge of OpenStack. Over the past three years, 3,000 professionals have been certified for OpenStack by Mirantis, and Fortune 500 companies like Ericsson, AT&T, and GCI have trusted our OpenStack certification to measure and prove their OpenStack engineers’ professional competency. We even have a popular certification portal to help connect Mirantis-certified professionals and employers looking for verified OpenStack professionals.

To learn more about Mirantis bootcamps and certification exams for OpenStack, visit training.mirantis.com. We hope to see you in one of our classrooms soon.

The post Mirantis Training and the OpenStack Foundation Certification appeared first on Mirantis | The Pure Play OpenStack Company.

by Lana Zhudina at April 25, 2016 02:15 PM

Alessandro Pilotti

Hyper-V Failover Cluster in OpenStack

We’re very pleased to announce that starting with the Mitaka release we have introduced a new OpenStack Nova Compute Driver, the Hyper-V Failover Cluster Driver, offering host-based fault tolerance to tenants. The main advantage is that applications running in virtual machines that are not themselves highly available can rely on the cluster for fault tolerance, a very common scenario in the enterprise IT world.

This feature is based on Windows Server Failover Clustering (WSFC) and is very useful when you need services that are always available, even if the Hyper-V compute hosts go offline due to planned maintenance or unplanned hardware or software issues. Thanks to this feature, the virtual machines are fault-tolerant and get automatically migrated to other nodes in the same cluster, where they continue to work unaffected.

All of this is now integrated with OpenStack, with the Hyper-V Compute Driver seamlessly notifying Nova about any failovers in the cluster, ensuring that instance status is properly up-to-date.

Windows Clustering is fully supported and automatically configured in our OpenStack Hyper-V hyper-converged product offer.

In the remaining part of this post, we’re showcasing the Failover Clustering feature in a quick demo that will work on any Mitaka OpenStack deployment. All you need is a minimum of two Hyper-V nodes (joined to an Active Directory domain) to add to your deployment.

Firstly, let’s boot some instances on the Hyper-V Cluster.

nova boot --image cirros_vhd --flavor m1.small test1
nova boot --image cirros_vhd --flavor m1.small test2
nova boot --image cirros_vhd --flavor m1.small test3

We can see the 3 instances with nova list:

nova_list

And the same 3 instances in the Failover Cluster Manager:

cluster_manager1

Now, we’re going to forcefully restart the HV12R202 Hyper-V host server where two of the three instances are hosted. Because this is a planned restart, the instances hosted on this node will live migrate to other Hyper-V nodes in the cluster without any downtime. This is the so-called “node drain” feature. If instead the Hyper-V server undergoes an unplanned shutdown, the instances will failover to other nodes.
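Under the hood, a node drain corresponds to suspending the cluster node, and it can also be triggered explicitly without a reboot. Here is a sketch, assuming the FailoverClusters PowerShell module is available and using the node name from this demo:

```powershell
# Drain all clustered VMs off HV12R202 (live migrating them to other
# nodes) and wait for the drain to finish before returning.
Suspend-ClusterNode -Name HV12R202 -Drain -Wait

# After maintenance is complete, bring the node back into the cluster.
Resume-ClusterNode -Name HV12R202
```

In our demo we simply reboot the node instead, which achieves the same drain behavior: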

PS > shutdown -r -t 0

The instances have been live migrated to the other nodes in the cluster as we can see in the Cluster Manager shown in the following snapshot:

cluster_manager2
A snippet from the Hyper-V Nova Compute logs, showing the failover events as they occur:

2016-04-22 07:50:08.865 4108 INFO hyperv.nova.cluster.clusterops [-] Checking instance failover instance-00000001 to HV12R201 from host HV12R202.
2016-04-22 07:50:09.009 4108 INFO nova.compute.manager [-] [instance: 2991095f-9fc0-4a64-9b35-ef5637e1757a] VM Started (Lifecycle Event)
2016-04-22 07:50:09.586 4108 INFO nova.compute.manager [req-f9e2d6cd-8ad1-42d8-82e1-5e98bd052596 - - - - -] [instance: 2991095f-9fc0-4a64-9b35-ef5637e1757a] During the sync_power process the instance has moved from host HV12R202 to host HV12R201
2016-04-22 07:50:10.141 4108 INFO hyperv.nova.cluster.clusterops [-] Instance instance-00000001  failover to HV12R201.
2016-04-22 07:50:11.142 4108 WARNING oslo.service.loopingcall [-] Function 'hyperv.nova.cluster.clusterops._looper' run outlasted interval by 0.28 sec
2016-04-22 07:50:11.150 4108 INFO hyperv.nova.cluster.clusterops [-] Checking instance failover instance-00000003 to HV12R201 from host HV12R202.
2016-04-22 07:50:12.130 4108 INFO hyperv.nova.cluster.clusterops [-] Instance instance-00000003  failover to HV12R201.

The changes are reflected in Nova, and the instances’ locations have been updated.

instance1_migrated instance2_migrated
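The new locations can also be checked from the command line. A sketch using the nova client follows; note that the OS-EXT-SRV-ATTR:host field is only visible with admin credentials, and the instance names are the ones booted earlier in this demo:

```shell
# Show which Hyper-V host each instance now runs on (admin-only attribute).
nova show test1 | grep "OS-EXT-SRV-ATTR:host"
nova show test3 | grep "OS-EXT-SRV-ATTR:host"
```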

Of course, you can try out this feature yourself. We’re going to explain the setup in the following part.

Requirements

  • OpenStack Mitaka deployment
  • Active Directory
  • At least 2 Hyper-V compute nodes in the same Active Directory, running Windows Hyper-V / Server 2012 R2 or newer, including Nano Server
  • Install the OpenStack Hyper-V Compute Driver on the Hyper-V nodes
  • Optionally, you can add Scale-Out File Server storage nodes, also joined to the Active Directory domain and part of the same cluster.

Setup

  1. Run the following PowerShell script on a server in the Active Directory domain. It configures the Failover Clustering service on each Hyper-V compute node, along with an SMB share where the instances’ disks and configuration files will be hosted, since these must be accessible from every node. The shared storage used by Hyper-V can be based on either SMB or CSV; for this basic demo we’ll just use a simple SMB share. Take a look at our OpenStack Hyper-V hyper-converged offer for a full Scale-Out File Server storage configuration.
    ########################
    # Windows Server 2012 R2
    # setting up server for share
    
    # please setup your own configurations here.
    $hyperv_nodes = @("HYPERV_NODE_01.DOMAIN", "HYPERV_NODE_02.DOMAIN")
    $cluster_name = "CLUSTER_NAME"
    $cluster_user = "DOMAIN\USER"
    $smb_path = "S:\SMB_SHARE"
    $smb_name = "SMB_SHARE"
    
    # install locally
    Install-WindowsFeature File-Services, FS-FileServer
    
    # this will install the features necessary, including the Failover Clustering feature.
    Invoke-Command -ComputerName $hyperv_nodes {Install-WindowsFeature File-Services, FS-FileServer, Failover-Clustering}
    
    New-Cluster -Name $cluster_name -Node $hyperv_nodes
    
    # Create share folder
    MD "$smb_path"
    # Create file share
    New-SmbShare -Name $smb_name -Path $smb_path -FullAccess $cluster_user
    # Set NTFS permissions from the file share permissions
    Set-SmbPathAcl -ShareName $smb_name
  2. On each Hyper-V compute node, we must configure the nova-compute service to use the Hyper-V Cluster Driver along with a shared instances configuration path. Update these configuration options with your own values and add them to their appropriate sections:
    # add extra configs in nova.conf
    [DEFAULT]
    compute_driver=hyperv.nova.cluster.driver.HyperVClusterDriver
    instances_path=\\SHARE_HOST_IP\SHARE_PATH\Instances
  3. (Optional) Install the OpenStack Mitaka Cinder Storage driver, enabling SMB3 block storage volumes on clustered storage with Windows Server Scale-Out File Server enabled.
  4. Done! Now you can create and use your OpenStack Hyper-V instances as usual, with the additional peace of mind provided by the Hyper-V Clustering feature working behind the scenes.
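One note on step 2: after editing nova.conf, the compute service must be restarted on each node so that the cluster driver and shared instances path take effect. The service name below is an assumption; it may differ depending on how the Hyper-V compute driver was installed.

```powershell
# Restart the Nova compute service so the new compute_driver
# and instances_path settings are picked up.
Restart-Service nova-compute
```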

We hope you enjoy this feature and if you have any questions or suggestions, please let us know!

The post Hyper-V Failover Cluster in OpenStack appeared first on Cloudbase Solutions.

by Claudiu Belu at April 25, 2016 01:00 PM

The Official Rackspace Blog

Three Key Takeaways from the OpenStack User Survey

With the OpenStack Summit kicking off today in Austin (back where it all began six years ago), it’s the perfect time to assess where the open source project Rackspace helped co-found stands today.

And the perfect way to do that is by sifting through the bi-annual OpenStack User Survey, commissioned by the non-profit OpenStack Foundation, which provides a snapshot of the state of the project through the eyes of our most important constituents, the users of OpenStack.

The latest survey captured responses from 1,600 community members in 1,100 organizations. It shows the progress that’s been made, as well as areas that still need work. But overall, the signs are extremely encouraging, as OpenStack continues to become the platform of choice for building clouds. You can download the full survey here.

As a long-time member of the community and part of the senior leadership team at Rackspace, I want to offer my perspective on three key takeaways from the survey, provide insight into what is ahead for OpenStack and discuss what remains to be accomplished.

The Power of Open Source and Open APIs

In the survey, nearly all respondents — 97 percent — said “standardiz(ing) on the same open platform and APIs that power a global network of public and private clouds” was one of their top five considerations.

OpenStack survey

This validates the decision Rackspace made with NASA six years ago to open source a cloud platform to the world. We always envisioned a global network of open clouds that would deliver the benefits of elastic self-service infrastructure to the masses. And now we’re beginning to see this global network become a reality, as the number of OpenStack deployments grows and users realize the benefit of a standardized open platform and open APIs.

This standardization is critical, because it is the driver that enables users to meet the other top business drivers highlighted in the survey, like avoiding vendor lock-in, accelerating innovation, increasing operational efficiency and saving on infrastructure costs.

Rackspace continues to be at the forefront of this effort, by maintaining the largest OpenStack public cloud in the world, helping the community continue building on the open platform and open APIs, and making it possible for any organization to consume their own OpenStack private cloud.

Most recently, we announced our OpenStack Everywhere offering, which makes it possible for customers to consume their private cloud from any data center in the world.

Containers Are the New Building Blocks for Applications

It’s impossible these days in the IT world to not hear the buzz around containers, and it’s no different in the OpenStack community. As in the previous survey, containers topped the list of emerging technologies of most interest to users.

Containers

It seems clear the IT community is starting to converge on containers as the building block for new applications, while users are asking, “What is the place of containers in an OpenStack world?”

Early on, Rackspace recognized the importance of containers and anticipated customer demand. We started an internal initiative to support containers in our public cloud, which spawned the Magnum project, making containers and container orchestration engines first class resources in OpenStack. That internal initiative also gave birth to Carina, our Containers as a Service offering, which is currently in beta.

As the adoption of containers continues to grow in public and private clouds, Rackspace will be able to provide our customers with a common experience as they deploy and manage containers alongside their virtual machines and bare-metal servers.

Managing OpenStack is Still Complex

The latest survey shows a decline in the percentage of users who hold a negative opinion of OpenStack, and that is encouraging. However, those negative opinions have shifted to neutral rather than positive, which means OpenStack still has a way to go in order to win more hearts and minds. The survey uses a Net Promoter Score to measure user satisfaction with OpenStack, based on answers to the question, “How likely are you to recommend OpenStack to a friend or colleague?”

NPS OpenStack

Why are the majority of opinions about OpenStack negative or neutral?

A primary reason is that, despite progress, OpenStack is still difficult to deploy and operate. Rackspace has a great deal of experience in this area and we continue to work with the community to apply what we’ve learned. Rackspace also makes it easy for users to consume a public or private OpenStack cloud by taking on the burden of operations, so users can focus on using the cloud and not on untangling the complexities of operating a cloud. We deliver Fanatical Support through our various OpenStack-as-a-Service offerings.

The OpenStack User Survey is a good barometer for where the project stands today — and the future looks very bright. We are like proud parents watching our child blossom despite some growing pains. Rackspace is as committed to an open cloud platform and open APIs as we were six years ago, and we’re looking forward to leading the project forward into the future.

The post Three Key Takeaways from the OpenStack User Survey appeared first on The Official Rackspace Blog.

by John Engates at April 25, 2016 11:42 AM

Opensource.com

Kicking off the Summit, and more OpenStack news

Catch up on the latest OpenStack happenings in this special summit edition of our weekly OpenStack news.

by Jason Baker at April 25, 2016 06:59 AM

April 24, 2016

hastexo

HX201 is live! OpenStack training doesn't get better than this

Today, we released HX201 Cloud Fundamentals for OpenStack, our brand new self-paced OpenStack course with fully interactive labs, all running on an open-source platform.

In content, this course is quite similar to our instructor-led HX101 course. It tells you everything you need to know to operate an OpenStack compute cloud, and is an excellent preparation for the Certified OpenStack Administrator (COA) exam administered by the OpenStack Foundation. You'll learn about Keystone, Glance, Neutron, Nova, and Cinder, and learn how all of OpenStack's components fit together.

In addition, you will learn exactly how to deploy an OpenStack cloud, starting from nothing but a handful of Ubuntu boxes. You'll bootstrap one of your nodes (which we call the deploy node) as a Juju master, and then point your additional nodes to that master node. From there, you will rapidly advance to the point of having your own OpenStack environment deployed from scratch.

HX201 Screenshot: Juju GUI output A realistic immersive lab environment, spread over 5 machines

That means you can get started with OpenStack right now: straight away, with zero delay. No need to wait for lab hardware to arrive, no need to set up virtual appliances, nothing. Just log in and go, using nothing but a web browser.

Learn at your own pace

Few things are as frustrating as being locked into a rigid time schedule when learning. We want to learn at our own pace. Some of us learn best when we're able to concentrate on a single topic for a full day. Others prefer to learn perhaps an hour or two at a time. With HX201, not a problem! If you want to take a break, there's nothing you need to do, not even click a button. As soon as you're idle for some time, your environment will suspend, and resume once you come back — allowing you to continue exactly where you left off.

HX 201 Screenshot: Learning about the Nova command line client First you learn...

HX 201 Screenshot: Running nova boot ... then you try it out ...

HX 201 Screenshot: Horizon overview ... and verify your results.

Learn together, and share with others!

As you would expect from any learning management system worth its salt, hastexo Academy comes with a built-in discussion board and wiki, so for every course you are taking, you have the opportunity to interact and exchange ideas with your fellow learners — and, of course, with our hastexo instructors.

Get started!

The price for this course is € 699, but here's an awesome kicker. From May 1 – May 15, we are offering a neat little bundle: when purchased together with an OpenStack Foundation Certified OpenStack Administrator exam voucher for € 299, the course comes at a flat discounted rate of € 400. Yep, that means you'll get the course and exam together for what is normally the price of the course alone.

We're always available for you if you have questions. Drop us a line!

by florian at April 24, 2016 12:00 AM

April 23, 2016

Aptira

OpenStack Days India – Sponsorship & Speaking Opportunities

OpenStack Days India

Held in India from July 8-9, OpenStack Days India is set to be the region’s largest, and India’s best, conference focusing on Open Source cloud technology. Gathering users, vendors and solution providers, OpenStack Days India is an industry event to showcase the latest technologies and share real-world experiences of the next wave of IT virtualisation.

Hosted by Aptira and the OpenStack Foundation, the conference has a range of sessions on the broader cloud and Software Defined Infrastructure ecosystem including OpenStack, containers, PaaS and automation. The conference also features keynote presentations from industry leading figures, workshops and a networking event for a less formal opportunity to engage with the community.

Speakers include the OpenStack Foundation’s Executive Director, Jonathan Bryce, and our very own CTO, Kavit Munshi. We are looking for additional speakers and sponsors – if you’d like to be involved, please visit http://openstackdays.in/. Note that sponsorships close on the 5th of June, and speaker submissions are due by the 10th of June.

The post OpenStack Days India – Sponsorship & Speaking Opportunities appeared first on Aptira OpenStack Services in Australia Asia Europe.

by Jessica Field at April 23, 2016 05:07 AM

OpenStack Superuser

Breaking down the Austin Summit for OpenStack operators

Operators, welcome to Austin!

I'm delighted to describe some of the exciting activities that you can participate in this week. That's right - the whole week has something for you :)

We start on Monday with the flagship event: the Ops Summit.

The Ops Summit is a day of collaborative working sessions for people who are operating OpenStack clouds (and the contributors who want to hear from them). The purpose is to share knowledge and best practices among cloud operators, as well as to provide feedback based on experience running OpenStack. There are no formal presentations, and a moderate level of knowledge around running OpenStack is assumed.

We'll be discussing what went well in Liberty and Mitaka (and upgrades), working out what should go in a new version of the OpenStack Operations Guide, and getting deep into running containers on OpenStack and OpenStack on containers.

The working groups, like the Large Deployments Team, OSOps, Ops Tags Team and the Scientific Working Group will be having their working sessions, as well as an important meeting for non-ATC recognition.

As usual, there'll be the lightning talks aimed at sharing architectures and interesting deployment tips.

Beyond Monday, there are a number of Fishbowl and Working sessions in which to participate. Here's a quick list:
Tuesday
●Scientific WG

Wednesday
●User Committee
●Large Deployment Team
●Ansible
●Puppet
●Designate: Talk to the Devs
●Horizon: Operator / Plugin Feedback
●Swift: Ops Feedback

Thursday
●Ansible
●Puppet
●Chef
●Neutron: User feedback track: health checking and troubleshooting
●Neutron: User feedback track: end user and operator pain points

After the successful impromptu informal meetup in Tokyo, we will have a room reserved on Friday for unstructured discussions from 9 a.m. to 5:30 p.m. Turn up to Salon E and see what happens.

Make sure to stick around for the Summit feedback and Community Contributor Awards session at lunch on Friday!

by Tom Fifield at April 23, 2016 01:41 AM

April 22, 2016

Mirantis

OpenStack Community App Catalog Review: Issue #4

The post OpenStack Community App Catalog Review: Issue #4 appeared first on Mirantis | The Pure Play OpenStack Company.

Welcome back to the Community App Catalog Digest! Today we’ll try to help you with your summit planning.

Summit planning

Interested in building the Application Ecosystem within OpenStack? Please join us Thursday, April 28 for three Design Summit slots. We are starting at 3:10 PM (local time) in room MR 408 with the project Overview, Status and Plans.

Then we are moving to Boardroom 403 for the next two working sessions that start at 4:10 PM and 5:00 PM (local time) respectively. For the working session agendas click here. Basically, we’ll be talking about integration with Murano and Magnum, and about some additional asset types, such as TOSCA.

You can also find the list of all related sessions here.

Story of the month: TOSCA

While we’re on the subject of TOSCA, Sahdev Zala from IBM has added the ability to use TOSCA assets to the App Catalog. The project started in January with defining the metadata for TOSCA assets. Then in March, the “hello world” template was added. Finally, a couple of days ago, on April 20, the CSAR example and important assets for WordPress and the ELK stack were added. So TOSCA is the story of the month, thanks to Sahdev Zala.

Sahdev Zala has been working for IBM for 12 years. He is a co-founder and architect of two projects: OpenStack Heat-Translator and OpenStack TOSCA-Parser. We are glad to have such an active contributor on board!

Further reading: Partners and the Mirantis Unlocked validation program — Part 1: Plugins

Last time we mentioned that Mirantis has an app Validation Program that’s easy to participate in. Now we are able to tell you more about this initiative — including why it’s important, and why vendors are interested in it.

The validation program for hardware vendors such as Dell, SuperMicro, Quanta, and so on helps those vendors to show their clients that their hardware is tested and works with MOS (Mirantis OpenStack). We offer validated hardware configurations, which include a bill of materials (BoM) to show the recommended hardware unit (or model) for a particular role (such as a compute node, controller node, or storage node).

At the infrastructure level (for example, the integration of EMC VNX or other appliances, or ScaleIO to build SDS) we offer OpenStack Driver validation for manual integration and Fuel Plugin validation for automated deployment.

Finally, at the application level, we offer application validation for those who create products that run on OpenStack VMs. This program is most interesting to partners who create products to run in virtual environments, such as Virtual Network Functions (VNFs). These products are organised as Murano packages or Glance images, and we strongly recommend that you publish these solutions in the Community App Catalog to ease the validation process, and strengthen the catalog. Complementary applications such as Zabbix or Talligent, which use the OpenStack API, can also be validated as part of this program.

To read the full story click here.

Thank you for staying with us and have a safe trip to Austin!

The post OpenStack Community App Catalog Review: Issue #4 appeared first on Mirantis | The Pure Play OpenStack Company.

by Ilya Stechkin at April 22, 2016 10:05 PM

Partners and the Mirantis Unlocked validation program — Part 1: Plugins

The post Partners and the Mirantis Unlocked validation program — Part 1: Plugins appeared first on Mirantis | The Pure Play OpenStack Company.

Business partnerships are so often about common marketing activities such as co-branding, cross-promotion and the publication of joint press releases, but Mirantis is first and foremost a technology company, and very serious about the technical and engineering solutions we produce. That means that the Mirantis Unlocked Partnership Program is about creating common technical solutions.

Of course, because we’re involved in so many different areas of OpenStack, we’ve found that these solutions covered a lot of ground. In all, we have identified three levels of integration with OpenStack: the hardware level, the infrastructure level and the application level — and each needs its own validation program. In this series of posts, we’re going to discuss those different programs and what they mean for both vendors and end users.

Three types of OpenStack validation

The validation program for hardware vendors such as Dell, SuperMicro, Quanta, and so on helps those vendors to show their clients that their hardware is tested and works with MOS (Mirantis OpenStack). We offer validated hardware configurations, which include a bill of materials (BoM) to show the recommended hardware unit (or model) for a particular role (such as a compute node, controller node, or storage node).

At the infrastructure level (for example, for Cinder this can be an integration with EMC VNX or other appliances, or with SDS (Software Defined Storage) like ScaleIO), we offer OpenStack Driver validation for manual integration and Fuel Plugin validation for automated deployment.

Finally, at the application level, we offer application validation for those who create products that run on OpenStack VMs. This program is most interesting to partners who create products to run in virtual environments, such as Virtual Network Functions (VNFs). These products are organised as Murano packages or Glance images, and we strongly recommend that you publish these solutions in the Community App Catalog to ease the validation process. Complementary applications such as Zabbix or Talligent, which use the OpenStack API, can also be validated as part of this program.

Now let’s discuss, in detail, the benefits of each program, starting with OpenStack plugin validation.

OpenStack plugin validation

We started with OpenStack plugin validation in 2014, because at that time a huge number of vendors were interested in OpenStack at the infrastructure level. They have software, hardware or hybrid solutions under OpenStack’s governance, such as storage that serves as a backend for the Cinder OpenStack component. For example, NetApp (and SolidFire, as part of NetApp) needs a Cinder driver, Juniper Contrail needs a Neutron driver (or a plugin), and so on. But there is still a question: why are vendors interested in validation?

Imagine you are an end user. You might be thinking, “I’m going to build a cloud, but I don’t know what storage backend I should choose. Should I choose a hardware appliance or an SDS (software-defined storage)? Or both?” So who do you ask?

Well, your cloud provider would be a good choice, especially if they are vendor-neutral. For example, Mirantis will advise you based on your particular use case and the data we’ve obtained from the various validation programs.

Now of course, if you’re a vendor and you have a number of tested solutions for different use cases, you have a better chance of being the one recommended by the cloud provider. So if you are a vendor, the validation program is your way to win that client.

What is the OpenStack plugin validation process?

As we mentioned before, this validation is designed for partners integrating with MOS at the infrastructure level.

The prerequisite for this process is an OpenStack driver or plugin that has been adopted by the OpenStack community. DriverLog is the source of truth for plugins and drivers for the different OpenStack components.

Ok, a plugin is developed, upstreamed and published in DriverLog. What’s next?

The exact process varies according to the actual situation, of course. But in general the integration between the partner’s technology and MOS is done manually in the lab.

Let’s say we have a partner with an SDS solution (for example, SolidFire). They would like to demonstrate that MOS can work with their product instead of the standard storage back end that Fuel deploys.

The first step is to design a joint RA (Reference Architecture). The Mirantis partner team will work with the partner to identify target use cases and target user personas, and help the partner define the solution they will validate.

Before moving to the next step, the partner should make sure they have all the necessary hardware, configured properly. The Mirantis Unlocked program can provide a partner with a lab if they don’t have their own (here is how).

The next step is to download the latest version of MOS from here and deploy a new environment with Fuel (we recommend an HA (High Availability) configuration, as this is what Mirantis customers have in production).

The next step is to manually replace one solution with the other. In the case of our SDS partner, they might need to manually replace LVM as the Cinder storage back end with their product. This means they need to install their product, connect it to the MOS environment and reconfigure it, so that Cinder will use this SDS instead of the default back end (in most cases this means the partner needs to install a Cinder driver and change some Cinder config files). With that, the most difficult part is done.

As soon as everything is installed, the partner runs tests (using Tempest and Rally, or any other test tool they have), and then prepares a runbook: a complete step-by-step manual that enables any MOS user to get the same results. This document should be tied to a particular Mirantis OpenStack release.
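As a rough illustration of such a test run, a Rally task could be launched against the reconfigured Cinder back end. This is only a sketch: the task file name is hypothetical, and exact invocations depend on the tool versions in use.

```shell
# Run a Rally benchmark task against the environment and export a report.
rally task start cinder-volumes-task.json
rally task report --out validation-report.html
```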

Experience shows us that the lifetime of each release is no longer than two years, but we ask our partners to keep their solutions relevant for each new release. That means it’s preferred that you test your solution every six months, and you must renew the validation once a year.

Next time, we’ll talk about the Fuel plugin validation program.

The post Partners and the Mirantis Unlocked validation program — Part 1: Plugins appeared first on Mirantis | The Pure Play OpenStack Company.

by Evgeniya Shumakher at April 22, 2016 09:51 PM

Your Passport to OpenStack Success: Play #OSUnlocked

The post Your Passport to OpenStack Success: Play #OSUnlocked appeared first on Mirantis | The Pure Play OpenStack Company.

To celebrate five years of OpenStack releases and our community, we’ve designed stickers for each release of OpenStack—from Austin all the way through Ocata. At the OpenStack Summit in Austin, where it all began, we have plenty of stickers—and some choice swag— to give away.

The #OSUnlocked game works like this: earn points, collect swag. To start, visit the Mirantis booth (A41) to pick up a passport and your first sticker (and point). Use the passport as a guide.

For each Mirantis partner you visit, you can collect a sticker. Each partner has a unique sticker; supplies are limited. For each sticker, you’ll get a point, and you can unlock extra points via social media challenges. Redeem your points at the Mirantis booth.

Collect 6 points, earn a hat

Collect 10 points, earn a RUN MOS t-shirt (redeem on Wednesday)

Collect 16 points, and choose from:

  • An “OpenStack in Action” book
  • A high quality flask
  • AND you will be entered to win a free OpenStack certification by Mirantis Training!

Visit these booths to collect unique OpenStack release stickers:

Arista Networks will have a technical team at the summit to speak with customers about deploying OpenStack with its cloud networking platforms, for automated provisioning and configuration of network elements using its ML2/Neutron integrations.
Supermicro® will be showcasing its portfolio of OpenStack Cloud Solutions, including live demonstrations of their Unlocked Cloud Appliance and Scalable Reference Architecture based on the Mirantis OpenStack Platform.
Cumulus Networks will be talking about #NetDevOps, which enables cloud admins, network engineers, and system administrators to speak the same language using standard DevOps tools like Ansible, Puppet, Chef, and Git. They will also show how to use Layer 3 networking across the entire rack, all the way to the host, eliminating VLANs and IP address bookkeeping in an OpenStack solution.
Quanta QCT is showcasing the QCT QxStack Mirantis Unlocked Appliance for Cloud Native Applications. Learn about the architecture design of the appliance.
FusionStorm will be showing how to simplify your OpenStack deployment with an OpenStack Appliance from Dell, Juniper, Supermicro, Quanta, Arista, and others.
Juniper will show a short video, “Bring your own OpenStack with Contrail.”
NetApp and SolidFire are now one company, and they’re talking about how CI/CD workflows, containers, and enterprise applications are all accelerated with NetApp technology.
EMC’s focus is SDS/hardware on a highly available platform; it will be showing how it enables an OpenStack-based SDDC by providing integrated solutions for IaaS & PaaS.
Talligent will be showing its billing and capacity management solution for OpenStack, including, for the first time, a new feature that compares AWS costs alongside OpenStack private cloud costs.
During the Summit, PLUMgrid will cover production deployments, micro-segmentation, Docker container networking, group-based policy for microservices, SDN monitoring, LBaaS with F5 Networks, virtual CPE, and policy-based service insertion such as port mirroring. PLUMgrid will demonstrate its latest portfolio, Open Networking Suite 5.0 and CloudApex™, and offer free consulting engineering sessions at its booth.
Big Switch Networks will be talking about next generation networking with Open SDN Fabrics. Visit Big Switch Networks to learn more about the latest in OpenStack, VMware private clouds and NFV deployments at Verizon and others.
Intel is talking about the OpenStack Innovation Center (OSIC). Go to osic.org/contest for your chance to win a cool mini-rack in the Bounty for Big Ideas developer contest.
MidoNet is the leading open-source network virtualization overlay, integrated with Mirantis Fuel. Visit the Midokura booth to see a demo of MEM Insights, its suite of operational tools for network management, including flow tracing, flow history, usage reports, and traffic counters.
Stop by Tesora’s booth to learn more about the Tesora DBaaS Platform— its enterprise-hardened version of OpenStack Trove adds the scalability, security, and availability capabilities that enterprises and cloud service providers need. Tesora is the leading contributor to the OpenStack Trove project.
Groundwork is showing off a fully functional copy of GroundWork Monitor installed on an industrial fanless PC created by Logic Supply. This copy of GroundWork is fully supported for one year from the date of the show, and will monitor anything, including Mirantis OpenStack deployments of up to 50 instances.

To get extra points via social media, use the hashtag #OSUnlocked on Twitter or Instagram

  • Post a photo of a Mirantis Unlocked Appliance
  • Post a selfie with someone from Mirantis Training (also tag with #OpenStackTraining)
  • Post a photo of AT&T or Volkswagen presenting
  • Post a photo of a Mirantis sticker in the wild
  • Post a photo of your passport/stickers
  • Wildcard (TBA by @MirantisIT or Mirantis Unlocked Partners on Twitter)

Keep an eye on #OSUnlocked on Twitter to see what’s happening in the game!

The post Your Passport to OpenStack Success: Play #OSUnlocked appeared first on Mirantis | The Pure Play OpenStack Company.

by Jodi Smith at April 22, 2016 09:07 PM

OpenStack Superuser

Top four user talks at the OpenStack Austin Summit

In just a few days, stackers from around the world will meet up at the Austin Summit. There are a record 26 Summit tracks plus hundreds of talks, workshops and happenings, including some great case study sessions from the likes of AT&T, Workday and the National Science Foundation.

If your head is already spinning, we’ve got you covered. Superuser is coming to Austin with a special print edition that highlights some of the key themes outside those tracks (plus lots of fun content about the Summit and the city).


Can’t make it to Austin? No worries. Bookmark Superuser (or check out the OpenStack Foundation YouTube page) to watch the videos of Summit sessions.

Here are our picks for some of the user talks that you shouldn’t miss.

British Tax Authority HMRC's OpenStack Journey

The British Tax Authority (HMRC) recently introduced an OpenStack cloud as the second hosting provider for its Digital Tax Platform. This open source platform-as-a-service (PaaS) project has revolutionized the government's ability to interact with British taxpayers and enabled a rate of change unprecedented in government. The rapid introduction of DataCentered's OpenStack cloud has allowed the Tax Platform to become multi-active, spanning both VMware and OpenStack clouds and multiple suppliers. Tim Britten, product owner for web operations on OpenStack, will tell how his organization was able to spin up a second active side, providing geographic, supplier and technological resilience ahead of the tax return peak, which took in over £343 million (about USD $487 million) in 24 hours.

Tuesday, April 26 • 4:40 p.m.-- 5:20 p.m. • Austin Convention Center - Level 4 - MR 18 A/B

Open Source NFV: Lessons learned from end users

Many of the world’s largest communications companies and enterprise network operators—including Orange, Docomo, AT&T and China Mobile—are embracing NFV via open source projects like OPNFV and OpenStack. Come hear leading telecoms discuss emerging NFV use cases, proof-of-concept and testing and what results they’re achieving.
Wednesday, April 27 • 5:20 p.m. - 6:00 p.m. • Austin Convention Center - Level 4 - Ballroom D

How Burton Snowboards is carving down the OpenStack trail

In this session, Mario Blandini, SwiftStack VP of marketing, and Jim Merritt, senior systems administrator, will share how Burton discovered OpenStack object storage as a solution and the criteria that went into choosing to deploy it for both immediate and future needs. It will also cover how Burton has utilized its private cloud since the initial deployment. Like Burton, you may also have applications running in your environment that already support object storage APIs like Swift and S3.
Thursday, April 28 • 1:30 p.m. - 2:10 p.m. • Austin Convention Center - Level 1 - Ballroom C

Tanks in the clouds

G-Core, a global IT solutions provider, moved its G-Core Innovations IT infrastructure (DCs) to OpenStack from VMware. With Wargaming as a key client, this solution will support a multi-million user base, including “World of Tanks” and “World of Warships.” The case study describes the basic architectural solutions of the project.
Thursday, April 28 • 2:20 p.m. - 3:00 p.m. • Austin Convention Center - Level 4 - Ballroom G

Cover Photo by StartTheDay // CC BY NC

by Superuser at April 22, 2016 05:03 PM

Solinea

Deploying Kubernetes with Ansible and Terraform


Let’s talk Kubernetes. I’ve recently had some clients that have been interested in running Docker containers in a production environment and, after some research and requirement gathering, we came to the conclusion that the functionality that they wanted was not easily provided with only the Docker suite of tools. These are things like guaranteeing a number of replicas running at all times, easily creating endpoints and load balancers for the replicas created, and enabling more complex deployment methodologies like blue/green or rolling updates.

As it turns out, all of this stuff is included to some extent or another with Kubernetes, and we were able to recommend that they explore this option to see how it works out for them. Of course, recommending is the easy part, while implementation is decidedly more complex. The desire for the proof of concept was to enable multi-cloud deployments of Kubernetes, while also remaining within their pre-chosen set of tools like Amazon AWS, OpenStack, CentOS, Ansible, etc. To accomplish this, we were able to create a Kubernetes deployment using HashiCorp's Terraform, Ansible, OpenStack, and Amazon. This post will talk a bit about how to roll your own cluster by adapting what I've seen.

Why Would I Want to do This?

This is totally a valid question. And the answer here is that you don't... if you can help it. There are easier and more fully featured ways to deploy Kubernetes if you have free rein in choosing your tools. As a recommendation, I would say that using Google Container Engine is by far the most supported and pain-free way to get started with Kubernetes. Following that, I would recommend using Amazon AWS with CoreOS as your operating system. Again, lots of people using these tools means that bugs and gotchas are well documented and easier to deal with. It should also be noted that there are OpenStack built-ins to create Kubernetes clusters, such as Magnum. Again, if you're a one-cloud shop, this is likely easier than rolling your own.

Alas, here we are and we’ll search for a way to get it done!

What Pieces are in Play?

For the purposes of this walkthrough, there will be four pieces that you’ll need to understand:

  • OpenStack – An infrastructure as a service cloud platform. I’ll be using this in lieu of Amazon.
  • Terraform – Terraform allows for automated creation of servers, external IPs, etc. across a multitude of cloud environments. This was a key choice to allow for a seamless transition to creating resources in both Amazon and OpenStack.
  • Ansible – Ansible is a configuration management platform that automates things like package installation and config file setup. We will use a set of Ansible playbooks called KubeSpray Kargo to setup Kubernetes.
  • Kubernetes – And finally we get to K8s! All of the tools above will come together to give us a fully functioning cluster.

Clone KubeSpray’s Kargo

First we’ll want to pull down the Ansible playbooks we want to use.

  • If you’ve never installed Ansible, it’s quite easy on a Mac with brew install ansible. Other instructions can be found here.

  • Ensure git is also installed with brew install git.

  • Create a directory for all of your deployment files and change into that directory. I called mine ‘terra-spray’.

  • Issue git clone git@github.com:kubespray/kargo.git. A new directory called kargo will be created with the playbooks:

Spencers-MBP:terra-spray spencer$ ls -lah
total 104
drwxr-xr-x  13 spencer  staff   442B Apr  6 12:48 .
drwxr-xr-x  12 spencer  staff   408B Apr  5 16:45 ..
drwxr-xr-x  15 spencer  staff   510B Apr  5 16:55 kargo

  • Note that there are a plethora of different options available with Kargo. I highly recommend spending some time reading up on the project and the different playbooks out there in order to deploy the specific cluster type you may need.

Create Terraform Templates

We want to create two Terraform templates: the first will create our OpenStack infrastructure, while the second will create an Ansible inventory file for Kargo to use. Additionally, we will create a variable file where we can populate our desired OpenStack variables as needed. The Terraform syntax can look a bit daunting at first, but it starts to make sense as we look at it more and see it in action.

  • Create all files with touch 00-create-k8s-nodes.tf 01-create-inv.tf terraform.tfvars. The .tf and .tfvars extensions are Terraform-specific.

  • In the variables file, terraform.tfvars, populate with the following information and update the variables to reflect your OpenStack installation:

node-count="2"
internal-ip-pool="private"
floating-ip-pool="public"
image-name="Ubuntu-14.04.2-LTS"
image-flavor="m1.small"
security-groups="default,k8s-cluster"
key-pair="spencer-key"

  • Now we want to create our Kubernetes master and nodes using the variables described above. Open 00-create-k8s-nodes.tf and add the following:
##Setup needed variables
variable "node-count" {}
variable "internal-ip-pool" {}
variable "floating-ip-pool" {}
variable "image-name" {}
variable "image-flavor" {}
variable "security-groups" {}
variable "key-pair" {}

##Create a single master node and floating IP
resource "openstack_compute_floatingip_v2" "master-ip" {
  pool = "${var.floating-ip-pool}"
}

resource "openstack_compute_instance_v2" "k8s-master" {
  name = "k8s-master"
  image_name = "${var.image-name}"
  flavor_name = "${var.image-flavor}"
  key_pair = "${var.key-pair}"
  security_groups = ["${split(",", var.security-groups)}"]
  network {
    name = "${var.internal-ip-pool}"
  }
  floating_ip = "${openstack_compute_floatingip_v2.master-ip.address}"
}

##Create desired number of k8s nodes and floating IPs
resource "openstack_compute_floatingip_v2" "node-ip" {
  pool = "${var.floating-ip-pool}"
  count = "${var.node-count}"
}

resource "openstack_compute_instance_v2" "k8s-node" {
  count = "${var.node-count}"
  name = "k8s-node-${count.index}"
  image_name = "${var.image-name}"
  flavor_name = "${var.image-flavor}"
  key_pair = "${var.key-pair}"
  security_groups = ["${split(",", var.security-groups)}"]
  network {
    name = "${var.internal-ip-pool}"
  }
  floating_ip = "${element(openstack_compute_floatingip_v2.node-ip.*.address, count.index)}"
}

  • Now, with what we have here, our infrastructure is provisioned on OpenStack. However, we want to feed the information about our infrastructure into the Kargo playbooks as the Ansible inventory. Add the following to 01-create-inv.tf:
resource "null_resource" "ansible-provision" {
  depends_on = ["openstack_compute_instance_v2.k8s-master","openstack_compute_instance_v2.k8s-node"]
  
  ##Create Masters Inventory
  provisioner "local-exec" {
    command =  "echo \"[kube-master]\n${openstack_compute_instance_v2.k8s-master.name} ansible_ssh_host=${openstack_compute_floatingip_v2.master-ip.address}\" > kargo/inventory/inventory"
  }

  ##Create ETCD Inventory
  provisioner "local-exec" {
    command =  "echo \"\n[etcd]\n${openstack_compute_instance_v2.k8s-master.name} ansible_ssh_host=${openstack_compute_floatingip_v2.master-ip.address}\" >> kargo/inventory/inventory"
  }

  ##Create Nodes Inventory
  provisioner "local-exec" {
    command =  "echo \"\n[kube-node]\" >> kargo/inventory/inventory"
  }
  provisioner "local-exec" {
    command =  "echo \"${join("\n",formatlist("%s ansible_ssh_host=%s", openstack_compute_instance_v2.k8s-node.*.name, openstack_compute_floatingip_v2.node-ip.*.address))}\" >> kargo/inventory/inventory"
  }
  provisioner "local-exec" {
    command =  "echo \"\n[k8s-cluster:children]\nkube-node\nkube-master\" >> kargo/inventory/inventory"
  }
}

This template certainly looks a little confusing, but what is happening is that Terraform is taking the information for the created Kubernetes masters and nodes and outputting the hostnames and IP addresses into the Ansible inventory format at a local path of ./kargo/inventory/inventory. A sample output looks like:

[kube-master]
k8s-master ansible_ssh_host=xxx.xxx.xxx.xxx

[etcd]
k8s-master ansible_ssh_host=xxx.xxx.xxx.xxx

[kube-node]
k8s-node-0 ansible_ssh_host=xxx.xxx.xxx.xxx
k8s-node-1 ansible_ssh_host=xxx.xxx.xxx.xxx

[k8s-cluster:children]
kube-node
kube-master

Setup OpenStack

You may have noticed in the Terraform section that we attached a k8s-cluster security group in our variables file. You will need to set this security group up to allow for the necessary ports used by Kubernetes. Follow this list and enter them into Horizon.
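Rather than clicking through Horizon, the same rules can be scripted with the openstack CLI. This is a rough sketch: the port list below (etcd, API server, kubelet, NodePort range) is my assumption about what a Kargo-deployed cluster needs, so verify it against the list referenced above. The helper only prints the commands, letting you review them before piping to a shell:

```shell
#!/bin/bash
# Sketch: script the k8s-cluster security group instead of using Horizon.
# The ports here are assumptions based on common Kubernetes defaults;
# verify them against the list referenced above before applying.

# Print the rule-creation commands for review (no API calls made here).
k8s_sg_rules() {
  local sg="$1"
  local port
  for port in 2379:2380 6443 8080 10250 30000:32767; do
    echo "openstack security group rule create --protocol tcp --dst-port ${port} ${sg}"
  done
}

# To apply for real, after sourcing your OpenStack credentials:
#   openstack security group create k8s-cluster
#   k8s_sg_rules k8s-cluster | sh
```

Keeping the rule list in a function makes it easy to diff against whatever port list your Kubernetes version recommends.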

Let ‘Er Rip!

Now that Terraform is set up, we should be able to launch our cluster and have it provisioned using the Kargo playbooks we checked out. But first, one small bash script to ensure things run in the proper order.

  • Create a file called cluster-up.sh and open it for editing. Paste the following:
#!/bin/bash

##Create infrastructure and inventory file
echo "Creating infrastructure"
terraform apply

##Run Ansible playbooks
echo "Quick sleep while instances spin up"
sleep 120
echo "Ansible provisioning"
ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook \
-i kargo/inventory/inventory -u ubuntu -b kargo/cluster.yml

You’ll notice I included a two-minute sleep to cover the window when the nodes created by Terraform aren’t yet ready for an SSH session as Ansible starts reaching out to them. Finally, update the -u flag in the ansible-playbook command to a user that has SSH access to the OpenStack instances you created. I used ubuntu because that’s the default SSH user for Ubuntu cloud images.
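If the fixed sleep feels fragile, an alternative (again a sketch, assuming the inventory format shown earlier and a local `nc` binary) is to poll port 22 on every host and start Ansible only once all nodes answer:

```shell
#!/bin/bash
# Sketch: poll SSH readiness instead of a fixed two-minute sleep.
# Assumes the Kargo inventory format generated above and `nc` installed locally.

# Pull every unique ansible_ssh_host value out of the inventory file.
hosts_from_inventory() {
  awk -F'ansible_ssh_host=' 'NF > 1 {print $2}' "$1" | sort -u
}

# Block until port 22 answers on every host in the inventory.
wait_for_ssh() {
  local host
  for host in $(hosts_from_inventory "$1"); do
    until nc -z -w 5 "$host" 22; do
      echo "Waiting for ${host}:22 ..."
      sleep 5
    done
  done
}

# In cluster-up.sh, replace the sleep with:
#   wait_for_ssh kargo/inventory/inventory
```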

  • Source your OpenStack credentials file with source /path/to/credfile.sh

  • Launch the cluster with ./cluster-up.sh. The Ansible deployment will take quite a bit of time as the necessary packages are downloaded and setup.

  • Assuming all goes as planned, SSH into your Kubernetes master and issue kubectl get nodes:

ubuntu@k8s-master:~$ kubectl get nodes
NAME         STATUS    AGE
k8s-node-0   Ready     1m
k8s-node-1   Ready     1m

The post Deploying Kubernetes with Ansible and Terraform appeared first on Solinea.

by Spencer Smith at April 22, 2016 04:56 PM

OpenStack Superuser

OpenStack Summit veterans share their best tips for newbies

How will you transform your first OpenStack Summit into an accelerator for your career and your projects in development? And how will you make it happen without getting lost among thousands of other community members?

Here are some great tips from OpenStack Foundation senior marketing manager Heidi Joy Tretheway and three Summit veterans: Shilla Saebi, an OpenStack operations engineer at Comcast, Niki Acosta, OpenStack evangelist at Cisco, and Matt Fischer, principal engineer at Time Warner Cable.

If there’s just one thing every newbie should do at their first Summit, it’s…

Acosta: Meet people! It’s easy—and often more comfortable—to just hang out with the people you work with, but push yourself to go meet new people.

Tretheway: I make meeting people a game so that I don’t just meet people who seem easy to approach. I set goals such as meeting everyone wearing a messenger bag, or everyone wearing cool shoes. My opening is as simple as, “Hi, I wanted to come over here and introduce myself because I’m new to OpenStack.”

Fischer: Take the time to thank someone who fixed a bug for you or, better yet, buy them a beer. You cannot overstate the value of having a beer with someone you’ve only previously met online. I cannot emphasize this enough.

How can a newbie make the most out of their Summit experience for their own professional development and career?

Saebi: While at the conference, look for takeaways that you can share with your team when you return. Turn them into something actionable. Use social media to connect with others and bring business cards for networking.

Fischer: Say hi to the speakers after they talk. I did this at my first Summit when I heard a talk on using LDAP with Keystone and now I talk to those guys all the time. Business cards are especially helpful here, because speakers might have 10 people waiting. Say, “Hey, thanks for the talk. I’d love to email you about _____ if that’s OK.”

Tretheway: Don’t just listen for how someone can help you—listen for how you can help them. Offer to introduce them to someone you know, to email them a helpful link, or to grab them a soda when you go to get one. People respond to warmth and generosity in amazing ways.

Any warnings, blunders or suggestions on what not to do?

Fischer: Don’t make assumptions about people’s roles at work or in the OpenStack community based on how they look or dress. Ask!

Acosta: Pace yourself. With activities and events all day and all night, it’s not uncommon to want to sleep through the important stuff.

This article originally appeared Superuser's print edition distributed at the Tokyo Summit. The interviewees organized a panel at the Summit that you can catch on YouTube.

Cover Photo // CC BY NC

<iframe allowfullscreen="allowfullscreen" frameborder="0" height="" src="https://www.youtube.com/embed/I34ce_6vl5s" width=""></iframe>

by Heidi Joy Tretheway at April 22, 2016 04:52 PM

How to get the most from your first Design Summit

Is this your first OpenStack Design Summit?  Unlike most conferences, you're invited to participate and play an active role. But...where to start?

On Tuesday, right after the keynotes there's a session for that. Come to Design Summit 101 at 11:15 a.m. at the Hilton Austin - MR 400. It's an introduction to the format followed by a lively presentation of the most common situations and how to behave when facing them--a mini experience of the Design Summit. There are various details you should know, including who will be attending, different tracks and goals, which sessions/talks are the most suitable for beginners and how you can participate. This is also your chance to get answers and understand how to get the most out of it!

If you can't make it, or want to jump into the Ops portion of the Summit on Monday, here are some tips to get you started.

Event goals

The design summit is where the Open Design process happens.

It's a collection of tracks where upstream developers gather to discuss requirements for the next release of OpenStack, debate the implementation details and connect with other community members about cross-project issues.

What it is definitely not is a classic conference track with speakers and presentations! If you're just starting out, the beginner presentations at the conference will be much more user-friendly :)

Event structure

The agenda is collaboratively reviewed and then scheduled by the project technical leads (PTLs) for each project.

The main project team tracks are on Wednesday and Thursday, with "cross-project" sessions, which affect multiple projects, on Tuesday. Feedback from cloud operators is an integral part of the design process, so on Monday the ops summit takes the same approach of collaborative discussions.

On Friday there are contributor meetups, where team members from OpenStack projects--from Astara all the way to Trove--sit down together and get things done. Though it might be tempting to go and pester a project team since they're all in one place, these sessions are very important for accelerating the development of the next release, so we ask that the teams be able to work in relative peace from "my cloud is broken" questions :)


Session types

There are two main types of sessions: Fishbowl sessions and Work sessions, each with their own standard format and protocol.

Fishbowl sessions, with chairs arranged in a circle to enable discussion, are where the big debates happen and broader feedback is gathered. They start with a brief recap of the session topic and a definition of the objective--the best participants in these sessions come prepared with knowledge of the topic. Notes are taken collaboratively on an Etherpad (see logistics), and you can volunteer to take the minutes.

One of the key points to note about this room layout is that the people in the front row are the most active in the discussion: if you have something to add, move to the front.

The session lead is expected to keep the discussions focused and ensure the meeting is inclusive, but not to answer all of the questions. When only a few minutes are left, they will work to get concrete actions out of the discussion (e.g., X will do Z) and record them in the Etherpad.

Work sessions are for team members to make concrete progress on a particular topic, and not for newcomers to ask questions. They are in smaller rooms, for contributors (including newcomers) to make fast/efficient progress to the projects.

Logistics

As noted before, it's very rare to see presentations at the Design Summit. Instead of the usual slides, you'll see an Etherpad, a text document powered by our favorite collaborative text editor. Anyone can edit the Etherpad, and take notes simultaneously - simply open the link in your browser. Each person writing on the document is assigned a color.

If the discussion is difficult to break into, or you don't feel comfortable speaking in public, putting your comments directly on the Etherpad is a great way to make sure you are still heard.

A list of Etherpads can be found at: https://wiki.openstack.org/wiki/Design_Summit/Newton/Etherpads

Rooms at the Design Summit come equipped with microphones, to be used as needed. However, note that there's an important balance between being able to hear the discussion and the speed of the discussion - passing a microphone around breaks up discussion and takes time. Design Summit sessions are for active participation rather than watching, so whether these are used will depend heavily on the team--some groups like it, some groups hate it.

Tips & Tricks

Preparation is what is going to let you get the most out of the session. At the very least, read the Etherpad for the session beforehand.

H/T to Thierry Carrez, director of engineering at the OpenStack Foundation, chair of the Technical Committee and release manager for OpenStack.

Cover Photo Shari Mahrdt // CC BY NC

by Tom Fifield at April 22, 2016 04:51 PM

OpenStack Magnum on the CERN production cloud

European particle physics laboratory CERN is on a mission to push the boundaries of knowledge in physics. Tim Bell, CERN infrastructure manager and OpenStack board member, recently brought to light another kind of innovation the research lab is fronting.

“My first bays [are] running on CERN production cloud with Magnum,” he reported on OpenStack Successes.

The goal of using the OpenStack API service was to provide “agnostic container orchestration” for its 2,000 users as they crunch numbers in the quest to understand the mysteries of the universe. Researchers at four particle detectors (ATLAS, CMS, LHCb, ALICE) generate a mind-blowing 30 petabytes of data annually in the Large Hadron Collider (LHC).

CERN and OpenStack go way back. CERN’s IT department started working on OpenStack in late 2011 and has been running it in production since summer 2013, moving most of its IT infrastructure into virtual machines on OpenStack in 2014. To stay on the cusp of new technologies, Bell and his team have been working to enable Magnum for container-native applications since late 2015.

So what are the takeaways from CERN about containers for more pedestrian OpenStack users? It turns out that even the experts at CERN find it challenging to parse the various options (Kubernetes, Docker Swarm, Mesos, plus Atomic, CoreOS and Rocket) to recommend the right ones for their users.

Superuser talked to Bell about how his team explores new technologies, the difficulty level for adopting Magnum and how operators can get involved in OpenStack.

<iframe allowfullscreen="allowfullscreen" frameborder="0" height="" src="https://www.google.com/maps/embed?pb=!1m0!3m2!1sen!2sus!4v1460325157367!6m8!1m7!1sCfXFtgDH_XTnKkVrad0hSw!2m2!1d46.23254140005882!2d6.04599690437945!3f305.5167913063668!4f3.4592712661082317!5f0.4000000000000002" style="border:0" width=""></iframe>

A peek inside CERN's data center.

Can you sum up your experience deploying Magnum?

With a cloud the size of the CERN installation, we use Puppet to ensure the configurations of thousands of servers are consistent and can be changed regularly with new configuration options or software updates. The CERN approach is to use upstream Puppet modules where available from the Puppet-OpenStack project. In the case of Magnum, we worked with the Puppet community to create the puppet-magnum configuration now available at https://github.com/openstack/puppet-magnum/. With the Puppet configuration, we can easily deploy at scale, as with the other 11 Puppet components we use in the CERN cloud. We also worked with the RDO distribution and the RPM Packaging project to deploy the code itself.

Did you find documentation?

When working with a new OpenStack project, there are often parts of the documentation that need enhancing to cover installation and configuration. The existing developer documentation at http://docs.openstack.org/developer/magnum/ and assistance on the #openstack-containers IRC channel were a great help, but given that our focus is on production deployment, some work was needed to understand the options applicable to our cloud. Given our experience, we are now working on the installation guide to help others deploy more easily.

What’s the difficulty level?

It is important to match the skills of the organization to the demands of each OpenStack component. The project navigator can help to assess each OpenStack project and determine whether it is compatible with the skills in the team.

We had Heat already available as part of the CERN cloud service catalog but for Magnum, we have added Neutron and Barbican to our cloud to handle the networking and secrets. We expect both of these components to be used in future for other cloud activities so sharing existing OpenStack projects is a benefit for the service. Many of these components are relatively newly packaged but the support from the community has been good when we encountered problems.

Technically, the largest challenges have been around understanding how containers would be used rather than deploying Magnum. With many options, such as Kubernetes, Docker Swarm and Mesos, along with Atomic, CoreOS and Rocket, it takes some time to understand the best approaches to recommend to the CERN users. Using Keystone’s endpoint filtering, we can expose a pilot service for specific projects only and use this to validate the approach and improve documentation before making it generally available.


How long did it take?

With over 2,000 users of the CERN self-service cloud, our community is often exploring new technologies to solve the extreme computing challenges of the Large Hadron Collider and other CERN experiments. Several of our advanced users (see http://tiborsimko.org/docker-on-cern-openstack.html) had been exploring Docker on VMs in 2015, so these made good pilot testers. The EU Indigo Dataclouds project (https://www.indigo-datacloud.eu/) was interested in developing an open source data and computing platform targeted at scientific communities, deployable on multiple hardware platforms and provisioned over hybrid, private or public, e-infrastructures. Rackspace, collaborating with CERN openlab, has been exploring how to use containers for high throughput computing, building on our previous Rackspace collaboration on OpenStack cloud federation. For the production cloud, we started to look at Magnum in the second half of 2015 and have now started work with pilot users in the first half of 2016. We would expect to go into production for general users during 2016.

Any other things operators should be aware of?

As with CERN’s previous Rackspace collaborations, many EU projects and the OpenStack community, the CERN team works using the upstream, open design processes. This allows easy sharing with other High Energy Physics laboratories using OpenStack such as IN2P3 in Lyon, SLAC in California, DESY in Germany and INFN in Italy. This code can be developed and available to other user communities within the open source umbrella. Operator meetups such as the recent one in Manchester, UK, allow the operator community to share experiences on deploying the new components and give input to decisions on when and what applications to consider a container deployment within their organization.

You can learn more about how CERN is deploying Magnum in this recent talk from CERN’s Bertrand Noel, Ricardo Brito Da Rocha and Mathieu Velten on containers and orchestration in the CERN cloud and hear from the team in sessions at the upcoming Austin Summit.

Cover Photo of an automated magnetic tape vault at the CERN computer center. Credit: Claudia Marcelloni and Maximilien Brice, copyright CERN.

by Nicole Martinelli at April 22, 2016 04:50 PM

Tesora Corp

Short Stack: 10 Reasons for Enterprises to Adopt OpenStack, Red Hat’s New OpenStack and Cloud Platform Suite Products, and 53 New Things to Look for in OpenStack Mitaka

Welcome to the Short Stack, our regular feature where we search for the most intriguing OpenStack news. These links may come from traditional publications or company blogs, but if it’s about OpenStack, we’ll find the best ones.

Here are our latest links:

10 Reasons for Enterprises to Adopt OpenStack | eWeek

Darryl Taft explored ten reasons why OpenStack is so appealing to enterprises. Because OpenStack is the dominant platform for the private cloud, Taft stated that it is subsequently becoming the standard for cloud platforms in the enterprise. As it continues to gain adoption across multiple industries, OpenStack’s global presence is undeniable.

Red Hat Continues Cloud Transformation with new OpenStack and Cloud Platform Suite Products | TechCrunch

This week, Red Hat announced the release of OpenStack Platform 8 and the Red Hat Cloud Suite. The cloud suite offers an integrated package with cloud, DevOps and container tools in a single solution with the kind of management layer expected in such a suite. The suite combines Red Hat OpenStack with Red Hat OpenShift, its container environment and CloudForms for overall management as well as the ability to add self-service in a private cloud setting.

User Driven: The Relentless March of Open Source into the Data Center | OpenStack Superuser

This past week, Mesosphere announced they will be releasing major components of DC/OS under the same Apache 2.0 license as OpenStack. This is good news for OpenStack users who run Mesos on OpenStack. OpenStack Foundation COO, Mark Collier, suggested that this move by Mesosphere shows that users have a stronger voice relative to vendors in their current demand for open source.

53 New Things to Look For in OpenStack Mitaka | Mirantis

Nick Chase reported his observations from last week’s Mitaka release. Chase reiterated the Mitaka release’s three general themes: an improved user experience, better manageability, and scalability. Among the most notable updates were improvements to both core and peripheral projects, which added new features in the last development cycle. With this release, there are also many new projects under the “OpenStack Big Tent”.

How to become an advanced contributor to OpenStack | Opensource.com

In an interview with Ildikó Váncsa, Jason Baker discussed her upcoming OpenStack Summit talk titled “How to Become an Advanced Contributor”. Váncsa said the biggest challenge for newcomers to open source is to understand that learning is a step by step process which requires patience with both oneself and the community. The OpenStack community provides a tremendous amount of documentation and tools to help contributors reduce obstacles and understand the less obvious best practices.

The post Short Stack: 10 Reasons for Enterprises to Adopt OpenStack, Red Hat’s New OpenStack and Cloud Platform Suite Products, and 53 New Things to Look for in OpenStack Mitaka appeared first on Tesora.

by Alex Campanelli at April 22, 2016 04:13 PM

IBM OpenTech Team

Enhancing Usability in OpenStack Mitaka

When looking over a new release, the focus is almost always on new features: the more, the better! Well, that’s not always the best way to see things. We get caught up so often in adding new stuff that we don’t notice that what’s already there could use some improvement. That’s the way I see the Mitaka release of OpenStack. Sure, we added support for some new features, but most of my time was spent making OpenStack Nova better for the deployers and maintainers of OpenStack systems.

The first area I focused on was the Nova API documentation. It was, shall we say, sorely lacking. In fact, a comment we heard was that users had to go through the code to understand what calls were available, and what exactly those calls did. So those of us on the Nova API subteam declared that Mitaka would be a documentation cycle: we would only focus on improving the docs for Nova API instead of working to add new API features. We spent months identifying the holes in the documentation, and writing content to fill those holes. One interesting wrinkle was that a good portion of the subteam writing these docs were not native English speakers, so some of the content didn’t read very clearly. I volunteered to re-word that content so it reads like native English. If you’d like to see the improvement made to the docs during Mitaka, compare the state of the docs last October to the current version of the docs.

The other main area of improvement was in the cleanup of the Nova configuration options. You can really tell that OpenStack is the product of several different teams working for different companies, each of which has its own use case for deployment, by the sheer number of configuration options that exist: there were over 500 at the beginning of Mitaka! Any time a new feature was added that could be handled differently in different situations, the solution was: add another config option! For the most part these were necessary, to allow operators to tweak their deployment to their needs, but all of these degrees of freedom came at the cost of increasing complexity and making OpenStack that much more difficult to deploy and manage. In the code, too, it was a sort of Wild West approach, with options declared all over the code base, and often with only a cryptic description that may have made sense only to the author. There was little or no documentation about the effect of an option, how it might interact with other options, what the allowed values or defaults were, etc. So when Markus Zoeller of IBM proposed a project to clean this up, I jumped on board. Together we came up with a strategy of centralizing all of the options in a new nova/conf/ directory, along with a format for the help text for each option. To illustrate, let’s look at one option: scheduler_weight_classes. Here’s all the information you got for that in Liberty and earlier:

cfg.ListOpt('scheduler_weight_classes',
            default=['nova.scheduler.weights.all_weighers'],
            help='Which weight class names to use for weighing hosts')

That option was declared in the nova/scheduler/host_manager.py file, which, while not completely obscure, still means that you would have to search the code to find it. Now that option, along with the other scheduler-related options, is declared in nova/conf/scheduler.py, which is a bit more obvious. The new declaration also gives a lot more information on what this option does:

cfg.ListOpt("scheduler_weight_classes",
        default=["nova.scheduler.weights.all_weighers"],
        help="""
This is a list of weigher class names. Only hosts which pass the filters are
weighed. The weight for any host starts at 0, and the weighers order these
hosts by adding to or subtracting from the weight assigned by the previous
weigher. Weights may become negative.

An instance will be scheduled to one of the N most-weighted hosts, where N is
'scheduler_host_subset_size'.

By default, this is set to all weighers that are included with Nova. If you
wish to change this, replace this with a list of strings, where each element is
the path to a weigher.

This option is only used by the FilterScheduler and its subclasses; if you use
a different scheduler, this option has no effect.

* Services that use this:

    ``nova-scheduler``

* Related options:

    None
""")

Imagine you were an operator responsible for deploying OpenStack: which version would you prefer?
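The new help text is precise enough to sketch the weighing scheme it describes. Here is an illustrative Python sketch of that scheme, not Nova's actual scheduler code; the weigher functions and host data are hypothetical:

```python
import random

# Illustrative sketch of the weighing scheme from the help text above:
# every host starts at weight 0, each weigher adds to or subtracts from
# that weight, and the instance lands on one of the N best hosts
# (N = scheduler_host_subset_size).

def ram_weigher(host):
    # More free RAM -> higher weight (hypothetical weigher).
    return host["free_ram_mb"] / 1024.0

def io_ops_weigher(host):
    # Busier hosts are penalized; weights may go negative (hypothetical).
    return -float(host["io_ops"])

def weigh_hosts(hosts, weighers):
    # Start every host at 0 and let each weigher adjust the total.
    weights = {h["name"]: 0.0 for h in hosts}
    for weigher in weighers:
        for h in hosts:
            weights[h["name"]] += weigher(h)
    return weights

def pick_host(hosts, weighers, subset_size=1):
    weights = weigh_hosts(hosts, weighers)
    ranked = sorted(hosts, key=lambda h: weights[h["name"]], reverse=True)
    # Choose randomly among the top N to spread load across good hosts.
    return random.choice(ranked[:subset_size])

hosts = [
    {"name": "node1", "free_ram_mb": 8192, "io_ops": 4},
    {"name": "node2", "free_ram_mb": 2048, "io_ops": 0},
]
print(pick_host(hosts, [ram_weigher, io_ops_weigher])["name"])  # → node1
```

With subset_size greater than 1, the random choice among the top hosts is what keeps a single "best" host from being hammered by every scheduling decision at once.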

The work on configuration options was largely completed in Mitaka, but due to the sheer size of the effort, much remains to be done updating the help text for these options, and this will continue in Newton.
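The centralization strategy described above can be sketched in a few lines. The class and method names below mimic the oslo.config style but are a hypothetical, stripped-down illustration, not Nova's actual code or the oslo.config API:

```python
# Simplified sketch of the centralized-options pattern: options for one
# area are declared together in one predictable module, and a single
# registry collects them all, so both code and documentation tooling have
# one place to look. All names here are illustrative.

class ListOpt:
    """A list-valued option with a default and rich, multi-line help text."""
    def __init__(self, name, default=None, help=""):
        self.name = name
        self.default = list(default or [])
        self.help = help

# Equivalent of a module under nova/conf/ (e.g. the scheduler options).
scheduler_opts = [
    ListOpt("scheduler_weight_classes",
            default=["nova.scheduler.weights.all_weighers"],
            help="List of weigher class names; only hosts passing the "
                 "filters are weighed."),
]

class Registry:
    """Equivalent of a single conf package every module imports."""
    def __init__(self):
        self._opts = {}

    def register_opts(self, opts):
        for opt in opts:
            self._opts[opt.name] = opt

    def get(self, name):
        # A real config library would consult config files and overrides;
        # this sketch just returns the declared default.
        return self._opts[name].default

    def sample_help(self):
        # Centralized declarations make generating operator docs trivial.
        return {name: opt.help for name, opt in self._opts.items()}

CONF = Registry()
CONF.register_opts(scheduler_opts)
print(CONF.get("scheduler_weight_classes"))
```

The payoff of this shape is the `sample_help()` step: once every option lives in one registry with structured help text, generating a complete, accurate sample configuration file for operators becomes a one-liner instead of a code-wide scavenger hunt.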

by EdLeafe at April 22, 2016 03:01 PM

Hugh Blemings

Lwood-20160417

Introduction

Welcome to Last week on OpenStack Dev (“Lwood”) for the week just past. For more background on Lwood, please refer here.

Basic Stats for week 11 to 17 April 2016:

  • ~634 Messages (up about 3% relative to last week)
  • ~194 Unique threads (up nearly 3% relative to last week)

Steady as it goes this week on the list and a particularly short Lwood as it turns out.

Notable Discussions

Gaming Stackalytics Stats – continued

The thread kicked off last week by Davanum Srinivas and reported in last week’s Lwood continued a little this week.  If it is of interest, I’d commend spending a few minutes reading the remainder – one option being canvassed informally was to revert the default Stackalytics view to being based on Commits for a while.

PTL Communications

Most of us aren’t PTLs, but if you’re curious about some of the expectations placed on the folk that perform this valuable work for the OpenStack community, Doug Hellmann wrote a summary of the expectations around release-related communications for PTLs that you might find interesting.

Design Summit Preparation – Etherpads

As one would expect – a few emails this week in preparation for the upcoming summit in Austin.  Most were project specific, but Matt Riedemann helpfully advised that he’d stubbed out the Newton Design Summit Etherpad pages here.

Upcoming OpenStack Events

Events wise, really “just” lots of discussion around the summit in Austin later this month.

Don’t forget the OpenStack Foundation’s Events Page for a comprehensive, frequently updated list.

People and Projects

Core nominations & changes

Further Reading & Miscellanea

Don’t forget these excellent sources of OpenStack news

This edition of Lwood brought to you by the sounds of a hammer drill in the hotel room next to mine and miscellaneous noises of the fair city of Melbourne, Australia.

by hugh at April 22, 2016 02:47 PM

RDO

What did you do in Mitaka? Haïkel Guémar

In this installment of the "What Did You Do in Mitaka" series, I'm speaking with Haïkel Guémar


Rich: Thanks for making time to speak with me.

Haïkel: Thank you.

R: So, tell me, what did you do in Mitaka?

H: I work on the RDO engineering team - the team that is responsible for the stewardship of RDO. For this cycle, I've been focusing on our new packaging work flow.

We were using, for a long time, a piece of infrastructure taken from Fedora, CentOS, and GitHub. This didn't work very well, and was not providing a consistent experience for the contributors. So we've been working with another team at Red Hat to provide a more integrated platform, and one that mirrors the one that we have upstream, based on the same tools - meaning Git, Gerrit, Jenkins, Nodepool, Zuul - that is called Software Factory. So we've been working with the Software Factory team to provide a specialization of that platform called RPMFactory.

RPMFactory is a platform specialized for producing RPMs in a continuous delivery fashion. It has all the advantages of the old tooling we have been using, but with more consistency. You don't have to look in different places to find the source code, the packaging, and stuff like that. Everything is under a single portal.

That's what I've been focusing on during this cycle, on top of my usual duties, which are producing packages and fixing them.

R: And looking forward to the Newton release, what do you think you're going to be working on in that cycle?

H: While we've been working on the new workflow, we've been setting a new goal: to decrease the gap between upstream releases and RDO releases down to 2 weeks. Well, for Mitaka we did it on the very same day! So my goal for Newton would be to do as well as this time, or even better. Why not under 2 hours? Not to put more pressure on ourselves, but to try to release RDO almost at the same time as the upstream GA, and with the same quality standards.

One of the few things that I was not happy with during the Mitaka cycle was mostly the fact that we didn't manage to release some packages in time, so I'd like to do better. Soon enough I will be asking for people to fill in a wish list on packaging, so that we are able to work on that earlier. And so we could release them on time with GA.

R: Thanks again for taking the time to do this.

H: Thank you Rich, for the series.

by Rich Bowen at April 22, 2016 01:52 PM

Opensource.com

How the science of happiness can improve OpenStack teams

In this interview, OpenStack Summit speaker Alexis Monville explains how contributors can increase their happiness on individual and team levels, and he offers a few resources for building healthier teams.

by Rikki Endsley at April 22, 2016 07:03 AM

About

Planet OpenStack is a collection of thoughts from the developers and other key players of the OpenStack projects. If you are working on OpenStack technology you should add your OpenStack blog.

Subscriptions

Last updated:
May 02, 2016 08:13 AM
All times are UTC.

Powered by:
Planet