July 01, 2016

Aptira

OpenStack India Days – Agenda Announced!

With a little under a week to go, preparations for OpenStack India Days are well underway!

Day 1 will consist of workshops, including an OpenStack workshop, OpenStack Neutron workshop and a Developer workshop.

Day 2 will be split into a main track, and a technical track. The main track will consist of business style talks from the OpenStack Foundation, Red Hat, OPNFV, Dell and more. The technical track will feature technical talks from Cloud Enabled IBM among others. The tech track will also host a series of OpenStack Neutron Workshops.

For more information and the full agenda, visit http://openstackdays.in/

Tickets are on sale, but spaces are limited so get in quick! To purchase, head to the OpenStack India Days website.

We hope to see you all there!

The post OpenStack India Days – Agenda Announced! appeared first on Aptira OpenStack Services in Australia Asia Europe.

by Jessica Field at July 01, 2016 04:52 AM

Mirantis

Mirantis Training Blog: What’s the best architecture for multiple OpenStack component databases?

The post Mirantis Training Blog: What’s the best architecture for multiple OpenStack component databases? appeared first on Mirantis | The Pure Play OpenStack Company.

Welcome to Mirantis OpenStack Training’s monthly Q&A section. Here our instructors field questions about all aspects of OpenStack, and every month we’ll be sharing some of those answers with you here on the blog. If you have a question that you would like a Mirantis technical instructor to answer, feel free to post your comments in the section below. We will do our best to cover your question in next month’s post.

What’s the best architecture for multiple OpenStack component databases? Should they be co-located or can they be on separate nodes?

Most OpenStack components store state in an SQL database. By design, the databases do not have to be on the same database server; each component is designed independently of the others, so a component can point to its own separate database server or to a server that also hosts the databases of other components. However, for operational efficiency the recommended best practice is to host the databases on the same database server.

Let’s take a closer look at the details of what all that means.

Typically, OpenStack components store their respective state in an SQL database, which they access through the OpenStack Oslo library. The Oslo library, in turn, uses the Python SQLAlchemy library. In theory, then, OpenStack can support any SQL database that SQLAlchemy supports.
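To make the layering concrete, here's a minimal Python sketch (not actual OpenStack code, and the URL is illustrative) showing that the connection value each service is given is just a standard SQLAlchemy database URL:

# Minimal sketch: the "connection" option is an SQLAlchemy database URL, so in
# theory any dialect SQLAlchemy supports can back an OpenStack service.
from sqlalchemy import create_engine

# Hypothetical URL; in a real deployment the credentials, host and database
# name come from the service's configuration file.
engine = create_engine("mysql+pymysql://nova:secret@192.0.2.10/nova?charset=utf8")

with engine.connect() as conn:
    print(conn.execute("SELECT 1").scalar())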

Because the components are independent projects, they have their own configuration files, such as /etc/neutron/neutron.conf, /etc/nova/nova.conf, and so on, and the database locations are defined in these individual files.

For example, the database entry in nova.conf might look similar to the following:

[database]
connection = mysql+pymysql://user:nova@<ip-address-of-database>/nova?charset=utf8

While the entry in cinder.conf might look similar to:

[database]
connection = mysql+pymysql://user:cinder@<ip-address-of-database>/cinder?charset=utf8

The database location is specified by the IP address. Because each database is specified separately, each component can point to a different location. You can also use a different kind of database for each component. For example, you might have a situation in which Neutron uses SQLite, Nova uses MySQL, and Cinder uses PostgreSQL.

For practical purposes, however, it is best to use a single database node or cluster and configure the components to point to that database. This is advantageous from an operations and maintenance point of view, because it gives you fewer database servers to manage. The advantage is even more evident when using a database cluster to provide high availability, rather than a single server.

The most common database used by OpenStack deployment tools is MySQL/MariaDB. Most deployment tools will also install a database cluster, usually with 3 servers. In this case, the primary HA component of the cluster is Galera, a tool that works with a MySQL/MariaDB cluster to provide data synchronization between database servers.

You’ll also need other tools such as Pacemaker/Corosync to present a single IP address, a virtual IP (VIP), for accessing the database cluster. A component accesses the database via the VIP and stores the data in whichever database server the VIP points to at that moment; Galera then copies the data to the other database servers.
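In practice, that means every component's connection string points at the same VIP rather than at an individual server. For example (the address and credentials here are illustrative):

[database]
connection = mysql+pymysql://nova:secret@<vip-address>/nova?charset=utf8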

Are you required to do it this way? Of course not; OpenStack is designed to be flexible and modular so it can work with your own specific situation. But current best practices recommend using a single database server, or a cluster of database servers for high availability, so that you start with the most stable, easiest-to-manage architecture and can take advantage of the greater flexibility the OpenStack design allows if the need arises over time.

The post Mirantis Training Blog: What’s the best architecture for multiple OpenStack component databases? appeared first on Mirantis | The Pure Play OpenStack Company.

by Nick Chase at July 01, 2016 03:03 AM

June 30, 2016

Solinea

DockerCon 16: Takeaways and Must-sees

This is part 3 in a 6-part series on containers, microservices, and container orchestration to help tech and business professionals of all levels understand and embrace this transformative technology and the best practices to maximize your investment in it.

Here is an outline:

  1. Intro: Why Containers and Microservices Matter to Business – Executive thought leadership perspective – Coming Soon!
  2. Getting started with Kubernetes – How to start with a POC, weave k8s into your existing CI/CD pipelines, build a new pipeline
  3. Intermediate level post – Ready to kick the tires? Key takeaways from Dockercon 2016; a K8s, Ansible, and Terraform media/entertainment enterprise case study
  4. Advanced tips and tricks – Take things further with “Tapping into Kubernetes Events,” “Posting Kubernetes Events to Slack,” and “Chatbots for Chatops set up on Gcloud with Container Engine”
  5. Scaling Docker and Kubernetes in production to thousands of nodes in one of the largest consumer web properties – Coming Soon!


Well, here we are. In the afterglow of yet another tech conference. But this time it’s a bit strange for me, because it’s not The OpenStack Summit.  It’s DockerCon. And man, was that just a wildly different vibe and a totally different focus when it came to sessions. I was able to attend DockerCon with another Solinea colleague and I wanted to take some time to document what I thought were some very interesting takeaways, talk about where I feel the container ecosystem is heading, and finally, link to some must-see sessions that sum up the most interesting parts of the show for me.

It was a whirlwind two days in Seattle and, admittedly, between talking to so many folks, browsing the expo floor, and attending sessions, it’s already a bit tough to recall everything we saw. As I refer back to the bulleted list I’ve been keeping on my laptop, I realize just how much went down. Here are some of the key takeaways.

Takeaways:


1. Docker for Mac/Win is ready for prime time: The beta release of Docker for Mac and Docker for Windows has been out for quite a while. And although I was lucky enough to get into the beta early, the software was certainly a little touch and go at first. However, after a constant stream of updates, it seems far more stable for me day to day and others must be feeling the same. As such, during the first day’s keynote, Solomon Hykes announced that the beta period was over and this software would be generally available. This is certainly a blessing for us here at Solinea, since our training classes were previously using Docker Toolbox, which always seemed a bit kludgey and we always had a problem student or two.

2. The new Docker engine is pretty slick: Immediately after announcing the GA for Docker for Mac/Win, Solomon changed course to something folks have been anticipating for a while: built-in orchestration for Docker. New with the v1.12 release of Docker Engine, orchestrating containers is a built-in (but optional) part of the package. This means that I, as a new user, could take a few machines, create a swarm with them, and launch services against them in a matter of minutes. It also makes it easy to scale those services up and down, and it bundles features like overlay networking and service discovery that have historically been a secondary part of the Swarm environment.
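As a rough illustration of how little is involved, the basic flow with the Docker 1.12 CLI looks something like this (the service name and image are arbitrary examples):

docker swarm init                                      # turn this engine into a swarm manager
docker service create --name web --replicas 3 nginx    # launch a replicated service
docker service scale web=5                             # scale the service up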

3. Seriously, the new Docker engine is pretty slick: As somewhat of an extension to the previous point, the new Docker engine supports DABs (no, not that goofy dance). DABs are Docker Application Bundles. One can now take an existing docker-compose definition and create a bundle out of it. The bundle can then be distributed and deployed against the new swarm. Once each tier exists on the swarm, it can be scaled up and down just like any other service. This certainly offers a pretty interesting deployment path from a dev laptop all the way to a production swarm.
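Sketched with the experimental 1.12-era tooling (the file and stack names are placeholders, and these commands were still marked experimental at the time):

docker-compose bundle -o myapp.dab    # create a bundle from an existing docker-compose.yml
docker deploy myapp                   # deploy the bundle as a stack on the swarm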

4. There’s always an app store: Finally, one of the interesting announcements came on the second day of keynotes: the Docker Store. The Docker Store sounds like a single source for known-good Docker deployments and images. It appears to be an extension of the previous “official” images that were in Docker Hub. This seems like a much-needed extension, as the official designations were never really all that clear, and keeping unofficial images from mixing into the Store is a good thing. It also appears to include a paid tier for Docker images, similar to what one may see in the Amazon AWS marketplace. The Docker Store is currently in closed beta, but you can apply for access here.

General Thoughts:


With the takeaways covered, it’s now time for my unsolicited two cents. After spending two days neck deep in Docker-land, I have a few gut feelings/hunches/whatever that may or may not be even remotely true, as well as some general observations that I thought were interesting at the time.

1. The community is still confused: What I mean here is not that community members do not understand Docker and its benefits. They clearly do and are very excited about them. What I mean is that there’s still a lot of scrambling around in regards to the “one true way” to do Docker, what tools to use, which pipeline tool to use to enable deployments, etc. I think some of this stems from the fact that Docker itself never offered some of these tools before now, and doing so definitely threw a grenade of confusion into the crowd. As an extension…

2. The battle for container orchestration is officially here: If you have used Docker on more than your laptop, you’ve realized that orchestrating lots of containers is hard. The de facto answer for a large-scale, production cluster has been Kubernetes up to this point. Now, with the new Docker Swarm, it will be interesting to see what folks decide to do and whether the built-ins are “good enough” for their production use cases. A lot of companies have bet big on Kubernetes, so it will be an exciting time to see where people land and how the K8s team answers.

3. MSFT has bet big on Docker: Microsoft was everywhere at DockerCon. From keynotes, to booths, to sessions. It’s clear that they see Docker as a huge opportunity and it seems that they have put in a lot of work to make Docker on Windows a real contender. As anyone in the enterprise knows, there’s still a lot of Windows Server apps out there in the world. Being able to Dockerize them could be compelling for some companies.

4. Hybrid cloud makes a lot more sense: Hybrid cloud has always been a compelling case for business. However, historically, it’s been easier said than done given the intricacies and differences between, say, OpenStack on-prem and Amazon. That said, a lot of the application-level aspects seem to be getting easier, with the new Docker Swarm allowing any Docker engine to join, as well as the increased interest in the Ubernetes project. I’m looking forward to this being a solved problem for sure! Give me a cluster, allow me to throw containers at it. It should be as easy as that, regardless of where the nodes in the cluster live.

Must-sees:

  • The first keynote, of course, is a must-see. The announcements around orchestration and the new and improved Docker Swarm are worth watching:

Video: https://www.youtube.com/watch?v=vE1iDPx6-Ok

  • The Day 2 keynote was a great session, with highlights like container image security scanning and a great example of cross-cloud Swarm (with AzureStack!):

Video: https://www.youtube.com/watch?v=KC9tJ7b3dww

The post DockerCon 16: Takeaways and Must-sees appeared first on Solinea.

by Spencer Smith at June 30, 2016 11:41 PM

Red Hat Stack

Who is Testing Your Cloud?

Co-Authored with Dan Sheppard, Product Manager, Rackspace

 

With test driven development, continuous integration/continuous deployment and devops practices now the norm, most organizations understand the importance of testing their applications.

But what about the cloud those applications are going to live on? Too many companies miss this critical step, leaving gaps in their operations that can cause production issues, API outages, problems when trying to upgrade, and general instability of the cloud.

It all begs the question: “Do you even test?”

At Rackspace, our industry leading support teams use a proactive approach to operations, and that begins with detailed and comprehensive testing, so that not only your applications but your cloud is ready for your production workload.

Critical Collaboration

For Rackspace Private Cloud Powered by Red Hat, we collaborate closely with Red Hat; we test the upstream OpenStack code as well as the open source projects we leverage for our deployment, such as Ceph and Red Hat OpenStack Platform Director. This is done in a variety of ways, like sharing test cases upstream with the community via Tempest, creating and tracking bugs, and contributing bug fixes upstream.

The Rackspace and Red Hat team also work together on larger scale and performance tests at the OpenStack Innovation Center, which we launched last year in conjunction with Intel to advance the capabilities of OpenStack.

Recent tests have included performance improvements in relation to offloading VXLAN onto network cards, scaled upgrade testing from Red Hat OpenStack Platform version 7 to version 8, and testing of scaled out Ceph deployments. Data from this testing will be made available to the community as the detailed results are analyzed.

Building on the upstream testing, the Rackspace operations team leverages Rally and Tempest to execute 1,300 test cases prior to handing the cloud over to the customer. This testing serves as a “1,300 point inspection” of the cloud to give you confidence that your cloud is production ready, and a report of this testing is handed over to you with a guide to help you get started with your new cloud. These test cases serve to validate and demonstrate the functionality of the OpenStack APIs, with specific scripts testing things such as (just to name a few; a sample invocation follows the list):

  • administration functions
  • creating instances and cinder volumes
  • creating software defined networks
  • testing keystone functions and user management
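For a sense of what kicking off one of these checks looks like, a single Rally scenario run is as simple as the following (the scenario file name is hypothetical, not one of the actual 1,300 cases):

rally task start nova-boot-and-delete.yaml    # run one benchmark/validation scenario
rally task report --out report.html           # generate an HTML report of the results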

Upgrades Made Easy

One of the key requirements for enterprises is the ability to upgrade software without impacting the business.

These upgrades have been challenging in OpenStack in the past, but thanks to the Rackspace/Red Hat collaboration, we can now deliver those upgrades for Rackspace Private Cloud Powered by Red Hat.

To deliver this, the Rackspace team runs the latest version of OpenStack code through our lab and executes the 1,300 point inspection. When we are satisfied with that, we test upgrading our lab to the latest version and execute our 1,300 point test again, thus confirming that the new version of OpenStack meets your requirements and that the code is safe for your environment.

Our team doesn’t stop there.

To verify that the code deploys properly to your cloud, our operations team executes a 500-script regression test at the start of a scheduled upgrade window. Then our team upgrades your cloud and executes the regression test again. The final step in the scheduled upgrade window is to compare our pre- and post-upgrade regression test results to validate that the upgrade was successful.

Since the launch of Rackspace Private Cloud Powered by Red Hat, the Red Hat and Rackspace team has been working to refine that process by incorporating Red Hat’s Distributed Continuous Integration project.


Distributed Continuous Integration User Interface

Extended Testing

With Distributed Continuous Integration, Red Hat extends the testing process for building Red Hat OpenStack Platform into Rackspace’s in-house environment. Instead of waiting for a general availability release of Red Hat OpenStack Platform to start testing Rackspace scenarios, pre-release versions are delivered and tested following a CI process. Test results are automatically shared with Red Hat’s experts, who work with Rackspace to debug and improve new features while taking the new scenarios into consideration.

Using DCI to test pre-released versions of Red Hat OpenStack Platform helps ensure we’re ready for the new general release just after launch. Why? Because we have been running incremental changes of the software in preparation for general availability.

DCI also helps existing Rackspace private cloud customers by allowing the Rackspace operations team to test code changes from Red Hat while they’re being developed, allowing us to shorten the feedback loop back to Red Hat engineering, and giving us a supported CI/CD environment for your cloud at a scale not possible without a considerable investment in talent and resources.

So, if you are one of the 81 percent of senior IT professionals leveraging or planning to leverage OpenStack, ask your team, “How do we test our OpenStack?” — then give Rackspace a call to talk about a better way.


by Maria Bracho, Senior Product Manager OpenStack at June 30, 2016 05:03 PM

Solinea

Solinea’s Ken Pepple Presents at OpenStack Day Ireland

Solinea had the great pleasure of attending and presenting at today’s OpenStack Days Ireland, the first major OpenStack event in Dublin. The sold-out affair was held in Dublin’s Silicon Docks district at the Marker Hotel.

The day began with messages from the sponsors: the OpenStack Foundation, Intel, SUSE and Ammeon. Jonathan Bryce gave the kickoff presentation on emerging trends in cloud and the state of OpenStack. This was followed by discussions about the OpenStack roadmap, modelling applications and updates from the latest OpenStack Summit.




Solinea’s CTO, Ken Pepple, spoke to the crowd about the challenges of “Managing OpenStack in the Enterprise”.



The afternoon brought a focus on networking covering NFV, Kubernetes integration, DPDK and performance. In addition, Workday presented the case study of their OpenStack deployment and operations.





See Ken’s entire presentation here:

Video: https://www.youtube.com/watch?v=_nBRk5oukec


Solinea specializes in three areas:

  • Cloud architecture and infrastructure design and implementation, with a specific focus on OpenStack – We have been working with OpenStack since its inception, wrote the first book on how to deploy OpenStack, and have built numerous private and public cloud platforms based on the technology.
  • DevOps and CI/CD automation – Once we build the infrastructure, the challenge is to gain agility from the environment, which is the primary reason people adopt cloud. We work at the process level and tool-chain level, meaning that we have engineers who specialize in technologies like Jenkins, Git, Artifactory and Cliqr, and we build these toolchains and underlying processes so organizations can build and move apps to the cloud more effectively.
  • Containers and microservices – Now that enterprises are looking for ways to drive even more efficiencies, we help organizations with Docker and Kubernetes implementations – containerizing applications and orchestrating those containers in production. We offer consulting and training services to help customers.

The post Solinea’s Ken Pepple Presents at OpenStack Day Ireland appeared first on Solinea.

by Solinea at June 30, 2016 02:17 PM

Anne Gentle

Influencing community documentation contributions

After a week of leadership training with leaders in the OpenStack community, I’m inspired to write up ideas from Influencer: The New Science of Leading Change. For someone like me, who needs to make the most of community efforts, the idea that no one likes being told what to do rang familiar. Rather, compel people to pick up your vision and add to it.

As an example, people often ask me, how do you motivate people to write documentation for open source projects? Or write for any software project?

 

Get details about the behavior you want to see

Using the framework this book offers, you first want to identify the behavior you want to see. Their examples often revolve around healthcare, such as hand washing. But you can get very specific about hand washing, such as where, when, and how. For documentation, you may say the behavior is “write” but I want to get more specific than that. Where should they write? Is the behavior “write a personal blog post?” Or is it “write in the official docs?”

Also, when should they write? Ideally as close to when the technical detail is still fresh as possible. The “when” could be at the end of a cycle when people are less distracted by feature additions. Or write documentation before the code is written and revise it often.

As for “how” do we want contributors to write, well, we may need to have templates and frameworks for the “how” — such as source formats, build jobs, and in which repo.

Looking at the behavior we want to see and getting super detailed about it, we find that we also want to encourage code reviewers to read and review related docs.

Identify a crucial moment

Now, when a bit of code changes in a way that makes a difference in the docs, that’s a crucial moment for influencing a particular behavior. The behavior we want to see is writing documentation while the code is fresh in your mind.

Another crucial moment to engage is when a user first tries the feature; their fresh eyes may provide an update to the docs that others might not see. The “Edit on GitHub” feature of creating a pull request provides that outlet for fresh eyes to make a quick edit to the documentation.

So we have an idea of the behaviors we want to see, and a sense of when we want to see them. Now we can begin to ask what’s preventing the behavior.

Why don’t people contribute?

Let’s talk about what’s painful about writing documentation. For example, if you speak English as a second language, it may be painful to write for others to review, and the criticism might be more than you can bear. Kind, empathetic coaching, respect, and a culture of acceptance help with this barrier.

Provide guidance and energy

Also, people associate boredom with docs. They look at a blank screen and can’t come up with words to describe their feature. They yawn and check their Twitter feed instead of writing docs. This pain point is where templates can help. People who don’t know what to write might need guidance, suggestions, or strict form-based templates.

Avoid context switches

It’s painful to have a doc tool set that’s extremely different from what you write code in — the context switch even to a web-based editor may be too big a barrier to jump over. Make the docs like code whenever you need to compel developers to write.

Get some influencers who believe in the vision

Without actual peer pressure that says “yes, we write the docs,” developers may not create a culture that requires documentation. Start with the influencers in their peer group (which might not be you). For example, when a seed researcher wants to introduce a new hybrid seed corn, he goes straight to the local diner where the most experienced and influential farmer gets breakfast on Saturdays. It’s better to have the farmer in his pickup truck understand and believe in the benefit of changing to a new hybrid seed corn than to have the researcher in his late-model Volvo do the convincing.

Offer deliberate practice sessions

Also consider “deliberate practice” where you set aside time to get better at a skill. If the skill is writing, then have office hours or coaching sessions online, and at conferences make sure you can meet with people who want to become better writers to show them how to practice writing through drills, exercises, and with fun collaborative efforts such as doc sprints. Record a video or host an online hangout, showing all the steps for contributing deliberately and strategically.

Thanks to many coworkers who helped me discuss and think more about these ideas along the way. What are some additional ways to influence a desired outcome?

by annegentle at June 30, 2016 01:08 PM

June 29, 2016

AppFormix

Seattle Abuzz after DockerCon

Seattle was abuzz last week. Docker created buzz with its announcement of adding orchestration to Docker 1.12, creating confusion and tension in the partner community by competing against Kubernetes and Mesos.

by Jennifer Allen (jenallen@appformix.com) at June 29, 2016 09:29 PM

Enriquez Laura Sofia

A complete guide to start contributing

Thanks to my Outreachy partner Nisha!

Though there are developer and wiki guides on how to get started, I found them a bit overwhelming as a beginner. After reading various docs and taking help from my mentor, who is a core contributor in OpenStack, I came up with the following easy-to-follow guide. If you still face any problems while setting up your environment, […]

via How to set up work environment to become an OpenStack Developer? — The Girl Next Door <3


by enriquetaso at June 29, 2016 07:22 PM

OpenStack Superuser

Making OpenStack production ready with Kubernetes and OpenStack-Salt - Part 1

This tutorial introduces and explains how to build a workflow for life cycle management and operation of an enterprise OpenStack private cloud coupled with OpenContrail SDN running in Docker containers and Kubernetes.

The following blog post is divided into five parts. The first is an explanation of a deployment journey into a continuous DevOps workflow. The second offers steps on how to build and integrate containers with your build pipeline. The third part details the orchestration of containers with a walkthrough of Kubernetes architecture, including plugins and prepping OpenStack for decomposition. In the fourth part, we introduce the tcp cloud theory of a “single source of truth” solution for central orchestration. In the fifth and final part, we bring it all together, demonstrating how to deploy and upgrade OpenStack with OpenContrail.

We decided to divide the process into two blog posts for better reading. This first post covers creating a continuous DevOps workflow and building containers.

OpenStack deployment evolution

At first glance you might ask, "Why would I add additional tooling on top of my existing deployment tools?" It's important to explain why anyone should consider Docker and/or Kubernetes as tools for running OpenStack in production. At tcp cloud, we have deployed and operate a growing number of enterprise private clouds in production on OpenStack. Each OpenStack deployment is different (storage, network, modules), and any number of service combinations exist, given the varying needs of customers. There is one thing all cloud deployments have in common, however: deployment phases and initial goals. These have become evident, and all customer journeys lead to an individual cloud maturity model. Let's divide these phases of evolution into three sections.

RagTag

Characterized as untidy, disorganized, inharmonious. This is always the first step and sometimes the last for anyone who tries OpenStack. Every company or individual considering OpenStack as a private cloud solution has a common first goal: deploying OpenStack!

This journey typically starts on openstack.org and lands on deployment tools like Puppet, Chef, Ansible, TripleO, Helion, Fuel, etc. It is almost impossible to identify the right way to get OpenStack up and running without any previous experience. Even though all of them promise a simple and quick setup of the whole environment, you will probably end up with the following logical topology of a production environment. This diagram shows a typical production service-oriented architecture of a single region OpenStack deployment in High Availability. As you can see, this diagram is very complex.

[Diagram: service-oriented architecture of a single-region HA OpenStack deployment]

The next thing you'll discover is that current deployment tools cannot cover a complete production environment (bonding, storage setups, service separation, etc.). This is when the cloud team starts to ask itself, "Do we really need to set up an environment in one day, deploy in five minutes on a single machine, or deploy through a nice clickable graphical user interface (GUI)?" and "Are these really the key decision points that determine the right choice for our future production environment?" Standing up a stack is easy, but deployment tools are one-offs: you cannot run them twice, and they are not repeatable. What about life cycle issues like patching, upgrades, configuration changes, and so on? This brings us back to the statement that no one can choose the right solution without the experience of “day two operations.”

Ops

The second phase is called Ops or, as we mentioned, “day two operations.” Here's a typical example: OpenStack is up and running, then you get an email from your security team that says, “Please upgrade or reconfigure RabbitMQ to prevent a security vulnerability.” How can you do it with confidence? Your deployment tool cannot be used again. Now you're dealing with day-to-day operations, which are more difficult than the deployment itself.

This led us to define the following criteria for everyday operations: patching, upgrades, business continuity, disaster recovery, automatic monitoring, documentation, backups and recovery. The general expectation is that Ops can be managed by the deployment tool. However, the reality is that the deployment tool does not do Ops.

As already mentioned, deployment tools are one-offs. Therefore, you end up building ad-hoc Ops tooling: random scripts (to restart a service or change a configuration), manual hacks and tribal knowledge (production depends on the specific people who know how to manage it).

The ideal workflow needs to include terms like repeatable patterns, a "single source of truth" (Infrastructure-as-a-Code), best practices, rigid-to-agile, declarative (desired state) and personalized cloud experience.

We did not want to develop this workflow by ourselves, so we adopted OpenStack-Salt, which we found to be an optimal tool. It's been an official project under the big tent since May 2016. Its service-oriented approach covers almost all the above-mentioned parameters of an ideal workflow, and it offers a production-ready, proven architecture managed as code.

[Diagram: OpenStack-Salt service-oriented deployment workflow]

However, our production infrastructure still looks like the figure below. It consists of about 23 virtual machines on at least three physical KVM nodes, just for the cloud control plane. We have to upgrade, patch and maintain 23 operating system instances to provide flexibility and a service-oriented architecture.

[Diagram: cloud control plane of ~23 VMs across three physical KVM nodes]

DevOps

Based on the previous ideas, we asked the question: “What about treating infrastructure as microservices?” This brings us from Ops to DevOps, which really means treating OpenStack as a set of applications.

[Diagram: OpenStack treated as a set of containerized applications]

Our defined criteria are that the solution must be composable, modular, repeatable and immutable, and must split applications from infrastructure. It has to break monolithic VMs into containers and micro-services.

We also decided that we didn't want to reinvent the wheel by creating a new project, but rather to reuse the existing knowledge invested in the OpenStack-Salt project.

These steps depict the evolution of OpenStack deployment over the last two to three years. Now let's take a look at how to build containers and micro-services instead of the monolithic deployments of the past. The following sections explain the DevOps workflow.

How to build containers

The configuration management era started a couple of years ago, when tools like Fabric, Puppet, Chef and, later, SaltStack and Ansible changed how companies deploy applications and infrastructure. These tools ended the era of bash scripting and brought repeatable, idempotent and reusable patterns with serialized knowledge. Companies invested huge effort into this approach, and the community now offers ways to deploy OpenStack with almost every configuration management tool.

Recently, the era of micro-services (accelerated by Docker containers) dawned and, as we described in the DevOps workflow, containers should encapsulate services to help operate and treat infrastructure as micro-service applications. Unfortunately, Docker pushes configuration management tools and serialized knowledge off to the side. Some experts even predict the end of configuration management with Docker. If you think about it, you realize that Docker brings dockerfiles and entry points, which invoke déjà vu of bash scripting all over again. So why would we throw away everything we have invested in a single source of truth (infrastructure-as-code) and start from scratch? This is the question we had on our minds before we started working on the concept for the containerization of OpenStack. The first requirement was building Docker containers in a more effective way than just bashing everything. We took a look at OpenStack Kolla, CoreOS and other projects that provide an approach for getting OpenStack into containers.

Kolla uses Ansible for container builds and Jinja for the parametrization of dockerfiles. The concept is very interesting and promising for the future. However, it is a completely new way of serializing knowledge and operating OpenStack in production. Kolla tries to be universal for Docker container builds, but it is missing orchestration and a recommended workflow for running in production on more than a single machine with host networking. The kolla-kubernetes project started almost a month ago, but it is still too early to run it in an enterprise environment; a lot of work must be done to bring a more operational approach. Basically, we want to reuse what we have in OpenStack-Salt as much as possible, without starting a new project or maintaining two solutions.

We defined two criteria to leverage running OpenStack services in containers.

  • Use configuration management for building Docker containers as well as standard OS deployments
  • Reuse the existing solution – do not start from scratch and rewrite all the knowledge into another tool just for container builds, maintaining two worlds.

We created a simple repository, Docker-Salt, which builds containers with exactly the same salt formulas used for dozens of production deployments. This enables knowledge reuse: for example, when someone patches a configuration in a standard OpenStack deployment, a new version of the Docker container is automatically built as well. It provides the opportunity to use a single tool for virtual machine OpenStack deployments as well as micro-services. We can mix VM and Docker environments and operate the environment from one place, without a combination of two or three tools.

Build the pipeline

The following diagram shows the build pipeline for Docker images. This pipeline is completely automated by Jenkins CI.

The reclass metadata model is deployment-specific and is the single source of truth (described above), containing configurations like the Neutron plugin, Cinder backends, Nova CPU allocation ratio, etc. It's a Git repository with a simple YAML structure.
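An illustrative fragment of such a metadata model might look like the following (the class and parameter names are invented for the example, not taken from an actual repository):

classes:
  - system.openstack.control.cluster
parameters:
  nova:
    controller:
      cpu_allocation_ratio: 16.0
  neutron:
    server:
      backend: contrail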

OpenStack-Salt formulas are currently used for tasks such as installing packages, configuring and starting services, setting up users or permissions, and many other common tasks.

Docker-Salt provides scripts to build, test and upload Docker images. It contains dockerfile definitions for the base image and all OpenStack support and core micro-services.

The build process downloads the metadata repository and all salt formulas to build a salt-base image with a specific tag. The tag can be an OpenStack release version or any other internal versioning. Every configuration change in OpenStack requires a rebuild of this base image. The base image is then used to build all other images, and these images are uploaded to a private Docker registry.

[Diagram: Docker image build pipeline]
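Stripped to its essentials, the pipeline boils down to a sequence of builds and pushes along these lines (the registry, image names and tags are hypothetical):

docker build -t registry.example.com/salt-base:mitaka base/    # rebuilt on every config change
docker build -t registry.example.com/glance:mitaka glance/     # service images build on salt-base
docker push registry.example.com/glance:mitaka                 # publish to the private registry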

The Docker-Salt repository contains compose files for local testing and development. OpenStack can be run locally without Kubernetes or any other orchestration tool. Docker Compose will be part of functional testing during the CI process.

Changes in current formulas

The following review shows the changes required for salt-formula-glance. Basically, we had to prevent starting the Glance services and running sync_db operations during the container build. Then we had to add an entrypoint.sh which, instead of being a huge bash script that replaces environment variables with specific values, simply runs a salt highstate. The highstate reconfigures the config files and runs sync_db.
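A minimal sketch of what such an entrypoint could look like (the service command and paths are assumptions, not the actual repository content):

#!/bin/bash
# Hypothetical entrypoint sketch: apply the salt highstate locally to render
# the config files and run sync_db, then hand off to the service process.
set -e
salt-call --local state.highstate
exec glance-api --config-file /etc/glance/glance-api.conf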

You might notice that Salt is not uninstalled from the container. We wanted to know the difference between a container with and without Salt: the Glance container with Salt is only about 20MB larger than Glance alone, because both are written in Python and use the same libraries.

The second post will offer more information on container orchestration and live upgrades.

This post first appeared on tcp cloud's blog. Superuser is always interested in community content, email: editor@superuser.org

Cover Photo // CC BY NC

by Lachlan Evenson and Jakub Pavlik at June 29, 2016 06:04 PM

RDO

Red Hat Summit, day 1

Red Hat Summit 2016

The first day at Red Hat Summit in San Francisco was, as always, very busy, with hundreds of people coming by the Community Central area of the exhibit hall to learn about RDO, as well as other community projects including ManageIQ, Ceph, CentOS, and many others.

Due to the diverse nature of the audience, and their many reasons for attending Red Hat Summit, perhaps half of these attendees hadn't heard of RDO, while most of them were familiar with OpenStack. So, we have many new people who are going to look at RDO as a way of trying out OpenStack at their organizations.

If you're at Red Hat Summit, do stop by. We have lots of t-shirts left, and we also have TripleO QuickStart USB thumb drives so that you can try out RDO with minimal work. We're in Community Central, to your right as you enter the front doors of the Partner Pavilion.

by Rich Bowen at June 29, 2016 03:25 PM

Major Hayden

Talk recap: The friendship of OpenStack and Ansible

The 2016 Red Hat Summit is underway in San Francisco this week and I delivered a talk with Robyn Bergeron earlier today. Our talk, When flexibility met simplicity: The friendship of OpenStack and Ansible, explained how Ansible can reduce the complexity of OpenStack environments without sacrificing the flexibility that private clouds offer.

The talk started at the same time as lunch began and the Partner Pavilion first opened, so we had some stiff competition for attendees’ attention. However, the live demo worked without issues and we had some good questions when the talk was finished.

This post will cover some of the main points from the talk and I’ll share some links for the talk itself and some of the playbooks we ran during the live demo.

IT is complex and difficult

Getting resources for projects at many companies is challenging. OpenStack makes this a little easier by delivering compute, network, and storage resources on demand. However, OpenStack’s flexibility is a double-edged sword. It makes it very easy to obtain virtual machines, but it can be challenging to install and configure.

Ansible reduces some of that complexity without sacrificing flexibility. Ansible comes with plenty of pre-written modules that manage an OpenStack cloud at multiple levels for multiple types of users. Consumers, operators, and deployers can save time and reduce errors by using these modules and providing the parameters that fit their environment.

Ansible and OpenStack

Ansible and OpenStack are both open source projects that are heavily based on Python. Many of the same dependencies needed for Ansible are needed for OpenStack, so there is very little additional software required. Ansible tasks are written in YAML and the user only needs to pass some simple parameters to an existing module to get something done.

Operators are in a unique position since they can use Ansible to perform typical IT tasks, like creating projects and users. They can also assign fine-grained permissions to users with roles via reusable and extensible playbooks. Deployers can use projects like OpenStack-Ansible to deploy a production-ready OpenStack cloud.

Let’s build something

In the talk, we went through a scenario for a live demo. In the scenario, the marketing team needed a new website for a new campaign. The IT department needed to create a project and user for them, and then the marketing team needed to build a server. This required some additional tasks, such as adding ssh keys, creating a security group (with rules) and adding a new private network.

The files from the live demo are up on GitHub.

In operator-prep.yml, we created a project and added a user to the project. That user was given the admin role so that the marketing team could have full access to their own project.
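A minimal sketch of what that playbook might contain, using Ansible's OpenStack modules (the cloud, project and user names are assumptions, not the actual demo values):

---
# Hypothetical operator playbook: create a project and user, then grant the
# admin role, via the os_project/os_user/os_user_role modules (Ansible 2.x).
- hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Create the marketing project
      os_project:
        cloud: mycloud
        name: marketing
        description: Marketing campaign project
        state: present

    - name: Create a user in the marketing project
      os_user:
        cloud: mycloud
        name: marketer
        password: changeme
        default_project: marketing
        state: present

    - name: Grant the admin role on the project
      os_user_role:
        cloud: mycloud
        user: marketer
        role: admin
        project: marketing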

From there, we went through the tasks as if we were a member of the marketing team. The marketing.yml playbook went through all of the tasks to prepare for building an instance, actually building the instance, and then adding that instance to the dynamic inventory in Ansible. That playbook also verified the instance was up and performed additional configuration of the virtual machine itself — all in the same playbook.
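The instance-building portion follows the same pattern; here's a hypothetical fragment (the image, flavor, network and key names are made up):

# Launch an instance, then add it to Ansible's in-memory inventory so later
# plays in the same playbook can configure the new VM directly.
- name: Launch the campaign web server
  os_server:
    cloud: mycloud
    name: campaign-web
    image: fedora-24
    flavor: m1.small
    key_name: marketing-key
    network: marketing-net
    security_groups: web
  register: web_server

- name: Add the new instance to the inventory
  add_host:
    name: "{{ web_server.server.public_v4 }}"
    groups: campaign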

What’s next?

Robyn shared lots of ways to get involved in the Ansible community. AnsibleFest 2016 is rapidly approaching and the OpenStack Summit in Barcelona is happening this October.

Downloads

The presentation is available in a few formats.

The post Talk recap: The friendship of OpenStack and Ansible appeared first on major.io.

by Major Hayden at June 29, 2016 03:43 AM

June 28, 2016

OpenStack Superuser

What OpenStack and rock climbing can teach you

Madhura Maskasky knows a thing or two about reaching new heights. After working at Oracle and VMware, she formed part of the founding crew of Platform9, a startup whose mission is to make OpenStack private clouds easy.

She talks to Superuser about container integration, how rock climbing applies to startup life and why three times is the charm for live demos.

What’s your role in the OpenStack community?

As a co-founder of Platform9, which is pioneering a fundamentally differentiated model for deploying OpenStack private clouds, I am an active member of the OpenStack community. I participate in the form of contributions to OpenStack made by my engineering team, talks and presentations at OpenStack summits, blog posts and technical content around OpenStack components, and participation in local meetups/events around OpenStack.

You have given a lot of presentations - what are some of your tips for doing them well? Any advice for live demos?

A single tip: the more you do them, the better you get at doing them :). For live demos, I do at least three independent dry-runs to ensure that they perform.

Why do you think it's important for women to get involved with OpenStack?

Any tech enthusiast wishing to be part of the open source movement that is powering the private data centers of today and future should get involved with OpenStack. I don’t think this applies any differently to women.

What obstacles do you think women face when getting involved in the OpenStack community?

None, in my opinion. As a community member, you will encounter challenges around slow approval of changes and blueprints, new projects or new integrations requiring a lot of leverage for approval, etc., but these are a side effect of OpenStack being the second most active open source project, with participation from members across the world. Women won’t face these challenges any differently from men.

There are always a lot of debates in the OpenStack community - which one is the most important, now?

The popular debate for some time was around OpenStack APIs and Amazon Web Services (AWS) compatibility and I had hoped for it to resolve in favor of AWS APIs. I think it’s critical for standardization.

Recently the popular conversation has been around OpenStack and the popular container orchestration frameworks, especially with demos at the Austin summit on OpenStack on Kubernetes, etc.

My view is that these are independent frameworks that can interoperate to satisfy a broad set of private cloud use cases. For example, Kubernetes can collaborate with OpenStack components such as Keystone and Cinder for better integration with SSO, persistent storage and so on for end users.

What’s the highest mountain you’ve climbed?

Tiger Hill (2,590 meters, or about 8,500 feet) near Darjeeling, India.

What lessons do you take from rock climbing into startup life?

Rock climbing taught me how to persistently work on a hard problem, collaborate with partners to build strategy, motivate and help others and finally the satisfaction of conquering a challenge. All of these apply directly to startup life.

OpenStack has been called a lifelong learning project - how do you stay on top of things and/or learn more?

The OpenStack Foundation has done a great deal in helping the community keep track of what’s happening. I keep an eye on the OpenStack User Survey, the OpenStack Project Navigator, release notes from each OpenStack major release and activity during each OpenStack summit, among other things, to keep myself aligned with the latest.

You can find out more about her on LinkedIn or follow her on Twitter.

This post is part of the Women of OpenStack series spotlighting the women in various roles in our community who help make OpenStack successful. With each post, we learn more about each woman’s involvement in the community and how they see the future of OpenStack taking shape. If you’re interested in being featured, please email editor@openstack.org.

Cover Photo // CC BY NC

by Superuser at June 28, 2016 11:31 PM

Monitoring a multi-region cloud based on OpenStack

When you’re running a large, federated cloud across multiple regions in Europe, each with its own funding sources and institutional governance, you need to make sure your monitoring system is top-notch.

Monitoring and managing this diverse group of users is why Silvio Cretti of create-net and his team at FIWARE Lab chose OpenStack.

Cretti came to the OpenStack Summit in Austin, TX this past April to discuss the FIWARE Lab’s monitoring architecture and implementation.

What is FIWARE and FIWARE Lab?

FIWARE provides OpenStack-based cloud capabilities along with a library of open source APIs that make developing smart apps much easier for developers. It has five areas of operation, or pillars: FIWARE, a generic open standard platform which serves the needs of developers in multiple domains; FIWARE Lab, a digital sandbox for developers to try out new technologies; as well as a funding arm (FIWARE Accelerate), a global outreach program (FIWARE Mundus), and a local support community (FIWARE iHubs).

FIWARE Lab, the focus of Cretti’s talk, is a playground that lets innovative developers experiment with innovative applications. It’s an open, working instance of FIWARE where innovation is encouraged.

“Infrastructure owners, data providers, entrepreneurs, developers and technology providers are all stakeholders who can contribute to this open instance,” says Cretti.

[Diagram: the five FIWARE pillars]

Funded in part by the European Commission (EC), the multi-regional, federated cloud called FIWARE Lab is based on OpenStack. It serves 15 regions spread across Europe, as well as a presence in Latin America. It offers more than 3,000 cores, 10 terabytes of RAM and 600 terabytes of hard disk space. It serves more than 5,000 public IP addresses with over 1,500 users and the same number of virtual machines, putting FIWARE Lab in the top 30 percent of OpenStack users in the world.

Cretti and his team leveraged Ceilometer, a monitoring component that can provide customer billing, resource tracking and alarm capabilities for OpenStack-based systems, and Monasca, an open-source monitoring-as-a-service solution that integrates with OpenStack, to allow FIWARE Lab to collect, process, analyze and share relevant data with all interested users.

Cretti points out that FIWARE Lab must have a highly dynamic infrastructure as well, with various regions joining, leaving and re-joining FIWARE Lab as funding is secured or evaporates. FIWARE Ops, a suite of tools that eases the creation and operation of FIWARE instances, is the means by which the various stakeholders join and leave the digital federation, as well as manage and monitor the services in real time, including deployment, federation management and monitoring, connectivity management and service offer management.

The high-level architecture for FIWARE Labs is shown below.

[Diagram: FIWARE Lab high-level architecture]

Requirements for monitoring FIWARE Lab

Because FIWARE Lab is a multi-region environment managed by different owners who might have different objectives, it is a sort of “federation” of OpenStack nodes.

“Multi-tenancy is a must, not related to end-users, however, but in terms of administration,” says Cretti.

The FIWARE Lab team has to manage errors and monitor different servers, processes and services from different perspectives, to know whether the services offered are running correctly from the point of view of the end-users, and then be able to notify infrastructure owners and administrators about any detected problems.

Root-cause analysis is important, as is performance management, Cretti adds. The monitoring system must give exhaustive information about the performance of the whole Lab, in terms of resource allocation and consumption (CPU/disk/memory) per region, computing node or instance. It also needs to analyze capacity and trends, and easily integrate with existing monitoring tools that are already installed, like Ceilometer and other data collectors that are configured and running in most nodes.

“We (also) have a Monasca agent installed because of the monitoring of specific processes and services,” says Cretti, “but also the needed integration with pre-existing monitoring systems.”

[Diagram: FIWARE Lab monitoring architecture]

The team also uses Ceilosca, a codebase that gathers data from Ceilometer and sends it to Monasca, which helps translate the data for the various preferred monitoring systems. The team also developed custom pollsters to collect data from sources not covered by the default OpenStack tools.

“All this is what we did architecturally,” says Cretti, “then deployed this system into production into FIWARE Lab.”

Installation went well overall; there was a small problem installing Monasca, but the team solved it quickly.

Issues and solutions

The team had a couple of smaller issues with publishing metrics and filtering metrics, but was able to work around them with some added code. Both solutions involve some massaging of data before submission to the larger monitoring processes, as shown below.

[Diagram: workarounds for publishing and filtering metrics]

For publishing metrics, Ceilosca (version 2015.1) gets a string deserialization error when publishing to the Monasca API those Ceilometer metrics whose metadata includes non-string items, like the “properties” dict of Glance images. The team solved this by marshaling (stringifying) the “value_meta” before posting those metrics to the Monasca API.
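The fix itself is simple; here's a hypothetical Python sketch of that stringification step (the function and variable names are made up):

# Coerce every non-string value in a metric's value_meta to a string before
# posting the metric to the Monasca API.
def stringify_value_meta(value_meta):
    return {key: value if isinstance(value, str) else str(value)
            for key, value in value_meta.items()}

metric = {
    "name": "image",
    "value": 1.0,
    "value_meta": {"properties": {"os_type": "linux"}},  # non-string item
}
metric["value_meta"] = stringify_value_meta(metric["value_meta"])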

With regards to filtering metrics, Ceilosca processes a configuration file “monasca_field_definitions.yaml” that lets FIWARE filter the dimensions and metadata to be included when publishing metrics, but there is no flexibility in filtering the metrics to be posted to Monasca.

“We solved this by introducing a new storage driver (impl_monasca_filtered.py) which only processes metrics explicitly listed in a new item ‘metrics,’” Cretti adds.

What’s next?

The team would like to integrate more powerful analytics into FIWARE Lab, like Apache Spark. They are also looking into open source dashboards like Grafana, using root-cause analysis more often, adding monitoring and analysis for logging, adding automatic deployment of the Ceilometer pollsters via OpenStack Fuel, adding agents and other custom components, and verifying potential contributions to the OpenStack community.

You can find code repositories and catalogs for FIWARE at the following links:

FIWARE catalog
FIWARE Ceilometer Pollsters
Ceilosca 2015.1.3-FIWARE

Ultimately, Cretti credits the ease of implementation and current monitoring to OpenStack and the various open source tools like Ceilometer and Monasca with making FIWARE Lab a success across the regions of Europe and into the future.

You can catch his entire 34-minute talk on the Foundation video page.

Video: https://www.youtube.com/watch?v=PFGpVbdLU5s

Cover Photo // CC BY NC

by Rob LeFebvre at June 28, 2016 10:22 PM

DreamHost

DreamHost Wins ScaleUp Award

The DreamHost Cloud team has just won the ScaleUp Award from the StackWorld Conference! The honor is given to technologies that have enabled customers to scale up fast.

StackWorld Award DreamCompute

DreamHost is honored to be among the 25 winners of 2016 for DreamCompute. The recognition comes from DreamHost Cloud helping our customer, atmail, to drastically simplify and automate its expansion.  After only one year from development to deployment, atmail has successfully migrated 10% of their 30 million email account base to the DreamCompute platform, representing a massive 40% of the company’s revenue.

DreamCompute’s use of SSDs and embedded cloud storage technologies has delivered atmail a significant boost in performance. According to atmail, the most surprising impact of its move to DreamCompute (and its newfound capability to scale at will) has been the opportunity to re-engage with and approach new customers knowing they can now confidently quote large scale email solutions and deliver on those commitments simply and efficiently.

 

We’ll be grinning all the way to the stage on Tuesday, June 28, to accept the award in San Francisco. Help cheer us on — let’s raise a glass together!

by Stefano Maffulli at June 28, 2016 07:43 PM

Mirantis

Docker Engine 1.12 embeds Swarm. Is there an agenda?

The post Docker Engine 1.12 embeds Swarm. Is there an agenda? appeared first on Mirantis | The Pure Play OpenStack Company.

When we first heard that Docker 1.12 included “swarm mode”, which enables container orchestration, our first thought was “Swarm has been out for months, how is that news?”  Then we realized that this was not Docker Swarm as a separate product, as it’s been so far; this is swarm integrated natively into Docker Engine, so that it can be turned on and off and provide orchestration without the programming that is necessary with other choices out there, such as Kubernetes.  Our interest was piqued.

And why shouldn’t it be? The integration of swarm mode, which provides a self organizing, self healing, infrastructure-agnostic cluster that can run multi-container apps across multiple hosts could easily be seen as a shot across the bow of projects such as Kubernetes and Mesos. Is it meant to be? We can’t know for sure, of course, but it is turned off by default. That said, ContainerJournal writes, “The decision to embed Swarm inside Docker Engine is a pre-emptive strike.”

“Docker announced this week that it is including orchestration and security tools with the subtly named 1.12 release of its platform (couldn’t they have at least named it 1.2?),” SDXCentral wrote. “This was supposed to freak out a bunch of the VC-funded container startups that are providing orchestration and security tools.”

Docker 1.12 also includes an experimental version of a new Distributed Application Bundle (DAB) tool, which enables deployment of updates to multiple containers simultaneously.

And there is some evidence that while many in the Docker ecosystem insist there’s room for everybody, some companies are taking the threat seriously, beginning to distance themselves from Docker technologies — or at least attempt to insulate their customers from it. For example, Platform9 announced its Managed Kubernetes service, saying that it could later add support for other technologies such as Mesos.

Red Hat is trying to go even further, insulating its customers from the issue altogether. The company announced the launch of Ansible Container, which enables users to use Ansible playbooks to coordinate containers directly. Puppet and Chef are doing the same. (Ansible creates Kubernetes templates in the background, so the user doesn’t interact with them.)

At the end of the day, however, it’s unclear how much of this announcement is substance, and how much is noise. SDXCentral goes on, “For example, officials at Apcera, the container management company majority owned by Ericsson, pointed out that the new Docker orchestration features only work for single application microservices and are primarily targeted at software-as-a-service (SaaS) providers.”


by Nick Chase at June 28, 2016 07:23 AM

Opensource.com

We are what we contribute to OpenStack

Tesora's Amrith Kumar explains why collaboration across a broad open source community is key to OpenStack's success.

by amrith at June 28, 2016 07:02 AM

June 27, 2016

OpenStack Superuser

How to craft a successful OpenStack Summit proposal

OpenStack summits are conferences for developers, users and administrators of OpenStack Cloud Software.

For the upcoming Summit there are 28 Summit tracks, from community building and security to hands-on labs.

The deadline for proposals is July 13, 2016 at 11:59PM PDT (July 14 6:59 UTC.) Find your time zone here. If you've applied to speak at the Summit before, take note that there are some new rules for 2016.

The community has plenty to say: the OpenStack Foundation receives more than 1500 submissions for the Main Conference. To improve your chances, the Women of OpenStack held a webinar with tips to make your pitch successful. These tips will give anyone who wants to get on stage at this or future Summits a boost. A video of the 50-minute session is available here.

Proposals go from an idea on the back of a cocktail napkin to center stage in a few steps. After you've submitted the proposal, the OpenStack community reviews and votes on all of them. For each track, a lead examines votes and orchestrates them into the final sessions. Track leads see where the votes come from, so if a company stuffs the virtual ballot box to bolster a pitch, they can correct that imbalance. They also keep an eye out for duplicate ideas, often combining them into panel discussions.

[Photo: Standing tall in the room session, featuring (from left to right) Beth Cohen, Nalee Jang, Shilla Saebi, Elizabeth K. Joseph, Radha Ratnaparkhi and Rainya Mosher]

Find your audience

Rapid growth of the OpenStack community means that many summit attendees are relative newcomers. At previous Summits, around 50-60 percent were first-time attendees.

[Chart: Attendee data from the OpenStack Summit Vancouver]

For each of those Summits, developers made up about 30 percent of attendees; product managers, strategists and architects made up roughly another quarter. Users, operators and sys admins were about 15 percent; CEOs, business developers and marketers about 10 percent each with an “other” category coming in under 10 percent.

“Don’t make knowledge assumptions,” says Anne Gentle, who works at Rackspace on OpenStack documentation and has powered through 12 Summits to date. But you don’t have to propose a talk for beginners, she adds, “be ready to tackle something deeply technical, don’t limit yourself.”

Consider the larger community, too. Your talk doesn’t necessarily have to be about code, says Niki Acosta of Cisco, adding that recent summit panels have explored gender diversity, community building and startup culture.

Set yourself up for success

There are some basic guidelines for getting your pitch noticed: use an informative title (catchy, but not cute — more below), set out a problem and a learning objective in the description, match the format of your talk to a type of session (hands-on, case study), make sure the outline mirrors what you can actually cover in the time allotted and, lastly, show your knowledge about the topic.

Be relevant

Remember that you’re pitching for an OpenStack Summit, not a sales meeting or embarking on a public relations tour. Be honest about who you work for and push your pitch beyond corporate puffery.

Diane Mueller, who works at Red Hat on OpenShift, spells it out this way. “I have corporate masters and we have agendas about getting visibility for our projects and the work we’re doing. But the Summit is all about OpenStack.” Instead of saying “give me an hour to talk about platform-as-a-service,” highlight an aspect of your business that directly relates to OpenStack. “It may be about how you deploy Heat or Docker," she adds, but it’s not a vendor pitch.

While you want to keep the volume on corporate-speak low, all three speakers agreed that the bio is the place to get loud. Make sure to highlight your knowledge of OpenStack and any contributions you’ve made to the community. “Contributors get respect and priority,” Mueller says. “So whatever you’ve done — organizing, documentation, Q/A, volunteering at events — make sure you mention it.”

Be clear and complete

State your intent clearly in the abstract, title and description. The abstract should highlight what the “attendee will gain, rather than what you’re going to say,” Acosta says. “Focus on the voter and the attendee rather than making it all about you.” If English is your second language, proofread closely before submitting. If you’re struggling with the writing, make sure to add links for background, complete your bio and upload a photo.

Gentle notes that although the team regularly gets pitches from around the world and works with speakers whose native tongue isn’t English, making your proposal as clear as possible goes a long way to getting it accepted. For examples, check out the sample proposals at O’Reilly.

“I’ve read some really bad abstracts,” says Mueller. "The worst ones are just one line that says, ‘I’m going to talk about updates to Project X.’”

Nervous? Don’t fly solo

If you’ve got great ideas for a talk but hate the thought of standing up alone in front of an audience, there are a few workarounds. Try finding a co-presenter, bringing a customer or putting together a panel.

“Reach out to people who have the same role as you do at different companies,” says Acosta. “There’s nothing more exciting than a panel with competitors who have drastically different methodologies and strategies.”

Toot your own horn

Make your title captivating — but not too cute — and social-media ready. Voting for your proposal and attendance at your session often depend on the strength of the title.

“Tweet early, tweet often,” says Gentle. “I always get a little nervous around voting time, that’s natural. But trust in the process.”

Start stumping for your proposal as soon as you submit it. Your boss, the PR team and product manager should all be on board; letting your company know early may be key to getting travel approved. Network with your peers to get the word out, too. Finally, remember to vote for yourself. You don’t want to miss out by just one vote.

And, if you don’t get accepted this time, keep trying.

The rate of rejection is “quite high,” Acosta admits. “Don’t be discouraged. It doesn’t mean that your session topic wasn’t good. It just means that yours didn’t make it this time.”

Photos: lead CC-licensed, thanks M1ke-Skydive on Flickr; Standing tall in the room session at the Vancouver Summit courtesy of the OpenStack Foundation.

by Superuser at June 27, 2016 06:41 PM

Hugh Blemings

Lwood-20160626

Introduction

Welcome to Last week on OpenStack Dev (“Lwood”) for the week just past. For more background on Lwood, please refer here.

Basic Stats for week 20 June to 26 June 2016 for openstack-dev:

  • ~478 Messages (down about 15% relative to last week)
  • ~154 Unique threads (down about 12% relative to last week)

Notable Discussions – openstack-dev

Midcycle Summaries

Just the one so far – Ironic Midcycle Summary courtesy of Mathieu Mitchell.

Proposal for an Architecture Working Group picks up steam

Late the week before last, Clint Byrum penned a well-reasoned proposal for the creation of an Architecture Working Group – at the time of last week's Lwood there had been little discussion.

The thread picked up quite a bit this week just past, in what was at times a somewhat impassioned but collegial discussion – looks like a draft charter will appear for comment in Gerrit before long.

Release naming for P and Q open for nominations

Monty Taylor noted that it’s time to suggest names for the P and Q releases of OpenStack; nominations close at midnight UTC on Wednesday 28 June. Voting will commence thereafter, once the eligibility of names has been checked.

There’s already been a suggestion for “Panda” which may well meet the “really cool but not place name” test :)

Status of the OpenStack port to Python 3

Victor Stinner provided an update on the progress in porting to Python 3. Three projects are yet to be ported – Nova, Trove and Swift. The consensus from the ensuing thread seems to be that it’s too late to be done by Newton, but that it is an achievable and desirable goal for Ocata.

(Answered) What do OpenStack Developers work on upstream of OpenStack?

Last week Doug Hellmann posed the question of what OpenStack developers work on upstream of OpenStack.

He kindly took the time to collate the results in a blog post. Unsurprisingly it’s a long list and an interesting read to be sure!

Notable Discussions – other OpenStack lists

Feedback from app developers sought

Piet Kruithof points out that the OpenStack UX project, with support from the Foundation and TC, is looking to build a community of application and software developers interested in providing feedback at key points during the development process.

Sounds like a good initiative – please consider participating!

Upcoming OpenStack Events

Midcycle

Don’t forget the OpenStack Foundation’s Events Page for a list of general events that is frequently updated.

People and Projects

Core nominations & changes

  • [Fuel] Nominating Dmitry Burmistrov for core reviewers of fuel-mirror – Sergey Kulanov

Further Reading & Miscellanea

Don’t forget these excellent sources of OpenStack news – most recent ones linked in each case

Random things I read this week;

  • Was a bit of a head down tails up week this week past so not much to report alas

This edition of Lwood brought to you by Bruce Hornsby (Hot House, Levitate and The Way It Is) among a smattering of other tunes…

 

by hugh at June 27, 2016 01:34 PM

Opensource.com

New projects, security, and more OpenStack news

Are you interested in keeping track of what is happening in the open source cloud? Here's what's happening this week, June 27 - July 3.

by Jason Baker at June 27, 2016 06:58 AM

June 26, 2016

Major Hayden

My list of must-see sessions at Red Hat Summit 2016

The Red Hat Summit starts this week in San Francisco, and a few folks asked me about the sessions that they shouldn’t miss. The schedule can be overwhelming for first-timers and it can be difficult at times to discern the technical sessions from the ones that are more sales-oriented.

If you’re in San Francisco, and you want to learn a bit more about using Ansible to manage OpenStack environments, come to the session that I am co-presenting with Robyn Bergeron: When flexibility met simplicity: The friendship of OpenStack and Ansible.

Confirmed good sessions

Here’s a list of sessions that I’ve seen in the past and I still highly recommend:

Dan is also doing a lab for container security called “A practical introduction to container security” that should be really interesting if you enjoy his regular sessions.

Sessions I want to see this year

If you want to talk OpenStack, Ansible, or Rackspace at the Summit, send me something on Twitter. Have a great time!

The post My list of must-see sessions at Red Hat Summit 2016 appeared first on major.io.

by Major Hayden at June 26, 2016 10:52 AM

June 25, 2016

OpenStack in Production

Deploying the new OpenStack EC2 API project

OpenStack has supported a subset of the EC2 API since the start of the project; this was originally built into Nova directly. At CERN, we use this for a number of use cases where the experiments are running across both the on-premise and AWS clouds and would like a consistent API. A typical example of this is the HTCondor batch system, which can instantiate new workers according to demand in the queue on the target cloud.

With the Kilo release, this function was deprecated and has been removed in Mitaka. The functionality is now provided by the new ec2-api project which uses the public Nova APIs to provide an EC2 compatible interface.

Given that CERN has the goal to upgrade to the latest OpenStack release in the production cloud before the next release is available, a migration to the ec2-api project was required before the deployment of Mitaka, due to be deployed at CERN in 2H 2016.

The EC2 API project was easy to set up using the underlying information from Nova and a small database which is used to store some EC2 specific information such as tags.

As described in Subbu's blog, there are many parts needed before an OpenStack API becomes a service. By deploying on the CERN cloud, many aspects such as identity, capacity planning, log handling and onboarding are covered by the existing infrastructure.


From the CERN perspective, the key functions we need in addition to the code are
  • Packaging - we work with the RDO distribution and the OpenStack RPM-Packaging project to produce a package for installation on our CentOS 7 controllers.
  • Configuration - Puppet provides us the configuration management for the CERN cloud. We are currently merging the CERN Puppet EC2 API modules to the puppet-ec2api project. The initial patch is now in review.
  • Monitoring - each new project has a set of daemons that must be watched to make sure they are running smoothly. These have to be integrated into the site monitoring system.
  • Performance - we use the OpenStack Rally project to continuously run functionality and performance tests, simulating a user. The EC2 support has been added in this review.
The remaining steps are end-user testing and migration from the current service. Given that the ec2-api project can be run on a different port, the two services can be run in parallel for testing. Horizon would need to be modified to change the EC2 endpoint in the ec2rc.sh file (which is downloaded from Compute->Account & Security->API Access).
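
To exercise the new endpoint alongside the old one, a standard EC2 client can simply be pointed at it. A minimal sketch, assuming the ec2-api default port of 8788 and an EC2 credential pair already provisioned in Keystone (the controller hostname is illustrative):

aws --endpoint-url http://controller:8788/ ec2 describe-instances
# or, with euca2ools (option syntax varies between versions):
euca-describe-instances -U http://controller:8788/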

So far, the tests have been positive and further validation will be performed over the next few months to make sure that the migration has completed so there is no impact on the Mitaka upgrade.

    Acknowledgements

    • Wataru Takase (KEK) for his work on Rally
    • Marcos Fermin Lobo (CERN/Oviedo) for the packaging and configuration
    • Belmiro Moreira (CERN) for the necessary local CERN customisations in Nova
    • The folks from Cloudscaling/EMC for their implementation and support of the OpenStack EC2 API project

    by Tim Bell (noreply@blogger.com) at June 25, 2016 03:59 PM

    OpenStack Blog

    OpenStack Developer Mailing List Digest June 18-24

    Status of the OpenStack Port to Python 3

    • The only projects not ported to Python 3 yet:
      • Nova (76%)
      • Trove (42%)
      • Swift (0%)
    • Number of projects already ported:
      • 19 Oslo Libraries
      • 4 development tools
      • 22 OpenStack Clients
      • 6 OpenStack Libraries (os-brick, taskflow, etc)
      • 12 OpenStack services approved by the TC
      • 17 OpenStack services (not approved by the TC)
    • Raw total: 80 projects
    • Technical Committee member Doug Hellmann would like the community to set a goal for Ocata to have Python 3 functional tests running for all projects.
    • Dropping support for Python 2 would be nice, but is a big step and shouldn’t distract from the goals of getting the remaining things to support Python 3.
      • Keep in mind OpenStack on PyPy which is using Python 2.7.
    • Full thread

    Proposal: Architecture Working Group

    • OpenStack is a big system that we have debated what it actually is [1].
    • We want to be able to point to something and proudly tell people “this is what we designed and implemented.”
      • For individual projects this is possible. Neutron can talk about their agents and drivers. Nova can talk about conductors that handle communication with compute nodes.
      • When we talk about how they interact with each other, it’s a coincidental mash of de-facto standards and specs. They don’t help someone make decisions when refactoring or adding on to the system.
    • Oslo and cross-project initiatives have brought some peace and order to implementation, but not the design process.
      • New ideas start largely in the project where they are needed most, and often conflict with similar decisions and ideas in other projects.
      • When things do come to a head, they get done in a piecemeal fashion: half done here, a third over there, a quarter there, three-quarters over there.
      • Maybe nova-compute should be isolated from Nova with an API Nova, Cinder and Neutron can talk to.
      • Maybe we should make the scheduler cross-project aware and capable of scheduling more than just Nova.
      • Maybe experimental groups should look at how some of this functionality could perhaps be delegated to non-OpenStack projects.
    • Clint Byrum would like to propose the creation of an Architecture Working Group.
      • A place for architects to share their designs and gain support across projects to move forward and ratify architectural decisions.
      • The group would consist largely of senior engineers at the companies involved and, if done correctly, can help prioritize this work by advocating for fellow engineers to actually make it ‘real’.
    • How to get involved:
      • Bi-weekly IRC meeting at a time convenient for the most interested individuals.
      • #openstack-architecture channel
      • Collaborate on the openstack-specs repo.
      • Clint is working on a first draft to submit for review next week.
    • Full thread

    Release Countdown for Week R-15, Jun 20-24

    • Focus:
      • Teams should be working on new feature development and bug fixes.
    • General Notes:
      • Members of the release team will be traveling next week. This will result in delays in releases. Plan accordingly.
    • Release Actions:
      • Official independent projects should file information about historical releases using the openstack/releases repository so the team pages on releases.openstack.org are up to date.
      • Review stable/liberty and stable/mitaka branches for needed releases.
    • Important Dates:
      • Newton 2 milestone, July 14
      • Newton release schedule [2]

    • Full thread

    Placement API WSGI Code – Let’s Just Use Flask

    • Maybe it’s better to use one of the WSGI frameworks used by the other OpenStack projects, instead of going in a completely new direction.
      • It will be easier for other OpenStack contributors to become familiar with the new placement API endpoint code if it uses Flask.
      • Flask has a very strong community and does stuff well that the OpenStack community could stop worrying about.
    • The amount of WSGI glue above Routes/Paste is pretty minimal in comparison to using a full web framework.
      • Template and session handling are things we don’t need. We’re a REST service, not a web application.
    • Which frameworks are in use in Mitaka:
      • Falcon: 4 projects
      • Custom + routes: 12 projects
      • Pecan: 12 projects
      • Flask: 2 projects
      • web.py: 1 project
    • Full thread

    [1] – http://lists.openstack.org/pipermail/openstack-dev/2016-May/095452.html

    [2] – http://releases.openstack.org/newton/schedule.html

    by Mike Perez at June 25, 2016 03:54 AM

    June 24, 2016

    Tesora Corp

    Short Stack: Getting from couch to OpenStack, Gaining traction in Australia, and OpenStack infrastructure embraces containers

    Welcome to the Short Stack, our regular feature where we search for the most intriguing OpenStack news. These links may come from traditional publications or company blogs, but if it’s about OpenStack, we’ll find the best ones.

    Here are our latest links:

    Getting from couch to OpenStack | OpenStack Superuser

    Eric Wright offered a training plan for those who are not running OpenStack but looking to get started. Wright’s tutorial offers a starting point for those who wish to begin their journey with OpenStack. He acknowledged that there are a lot of options for OpenStack distributions, but very few trainings with explicit step-by-step instructions. Wright also walked through the OpenStack projects, dashboard environment, and important documentation resources.

    OpenStack infrastructure embraces containers, but challenges remain | TechTarget

    Trevor Jones discussed the obstacles on the path to enterprise adoption of OpenStack container technology. Jones asserted that containers are a central piece of the OpenStack roadmap. Container technology featured prominently at each of the last three OpenStack Summits, and some believe that it will fulfill OpenStack’s promise to avoid vendor lock-in as well as share public and private cloud resources.

    OpenStack gaining traction in Australia | Delimiter

    A recent survey taken at OpenStack Day Sydney last month indicated that OpenStack is very likely to experience major growth in the Australian market in the next 18 months. The survey polled attendees about both their OpenStack experience and adoption; around 65% of respondents had already adopted OpenStack. Additionally, 70% of attendees who had not yet adopted OpenStack are investigating the use of the platform within their respective companies.

    Apache Libcloud: The open-source cloud library to link all clouds together | ZDNet

    The Apache Software Foundation recently released Apache Libcloud version 1.0, a cloud service interoperability library. Libcloud has been adopted by more than 50 cloud providers, including Amazon Web Services, Apache CloudStack, Rackspace, Google Cloud Platform, Microsoft Azure, OpenStack, and VMware. In these instances, it is used as a library for multi-cloud integration and direct application programming interface (API) integration.

    OpenStack and Storage — Flowing Downstream With Openness | InformationWeek

    Daniel Giffix evaluated how companies can build their future using OpenStack and software-defined storage. He stated that one of the biggest necessities for OpenStack users is increasingly responsive and flexible storage that is designed for the cloud. Both the open source and OpenStack communities expose users to innovation and help them avoid lock-in.

    The post Short Stack: Getting from couch to OpenStack, Gaining traction in Australia, and OpenStack infrastructure embraces containers appeared first on Tesora.

    by Alex Campanelli at June 24, 2016 06:40 PM

    June 23, 2016

    SUSE Conversations

    Blue Skies and Bright Talks – openSUSE Conference Day Two

    This is a guest blog written by Tanja Roth, Technical Writer at the SUSE Documentation Team (supported by photos from chabowski). Thursday, June 23rd, 2016. The second day of the openSUSE conference not only offered bright blue sky and high temperatures outside, but also very comfortable temperatures and a lot of interesting talks inside. Being …

    +read more

    The post Blue Skies and Bright Talks – openSUSE Conference Day Two appeared first on SUSE Blog.

    by chabowski at June 23, 2016 09:02 PM

    Enriquez Laura Sofia

    Get RBD Ceph running

    After cloning the DevStack repository and reading the developer’s guide, I realized that nothing should be installed manually, at least not by beginner developers like myself. I know what you’re thinking: “the guide tells me to clone my project repository manually” (here), but that isn’t the best way to do it. Use stack.sh.

    How does stack.sh work?

    • Installs Ceph (client and server) packages
    • Creates a Ceph cluster for use with openstack services
    • Configures Ceph as the storage backend for Cinder, Cinder Backup, Nova, Manila (not by default), and Glance services
    • (Optionally) Sets up & configures Rados gateway (aka rgw or radosgw) as a Swift endpoint with Keystone integration
    • Supports Ceph cluster running local or remote to openstack services

    [Image: dev dynamic]

    Devstack plugin to configure Ceph as the storage backend for openstack services

    First, just enable the plugin in your local.conf:

    enable_plugin devstack-plugin-ceph git://git.openstack.org/openstack/devstack-plugin-ceph

    Example of my local.conf:

    
    [[local|localrc]]
    ########
    # MISC #
    ########
    ADMIN_PASSWORD=pass
    DATABASE_PASSWORD=$ADMIN_PASSWORD
    RABBIT_PASSWORD=$ADMIN_PASSWORD
    SERVICE_PASSWORD=$ADMIN_PASSWORD
    #SERVICE_TOKEN = <this is generated after running stack.sh>
    
    # Reclone each time
    #RECLONE=yes
    
    # Enable Logging
    LOGFILE=/opt/stack/logs/stack.sh.log
    VERBOSE=True
    LOG_COLOR=True
    SCREEN_LOGDIR=/opt/stack/logs
    #################
    # PRE-REQUISITE #
    #################
    ENABLED_SERVICES=rabbit,mysql,key
    #########
    ## CEPH #
    #########
    enable_plugin devstack-plugin-ceph https://github.com/openstack/devstack-plugin-ceph
    
    # DevStack will create a loop-back disk formatted as XFS to store the
    # Ceph data.
    CEPH_LOOPBACK_DISK_SIZE=10G
    
    # Ceph cluster fsid
    CEPH_FSID=$(uuidgen)
    
    # Glance pool, pgs and user
    GLANCE_CEPH_USER=glance
    GLANCE_CEPH_POOL=images
    GLANCE_CEPH_POOL_PG=8
    GLANCE_CEPH_POOL_PGP=8
    
    # Nova pool and pgs
    NOVA_CEPH_POOL=vms
    NOVA_CEPH_POOL_PG=8
    NOVA_CEPH_POOL_PGP=8
    
    # Cinder pool, pgs and user
    CINDER_CEPH_POOL=volumes
    CINDER_CEPH_POOL_PG=8
    CINDER_CEPH_POOL_PGP=8
    CINDER_CEPH_USER=cinder
    CINDER_CEPH_UUID=$(uuidgen)
    
    # Cinder backup pool, pgs and user
    CINDER_BAK_CEPH_POOL=backup
    CINDER_BAK_CEPH_POOL_PG=8
    CINDER_BAK_CEPH_POOL_PGP=8
    CINDER_BAK_CEPH_USER=cinder-bak
    
    # How many replicas are to be configured for your Ceph cluster
    CEPH_REPLICAS=${CEPH_REPLICAS:-1}
    
    # Connect DevStack to an existing Ceph cluster
    REMOTE_CEPH=False
    REMOTE_CEPH_ADMIN_KEY_PATH=/etc/ceph/ceph.client.admin.keyring
    
    ###########################
    ## GLANCE - IMAGE SERVICE #
    ###########################
    ENABLED_SERVICES+=,g-api,g-reg
    
    ##################################
    ## CINDER - BLOCK DEVICE SERVICE #
    ##################################
    ENABLED_SERVICES+=,cinder,c-api,c-vol,c-sch,c-bak
    CINDER_DRIVER=ceph
    CINDER_ENABLED_BACKENDS=ceph
    
    ###########################
    ## NOVA - COMPUTE SERVICE #
    ###########################
    #ENABLED_SERVICES+=,n-api,n-crt,n-cpu,n-cond,n-sch,n-net
    #DEFAULT_INSTANCE_TYPE=m1.micro
    
    #Enable Tempest
    ENABLED_SERVICES+=,tempest
    

     

    And finally, run in your terminal:

    ~/devstack$ ./stack.sh

    …and boom! Wait for the magic to happen!
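
    Once stack.sh finishes, it is worth a quick sanity check that the Ceph backend is really wired in. A minimal sketch, assuming the default pool names from the local.conf above:

    sudo ceph -s                  # overall cluster health
    sudo ceph osd lspools         # expect to see images, vms, volumes and backup
    source openrc admin admin
    openstack volume create --size 1 test-rbd
    sudo rbd -p volumes ls        # the new volume should appear as an RBD image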

    Pictures from OpenStack

    Please check out:


    by enriquetaso at June 23, 2016 05:24 PM

    OpenStack Superuser

    What's new under the OpenStack big tent: Vitrage

    OpenStack's big tent keeps expanding, Vitrage is one of the newest projects to join.

    We'll give you a quick overview and find out how you can get involved from the project team lead (PTL).

    What Vitrage is the OpenStack RCA (Root Cause Analysis) service for organizing, analyzing and expanding OpenStack alarms and events, yielding insights regarding the root cause of problems and deducing their existence before they are directly detected.

    Who Ifat Afek, PTL, a system architect at Nokia.

    What new features/enhancements/fixes are expected for the Newton release?

    Vitrage's first release was Mitaka-compatible and it included Vitrage basic functionalities: root cause analysis, deduced alarms and deduced states. As Vitrage was recently accepted into the big tent, Newton will be its first official release.

    Many enhancements are expected in Newton:

    • Zabbix datasource will provide extended monitoring support, in addition to Nagios.

    • New APIs for template validation and create, read, update and delete (CRUD) operations. Vitrage templates provide a way to configure the desired behavior in response to changes in the cloud, in an easy and human-readable way. A template contains scenarios with conditions and actions. For example, in case of a host NIC failure alarm (condition), perform two actions: modify the state of the host to ERROR, and raise an 'instance_unreachable' alarm on every instance running on this host. The new template API introduced in Newton will make managing these templates easier.

    • A Puppet installation of Vitrage will be available.

    • Vitrage evaluator algorithms will be improved, for example to support overlapping template use cases.

    • Vitrage UI (horizon plugin) will be enhanced.

    • We hope to also add a Heat datasource, which will make Vitrage aware of the applicative layer and its relation to the underlying virtual and physical infrastructure. This will allow propagating physical-layer alarms to the applicative layer. For example, as a result of a failure in a host NIC (detected by Nagios), Vitrage will raise an alarm on the instances running on that host, and in turn on the VNFs running on those instances. Such insights are currently missing in OpenStack.

    What contributions do you need most from the community?

    As a new project, we are looking for new contributors.

    Contributors are welcome to suggest new use cases for Vitrage, help us write blueprints to implement these use cases, or implement some of the blueprints we already have.

    Specific areas where we could use some help:

    • We would like to add more datasources to Vitrage, like a Sensu monitoring tool. The more datasources Vitrage has, the better insights it will be able to provide.

    • UI developers are welcome to help us enhance Vitrage UI.

    Vitrage has some exciting challenges ahead, come and join us!

    More information can be found on the Vitrage wiki page: https://wiki.openstack.org/wiki/Vitrage
    And you can check out the latest Vitrage demo: https://www.youtube.com/watch?v=tl5AD5IdzMo

    Get Involved!

    Use Ask OpenStack for general questions. For roadmap or development issues, subscribe to the OpenStack development mailing list and use the relevant tag [vitrage]

    The team holds meetings Wednesdays at 08:00 UTC in #openstack-meeting-4 (IRC)

    Cover Photo // CC BY NC

    by Nicole Martinelli at June 23, 2016 05:19 PM

    Doug Hellmann

    OpenStack contributions to other open source projects

    As part of preparing for the talk I will be giving with Thierry Carrez at EuroPython 2016 next month, I wanted to put together a list of some of the projects members of the OpenStack community contribute to outside of things we think of as being part of OpenStack itself. I started by brainstorming myself, … Continue reading OpenStack contributions to other open source projects

    by doug at June 23, 2016 12:30 PM

    June 22, 2016

    OpenStack Superuser

    Getting from couch to OpenStack

    If you are running no OpenStack at all and you want to get to the next point where you can test it out to see if the knees work, Eric Wright has a training plan for you. He sees it much like the popular programs that get you off the couch to running a 5K in no time.

    With this plan, “there's no commitment to suddenly become a massive, resilient OpenStack operator. It's simply about being able to give you a starting point where you can begin your learning journey on OpenStack.” In about 40 minutes, with zero experience, you can launch an entirely functional multi-node OpenStack lab, he says. Wright outlined how to go from “zero to hero” in a recent talk at the OpenStack Summit Austin.

    “You are not a zero now and you actually won't be a hero but at the very minimum it's about giving yourself the opportunity to test things out without having to shave yaks in order to do it,” he adds.

    OpenStack learning challenges

    That first jog can often be a humbling one, he says. People interested in learning OpenStack start out by going online: they are interested in learning about networking, say, so they Google it and find a lot of material that quickly and easily walks them through networking step by step.


    “It's very simple, two steps. Everybody will tell you, first, you draw a couple of circles then you finish drawing the owl,” Wright says. “But it's important that you find the right resources to get you from the circles to maybe a bit more detailed circles and add a few steps because if you don’t, you will begin your OpenStack journey and you will be solving mathematically.”

    That’s how he describes his first run at installing OpenStack. He trudged through an install line by line, code by code on Ubuntu using the OpenStack documentation about four years ago. “I don't think I have ever recovered from that…If feels like the finish line but it's really the start line,” he says. “You've gone through a lot of pain to get to the start line. Luckily we've got a lot of ways that we can do this more simply because there are distributions out there.”

    OpenStack distributions


    Enter the distribution, a way to fast-track your OpenStack knowledge, Wright says. “The package managers are different. You want to pick the one that you’re comfortable with, that's what makes more sense…The OpenStack core is identical. It's just package management that changes it.”

    OpenStack Project topologies


    Wright keeps the diagram of the OpenStack projects on his desktop background “as a gentle reminder of how complex it is,” he says. “But it's okay because you don't have to do all this stuff. OpenStack does it for you.” All of the OpenStack projects communicate with each other via the API; it is a core requirement for delivering an OpenStack project that it communicates with other projects via the API. Every single OpenStack project very simply plugs into the others as a loosely coupled environment. “It's important when we talk about being loosely coupled because by being loosely coupled it ensures that there's no SDK you've latched on to. Even an SDK is close but we can make changes. They don't necessarily deprecate them cleanly,” he says. “You always have an API that speaks to the other projects together. This makes sure of forward compatibility, backwards compatibility or future proofing some of our stuff.”

    There are six core projects and these are important because they are most likely the projects you are going to interact with in the beginning, he says.


    You can further stretch your learning with the OpenStack project navigator. “There's a ton of great documentation there,” he says. The “number one” thing you’ll want to figure out is Horizon and the dashboard environment.

    While Horizon is not considered a core project, because it's not actually required to do anything, he says, OpenStack is built to be consumed by other computers; that's really what it was meant for. “Horizon is the human side of the dashboard. You can log in, you can pull down some content and create environments; that's what that is,” he says. “Go through them and pick and choose which ones you want to dive into. Take a little time and get a sense of what it is because the beauty part is there's a rich amount of information and it's freely available.”

    Neutron


    Neutron (networking) “When you are kicking up the first environment, it's going to be a local environment. Maybe you have a flat topology. If you are running some kind of an overlay, you can go a little bit more advanced... It’s not just at the software tier but at the hardware tier: you are now able to plug in your network environment and you don't have to necessarily configure it. It is just fully available to you,” he said.

    “That's the beauty part. As an OpenStack administrator, we've done all the work for you in the community to make sure that you can just communicate with it and you can request networks, request IP addresses. It's all encapsulated within the OpenStack environment but advanced features can go into your existing topology.”

    Nova

    There is no OpenStack without compute. All the compute platform does is wrap around guest instances, booting them based on a block image or an object image; and again, Nova is not a hypervisor itself, so it requires one. “When we talk about vanilla OpenStack, it just means that it's usually Canonical, Ubuntu with a straight OpenStack main trunk code running on top of it,” he says. “They call it vanilla because that's typically where people start.” Nova is the management tier, and that gives you the ability to interact with a hypervisor and do operations like launching instances. “When you look at that environment, all of a sudden you are like, ‘Okay, I can start to see what the different project topologies are and where they make sense,’” Wright says.

    Marketplace and the App Catalog

    “This came out of the most recent release with Liberty and now with Mitaka, it's even more cool because what you have is one single webpage where you can go to and you have the opportunity to view all these different apps, different images, different distributions all in one spot,” he says.

    This marketplace is a community-contributed environment where you can go through and scan for what you need. “You can find other people who have already lived the pain you are about to live. They have already done a lot of the early steps to help you along that road. This is all fully and freely available online, and if you want to contribute you just fire it upstream through GitHub.”

    OpenStack Cookbook Lab

    Wright gave a shout out to the Rackspace crew who have brewed up three editions of the OpenStack Cookbook, along with a lab that you can spin up by deploying the code and installing VirtualBox and Vagrant, which will launch the environment. It will set up the private network and the management network and it will run the environment for you. You can check out a tutorial for Mac on Wright’s blog.

    “It takes about half an hour to actually download and deploy. The reason is because it is pulling down the core image itself, it's running down a VirtualBox image and then all of the code because it literally builds it live every time you do it,” he says, adding that you wouldn’t want to try this when tethering to your phone to avoid a massive bill.
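
    For the impatient, the whole lab boils down to a couple of commands once VirtualBox and Vagrant are installed. A rough sketch, assuming the Cookbook's public GitHub repository (check Wright's tutorial for the edition-specific branch and node names):

    git clone https://github.com/OpenStackCookbook/OpenStackCookbook.git
    cd OpenStackCookbook
    vagrant up     # builds and boots the controller, compute and network VMs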

    Online resources


    “We are all in this journey together and we all want to succeed together,” Wright says.

    Cover Photo // CC BY NC

    by Nicole Martinelli at June 22, 2016 04:30 PM

    Aptira

    First OpenStack Australia Day Indicates Strong Uptake of OpenStack in Australia

    Survey shows 65% of attendees have adopted OpenStack, with the remainder considering adoption

    SYDNEY, AUSTRALIA, June 16, 2016 – The first OpenStack conference in Australia has signalled expectations for the growth of OpenStack by the local market over the next 18 months. It also reflects significant opportunity for vendors and service providers specialising in OpenStack technology and services.  

    Held in Sydney last month, with a keynote from Jonathan Bryce, Executive Director of the OpenStack Foundation, who announced Sydney as the host city for the global OpenStack Summit in November 2017, OpenStack Australia Day was the region’s largest, and Australia’s first, conference focusing on the open source cloud technology.

    Hosted by Aptira, the sold out event attracted more than 300 users from a range of public and private sector organisations, as well as some of the largest OpenStack vendors and solution providers both locally and internationally, including Red Hat, SUSE and Rackspace among many others.

    A survey gathering information about attendees’ OpenStack experience and adoption showed that approximately 65% had adopted OpenStack, while 70% of those who had not yet adopted it have investigated, or are in the process of investigating, the use of the platform within their company, suggesting the potential for further growth of this platform within Australia.

    Conference attendees provided valuable insights into why their organisations have not yet adopted this technology and where technical and commercial education is needed to bridge the gap. They also identified many advantages to running OpenStack – from increased transparency and integrity, to streamlining their internal IT infrastructure and providing greater scalability and flexibility within their organisation.

    Assessing the growth of OpenStack within Australia, Aptira’s COO Roland Chan said “I was pleasantly surprised by the number of Australian enterprises that showed up to OpenStack Australia Day, and of those that attended and advocated uptake. This means we’re starting to see more innovative thinking from Australian enterprises in this area. There’s a lot of value to be unlocked in OpenStack so it’s a really positive indication for the future.”

    “There’s obviously a lot of interest in the local market but I think we are going to see a real ramp up in the coming 12 to 18 months with regards to actual OpenStack adoption,” said Keiran McCartney, Alliances & Solutions Manager at NetApp and SolidFire.

    “We didn’t even think, a few years ago, that OpenStack could be a fundamental building block for the strategic choice of next generation carrier networks via Network Functions Virtualisation, but it just happened,” says Peter Jung, Business Development Manager at Red Hat.

    Events such as OpenStack Australia Day, and the upcoming OpenStack Government Day help develop deeper understanding of the benefits of OpenStack among the community through user case studies, the exchange of feedback, technical sessions, live demonstrations and workshops.

    Following the success of the first OpenStack Australia Day, several more Australian OpenStack events are planned between now and the OpenStack Summit in Sydney in November next year. These include the aforementioned OpenStack Government Day to be held in Canberra in November this year, the second OpenStack Australia Day in Melbourne in May 2017, and a collaboration with GovHack in mid 2017.

    Hosted by Aptira, and supported by the OpenStack Foundation, these events will be held in addition to the long running Australian OpenStack User Group which meets quarterly in Sydney, Melbourne, Brisbane and Canberra.

    “We can’t do this without awesome volunteers who help us put these [OpenStack events] on all over the world,” said Jonathan Bryce, Executive Director of the OpenStack Foundation. “So thank you to everybody that made OpenStack Day Australia possible.”

    About Aptira

    Aptira is the leading provider of OpenStack in Asia-Pacific, providing cloud solutions and technology consultancy to meet the most demanding technology specifications for a wide range of organisations in telecommunications, media, finance, retail, utilities and government. With offices in Australia, India, Taiwan and Hungary, Aptira is a growing global business as its reputation for high quality services expands.

    As the founder and prime motivator of the OpenStack community in Australia and India, the company is committed to the idea that what it is doing for its customers today will be mainstream tomorrow.

    For more information, please visit aptira.com or follow Aptira on Twitter: @Aptira

    About OpenStack

    OpenStack® is the most widely deployed open source software for building clouds. Enterprises use OpenStack to support rapid deployment of new products, reduce costs and improve internal systems. Service providers use OpenStack to give customers reliable, easily accessible cloud infrastructure resources, supporting technologies including platforms and containers. OpenStack powers clouds for many of the world’s largest brands, including AT&T, Bloomberg, Cisco Webex, Disney, Fidelity and Walmart. Nearly 500 companies and 23,000 individuals across more than 150 countries are supporters of the project.

    For more information on OpenStack, please visit openstack.org or follow OpenStack on Twitter: @OpenStack

    To stay up to date with the latest OpenStack news and events in Australia, please visit australiaday.openstack.org.au or follow OpenStackAU on Twitter: @OpenStackAU


    The post First OpenStack Australia Day Indicates Strong Uptake of OpenStack in Australia appeared first on Aptira OpenStack Services in Australia Asia Europe.

    by Jessica Field at June 22, 2016 11:47 AM

    DreamHost

    DreamCompute Goes “M” for Mitaka Without Unleashing the Dragons

    Today, the DreamHost Cloud team upgraded DreamCompute to the 13th release of OpenStack, codenamed “Mitaka”. This new release of DreamCompute brings lots of changes under the hood, and the latest OpenStack developments.

    The most visible feature is the ability to save snapshots of ephemeral instances. Previous OpenStack releases had a limitation that prevented instances booted from ephemeral disks from being properly snapshotted on Ceph backends. Since DreamCompute uses Ceph as its underlying storage system, that bug affected us quite badly. With the upgrade to Mitaka, this issue is now fixed and all instances can be snapshotted.

     


    Image courtesy Henry Burrows /Flickr

    Customers will notice that the new DreamCompute control panel looks a lot like the old DreamCompute—good eyes! The team decided not to upgrade the control panel’s version of Horizon at this stage. OpenStack’s modularity allows us to run separate versions of many of its components, and we decided to skip a few versions and save our energy for really improving the user experience of DreamCompute.

    Speaking of energy, some may wonder how a cloud gets upgraded in the first place. There is some magic to it; there are physical hosts running virtual machines while the services below are pulled and upgraded without the virtual machines being affected. All of this is done programmatically using Chef recipes. As DreamHost Cloud Systems Engineer David Wahlstrom put it, we make “a small(ish) change to the environment file for production, and Chef does the rest”. Gotta love that!
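
    There is no single “upgrade OpenStack” command, but the pattern Wahlstrom describes boils down to changing one pin and letting configuration management converge. A purely illustrative sketch – not DreamHost's actual repository layout or attribute names:

    # Bump the release pin in the production Chef environment,
    # e.g. "openstack_release": "kilo" -> "mitaka"
    knife environment edit production
    # Then let every node in that environment converge to the new packages
    knife ssh 'chef_environment:production' 'sudo chef-client'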

    In order to get to “push a button, let the machine do the rest,” the awesome DreamHost Cloud team spent months developing upgrade scripts and new Chef recipes, and creating packages. After weeks of development, and even more weeks of testing on staging clusters so that the upgrade scripts would not unleash the dragons, well… if you’re reading this, the dragons are still under control.

    Previously, DreamCompute was running on OpenStack Kilo; we skipped Liberty and went straight to Mitaka. From now on, all upgrades will be faster in order to give DreamCompute customers the newest and greatest features developed with the OpenStack community. The rest is the DreamCompute our customers have come to know and love: same fantastic speed, unbeatable and predictably low prices, and a great support community. If you don’t have an account, go sign up for DreamCompute now! Give it a try! If you don’t like it, write back to me and I’ll issue a refund. Promise.

    by Stefano Maffulli at June 22, 2016 05:57 AM

    June 21, 2016

    OpenStack Superuser

    What's new under the OpenStack big tent: Openstack-salt and Watcher

    OpenStack's big tent keeps expanding: two of the newest projects are OpenStack-salt and Watcher.

    We'll give you a quick overview of them and find out how you can get involved from the project team leads (PTLs).

    OpenStack-salt

    What The OpenStack-salt project brings together the best open source technologies to drive large-scale OpenStack deployments with SaltStack as an integration service.

    Who Ales Komarek, PTL, a software architect for tcp cloud.

    What new features/enhancements/fixes are expected for the Newton release?

    There are a number of new features that are focused on improving the workflow automation to provide the full life cycle management capabilities both for host services and container micro-services.

    • The ability to run the OpenStack control plane in containers as well as in virtual machines, from the same configuration origin. This gives you great versatility to use containers for just some, or for all, services within the system.

    • A complete testing environment that lets you test and share any Gerrit review or predefined environment. This lets you share your development and testing setups easily; it can actually serve as proof-of-concept (POC) as a service.

    • Many new service backends are implemented (DVR and Midokura for Neutron, for example), as well as the Murano and Swift services, which will end up in the official repositories soon.

    • Many services already have their support formulas defined. These include monitoring, metering, logging, firewalls, backups and documentation.

    What contributions do you need most from the community?

    We would love the community to try, test and then use our solution in production.

    There are several areas where we really need your support:

    • Our new testing environment needs more testing to cover all the edge cases it has been created for.

    • The backends are easily expandable and any new Neutron, Cinder or Nova plugins are welcome along any OpenStack services. Please feel free to contribute any plugin or OpenStack service.

    Thank you for the opportunity to share our work and vision with this wonderful community. I'm looking forward to seeing how it helps solve our day-to-day pains, either by inspiration or, better, by using the OpenStack-salt project. Get in touch at our weekly IRC meetings (see info below).

    Get involved!

    Use Ask OpenStack for general questions.
    For roadmap or development issues, subscribe to the OpenStack development mailing list, and use the relevant tag [openstacksalt]

    The team holds meetings Tuesdays at UTC 1300 in #openstack-salt (IRC)

    Watcher

    What Watcher’s goal is to provide a flexible and scalable resource optimization service for multi-tenant OpenStack-based clouds.

    Who Antoine Cabot, Watcher PTL. He's also head of the cloud computing lab at research institute IRT B-COM.

    What new features/enhancements/fixes are expected for the Newton release?

    Watcher joined the OpenStack Big Tent during the Newton cycle, so our top priority is to rely on existing OpenStack components to help accomplish the Watcher mission.

    We are working with the Nova, Telemetry, Monasca and Congress teams to collect metrics and handle constraints between virtual machines. Watcher is a framework that allows any administrator to build their own optimization strategy, so our main focus by the end of the cycle is to make that easier to do. We will also work on scalability tests, because Watcher's optimization gains will be significant for infrastructures above 30 nodes, compared to a "manual" optimization.

    In the first official release, Watcher will provide the following features:

    • Run an audit of your infrastructure to identify optimization opportunities.

    • The audit will return an action plan with associated efficacy indicators (the number of migrations needed, for example).

    • Run the action plan and give feedback on the completion state.

    What contributions do you need most from the community?

    We are looking for operators who need to optimize their OpenStack clusters for business use cases that can be tackled in Watcher. We are also looking for new contributors to build Watcher optimization strategies to demonstrate the real value of the framework in terms of total cost of ownership (TCO) reduction.

    Get involved!

    Use Ask OpenStack for general questions.
    For roadmap or development issues, subscribe to the OpenStack development mailing list, and use the relevant tag [watcher]

    The team holds meetings Wednesdays at 14:00 UTC on even weeks, 9:00 UTC on odd weeks, in #openstack-meeting-4 (IRC)

    Cover Photo // CC BY NC

    by Nicole Martinelli at June 21, 2016 10:28 PM

    OpenStack security, piece by piece

    As OpenStack private clouds become more and more popular among enterprises, so does the risk of attack. In part 1 of this series on tightening the security of your OpenStack clouds, we defined common threats to an OpenStack cloud and discussed general recommendations for threat-mitigation tools and techniques.

    In addition, you need to treat each OpenStack component separately and incorporate security best practices according to each component's role and importance within the complex OpenStack environment. In this post we will discuss the vulnerabilities of the various components and list the actions that need to be taken.

    Looking at the latest OpenStack security vulnerabilities, you will find many related to each of the OpenStack components, such as Nova, Glance and Neutron. For example, a few Swift versions before Kilo and Liberty contain a hole that allows remote attackers to launch a denial-of-service (DoS) attack. Another example is a Nova version before the Kilo release that allows attackers to obtain sensitive information by reading log files.

    OpenStack CLI Security

    Use of the OpenStack CLI tools requires a username and a password. Many OpenStack guides recommend using the OS_USERNAME and OS_PASSWORD environment variables that are defined in the OpenStack RC file. However, it goes against best security practices to store the credentials in a plain, unencrypted file.

    For a more secure option, the OpenStack CLI tools can request a username and password for each request, or they can use a provisioned authentication token. Alternatively, a dedicated node or VM, located in a separate internal DMZ, can be set aside for running the OpenStack CLI tools and used for that purpose only. On this node, disable unnecessary services; allow access by SSH only, and only from a trusted network; disable the Bash history; and store all of the logs in an isolated, remote, secure, highly available storage repository.
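
    As a concrete illustration, the unified client prompts for the password whenever it is not set in the environment. A minimal sketch (the user, project and endpoint are illustrative; Identity v3 setups may also need the domain variables):

    export OS_AUTH_URL=https://keystone.example.com:5000/v3
    export OS_PROJECT_NAME=demo
    export OS_USERNAME=alice
    unset OS_PASSWORD
    openstack server list     # the client now asks for the password interactively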

    Nova security

    Nova is probably the most complex OpenStack service: it communicates with many other services and has a large number of configuration options. That makes Nova the number one service for security hardening.

    • The owner of the Nova configuration files should be root and their group should be nova, with permissions set to 640 (read/write for owner, read for group); see the command sketch after this list.
    • Disable PCI passthrough for your hypervisor to restrict direct access from a VM to hostís hardware, such as DMA.
    • Use sVirt or SELinux/AppArmor to put your hypervisor in a separate security context.
    • Some hypervisors have memory optimization techniques that de-duplicate memory pages used by VMs on the host. For example, VMware ESXi has Transparent Page Sharing (TPS) and KVM has Kernel Samepage Merging (KSM). To mitigate cross-tenant threats, use strict separation of tenants per compute node and disable hypervisor memory optimization techniques on those nodes.
    • Store hypervisor logs in a secure remote storage.
    • Monitor the integrity of your hypervisor executable files. For example, you can use debsums for Debian based operating systems, rpm -V for Red Hat.
    • Use Address Space Layout Randomization (ASLR) and Position Independent Executable (PIE) for hypervisor executables; qemu-kvm supports this.
    • Use TLS for VNC or SPICE sessions.
    • Ensure that Nova securely communicates (via TLS) with the other OpenStack services, such as Keystone or Glance. This is also a general recommendation for all of the OpenStack services.
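
    A rough sketch of the permissions and integrity-check items above (paths and package names assume a standard packaged install):

        # Enforce the recommended ownership and permissions on Nova config files
        chown root:nova /etc/nova/nova.conf /etc/nova/api-paste.ini
        chmod 640 /etc/nova/nova.conf /etc/nova/api-paste.ini

        # Spot-check hypervisor package integrity
        debsums qemu-kvm    # Debian-based systems
        rpm -V qemu-kvm     # Red Hat-based systems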

    Glance security

    Glance stores the images used to launch new VMs. To prevent attacks on image integrity, it is important to keep this service secure.

    • Do not use pre-built images or Docker containers from untrusted sources, because they may contain security holes or malicious software; at minimum, verify the publisher's checksums (see the sketch after this list).
    • Use Glance image signing (available in Mitaka).
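
    As a minimal sketch of basic image hygiene (the file names are placeholders), verify the publisher's checksum before uploading a downloaded image:

        # Check the image against the checksum file published alongside it
        sha256sum -c xenial-server-cloudimg-amd64-disk1.img.sha256

        # Only then register it in Glance
        openstack image create --disk-format qcow2 --container-format bare \
          --file xenial-server-cloudimg-amd64-disk1.img ubuntu-16.04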

    Neutron security

    Neutron provides network connectivity and IP addresses for VMs in the cloud. Neutron's architecture is based on plugins, so it is important to understand which plugins are required and which are used by third-party solutions; then disable any unnecessary plugins.

    • Use an isolated management network for OpenStack services
    • Use L2 isolation with VLAN segmentation or GRE tunnels
    • Enable security groups in Neutron and disable security groups in Nova, so that all Nova security group API calls are forwarded to Neutron (see the sketch after this list)
    • Secure Neutron API endpoints with TLS
    • Use iptables along with ebtables rules to prevent MAC spoofing and ARP spoofing attacks.
    • Use network quotas to mitigate DoS attacks
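
    A sketch of the security-group and quota items above (the nova.conf option names are the contemporary ones, and the quota values are purely illustrative):

        # nova.conf: delegate security groups to Neutron and disable Nova's own filtering
        #   [DEFAULT]
        #   security_group_api = neutron
        #   firewall_driver = nova.virt.firewall.NoopFirewallDriver

        # Cap per-tenant network resources to limit DoS exposure
        neutron quota-update --tenant-id $TENANT_ID --network 10 --port 50 --floatingip 5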

    Message Queue (RabbitMQ) security

    The message queue facilitates communication between OpenStack services, and RabbitMQ is the most popular choice for OpenStack clouds. OpenStack does not support message signing, so the message queue must provide secure transport for OpenStack services.

    • Delete the RabbitMQ guest user
    • Use a separate RabbitMQ virtual host for each OpenStack service, with unique credentials and appropriate permissions for each virtual host (see the sketch after this list)
    • Secure RabbitMQ API with TLS
    • Store RabbitMQ logs in a secure remote storage
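
    A sketch of the first two items (the service name and password are placeholders):

        # Remove the default guest account
        rabbitmqctl delete_user guest

        # One virtual host and one user per service; Nova shown as an example
        rabbitmqctl add_vhost /nova
        rabbitmqctl add_user nova STRONG_PASSWORD
        rabbitmqctl set_permissions -p /nova nova ".*" ".*" ".*"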

    Keystone security

    Keystone provides identity services for other OpenStack services and it should be properly protected from spoofing and other attacks.

    • Keystone does not provide methods to enforce policies on password strength, password expiration time or failed authentication attempts as recommended by NIST. However, Keystone can use an external authentication system that supports all these requirements.
    • Multi-factor authentication should be enabled through an external authentication system, such as Apache HTTP server.
    • By default, the token expiration time is 1 hour; set it to a lower value. However, the interval should be long enough for OpenStack services to complete their requests, otherwise operations will fail when a token expires mid-request. Note that certain operations are especially lengthy, for example when Nova transfers a disk image onto a host.
    • Use Fernet tokens, which are designed specifically for REST APIs; they are more secure than standard tokens and require fewer resources (see the sketch after this list).
    • Use Keystone domains for more granular access control for tenants. A domain owner can create additional users, groups, and roles to be used within the domain.
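
    A sketch of the token recommendations above (the 1800-second lifetime is an example value):

        # keystone.conf: switch to Fernet tokens and shorten the token lifetime
        #   [token]
        #   provider = fernet
        #   expiration = 1800

        # Initialize the Fernet key repository
        keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone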

    Cinder security

    Cinder provides a high-level API to manage block-level storage devices and is actively used by Nova. It should be protected from denial of service, information disclosure, tampering and other threats.

    • The owner of Cinder configuration files should be root and the group should be cinder, with permissions set to 640 (read/write for owner, read for group); see the sketch after this list.
    • Set the maximum request size. By default it is not defined, and an attacker can send a large request to crash Cinder (a DoS attack).
    • To securely delete Cinder volumes, use volume wiping.
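
    A sketch covering all three items (the option names are as found in contemporary Cinder releases; check your version's documentation):

        # Enforce the recommended ownership and permissions
        chown root:cinder /etc/cinder/cinder.conf
        chmod 640 /etc/cinder/cinder.conf

        # cinder.conf: cap request size and wipe deleted volumes
        #   [DEFAULT]
        #   osapi_max_request_body_size = 114688
        #   volume_clear = zero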

    Swift security

    Swift provides durable object storage for OpenStack, primarily to store Glance images, so it should be protected as well.

    • The owner of Swift configuration files should be root and the group should be swift, with permissions set to 640 (read/write for owner, read for group).
    • Use a dedicated network for storage nodes.
    • Use a firewall to protect public interfaces on proxy nodes (see the sketch after this list).
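
    A sketch of the firewall item (8080 is Swift's default proxy port; the trusted network is a placeholder):

        # Allow proxy traffic only from a trusted network and drop everything else
        iptables -A INPUT -p tcp --dport 8080 -s 203.0.113.0/24 -j ACCEPT
        iptables -A INPUT -p tcp --dport 8080 -j DROP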

    Further reading

    OpenStack security-related projects:

    Barbican: OpenStack Barbican is a PKI and cryptography service for the OpenStack cloud. It has been available since the Havana release. Barbican supports trusted CAs for TLS certificates, transparent encryption and key distribution for Cinder LVM volumes, the KDS service to sign messages, and Swift object encryption.

    Anchor: Anchor is a lightweight PKI service for enabling cryptographic trust in OpenStack services. Anchor uses short-term certificates, which are typically valid for 12-24 hours.

    Quality of Service as a Service (QoSaaS): QoSaaS is currently in development; it will provide traffic shaping, rate-limiting per port/network/tenant and flow analysis.

    Firewall as a Service (FWaaS): FWaaS is currently in development; the goal of the project is to provide a unified API for traditional L2/L3 firewalls and next-generation firewalls in an OpenStack cloud.

    Load Balancer as a Service (LBaaS): The existing LBaaS implementation is based on HAProxy, but the goal of this project is to leverage proprietary and open source load-balancing technologies to do the actual load balancing of requests.

    In conclusion

    In addition to the suggestions above, it is important that you stay on top of OpenStack security advisories and strive to maintain an updated environment. Security hardening of your OpenStack environment must be addressed on many levels, from the physical (data center equipment and infrastructure) through the application level (user workloads) to the organizational level (formal agreements with cloud users to address cloud privacy, security and reliability). There are many issues and actions to consider when hardening OpenStack security, and we hope this post gave you the information and tools to secure your OpenStack cloud.

    This post first appeared on Stratoscale's blog.

    Superuser is always interested in how-tos and other contributions, please get in touch: editor@superuser.org

    Cover Photo // CC BY NC

    by Superuser at June 21, 2016 05:22 PM

    AppFormix

    Out With the Old: Your Monitoring Tools Don’t Cut It Anymore

    Until now, microservice application developers and infrastructure operators have been stuck with old monitoring tools designed for monolithic technologies, where it is common for schedulers to take several minutes to do their work.

    by Sumeet Singh (sumeet@appformix.com) at June 21, 2016 04:00 PM

    RDO

    Recent RDO blogs - June 21, 2016

    Here's what RDO enthusiasts have been blogging about in the last week.

    Skydive plugin for devstack by Babu Shanmugam

    Devstack is the most commonly used project for OpenStack development. Wouldn’t it be cool to have a supporting software which analyzes the network infrastructure and helps us troubleshoot and monitor the SDN solution that Devstack is deploying?

    … read more at http://tm3.org/7a

    ANNOUNCE: libvirt switch to time based rules for updating version numbers by Daniel P. Berrangé

    Until today, libvirt has used a 3 digit version number for monthly releases off the git master branch, and a 4 digit version number for maintenance releases off stable branches. Henceforth all releases will use 3 digits, and the next release will be 2.0.0, followed by 2.1.0, 2.2.0, etc, with stable releases incrementing the last digit (2.0.1, 2.0.2, etc) instead of appending yet another digit.

    … read more at http://tm3.org/7b

    Community Central at Red Hat Summit by Rich Bowen

    OpenStack swims in a larger ecosystem of community projects. At the upcoming Red Hat Summit in San Francisco, RDO will be sharing the Community Central section of the show floor with several of these projects.

    … read more at http://tm3.org/7c

    Custom Overcloud Deploys by Adam Young

    I’ve been using Tripleo Quickstart. I need custom deploys. Start with modifying the heat templates. I’m doing a mitaka deploy

    … read more at http://tm3.org/7d

    Learning about the Overcloud Deploy Process by Adam Young

    The process of deploying the overcloud goes through several technologies. Here’s what I’ve learned about tracing it.

    … read more at http://tm3.org/7e

    The difference between auth_uri and auth_url in auth_token by Adam Young

    Dramatis Personae:

    Adam Young, Jamie Lennox: Keystone core.

    Scene: #openstack-keystone chat room.

    ayoung: I still don’t understand the difference between url and uri

    … read more at http://tm3.org/7f

    Scaling Magnum and Kubernetes: 2 million requests per second by Ricardo Rocha

    Two months ago, we described in this blog post how we deployed OpenStack Magnum in the CERN cloud. It is available as a pre-production service and we're steadily moving towards full production mode.

    … read more at http://tm3.org/7g

    Keystone Auth Entry Points by Adam Young

    OpenStack libraries now use authentication plugins from the keystoneauth1 library. One of the plugins has disappeared: Kerberos. This used to be in the python-keystoneclient-kerberos package, but that is not shipped with Mitaka. What happened?

    … read more at http://tm3.org/7h

    OpenStack Days Budapest, OpenStack Days Prague by Eliska Malikova

    It was the 4th OSD in Budapest, but at a brand new place, which was absolutely brilliant. And by brilliant I mean - nice place to stay, great location, enough options around, very good sound, well working AC in rooms for talks and professional catering. I am not sure about number of attendees, but it was pretty big and crowded - so awesome!

    … read more at http://tm3.org/7i

    by Rich Bowen at June 21, 2016 03:07 PM

    OpenStack Superuser

    OpenStack Days Israel 2016: government adoption highlights local adoption, innovation

    A sold-out crowd of more than 500 people attended the seventh OpenStack Days Israel at the Tel Aviv Convention Center. With standing room only, Nati Shalom kicked off the event welcoming new and old faces and announced that next year the event will span two days, with plans to double the content and attendees.

    The keynotes echoed the findings of the recent OpenStack user survey, with AT&T and HPE discussing the evolution of OpenStack and network functions virtualization (NFV), content featuring complementary technologies including Israel-based Cloudify, and a keynote on VMware drivers easy enough to use that "no OpenStack PhD is required."


    Shalom celebrated OpenStack's growth in Israel by highlighting one of the largest OpenStack deployments, at LivePerson: 12,000 physical cores, 6,000 virtual servers and 20,000 virtual cores. The team at LivePerson was brought on stage and received an award for being a local community superuser. There were two additional sessions held by LivePerson, and Koby Holzer, the director of cloud engineering at LivePerson, shared more about LivePerson's OpenStack deployment in a Q&A on Superuser.

    Mellanox was also recognized with an award for being the leading OpenStack contributor in Israel, a strong indication of the OpenStack demand in Israel.

    AT&T, the most recent Superuser Awards winner, exemplifies the innovation in Israel with one of its own Innovation Lab and R&D Centers located there. Shalom discussed the scale of one of AT&T's OpenStack deployments and how the number of zones is rapidly increasing. AT&T currently has 75 different zones running on OpenStack, with the goal of exceeding 1,000 by 2020.

    A digital revolution: the Israeli Defense Intelligence (IDI)

    For a local perspective on open source and OpenStack adoption, Colonel Avi Simon and Dor Litay, the CIO and digital CTO, gave a keynote from the perspective of the IDI, which is responsible for Israel's national intelligence evaluation and is the largest IT employer in Israel.

    Like other organizations, Simon says, the IDI was affected by the transition to the digital age, including the introduction of computers decades after the IDI was formed. Then the IDI command realized that something else had changed: the world was changing very rapidly, and suddenly technologies being used at home were becoming more advanced. The IDI command knew it had to change the way it built applications, from monolithic to a more modern, open source, microservices architecture, using enterprise-wide services and cluster computing platforms.

    "We needed to come up with a clear strategy and where to take digital data and looked at the open source phenomenon," Simon said.

    This kicked off a multi-year digital strategy and architecture that emphasized networking digital systems, using the Internet and web companies as the reference frame.

    They needed total control of every aspect of the cloud, but without needing thousands of developer resources - this is how they got to OpenStack.

    "We are moving from not-invented-here (NIH) to probably-found-elsewhere (PFE)," he added. Embracing open source, the IDI is now looking for ways to contribute to the OpenStack community. "It's new for us, and one of the reasons we are here today was to start that dialogue," Simon said.

    Check out more feedback about the OpenStack Days Israel from local users and ecosystem companies.

    Video: https://www.youtube.com/embed/JoWhdsvnKLs

    by Allison Price at June 21, 2016 02:23 AM

    Mirantis

    Partners and the Mirantis Unlocked validation program — Part 2: Fuel Plugins

    The post Partners and the Mirantis Unlocked validation program — Part 2: Fuel Plugins appeared first on Mirantis | The Pure Play OpenStack Company.

    According to the latest OpenStack user survey, Fuel is extremely popular as a deployment tool, and we believe one of the reasons for that is Fuel plugins.

    The plugin framework became part of the Fuel architecture in the 6.1 release because the Fuel development team realized that external contributors needed an easy way to extend Fuel functionality without having to build on the actual core. To make this possible, they revised the Fuel architecture.

    This means if you are a vendor who wants to add functionality to deployments of OpenStack (or even just to Fuel), you can choose to contribute to the Fuel project directly, or you can create a plugin. The latter is much easier because there’s no need to go through multiple cycles of approving blueprints, and so on. To begin, you create a repository for your Fuel plugin (for example, https://github.com/openstack/fuel-plugin-mellanox) and create a Launchpad project (for example, https://launchpad.net/lma-toolchain). You should still follow OpenStack community standards, but you are the owner of your own project.

    Right now there are more than 30 Fuel plugins listed in DriverLog, and more than 60 repositories in the https://github.com/openstack space containing “fuel-plugin” in their names. (We believe there are many more Fuel plugins in the world; we just don’t know about them.) You can imagine how these plugins make the Fuel user experience much richer.

    Before we move on to talk about the details, let’s make sure we’re on the same page when it comes to a little terminology:

    • Fuel plugin framework: An architecture term meaning that Fuel is designed to allow plugin development that extends its functionality.
    • Fuel plugin SDK (Software Development Kit): A set of instruments, practices, documents and tutorials designed to help a developer create Fuel plugins. See https://wiki.openstack.org/wiki/Fuel/Plugins for the current version, and stay tuned for an updated version of the SDK.

    Why should you validate your plugin?

    Any developer interested in expanding Fuel functionality can create a plugin and use it; validation is not necessary.

    There is a business reason for validation, however. If you want to confirm that your plugin is compatible with a particular version of MOS and publish it in the Fuel plugin catalog on the Mirantis site, you need to go through the validation process. The greatest benefit is a high level of trust from Mirantis’ Solution Architects and from Mirantis’ customers.

    You can read about  what the Fuel plugin validation process includes, but let’s spend some time explaining how this process works and why.

    During Fuel plugin validation, Mirantis acts as a core reviewer. The Fuel plugin SDK says, “Here are the rules for writing, testing, documenting and maintaining plugins.” If you don’t plan to validate your plugin with Mirantis, it’s up to you to follow or not follow those recommendations. But if you plan on taking advantage of validation, these recommendations become requirements you need to meet.

    The Fuel plugin SDK provides different types of templates (for code, for documents, for CI, etc.), but designing, developing, testing and documenting every unique part of a plugin is a developer’s responsibility. Here’s a look at the relative responsibilities of the Fuel plugin developer versus Mirantis:

    • Fuel plugin developers are responsible for adequate design and code quality of the Fuel plugins they create. Mirantis will review a Fuel plugin design specification and recommend improvements.
    • Fuel plugin developers are responsible for test coverage, test automation and the auto-build process. Mirantis will review your Test Plan, Test Report and CI design, providing feedback and help if needed.
    • Fuel plugin developers are responsible for creating extensive supporting documentation. Mirantis will review the User Guide and provide comments.
    • Fuel plugin developers are responsible for releasing a plugin that works and meets customer needs. Mirantis will run user acceptance testing and let you know if bugs are present.

    Plugins with numerous and critical bugs can’t pass validation and won’t be recommended to customers.

    Limitations

    Fuel plugin validation cannot guarantee that a plugin will meet every customer use case, but Mirantis makes sure the plugin works the way it’s described in the User Guide and that all limitations are documented.

    Sometimes a Mirantis project team working with a customer faces a situation in which a validated plugin does not fully meet customer requirements. That’s completely normal: most Mirantis customers have large installations, some requirements can be quite exotic, and some plugins have only MVP functionality. It may then be necessary to add features to the plugin, or even to write a new one; in that case, the Mirantis project/deployment team connects with the Fuel plugin’s maintenance team and decides how to proceed.

    Fuel plugins and Mirantis OpenStack

    Fuel plugins are not part of the MOS distribution; they must be downloaded separately from the Mirantis Fuel Plugin Catalog (if a user is interested in a validated plugin). A plugin is a set of puppet manifests performing pre-deployment or post-deployment actions, including package installation.

    Fuel is open source (as part of OpenStack Big Tent), and developers are advised to keep plugin code open source, too, though packages can be proprietary.  For example, the Juniper Contrail Fuel plugin is not open source. (The Fuel community also recommends publishing plugins in DriverLog to keep a record of existing plugins.)

    So what happens if users want to deploy MOS with Juniper Contrail SDN as a Neutron backend? In this case, they need to download the Fuel plugin from the Mirantis Fuel Plugin Catalog and then buy the Juniper Contrail packages from Juniper. Juniper Contrail bits should be downloaded and stored in a repository from which the Fuel plugin will get them before deployment and configuration.

    Other important steps of the validation process

    As you may have realized, the Fuel plugin validation is a complex process that comprises several stages. For example, one important step we haven’t mentioned yet is making a demo.

    The demo serves several purposes:

    • It shows the plugin in action
    • It provides a knowledge transfer on what the plugin does, how it works and what limitations it has

    (This is why it’s not just the Mirantis Partner Validation team that attends demos, but also Support, Solution Architects and Sales Engineers.)

    There’s also the matter of User Acceptance (Partner Acceptance) testing. In this case, the Mirantis validation team receives a Fuel plugin rpm package. Then, in either a Mirantis or a partner lab, we install Fuel and the Fuel plugin, deploy an environment according to the steps described in the User Guide and perform actions according to the Test Plan.

    Basically, we are repeating the user’s journey to make sure it is successful. If we see any inconsistencies or bugs, we ask partners to fix them.

    All validated plugins (both an rpm and a set of documents) are published in the Mirantis Fuel plugin catalog.

    Why we do it

    From the outside, the process seems complex, and our validation requirements may seem too strict.   But for those who know how the OpenStack community works, there’s nothing new in the development standards validation follows.

    Mirantis believes that our strength is in integration with partners, and our customers should rely on these integrations.

    The Mirantis Validation team is always ready to help all plugin developers succeed. We are constantly contributing to the Fuel plugin SDK and answering questions in IRC chats (on Freenode at #fuel-dev), the OpenStack mailing list, and on the unlocked-tech@mirantis.com mailing list.

    The post Partners and the Mirantis Unlocked validation program — Part 2: Fuel Plugins appeared first on Mirantis | The Pure Play OpenStack Company.

    by Evgeniya Shumakher at June 21, 2016 12:52 AM

    June 20, 2016

    Cloudwatt

    5 Minutes Stacks, episode 28 : Drone

    Episode 27 : Blueprint 3 tier

    This blueprint will help you to set up Drone, a lightweight continuous integration platform, on a CoreOS instance running Docker. Drone connects to your version control system (GitHub, GitLab or Bitbucket) via OAuth and runs a build of your project on every commit.

    Drone

    Preparations

    The version

    • CoreOS Stable 899.13.0
    • Docker 1.10.3
    • Drone v0.4

    The prerequisites to deploy this stack

    These should be routine by now:

    Size of the instance

    By default, the stack deploys on an instance of type “Standard 2” (n1.cw.standard-2). A variety of other instance flavors exist to suit your various needs, allowing you to pay only for the services you need. Instances are charged by the minute and capped at their monthly price (you can find more details on the Pricing page on the Cloudwatt website). Stack parameters, of course, are yours to tweak at your fancy.

    By the way…

    If you do not like command lines, you can go directly to the “run it thru the console” section or the “run it by the 1-click” section by clicking here.

    What will you find in the repository

    Once you have cloned the github repository, you will find the following in the blueprint-coreos-drone/ directory:

    • blueprint-coreos-drone.heat.yml: HEAT orchestration template. It will be used to deploy the necessary infrastructure.
    • stack-start.sh: Stack launching script. This is a small script that will save you some copy-paste.

    Start-up

    Initialize the environment

    Have your Cloudwatt credentials in hand and click HERE. If you are not logged in yet, you will go thru the authentication screen, and then the script download will start. Thanks to it, you will be able to initiate shell access to the Cloudwatt APIs.

    Source the downloaded file in your shell. Your password will be requested.

    $ source COMPUTE-[...]-openrc.sh
    Please enter your OpenStack Password:
    
    

    Once this is done, the OpenStack command line tools can interact with your Cloudwatt user account.

    Adjust the parameters

    In the blueprint-coreos-drone.heat.yml file (the heat template), you will find a section named parameters near the top. The only mandatory parameter is keypair_name; its default value should contain a valid keypair associated with your Cloudwatt user account if you wish to have it pre-filled in the console.

    Within these heat templates, you can also adjust (and set the defaults for) the instance type by playing with the flavor_name parameter accordingly.

    By default, the stack network and subnet are generated for the stack. This behavior can be changed within the blueprint-coreos-drone.heat.yml file as well, if need be, although doing so may be cause for security concerns.

    heat_template_version: 2013-05-23
    
    description: Bundle CoreOS Drone
    
    parameters:
        keypair_name:        <-- Indicate here your keypair
          description: Keypair to inject in instance
          label: SSH Keypair
          type: string
    
        flavor_name:      
          default: n1.cw.standard-1     <-- Indicate here flavor size
          description: Flavor to use for the deployed instance
          type: string
          label: Instance Type (Flavor)
          constraints:
            - allowed_values:
              - n1.cw.standard-1
              - n1.cw.standard-2
              - n1.cw.standard-4
              - n1.cw.standard-8
              - n1.cw.standard-12
              - n1.cw.standard-16
        drone_driver:
          default: github     <-- Indicate here VCS type
          description: Version control system used by Drone (github, gitlab or bitbucket)
          type: string
          label: drone driver
          constraints:
            - allowed_values:
                - github
                - gitlab
                - bitbucket
        drone_driver_url:     <-- Indicate here VCS url
          default: https://github.com
          description:  drone driver url for example https://github.com, https://bitbucket.org/ or your gitlab url
          label: drone driver url
          type: string
        drone_client:        <-- Indicate here OAuth client id
          description: OAuth id client
          label:  OAuth id client
          type: string
        drone_secret:        <-- Indicate here secret code OAuth client for VCS used
          description: OAuth secret client
          label: OAuth secret client
          type: string
     [...]
    

    Start the stack

    In a shell, run the script stack-start.sh:

     $ ./stack-start.sh Drone
     +--------------------------------------+------------+--------------------+----------------------+
     | id                                   | stack_name | stack_status       | creation_time        |
     +--------------------------------------+------------+--------------------+----------------------+
     | xixixx-xixxi-ixixi-xiixxxi-ixxxixixi | Drone    | CREATE_IN_PROGRESS | 2016-06-09T10:53:33Z |
     +--------------------------------------+------------+--------------------+----------------------+
    

    Within 5 minutes the stack will be fully operational. (Use watch to see the status in real-time)

     $ watch -n 1 heat stack-list
     +--------------------------------------+------------+-----------------+----------------------+
     | id                                   | stack_name | stack_status    | creation_time        |
     +--------------------------------------+------------+-----------------+----------------------+
     | xixixx-xixxi-ixixi-xiixxxi-ixxxixixi | Drone    | CREATE_COMPLETE | 2016-06-09T10:53:33Z |
     +--------------------------------------+------------+-----------------+----------------------+
    

    That’s fine but…

    I already came out of my shell in order to drone… do I have to go back?

    Nah, you can keep your eyes on the browser: all Drone setup can be accomplished from the console.

    To create our Drone stack from the console:

    1. Go to the Cloudwatt Github in the applications/blueprint-coreos-drone repository
    2. Click on the file named blueprint-coreos-drone.heat.yml
    3. Click on RAW, a web page will appear containing purely the template
    4. Save the file to your PC. You can use the default name proposed by your browser (just remove the .txt)
    5. Go to the «Stacks» section of the console
    6. Click on «Launch stack», then «Template file» and select the file you just saved to your PC, and finally click on «NEXT»
    7. Name your stack in the «Stack name» field
    8. Enter the name of your keypair in the «SSH Keypair» field and fill in the few other required fields
    9. Choose your instance size using the «Instance Type» dropdown and click on «LAUNCH»

    The stack will be automatically generated (you can see its progress by clicking on its name). When all modules become green, the creation will be complete. You can then go to the “Instances” menu to find the floating-IP, or simply refresh the current page and check the Overview tab for a handy link.

    If you’ve reached this point, Drone is running!

    A one-click sounds really nice…

    … Good! Go to the Apps page on the Cloudwatt website, choose the apps, press DEPLOY and follow the simple steps… 2 minutes later, a green button appears… ACCESS: you have your Drone!

    Enjoy

    Once all of this is done, the stack’s description can be obtained with the following command:

     $ heat stack-show Drone
     +-----------------------+--------------------------------------------------------------------------------------------------------------------------------------+
    | Property              | Value                                                                                                                                |
    +-----------------------+--------------------------------------------------------------------------------------------------------------------------------------+
    | capabilities          | []                                                                                                                                   |
    | creation_time         | 2016-06-09T10:53:33Z                                                                                                                 |
    | description           | Bundle CoreOS Drone                                                                                                                  |
    | disable_rollback      | True                                                                                                                                 |
    | id                    | a754ce3f-870b-47f9-9863-9ddbe41a0267                                                                                                 |
    | links                 | https://orchestration.fr1.cloudwatt.com/v1/7da34701e2fe488683d8a8382ee6f454/stacks/drone/a754ce3f-870b-47f9-9863-9ddbe41a0267 (self) |
    | notification_topics   | []                                                                                                                                   |
    | outputs               | [                                                                                                                                    |
    |                       |   {                                                                                                                                  |
    |                       |     "output_value": "http://floatingIp",                                                                                           |
    |                       |     "description": "Drone URL",                                                                                                      |
    |                       |     "output_key": "floating_ip_url"                                                                                                  |
    |                       |   }                                                                                                                                  |
    |                       | ]                                                                                                                                    |
    | parameters            | {                                                                                                                                    |
    |                       |   "OS::project_id": "7da34701e2fe488683d8a8382ee6f454",                                                                              |
    |                       |   "OS::stack_id": "a754ce3f-870b-47f9-9863-9ddbe41a0267",                                                                            |
    |                       |   "OS::stack_name": "drone",                                                                                                         |
    |                       |   "keypair_name": "testkey",                                                                                                         |
    |                       |   "drone_driver": "github",                                                                                                          |
    |                       |   "drone_client": "********************",                                                                                            |
    |                       |   "flavor_name": "n1.cw.standard-1",                                                                                                 |
    |                       |   "drone_secret": "****************************************",                                                                        |
    |                       |   "drone_url": "https://github.com"                                                                                                  |
    |                       | }                                                                                                                                    |
    | parent                | None                                                                                                                                 |
    | stack_name            | drone                                                                                                                                |
    | stack_owner           | youremail@cloudwatt.com                                                                                                 |
    | stack_status          | CREATE_COMPLETE                                                                                                                      |
    | stack_status_reason   | Stack CREATE completed successfully                                                                                                  |
    | stack_user_project_id | eb79ff46f2e44090ada252dc32f62b4a                                                                                                     |
    | template_description  | Blueprint CoreOS Drone                                                                                                                  |
    | timeout_mins          | 60                                                                                                                                   |
    | updated_time          | None                                                                                                                                 |
    +-----------------------+--------------------------------------------------------------------------------------------------------------------------------------+
    
    

    Once this is done you can connect via a web browser to the Drone interface at this url: http://floatingIp.

    Then authenticate on github, bitbucket or gitlab.

    You then arrive at the Drone dashboard, where you choose the drone project and activate it.

    Then commit anything in this project and you will see the build result.

    For creating OAuth credentials, see these links:

    systemd - init system for Drone service

    To start the service:

    ~~~ bash
    sudo systemctl start drone.service
    ~~~

    Logs can be seen with the following command:

    ~~~ bash
    journalctl -f -u drone.service
    ~~~

    To stop the service:

    ~~~ bash
    sudo systemctl stop drone.service
    ~~~

    Configurations files

    /home/core/drone.env: This file contains the environment variables used by the Drone service.
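
    As a purely illustrative sketch (the variable names follow Drone v0.4’s remote-driver configuration and the values are placeholders; check the Drone documentation for your version):

    ~~~ bash
    # /home/core/drone.env -- hypothetical contents
    REMOTE_DRIVER=github
    REMOTE_CONFIG=https://github.com?client_id=YOUR_OAUTH_CLIENT&client_secret=YOUR_OAUTH_SECRET
    ~~~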

    So watt?

    The goal of this tutorial is to accelerate your start. At this point you are the master of the stack.

    You now have an SSH access point on your virtual machine through the floating-IP and your private keypair (default user name cloud).

    Other resources you could be interested in:


    Have fun. Hack in peace.

    by Mohamed-Ali at June 20, 2016 10:00 PM

    Dan Prince

    TripleO: remote execution

    I had a chance to sit down with Steve Baker at the OpenStack Austin Summit and prototype a custom Mistral action that can drive Heat software deployments. Iterating on that idea a bit further I have since posted a few patches which demonstrate a new remote execution feature in TripleO.

    #execute a shell script on a specific server name
    openstack overcloud execute --server-name=overcloud-controller-0 test.sh
    
    #execute a shell script on only the compute nodes
    openstack overcloud execute --server-name=compute test.sh
    
    #execute a shell script on all overcloud nodes
    openstack overcloud execute --server-name=overcloud test.sh
    

    A couple of things to highlight:

    • It is API driven.
    • It does not use ssh (rather it uses our TripleO os-collect-config agent).
    • Relies on Heat software deployments for much of its functionality, the same way we do deployments in TripleO today.
    • Power is in the workflows... but the CLI work is cool and demonstrates it nicely.

    A short demo of the new TripleO Newton feature here:

    Video demo: TripleO remote execute

    by Dan Prince at June 20, 2016 09:00 PM

    RDO

    OpenStack Days Budapest, OpenStack Days Prague

    OpenStack Day - Budapest, 6th June 2016

    OpenStack Days Prague, Budapest

    It was the 4th OSD in Budapest, but at a brand new place, which was absolutely brilliant. And by brilliant I mean: a nice place to stay, great location, enough options around, very good sound, well-working AC in the talk rooms and professional catering. I am not sure about the number of attendees, but it was pretty big and crowded - so awesome! I really can say that it was an awesome experience for me (please accept my apology for using the word “awesome” too often here). I brought 2 banners - RDO + OpenStack - plus RDO stickers, TripleO stickers with the Owl, RDO/CentOS beermats, cheat-sheets, leaflets and a few t-shirts. It was only a RH booth at the beginning, but I changed it a bit into RDO by adding that promo stuff, and it was distributed very well. The problem with this OSD is (and I was warned in advance by Ondrej Marek) that the attendees are mostly Hungarian people, who are not very open to having a discussion or asking many questions. I was at the booth between talks anyway. I also joined many talks, had a nice discussion with the Marketing guys at the booth and met one RH Sales guy, Miroslaw Pawlowski (from PL).

    It was a very nice, very well organised OSD. Lovely food, lovely place, great people, nice talks; it was a bit quieter than FOSDEM or Devconf.cz, but that is probably something related to the nationality, as Ondrej mentioned.

    Hugh Brock from our team gave a great TripleO talk with cool jokes, and received some questions afterwards.

    OpenStack Day - Prague, 8th June 2016

    This event was the 1st OpenStack Day in Prague. Given that, I was really surprised how well organised and placed it was. For this event we were at the DOX gallery, which is very modern, beautiful and quiet, and also at a perfect distance from the bus station. So we had a lot of space to walk around, have a coffee, a conversation or a good meal. At the entrance some OpenStack swag was distributed - an OS sticker, super cool OpenStack glasses, some info leaflets and a map of the area, plus an awesome bag saying “OpenStack Prague” which I now use for shopping and feel so cool. Attendance was pretty large; we really filled the area. I met Zdenka Van Der Zwan and Ondrej Marek there, which was just great in person, along with many people I had met in Budapest. I gave Ondrej the info about the OSD in Budapest, we had a chance to discuss the swag and what I could do to motivate people to attend a bit more, and we agreed to stay in touch to plan ahead for future events.

    I also met Jirka Kolář from TCP Cloud, Aleš from Elostech and a few other guys I know from the local community. Ondrej confirmed that the community meetups are still a bit silent, but again - let’s see how it will go.

    We had a beautiful Red Hat booth there, with fancy RH promo stuff - free to take. You could also win a quadricopter (if you were not a RH employee, of course) by filling out a survey. I brought some RDO things to refresh the RH booth: the RDO banner, the OpenStack banner, RDO stickers, beermats (super successful), cheat-sheets and TripleO stickers. I have great muscles now :D thanks to carrying a heavy backpack and two banners, lol.

    Angus Thomas gave a talk (the same one Hugh Brock gave at the Budapest OpenStack Day) and we also had a chance to chat a bit and enjoy a lunch together. It was very nice. Hugh couldn’t attend this OSD due to a Dublin management meeting. It is a shame that not many people from our team were present at this Day. I met some technical writers (Radek Bíba, and his colleague whose name I forgot), Angus, and a few guys from Marketing. There was no one from HR - Tyler Golden couldn’t attend. I met many familiar faces from Budapest, but from our teams it was only me, Angus and Flavio Percoco, who traveled from Italy. I hope that next year more of our local team members will attend.

    For the next OSD in Praha, I will advertise it much better. My vision is to travel there as a team and maybe have a nice dinner together. Let’s see - but next time I would definitely ask people directly myself.

    by Eliska Malikova at June 20, 2016 06:22 PM

    OpenStack Superuser

    Build it yourself: How a small team deployed OpenStack

    Sometimes a small team needs a do-it-yourself solution rather than a massive implementation involving specialists or other budget-draining, off-the-rack, one-size-fits-all solutions.

    Cody Herriges, principal system operations engineer at Puppet, needed a solution to create and manage thousands of virtual machines for a research project.

    These developer-minded automation enthusiasts wanted to enable a more effective work environment that reflected their own core values: rapid improvement, user value and collaboration.

    “At the end of the day,” says Herriges, "[you need to] be proud of the systems you’re running."

    The team wanted to implement a solution that was open source, platform agnostic, and that could be built, not bought.

    They turned to OpenStack, the open source cloud operating system originally founded by Rackspace and NASA, which runs on standardized hardware and has a robust user community of thousands of people around the globe.

    Herriges took time at the recent OpenStack Summit in Austin, Texas, to tell the story about his team’s journey to its own DIY implementation of OpenStack.

    They named their OpenStack project SLICE, a “backronym” for Self-Service Low Impedance Compute Environment.

    “We needed a safe, flexible environment that people could try and experiment with new things,” says Herriges. “It had to be fast, close, and with near-no latency."


    Herriges’ team had only one full-time person and two floaters. Ultimately, it took 20 months to produce 4,000 lines of freshly written automation and 10 community modules, with 37 modules rebuilt, built, or back-ported.

    Issues in DIY implementation

    The team faced a common challenge when adopting a new software solution: a lack of knowledge about OpenStack itself.

    “We had the tools to automate it and the ability to deploy the entire stack without knowing what was underneath,” says Herriges, “but that really goes against what we believe in.”

    Instead, the team took the extra time needed to get up to speed on OpenStack without its built-in automation so that when issues occurred, the team could make intelligent fixes instead of relying on random band-aid approaches.


    OpenStack does have some known issues, including an as-yet unfixed SSL bug; Herriges documented his own team’s fix at herrig.es/os-ssl-issue, along with other workarounds the team came across.

    Finally, the team needed to make sure that OpenStack met their needs as a high availability, or HA, application. OpenStack Rally helps the team manage things at a much higher level than ever before, even as Puppet has become an even more integral part of their virtual infrastructure.

    “It’s made us all very confident in the platform that we’ve delivered to people,” says Herriges.

    Tips for teams implementing OpenStack

    For teams considering following in Puppet’s virtual footsteps, Herriges has a number of tips.

    First, he says, go for simple; the more complexity, the more points of failure will creep into the project. “You’ve got to remove complexity as much as possible,” he says.

    Agnosticism is also key — solutions and techniques should be chosen for their ability to solve specific problems, not just because they fit a preconceived understanding of how things “should be” done. The Puppet team chose CentOS over Debian for its package-building system because it made things easier to integrate in the long run, not because of any preference for running enterprise Linux.

    “The more dogmatic you are about the implementation details,” he says, “the more complication you’re going to add to the deployment.”

    Learn from the ecosystem itself, says Herriges. The team had originally designed an architecture that met all their internal requirements as a highly available and load-balanced stack, but realized that such an approach would cause management headaches for whoever came in after Puppet moved on to another project.

    The team decided to reuse assets from their own Puppet website code repository that worked as well as any unique code they could have created to do the same thing, resulting in less of a barrier to long-term management. Using publicly available modules, like MySQL, helps the team remain productive and allows them to rely on other developers in the ecosystem when trouble arrives.

    Herriges also urges small teams looking to implement OpenStack to only deploy modules that they actually need.

    “Don’t deploy all these different services because you think they might be super cool in the future,” he says.

    You can always add in things later. Don’t add in anything that has no value to your specific goals, says Herriges; use only the modules that meet your needs.

    An OpenStack implementation team will also need a working knowledge of Python, if not an expert-level understanding and confidence in the packaging system they choose.

    “You do not want to be post-patching code after deployment,” says Herriges. It makes version tracking and the upgrade process extremely difficult.

    Teams will also need to be able to trace packets and manage packet dumps, and understand SQL at a relatively comfortable level. This will help with error cleanup and finding issues when they happen.

    “Not everything is available via the public API,” says Herriges.
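
    As a small sketch of what that can look like (the table and column names follow Nova’s database schema of the era; adjust for your release):

        # Find instances stuck in an error state directly in Nova's database
        mysql nova -e "SELECT uuid, vm_state, task_state FROM instances WHERE deleted = 0 AND vm_state = 'error';"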


    Herriges also recommends teams focus on an active/active HA configuration, which makes the OpenStack implementation much easier to maintain as well as less prone to complete failure. The other option, active/passive, will need more automation to be built as well as manual intervention, possibly in the middle of the night.

    Choose mature automation solutions, as well - newer might seem cooler at first, but with this type of implementation, you’re better off with an automation system that is robust and well-documented. This is the one area in which Herriges recommends teams don’t try to roll their own solution.

    “Don’t go out and grab the new fancy thing that demos well,” he cautions. “Find a tool that’s proven in the marketplace that’s driving real workloads.”

    Finally, use early adopters to slowly roll out OpenStack implementations, based on a deep understanding of OpenStack itself.

    Of course, Herriges recommends that teams consider the Puppet OpenStack modules, which come with all of his team’s successes and solutions. They are used commercially by Mirantis and Red Hat, offer a ridiculously robust command line interface (CLI), and are run by large and active users including Time Warner Cable, Puppet itself, Hewlett Packard Enterprise (HPE), and many more.

    You can catch his 33-minute talk on the Summit video page.

    Video: https://www.youtube.com/embed/7pqKNwiGtx4

    Cover Photo // CC BY NC

    by Rob LeFebvre at June 20, 2016 04:57 PM

    Where the present and future meet: OpenStack Day Israel

    Tel Aviv is the crossroads where present and future OpenStack innovation meets.

    This year's OpenStack Day Israel, on June 2, brings together more than 400 local and international professionals with a roster of sponsors and speakers that ranges from IBM, Rackspace and AT&T to local startups with global reach like LivePerson.

    The event provides diverse tracks on everything OpenStack - from compute, networking and storage to hands-on workshops and case-study sessions. These case studies include OpenStack innovators and leaders like CERN alongside Israel’s Defense Intelligence agency. The IDI session, titled “A Colossal Evolution - Migrating Legacy to Open Source,” is testimony to the fact that OpenStack is no longer a theoretical exercise in open source cloud, but a leading player for even the most conservative organizations providing complex and mission-critical services. An amazing evolution has taken place, with disruptive and refreshing young startups proving themselves leading players and giving enterprises a run for their money; the same holds true for Israel’s community serving as a driving force behind the global community.

    This event builds on the recent OpenStack Summit in Austin, where the Israeli community earned quite a few keynote nods in addition to a significant presence on the ground. Koby Holzer, fellow OpenStack Day Israel organizer and director of cloud engineering at LivePerson, was interviewed during Mark Collier’s keynote address at the Summit, creating a frenzy on the OpenStack Israel Twitter account.

    Holzer spoke about LivePerson’s OpenStack clusters spanning seven data centers and 8,000 virtual machines on more than 20,000 physical cores, as well as their new challenge of taking 150 microservices full scale from VMs to containers, using Docker and Kubernetes. The underlying infrastructure runs on OpenStack, which provides a single solution for managing bare metal, virtual machines and containers.

    LivePerson is an early adopter of OpenStack, and the cutting-edge chat software company was featured as an OpenStack use case in 2013. LivePerson’s OpenStack deployment has grown exponentially since then and diversified into new and exciting technologies. You can listen to Chen Leibovitz’s take on LivePerson and OpenStack in the most recent episode of the OpenStack & Beyond podcast.

    Leaders of Related Open Source and OpenStack Projects

    In that same keynote address, Mark Collier discussed the three primary use cases for OpenStack, as established by the biannual OpenStack user survey, which provides invaluable data not only on the state of the OpenStack project itself, but on the open source landscape as a whole. The top use cases identified by OpenStack users are containers, NFV and bare metal (hybrid cloud being a close fourth), and you can find Israeli companies diligently working on solutions on top of OpenStack for all of them.

    A major factor making this possible is that OpenStack is open source; the rich set of OpenStack APIs makes it easy for external projects to provide native integration without having to be officially under the auspices of OpenStack, as Ran Ziv and Yoram Weinreb from the OpenStack Israel community described in this Summit session: How to Develop for OpenStack APIs.

    Mark then went on to present the leading players in this revolution. Among the projects cited were the Google-backed Kubernetes project, Red Hat’s OpenShift, and Cloudify, the infrastructure-agnostic orchestration project from the company that drives the Israeli OpenStack community (and where I am proudly employed). The most interesting part of this analysis is that while Kubernetes, Mesos and Docker Swarm (Docker’s own project) are all container-specific, and OpenShift and CloudFoundry are PaaS, Cloudify is the only product among those driving this revolution that serves all three of the top use cases for OpenStack - containers, NFV MANO and bare metal - and even the close fourth, hybrid cloud.

    Israeli technology

    AT&T, a sponsor of the OpenStack Israel event, recently acquired the Israeli internet of things company Interwise to act as one of its few innovation centers worldwide; the center contributes a large part of AT&T’s strategic Enhanced Control, Orchestration, Management and Policy (ECOMP) project and its cutting-edge IoT frameworks.

    AT&T’s ECOMP announcement shook up the industry, setting a high bar that many will not be able to uphold, says Nati Shalom in a post titled Where AT&T Leads, Cisco Cannot Follow. AT&T will offer a deep dive into this new project in the Tel Aviv keynote. They are not the only ones to invest in Israeli technology: Huawei and HPE (also sponsors of OpenStack Israel) have both recently bought Israeli technology companies - TogaNetworks and Contextream, respectively - to help drive their OpenStack strategies.

    From Big Tent to Project Neutron - leading in code

    OpenStack Neutron, the OpenStack networking project, was announced as the project with the most lines of code contributed at the last OpenStack Summit in Tokyo. Among the top code contributors to this project are Livnat Peer (who will be speaking at OpenStack Israel this year) and Assaf Muller (a former OpenStack Israel speaker), both Israelis at Red Hat (a sponsor of OpenStack Israel). Another notable code contribution from the Israeli community comes from Gal Sagie of Huawei Israel, who leads Project Kuryr, the container networking project that leverages everything good in Neutron and bridges the gap for the container revolution.


    Additional important Israeli development work around OpenStack includes the OVS project by Mellanox (again, a sponsor of the OpenStack Foundation and the Israeli community), which has become an almost de facto standard for high-performance networking in OpenStack, alongside other disruptive Israeli technologies like Stratoscale, a provider of hyperconverged infrastructure and cloud capabilities.

    Join us!

    Join us on Thursday, June 2, 2016, to learn why the OpenStack Israel community is leading in global OpenStack innovation.

    Speakers at this year’s event include:

    • Avi Simon and Dor Litay, CIO & Digital CTO, Israeli Defense Intelligence Agency

    • Jonathan Bryce, executive director of OpenStack Foundation who will be speaking with VMware, HPE, Nuage Networks, and Red Hat about their OpenStack activity locally and globally

    • Tim Bell, of CERN, the European Organization for Nuclear Research

    • Jesse Proudman, founder & CTO, Bluebox/IBM, alongside co-worker Monty Taylor

    • Walter Bentley, Cloud Solutions Architect, Rackspace

    • Flavio Percoco - principal software engineer, Red Hat



    In order to lower the barrier to entry, the OpenStack Day Israel event has once again opened up 30 free tickets for women, students and soldiers who tweet at @OpenStackIL why they’d like to attend, tagging #OpenStack Israel, and is offering a 20 percent discount off the already subsidized tickets to everyone else who shares the event on social media.

    Sharone Zitzman is the director of marketing for Cloudify at GigaSpaces Technologies. In her spare time, she helps drive local open source communities, including the DevOps Israel and OpenStack Israel communities, and helps organize five meetups worldwide. Find her on Twitter or LinkedIn.

    by Superuser at June 20, 2016 04:18 PM

    Hugh Blemings

    Lwood-20160619

    Introduction

    Welcome to Last week on OpenStack Dev (“Lwood”) for the week just past. For more background on Lwood, please refer here.

    Basic Stats for week 13 June to 19 June 2016 for openstack-dev:

    • ~562 Messages (up about 1% relative to last week)
    • ~175 Unique threads (down about 8% relative to last week)

    Traffic pretty steady this week :)

    Notable Discussions – openstack-dev

    Announcing What’s Happening in OpenStack-Ansible (“WHOA”)

    Major Hayden announced the first of a series of monthly blog posts he’ll be writing to give an update on what’s happening in OpenStack-Ansible. It’s a well-written read and bound to be of interest to OpenStack developers and operators alike. Major kindly notes that his reading of Lwood was something of a catalyst for starting this endeavour. Check it out :)

    Proposal for an Architecture Working Group

    Late in the week Clint Byrum penned a well reasoned proposal for the creation of an Architecture Working Group.  As he puts it “This group’s charge would not be design by committee, but a place for architects to share their designs and gain support across projects to move forward with and ratify architectural decisions. That includes coordinating exploratory work that may turn into being the base of further architectural decisions for OpenStack.”

    He goes on to add that his expectation is that people involved in the group would largely be senior at the companies involved and be in a position to help prioritise this work by advocating for resources to be contributed to make the work in question real.

    At the time of writing there’s just the one reply to the thread, but a positive one - have a read and see what you think :)

    What do OpenStack Developers work on upstream of OpenStack?

    Was the question posed by Doug Hellmann early in the week.  Doug went on to clarify that he was interested to gather information about contributions OpenStack Developers make that “were in some way triggered or related to their work on OpenStack.”

    Though the thread is as yet fairly short - in part, I suspect, because Doug suggested offline replies which he’ll summarise - it’s already an interesting mix. Folk have noted work on everything from Linux kernel internals, to documentation in other FOSS projects, to ISO8601 (I had to look it up too - date and time formats) and a myriad of other things. It will be interesting to see Doug’s summary when it’s published!

    Towards ensuring level playing fields for OpenStack Projects

    One of the longer threads this week past was kicked off by Thierry Carrez in this post. Thierry outlines some concerns about, in essence, ensuring that new OpenStack projects which seek to become Official projects are unlikely to become overly dominated by any one organisation/company.

    From the original proposed change: “The project shall provide a level open collaboration playing field for all contributors. The project shall not benefit a single vendor, or a single vendor’s product offerings; nor advantage contributors from a single vendor organization due to access to source code, hardware, resources or other proprietary technology available only to those contributors.”

    The review comments in Gerrit are largely positive, and my read of the thread is that the general consensus there is likewise positive – Thierry makes the point that the guidelines are interpreted by humans on a case-by-case basis, which should ensure the basic intent is carried out reasonably.

    All seems pretty sensible to your humble correspondent :)

    A look under the hood of Nova datastructures

    If you’ve ever been curious to look under the hood of Nova, Matt Booth put together a nice little summary of his journey through the data structures used in the block device section of same.

    There is no Jenkins, only Zuul

    How could one not include a thread with such a cool $SUBJECT – particularly with a new Ghostbusters film just around the corner?

    Silliness on my part aside – James Blair gives a concise update on the ongoing work that has seen much of the project automation used in OpenStack migrate from Jenkins to Zuul – the latter being developed specifically with OpenStack in mind.

    Notable Discussions – other OpenStack lists

    An OPS Cross Project Liaison?

    So asks Lana Brindley in her post to the Openstack-Operators mailing list.  Lana notes that the Docs team have a number of Cross Project Liaisons (CPLs) with OpenStack projects to coordinate documentation related matters but no such person exists on the Operators side – and she seeks volunteers.  Seems a good plan :)

    User Research/Usability Study

    Also on the -Ops list is an email from Piet Kruithof noting that Danielle Mundle will be contributing to upstream by helping to conduct user research on behalf of the OpenStack community.

    He goes on to say that “One of her priorities is to begin investigating how operators both learn about OpenStack and triage issues within their deployments.  As a result, you may receive an email invite from the foundation to participate in some form of research such as an interview, focus group or usability study.”

    Upcoming OpenStack Events

    Midcycle

    Don’t forget the OpenStack Foundation’s Events Page for a list of general events that is frequently updated.

    People and Projects

    Core nominations & changes

    Further Reading & Miscellanea

    Don’t forget these excellent sources of OpenStack news – most recent ones linked in each case

    This edition of Lwood brought to you by a shuffle play of my music collection, so everything from Alan Kelly to Miles Davis to Queensrÿche to ZZ-Top

     

    by hugh at June 20, 2016 02:04 PM

    Opensource.com

    Saying farewell to Jenkins, powering satellite Internet, and more OpenStack news

    Are you interested in keeping track of what is happening in the open source cloud? Here's what's happening this week, June 20 - 26, 2016.

    by Jason Baker at June 20, 2016 06:59 AM

    Solinea

    Solinea in Techtarget – OpenStack infrastructure embraces containers, but challenges remain

    Despite a recent push to embrace container technology, OpenStack still has a number of hurdles to clear before boosting enterprise adoption.

    OpenStack continues to gain modest traction despite its ongoing growing pains. Now, users see a lifeline in a technology some view as the death knell for the open source project.

    OpenStack leaders push the software as the underlying cloud framework to connect the disparate bits of enterprise IT, and, increasingly, containers are the central piece to that roadmap. The technology has been prominent at each of the last three user conferences and some think it will fulfill OpenStack’s long-held promise to avoid vendor lock-in and seamlessly share public and private cloud resources.

    The rise of Docker and containers as a way to package applications is seen by some as offering the same benefits as OpenStack without the headaches and costs of deploying the cloud operating system. But instead of positioning it as an alternative, the OpenStack community has embraced both open source technologies.

    Read the entire article on Techtarget.

    The post Solinea in Techtarget – OpenStack infrastructure embraces containers, but challenges remain appeared first on Solinea.

    by Solinea at June 20, 2016 06:33 AM

    Adam Young

    Keystone Auth Entry Points

    OpenStack libraries now use authentication plugins from the keystoneauth1 library. One of the plugins has disappeared: Kerberos. This used to be in the python-keystoneclient-kerberos package, but that is not shipped with Mitaka. What happened?

    To list the registered entry points on a CentOS-based system, you can first look in the entry_points.txt file:

    cat /usr/lib/python2.7/site-packages/keystoneauth1-2.4.1-py2.7.egg-info/entry_points.txt
    [keystoneauth1.plugin]
    v2token = keystoneauth1.loading._plugins.identity.v2:Token
    admin_token = keystoneauth1.loading._plugins.admin_token:AdminToken
    v3oidcauthcode = keystoneauth1.loading._plugins.identity.v3:OpenIDConnectAuthorizationCode
    v2password = keystoneauth1.loading._plugins.identity.v2:Password
    v3password = keystoneauth1.loading._plugins.identity.v3:Password
    v3oidcpassword = keystoneauth1.loading._plugins.identity.v3:OpenIDConnectPassword
    token = keystoneauth1.loading._plugins.identity.generic:Token
    v3token = keystoneauth1.loading._plugins.identity.v3:Token
    password = keystoneauth1.loading._plugins.identity.generic:Password
    

    But are there others?

    Looking in the source repo, we can see a reference to Kerberos (as well as SAML, which has also gone missing) just before the enumeration of the entry points we saw above.

    [extras]
    kerberos =
      requests-kerberos>=0.6:python_version=='2.7' or python_version=='2.6' # MIT
    saml2 =
      lxml>=2.3 # BSD
    oauth1 =
      oauthlib>=0.6 # BSD
    betamax =
      betamax>=0.7.0 # Apache-2.0
      fixtures>=3.0.0 # Apache-2.0/BSD
      mock>=2.0 # BSD
    
    [entry_points]
    
    keystoneauth1.plugin =
        password = keystoneauth1.loading._plugins.identity.generic:Password
        token = keystoneauth1.loading._plugins.identity.generic:Token
        admin_token = keystoneauth1.loading._plugins.admin_token:AdminToken
        v2password = keystoneauth1.loading._plugins.identity.v2:Password
        v2token = keystoneauth1.loading._plugins.identity.v2:Token
        v3password = keystoneauth1.loading._plugins.identity.v3:Password
        v3token = keystoneauth1.loading._plugins.identity.v3:Token
        v3oidcpassword = keystoneauth1.loading._plugins.identity.v3:OpenIDConnectPassword
        v3oidcauthcode = keystoneauth1.loading._plugins.identity.v3:OpenIDConnectAuthorizationCode
        v3oidcaccesstoken = keystoneauth1.loading._plugins.identity.v3:OpenIDConnectAccessToken
        v3oauth1 = keystoneauth1.extras.oauth1._loading:V3OAuth1
        v3kerberos = keystoneauth1.extras.kerberos._loading:Kerberos
        v3totp = keystoneauth1.loading._plugins.identity.v3:TOTP
    

    We see that the Kerberos plugin requires requests-kerberos>=0.6, so let’s get that installed via

    sudo yum install python-requests-kerberos
    

    And then try to enumerate the entry points via python

    >>> import pkg_resources
    >>> named_objects = {}
    >>> for ep in pkg_resources.iter_entry_points(group='keystoneauth1.plugin'):
    ...     named_objects.update({ep.name: ep.load()})
    ... 
    >>> print (named_objects)
    {'v2token': <class 'keystoneauth1.loading._plugins.identity.v2.Token'>, 'token': <class 'keystoneauth1.loading._plugins.identity.generic.Token'>, 'admin_token': <class 'keystoneauth1.loading._plugins.admin_token.AdminToken'>, 'v3oidcauthcode': <class 'keystoneauth1.loading._plugins.identity.v3.OpenIDConnectAuthorizationCode'>, 'v3token': <class 'keystoneauth1.loading._plugins.identity.v3.Token'>, 'v2password': <class 'keystoneauth1.loading._plugins.identity.v2.Password'>, 'password': <class 'keystoneauth1.loading._plugins.identity.generic.Password'>, 'v3password': <class 'keystoneauth1.loading._plugins.identity.v3.Password'>, 'v3oidcpassword': <class 'keystoneauth1.loading._plugins.identity.v3.OpenIDConnectPassword'>}
    

    We still don’t have the Kerberos plugin. Going back to the setup.cfg file, we see the Python class for the Kerberos plugin is not listed. Kerberos is implemented here in the source tree. Does that exist in our package-managed file system?

    $ rpm --query --list python2-keystoneauth1-2.4.1-1.el7.noarch | grep kerberos.py$
    /usr/lib/python2.7/site-packages/keystoneauth1/extras/kerberos.py
    

    Yes. It does. Can we load that by class?

    >>> from keystoneauth1.extras import kerberos
    >>> print kerberos
    <module 'keystoneauth1.extras.kerberos' from '/usr/lib/python2.7/site-packages/keystoneauth1/extras/kerberos.py'>
    

    Yes, although the RPM version is a little earlier than the git repo. So what is the entry point name? There is not one, yet. The only way to get the class is by the full class name.

    We’ll fix this, but the tools for enumerating the entry points are something I’ve used often enough that I want to get them documented.
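
    In the meantime, loading the plugin by its full class name looks something like this (a sketch: the Kerberos class name and its auth_url parameter follow the upstream source tree, the endpoint is illustrative, and I haven’t verified it against this exact RPM):

    >>> from keystoneauth1 import session
    >>> from keystoneauth1.extras import kerberos
    >>> # no entry point is registered, so instantiate the class directly
    >>> auth = kerberos.Kerberos(auth_url='https://keystone.example.com:5000/v3')
    >>> sess = session.Session(auth=auth)
    >>> sess.get_token()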

    by Adam Young at June 20, 2016 03:28 AM

    June 17, 2016

    Solinea

    End-to-end Training for Open Infrastructure

    Why Training is Critical to Modern Open Infrastructure

    Enterprises and service providers need to move faster than ever to stay competitive in today’s markets. IT agility, efficiency, and repeatability are paramount to companies’ success and are seen as business and revenue drivers by many executive teams and boards.

    One way to achieve these goals is by adopting Open Infrastructure, which we define as:

    [Graphic: Solinea’s definition of Open Infrastructure]

    The business benefits of open infrastructure are significant: removing technical debt, saving resources and money, and accelerating time to market. The challenges, as we all know, are just as real: the space is early, very complex, and incredibly resource constrained.

    That said, we’ve helped organizations adopt open infrastructure, from Legacy through 2.0.

    [Graphic: Open Infrastructure adoption, from Legacy through 2.0]

    A few specific examples of moving infrastructure forward include:

    • We’ve worked with one of the largest car companies in the world in an end-to-end engagement to conceive, architect, and implement a production, enterprise-wide private cloud to support a big data initiative to track and model automobile feature usage to inform their product roadmap.
    • We’ve also worked with the leading genomics organization, The Broad Institute, to plan, build, and help them adopt a similar big data analytics infrastructure, with a 3-week POC and a 6-week production launch as highlights.
    • We’ve worked with a global stock exchange to transform their legacy IT and build an enterprise hybrid cloud, with real department chargeback modeling and an application migration strategy and implementation. Simply building these is one thing; ensuring the organization is ready, with the proper training on DevOps processes and OpenStack, is another. We trained departments to manage and scale the infrastructure on their own.

    A few CI/CD, Containers and Microservices examples:

    • We’ve worked with a worldwide web property to design, pilot, and deploy in production a very large container orchestration platform (Kubernetes), together with the DevOps and CI/CD toolchain and processes to support it on an ongoing basis. We believe this is one of the largest in the world outside of Google.
    • We’ve worked, and continue to work, with a leading media company to develop the IT strategy that has driven their hybrid cloud solution, supported by DevOps processes and automation to enable agility and embrace a rapidly growing customer base.

    Solinea helps organizations adopt DevOps, containers, and microservices through consulting services: we help enterprises adopt open infrastructure by designing, building and deploying the solutions, and then train our customers to become self-sufficient, with technical training for operators and process training for management.

    Why Solinea Training?

    Open Infrastructure is hard. You need to partner with experts like Solinea to sell the initiative internally (sometimes just to get started), then to define and architect the adoption roadmap, and finally to train your teams to upskill your capabilities.

    That is why our training offerings are 50% hands-on lab work and have been developed to cover rigorous and comprehensive open infrastructure learning paths across Architecture, Administration, Implementation, and Design, building the right skills and facilitating adoption.

    We recognize that OpenStack is but one of the fundamental components of Open Infrastructure solutions, which is why you’ll find the Solinea team contributing code to Docker, deploying packages using Ansible or RunDeck, and managing cloud-native applications under Kubernetes.

    These training offerings span three basic categories:

    [Graphic: Solinea training curriculum categories]

    Here is a high-level diagram of the training tracks Solinea offers by category…

    • Containers and Microservices Curriculum
      • Our expertise in containers and microservices and extensive first-hand experience architecting and deploying Kubernetes solutions in the enterprise gives you a real-world view on how to architect, install, manage and scale Docker and Kubernetes on your own.
      • Our containers and microservices curriculum provides IT practitioners (Software Developers, System Architects, System Engineers, Operations & System Administrators, and Cloud End-Users) the requisite knowledge and skills to understand, architect, deploy, and use Docker and Kubernetes to manage their containerized applications.
    • Configuration Management & Automation Curriculum
      • These courses introduce your application developers and Cloud IT/Operations teams to the fundamentals of Continuous Integration and Continuous Delivery (CI/CD) as used to develop online services, with a comprehensive introduction to Ansible focused on the application of Ansible plays, modules, playbooks, and playbook development. The courses are taught using a combination of interactive lectures leveraging PowerPoint, whiteboards, core references, instructor solo and follow-me demonstrations, and hands-on labs to ensure you know how to deploy Ansible to automate the configuration management of your IT infrastructure.
      • The courses are again taught by Solinea Architects who are active Ansible practitioners and have extensive first-hand experience deploying Ansible at scale in the enterprise.
    • OpenStack Core and Specialty Curriculum
      • As OpenStack pioneers, Solinea understands the specific needs of the enterprise; we know how to architect and integrate OpenStack clouds into existing legacy environments to support vertical (e.g., service provider, telco, automotive/manufacturing, healthcare, internet, and financial services) and horizontal (e.g., Big Data, mobile, HPC, streaming media, dev/test) applications and workloads. We understand the enterprise has security, compliance and regulatory needs as well as challenging operating requirements. That’s why we provide a training curriculum that extends beyond the technology implementation, to ensure that the cloud is fully integrated and adopted by the enterprise.
      • The courses are again taught by Solinea Architects who are active OpenStack practitioners and have extensive first-hand experience deploying OpenStack at scale in the enterprise.

    In summary, we’ve partnered with the largest network equipment provider in the world, the leading US telco and other global enterprise customers to train their engineers on OpenStack, DevOps, Docker and Kubernetes to ensure their teams are ready for what’s next.

    Set up a 20-minute overview with our training team to see how Solinea training offerings can help your organization prepare for open infrastructure.

    The post End-to-end Training for Open Infrastructure appeared first on Solinea.

    by Solinea at June 17, 2016 11:23 PM

    OpenStack Blog

    OpenStack Developer Mailing List Digest May 14 to June 17

    SuccessBot Says

    • Qiming: Senlin has completed API migration from WADL.
    • Mugsie: Kiall Fixed the gate – development can now continue!!!
    • notmyname: exactly 6 years ago today, Swift was put into production
    • kiall: DNS API reference is live [1].
    • sdague: Nova legacy v2 api code removed [2].
    • HenryG: Last remnant of oslo incubator removed from Neutron [3].
    • dstanek: I was able to perform a roundtrip between keystone and testshib.org using my new SAML2 middleware!
    • Sdague: Nova now defaults to Glance v2 for image operations [4].
    • Ajaeger: First Project Specific Install Guide is published – congrats to the heat team!
    • Jeblair: There is no Jenkins, only Zuul.

    Require A Level Playing Field for OpenStack Projects

    • Thierry Carrez proposes a new requirement [5] for OpenStack “official” projects.
    • An important characteristic of open collaboration grounds is they need to be a level playing field. No specific organization can be given an unfair advantage.
      • Projects that are blessed as “official” project teams need to operate in a fair manner. Otherwise they could be essentially a trojan horse for a given organization.
      • If, in a given project, developers from one specific organization benefit from access to specific knowledge or hardware, then the project should be rejected under the “open community” rule.
      • Projects like Cinder provide an interesting grey area, but as long as all drivers are in and there is a fully functional (and popular) open source implementation there is likely no specific organization considered as unfairly benefiting.
    • A Neutron plugin targeting a specific piece of networking hardware would likely give an unfair advantage to developers from the hardware’s manufacturer (having access to hardware for testing and being able to see and make changes to its proprietary source code).
    • Open source projects that don’t meet the open community requirement can still exist in the ecosystem (developed using gerrit and an openstack/* git repository, with gate testing), but as unofficial projects.
    • Full thread

    Add Option to Disable Some Strict Response Checking for Interoperability Testing

    • Nova introduced their API microversion change [6]
    • The QA team added strict API schema checking to Tempest to ensure no additional properties appear in Nova API responses [7][8].
      • In the last year, three vendors participating in the OpenStack powered trademark program were impacted by this [9].
    • DefCore working group determines guidelines for the OpenStack powered program.
      • Includes capabilities with associated functional tests from Tempest that must pass.
      • There is a balance of future direction of development with lagging indicators of deployments and user adoption.
    • A member of the working group Chris Hoge would like to implement a temporary waiver for strict API checking requirements.
      • While this was discussed publicly in the developer community and took some time to implement, it still landed quickly and broke several existing deployments overnight.
      • It’s not viable for downstream deployers to use older versions of Tempest that don’t have this strict response checking, due to the TC resolution passed [10] advising DefCore to use Tempest as the single source of capability testing.
    • Proposal:
      • Short term:
        • There will be a blueprint and patch to Tempest that allows configuration of a grey-list of Nova APIs for which strict response checking on additional properties will be disabled.
        • Using this code will emit a deprecation warning.
        • This will be removed in 2017.01.
        • Vendors are required to submit the grey-list of APIs with additional response data that would be published to their marketplace entry.
      • Long term:
        • Vendors will be expected to work with upstream to update the API returning additional data.
        • The waiver would no longer be allowed after the release of the 2017.01 guideline.
    • Former QA PTL Matthew Treinish feels this is a big step backwards.
      • Vendors who have implemented out-of-band extensions or injected additional things into responses believe that by doing so they’re interoperable. The API is not a place for vendor differentiation.
      • Being a user of several clouds, random data in the response makes it more difficult to write code against. Which are the vendor-specific responses?
    • Alternatives to not giving vendors more time in the market:
      • Having some vendors leave the Powered program, unnecessarily weakening it.
      • Force DefCore to adopt non-upstream testing, either as a fork or an independent test suite.
    • If the new enforcement policies had been applied by adding new tests to Tempest, then DefCore could have added them using its processes over a period of time, and downstream deployers might not have had problems.
      • Instead, the behavior of a bunch of existing tests changed.
    • Tempest master today supports all currently supported stable branches.
      • Tags are made in the git repository when support for a release is added/dropped.
        • Branchless Tempest was originally started back in the Icehouse release and was implemented to enforce that the API is the same across release boundaries.
    • If DefCore wants the lowest common denominator for Kilo, Liberty, and Mitaka there’s a tag for that [11]. For Juno, Kilo, Liberty the tag would be [12].
    • Full thread

    There Is No Jenkins, Only Zuul

    • Since the inception of OpenStack, we have used Jenkins to perform testing and artifact building.
      • When we only had two git repositories, we had one Jenkins master and a few slaves. This was easy to maintain.
      • Things have grown significantly, with 1,200 git repositories and 8,000 jobs spread across 8 Jenkins masters and 800 dynamic slave nodes.
    • Jenkins Job Builder [13] was created to generate those 8,000 jobs from templated YAML.
    • Zuul [14] was created to drive project automation in directing our testing, running tens of thousands of jobs each day. Responding to:
      • Code reviews
      • Stacking potential changes to be tested together.
    • Zuul version 3 has major changes:
      • Easier to run jobs in multi-node environments
      • Easy to manage large number of jobs
      • Job variations
      • Support in-tree job configuration
      • Ability to define jobs using Ansible
    • While version 3 is still in development, it’s today capable of running our jobs entirely.
    • As of June 16th, we have turned off our last Jenkins master and all of our automation is being run by Zuul.
      • Jenkins job builder has contributors beyond OpenStack, and will be continued to be maintained by them.
    • Full thread

    Languages vs. Scope of “OpenStack”

    • Where does OpenStack stop, and where does the wider open source community start? Two options:
      • If OpenStack is purely an “integration engine” to lower-level technologies (e.g. hypervisors, databases, block storage) the scope is limited and Python should be plenty and we don’t need to fragment our community.
      • If OpenStack is “whatever it takes to reach our mission”, then yes we need to add one language to cover lower-level/native optimization.
    • Swift PTL John Dickinson mentions defining the scope of OpenStack projects does not define the languages needed to implement them. The considerations are orthogonal.
      • OpenStack is defined as whatever it takes to fulfill the mission statement.
      • Defining “lower level” is very hard. Since the Nova API listens on public network interfaces and coordinates with various services in a cluster, it is arguably low-level enough to consider optimizations.
    • Another approach is product-centric: “Lower-level pieces are OpenStack dependencies, rather than OpenStack itself.”
      • Not governed by the TC, and it can use any language and tool deemed necessary.
      • There are a large number of open source projects and libraries that OpenStack needs to fulfill its mission that are not “OpenStack”: Python, MySQL, KVM, Ceph, OpenvSwitch.
    • Do we want to be in the business of building data plane services that will run into Python limitations?
      • Control plane services are very unlikely to ever hit a scaling concern where rewriting in another language is needed.
    • Swift hit limitations in Python first because of the maturity of the project and they are now focused on this kind of optimization.
      • Glance (partially data plane) did hit this limit, and mitigated it by having folks use Ceph and exposing that directly to Nova. So now Glance only cares about location and metadata; dependencies like Ceph care about the data plane.
    • The resolution for the Go programming language was discussed in previous Technical Committee meetings and was not passed [14]. John Dickinson and others do plan to carry another effort forward for Swift to have an exception for usage of the language.
    • Full thread

     

    by Mike Perez at June 17, 2016 06:09 PM

    OpenStack in Production

    Scaling Magnum and Kubernetes: 2 million requests per second

    Two months ago, we described in this blog post how we deployed OpenStack Magnum in the CERN cloud. It is available as a pre-production service and we're steadily moving towards full production mode.

    As part of this effort, we've started testing the upgrade procedures, the latest being to the final Mitaka release. If you're here to see some fancy load tests, keep reading below, but first some interesting details on the upgrade:
    • We build our own RPMs to include a few patches from post-Mitaka upstream (the most important being the trustee user to support lifecycle operations on the bays) and some CERN customizations (removal of neutron LBaaS and floating ips which we don't yet have, adding the CERN Certificate Authority, ...). Check here for the patches and build procedure
    • We build our own Fedora Atomic 23 image to get more recent versions of docker and kubernetes (1.10 and 1.2, respectively), plus support for an internal distributed filesystem called CVMFS. We do use the upstream diskimage-builder procedure with a few additional elements, available here
    While discussing how we could further test the service, we thought of this kubernetes blog post, which achieved 1 million requests per second against a service running on a kubernetes cluster. We thought we could probably do the same. Requirements included:
    • kubernetes 1.2, which our recent upgrade offered
    • available resources to deploy the cluster, and luckily we were installing a new batch of a few hundred physical nodes which could be used for a day or two
    So along with the upgrade, Bertrand and Mathieu got to work to test this setup, and we quickly got it up and running.
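
    For context, creating a bay of that size is a couple of commands against the Magnum API (a sketch; the bay and baymodel names are illustrative, and the flags are those of the Mitaka-era python-magnumclient):

    # create a 200-minion Kubernetes bay from an existing baymodel
    magnum bay-create --name perftest --baymodel k8s-fa23 --node-count 200
    # poll until the bay status reaches CREATE_COMPLETE
    magnum bay-show perftest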

    Quick summary of the setup:
    • 1 kubernetes bay
    • 1 master node, 16 cores (not really needed but why not)
    • 200 minions, 4 cores each
    In total there are 800 cores, which matches the cluster used in the original test. How did our test go?



    We ended up trying a bit more and doubled the number to 2 million requests per second :)



    We learned a few things on the way:
    • set Heat's max_resources_per_stack to something big. Magnum stacks create a lot of resources, and with bays of hundreds of nodes the value gets high enough that unlimited (-1) is tempting - we have it like that now (see the config sketch after this list). That leaves the option for people to deploy a stack with so many resources that Heat could break, so we'll investigate what the best value is
    • while creating and deleting many large bays, Heat shows errors like 'TimeoutError: QueuePool limit of size ... overflow ... reached', which we've seen in the past for other OpenStack services. We'll contribute the patch to fix it upstream if it's not there yet
    • latency values get high even before the 1 million barrier; we'll check the demo code and our setup further (using local disk, in this case SSDs, instead of the default volume attachment in Magnum should help)
    • Heat timeout and retrial configuration values need to be tuned to manage very large stacks. We're still not sure what are the best values, but will update the post once we have them
    • Magnum shows 'Too many files opened' errors; we also have a fix to contribute for this one
    • Nova, Cinder (bay nodes use a volume), Keystone and all other OpenStack services scaled beautifully. Our cloud usually has a rate of ~150 VMs created and deleted per hour; here's the plot for the test period. We eventually tried bays of up to 1000 nodes
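
    For reference, the resource limit mentioned above lives in heat.conf (a sketch; -1 means unlimited, and as noted we're still investigating a safer value):

    [DEFAULT]
    # Magnum bays of hundreds of nodes blow past the default limit
    max_resources_per_stack = -1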


    And what's next? 
    • Larger bays: at the end of these tests we deployed a few bigger bays with 300, 500 and 1000 nodes. And in just a couple weeks there will be a new batch of physical nodes arriving, so we plan to upgrade Heat to Mitaka and build on the recent upstream work (by Spyros together with Ton and Winnie from IBM) adding Magnum scenarios to Rally to run additional scale tests and see where it breaks
    • Bay lifecycle: we stopped at launching a large number of requests in a bay, next we would like to perform bay operations (update of number of nodes, node replacement) and see which issues (if any) we find in Magnum
    • New features: lots of upstream work going on, so we'll do regular Magnum upgrades (cinder support, improved bay monitoring, support for some additional internal systems at CERN)
    And there's also Swarm and Mesos, we plan on testing those soon as well. And kubernetes updated their test, so stay tuned...

    Acknowledgements

    • Bertrand Noel, Mathieu Velten and Spyros Trigazis, for the work upstream and integrating Magnum at CERN, and on getting these demos running
    • Rackspace for their support within the CERN Openlab on running containers at scale
    • Indigo Datacloud building a platform as a service for e-science in Europe
    • Kubernetes for an awesome tool and the nice demo
    • All in the CERN OpenStack Cloud team, for a great service (especially Davide Michelino and Belmiro Moreira for all the work integrating Neutron at CERN)
    • The upstream Magnum team, for building what is now looking like a great service, and we look forward for what's coming next (bay drivers, bare metal support, and much more)
    • Tim, Arne and Jan for letting us use the new hardware for a few days

    by Ricardo Rocha (noreply@blogger.com) at June 17, 2016 01:50 PM

    Adam Young

    The difference between auth_uri and auth_url in auth_token

    Dramatis Personae:

    Adam Young, Jamie Lennox: Keystone core.

    Scene: #openstack-keystone chat room.

    ayoung: I still don’t understand the difference between url and uri
    jamielennox: auth_uri ends up in “WWW-Authenticate: Keystone uri=%s” header. that’s its only job
    ayoung: and what is that meant to do? tell someone where they need to go to authenticate?
    jamielennox: yea, it gets added to all 401 responses and then i’m pretty sure everyone ignores it
    ayoung: so they should be the same thing, then, right? I mean, we say that the Keystone server that you authenticate against is the one that nova is going to use to validate the token. and the version should match
    jamielennox: depends, most people use an internal URL for auth_url but auth_uri would get exposed to the public
    ayoung: ah
    jamielennox: there should be no version in auth_uri
    ayoung: so auth_uri=main auth_url=admin in v2.0 speak
    jamielennox: yea. more or less. ideally we could default it way better than that, like auth.get_endpoint(‘identity’, interface=’public’), but that gets funny
    ayoung: This should be a blog post. You want to write it or shall I? I’m basically just going to edit this conversation.
    jamielennox: mm, blog, i haven’t written one of those for a while

    (scene)
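
    In config-file terms, the conversation boils down to something like this in a service’s keystone_authtoken section (a sketch with illustrative endpoints):

    [keystone_authtoken]
    # public, unversioned endpoint: advertised to clients in the
    # "WWW-Authenticate: Keystone uri=..." header on 401 responses
    auth_uri = https://keystone.example.com:5000
    # internal endpoint the service itself uses to validate tokens
    auth_url = http://192.0.2.5:35357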

    by Adam Young at June 17, 2016 03:10 AM

    June 16, 2016

    AppFormix

    There’s More to Optimizing App Performance Than Maximizing CPU Availability

    When running on software-defined infrastructure, applications share physical hardware resources. Resource contention on a host can lead to reduced and unpredictable performance of an application.

    TL;DR… It does not matter how much CPU is available if workloads are competing for memory.

    by Sumeet Singh (sumeet@appformix.com) at June 16, 2016 11:14 PM

    James E. Blair

    There is no Jenkins, only Zuul

    Since its inception, the OpenStack project has used Jenkins to perform its testing and artifact building.  When OpenStack was two git repos, we had one Jenkins master, a few slaves, and we configured all of our jobs manually in the web interface.  It was easy for a new project like OpenStack to set up and maintain.  Over the years, we have grown significantly, with over 1,200 git repos and 8,000 jobs spread across 8 Jenkins masters and 800 dynamic slave nodes.  Long before we got to this point, we could not manage all of those jobs by hand, so we wrote Jenkins Job Builder, one of our more widely used projects, so that we could automatically generate those 8,000 jobs from templated YAML.

    We also wrote Zuul.

    Zuul is a system to drive project automation.  It directs our testing, running tens of thousands of jobs each day, responding to events from our code review system and stacking potential changes to be tested together.

    We are working on a new version of Zuul (version 3) with some major changes: we want to make it easier to run jobs in multi-node environments, easier to manage large numbers of jobs and job variations, support in-tree job configuration, and the ability to define jobs using Ansible.

    With Zuul in charge of deciding which jobs to run, and when and where to run them, we use very few advanced features of Jenkins at this point.  While we are still working on Zuul v3, we are at a point where we can start to use some of the work we have done already to switch to running our jobs entirely with Zuul.

    As of today, we have turned off our last Jenkins master and all of our automation is being run by Zuul.  It's been a great ride, and OpenStack wouldn't be where it is today without Jenkins.  Now we're looking forward to focusing on Zuul v3 and exploring the full potential of project automation.

     

     

    by James E. Blair (corvus@inaugust.com) at June 16, 2016 09:29 PM

    Kenneth Hui

    Is OpenStack Software Dead?

    King

    Boris Renski, CMO and co-founder of Mirantis, created a stir yesterday with his blog post, “Infrastructure Software is Dead,” which calls into question the viability of having users deploy and operate infrastructure software such as OpenStack on their own.

    Boris says it was a long time coming but he now realizes that you help customers be successful with OpenStack not by building a better mousetrap but by offering a better service.

    Given its history and involvement in the OpenStack project, we are gratified to see Mirantis begin to embrace our point of view. From the beginning, Rackspace knew that building a successful cloud would involve not only the right software but also the right operational and services model.

    That intuition has been validated though five years of running the world’s largest OpenStack public cloud and partnering with enterprises to build the largest production OpenStack private clouds.

    To read more about the Rackspace view on OpenStack as a product or as a service, please click here to go to my article on the Rackspace blog site.


    Filed under: Cloud, Cloud Computing, OpenStack, Private Cloud, Public Cloud Tagged: Cloud, Cloud computing, OpenStack, Private Cloud, Public Cloud, Rackspace

    by kenhui at June 16, 2016 05:37 PM

    Adam Young

    Learning about the Overcloud Deploy Process

    The process of deploying the overcloud goes through several technologies. Here’s what I’ve learned about tracing it.

    I am not a Heat or Tripleo developer. I’ve just started working with Tripleo, and I’m trying to understand this based on what I can gather, the documentation out there, and the little bit of experience I’ve had working with Tripleo. Anything I say here might be wrong. If someone who knows better can point out my errors, please do so.

    [UPDATE]: Steve Hardy has corrected many points, and his comments have been noted inline.

    To kick the whole thing off in the simplest case, you would run the command openstack overcloud deploy.
    [Diagram: VM config changes via Heat]

    Roughly speaking, here is the sequence (as best as I can tell)

    1.  User types  openstack overcloud deploy on the command line
    2. This calls up the common cli, which parses the command, and matches the tripleo client with the overcloud deploy subcommand.
    3. tripleo client is a thin wrapper around the Heat client, and calls the equivalent of heat stack-create overcloud
    4. python-heatclient (after Keystone token stuff) calls the Heat API server with the URL and data to do a stack create
    5. Heat makes the appropriate calls to Nova (running the Ironic driver) to activate a baremetal node and deploy the appropriate instance on it.
    6. Before the node is up and running, Heat has posted Hiera data to the metadata server.
    7. The newly provisioned machine will run cloud-init which in turn runs os-collect-config.
      [update] Steve Hardy’s response:

      This isn’t strictly accurate – cloud-init is used to deliver some data that os-collect-config consumes (via the heat-local collector), but cloud-init isn’t involved with actually running os-collect-config (it’s just configured to start in the image).

    8. os-collect-config will start polling for changes to the metadata.
    9. [UPDATE] I originally wrote that os-collect-config will start calling Puppet apply based on the hiera data; per Steve’s note below, it runs os-refresh-config only, which then invokes a script that runs puppet.
      Steve’s note:

      os-collect-config never runs puppet, it runs os-refresh-config only, which then invokes a script that runs puppet.

    10. The Keystone Puppet module will set values in the Keystone config file, httpd/conf.d files, and perform other configuration work.

    Here is a diagram of how os-collect-config is designed

    When a controller image is built for Tripleo, some portion of the Hiera data is stored in /etc/puppet/. There is a file /etc/puppet/hiera.yaml (which looks a lot like /etc/hiera.yaml, an RPM-controlled file) and sub-files in /etc/puppet/hieradata, such as
    /etc/puppet/hieradata/heat_config_ControllerOvercloudServicesDeployment_Step4.json
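
    To see what a node actually resolves for a given key, you can query hiera with the same configuration Puppet uses (a sketch; 'step' is a key the overcloud manifests consult, and the value shown is illustrative):

    $ sudo hiera -c /etc/puppet/hiera.yaml step
    5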

    UPDATE: Response from Steve Hardy

    This is kind-of correct – we wait for the server to become ACTIVE, which means the OS::Nova::Server resource is declared CREATE_COMPLETE. Then we do some network configuration, and *then* we post the hieradata via a heat software deployment.

    So, we post the hieradata to the heat metadata API only after the node is up and running, and has it’s network configured (not before).

    https://github.com/openstack/tripleo-heat-templates/blob/master/puppet/controller.yaml#L610

    Note the depends_on – we use that to control the ordering of configuration performed via heat.

    However, the dynamic data seems to be stored in /var/lib/os-collect-config/

    $ ls -la  /var/lib/os-collect-config/*json
    -rw-------. 1 root root   2929 Jun 16 02:55 /var/lib/os-collect-config/ControllerAllNodesDeployment.json
    -rw-------. 1 root root    187 Jun 16 02:55 /var/lib/os-collect-config/ControllerBootstrapNodeDeployment.json
    -rw-------. 1 root root   1608 Jun 16 02:55 /var/lib/os-collect-config/ControllerCephDeployment.json
    -rw-------. 1 root root    435 Jun 16 02:55 /var/lib/os-collect-config/ControllerClusterDeployment.json
    -rw-------. 1 root root  36481 Jun 16 02:55 /var/lib/os-collect-config/ControllerDeployment.json
    -rw-------. 1 root root    242 Jun 16 02:55 /var/lib/os-collect-config/ControllerSwiftDeployment.json
    -rw-------. 1 root root   1071 Jun 16 02:55 /var/lib/os-collect-config/ec2.json
    -rw-------. 1 root root    388 Jun 15 18:38 /var/lib/os-collect-config/heat_local.json
    -rw-------. 1 root root   1325 Jun 16 02:55 /var/lib/os-collect-config/NetworkDeployment.json
    -rw-------. 1 root root    557 Jun 15 19:56 /var/lib/os-collect-config/os_config_files.json
    -rw-------. 1 root root 263313 Jun 16 02:55 /var/lib/os-collect-config/request.json
    -rw-------. 1 root root   1187 Jun 16 02:55 /var/lib/os-collect-config/VipDeployment.json
    

    For each of these files there are two older copies that end in .last and .orig as well.

    In my previous post, I wrote about setting Keystone configuration options such as ‘identity/domain_specific_drivers_enabled’: value => ‘True’;. I can see this value set in /var/lib/os-collect-config/request.json, inside a large block keyed “config”.
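
    A quick way to confirm that on the node is to pretty-print the file and grep for the option (json.tool ships with the Python standard library, so nothing extra is needed):

    $ sudo python -m json.tool /var/lib/os-collect-config/request.json | grep -i domain_specific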

    When I ran the openstack overcloud deploy, one way that I was able to track what was happening on the node was to tail the journal like this:

     sudo journalctl -f | grep collect-config
    

    Looking through the journal output, I can see the line that triggered the change:

    ... /Stage[main]/Main/Keystone_config[identity/domain_specific_drivers_enabled]/ensure: ...
    

    by Adam Young at June 16, 2016 03:13 AM

    June 15, 2016

    Adam Young

    Custom Overcloud Deploys

    I’ve been using Tripleo Quickstart. I need custom deploys, so I’ll start with modifying the Heat templates. I’m doing a Mitaka deploy.

    git clone https://github.com/openstack/tripleo-heat-templates.git
    cd tripleo-heat-templates/
    git branch --track mitaka origin/stable/mitaka
    git checkout mitaka
    
    diff -r  /usr/share/openstack-tripleo-heat-templates/ tripleo-heat-templates/
    

    Mine shows some differences, but only in the file extraconfig/tasks/liberty_to_mitaka_aodh_upgrade_2.pp, which should be OK. The commit is

    Add redis constraint to aodh upgrade manifest

    Modify the launch script in /home/stack

    $ diff overcloud-deploy.sh.orig overcloud-deploy.sh
    48c48
    < openstack overcloud deploy --templates --libvirt-type qemu --control-flavor oooq_control --compute-flavor oooq_compute --ceph-storage-flavor oooq_ceph --timeout 60 -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml -e $HOME/network-environment.yaml --neutron-network-type vxlan --neutron-tunnel-types vxlan --ntp-server pool.ntp.org \
    ---
    > openstack overcloud deploy --templates  /home/stack/tripleo-heat-templates --libvirt-type qemu --control-flavor oooq_control --compute-flavor oooq_compute --ceph-storage-flavor oooq_ceph --timeout 60 -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml -e $HOME/network-environment.yaml --neutron-network-type vxlan --neutron-tunnel-types vxlan --ntp-server pool.ntp.org \
    

    The only change should be from

    --templates  #(followed by another flag which means that --templates takes the default) 
    

    to

    --templates /home/stack/tripleo-heat-templates 
    

    OK…let’s make sure we still have a stable system. First, tear down the overcloud deliberately:

    [stack@undercloud ~]$ . ./stackrc 
    [stack@undercloud ~]$ heat stack-delete overcloud
    Are you sure you want to delete this stack(s) [y/N]? y
    +--------------------------------------+------------+-----------------+---------------------+--------------+
    | id                                   | stack_name | stack_status    | creation_time       | updated_time |
    +--------------------------------------+------------+-----------------+---------------------+--------------+
    | 00d81e5b-c2f9-4f6a-81e8-b135fadba921 | overcloud  | CREATE_COMPLETE | 2016-06-15T18:01:25 | None         |
    +--------------------------------------+------------+---------------
    

    Wait until the delete is complete with

    $ watch heat stack-list
    

    Wait until it changes from

    +--------------------------------------+------------+--------------------+---------------------+---------
    -----+
    | id                                   | stack_name | stack_status	 | creation_time       | updated_
    time |
    +--------------------------------------+------------+--------------------+---------------------+---------
    -----+
    | 00d81e5b-c2f9-4f6a-81e8-b135fadba921 | overcloud  | DELETE_IN_PROGRESS | 2016-06-15T18:01:25 | None
         |
    +--------------------------------------+------------+--------------------+---------------------+---------
    -----+
    

    To

    +----+------------+--------------+---------------+--------------+
    | id | stack_name | stack_status | creation_time | updated_time |
    +----+------------+--------------+---------------+--------------+
    +----+------------+--------------+---------------+--------------+
    

    And now run the modified overcloud deploy:

    ./overcloud-deploy.sh
    

    The end of the output looks like this

    Stack overcloud CREATE_COMPLETE
    /home/stack/.ssh/known_hosts updated.
    Original contents retained as /home/stack/.ssh/known_hosts.old
    PKI initialization in init-keystone is deprecated and will be removed.
    Warning: Permanently added '192.0.2.9' (ECDSA) to the list of known hosts.
    The following cert files already exist, use --rebuild to remove the existing files before regenerating:
    /etc/keystone/ssl/certs/ca.pem already exists
    /etc/keystone/ssl/private/signing_key.pem already exists
    /etc/keystone/ssl/certs/signing_cert.pem already exists
    Connection to 192.0.2.9 closed.
    Skipping "horizon" postconfig because it wasn't found in the endpoint map output
    Overcloud Endpoint: http://10.0.0.4:5000/v2.0
    Overcloud Deployed
    + heat stack-list
    + grep -q CREATE_FAILED
    + exit 0
    

    Don’t be fooled by the last line grep -q CREATE_FAILED as that is the shell script execution logging, not a statement of failure.
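
    Based on that trace, the tail of overcloud-deploy.sh is presumably doing something like the following (a reconstruction from the output above, not the script’s verbatim contents):

    # fail only if some stack ended up CREATE_FAILED
    heat stack-list | grep -q CREATE_FAILED && exit 1
    exit 0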

    OK, to do a proper “Hello, World” here I’d really like to be able to effect change on the deployment. I’m going to try to set a couple of Keystone config values that are not set (yet) in /etc/keystone/keystone.conf.

    In my undercloud git repo for tripleo-heat-templates, I make changes to the overcloud post config.

    $ git diff
    diff --git a/puppet/manifests/overcloud_controller.pp b/puppet/manifests/overcloud_controller.pp
    index c353ec0..c6385d4 100644
    --- a/puppet/manifests/overcloud_controller.pp
    +++ b/puppet/manifests/overcloud_controller.pp
    @@ -223,6 +223,11 @@ if hiera('step') >= 3 {
     
       #TODO: need a cleanup-keystone-tokens.sh solution here
     
    +  keystone_config {  
    +   'identity/domain_specific_drivers_enabled': value => 'True';  
    +   'identity/domain_config_dir': value => '/etc/keystone/domains';  
    +  }  
    +
       file { [ '/etc/keystone/ssl', '/etc/keystone/ssl/certs', '/etc/keystone/ssl/private' ]:
         ensure  => 'directory',
         owner   => 'keystone',
    

    And rerun

    ./overcloud-deploy.sh
    

    Once it has successfully deployed, I can check to see if the change shows up in the keystone.conf file.

    $ . ./stackrc 
    [stack@undercloud ~]$ openstack server list
    +--------------------------------------+-------------------------+--------+---------------------+
    | ID                                   | Name                    | Status | Networks            |
    +--------------------------------------+-------------------------+--------+---------------------+
    | 761a1b61-8bd1-4b85-912b-775e51ad99f3 | overcloud-controller-0  | ACTIVE | ctlplane=192.0.2.11 |
    | f123da36-9b05-4fc3-84bb-4af147fa76f7 | overcloud-novacompute-0 | ACTIVE | ctlplane=192.0.2.10 |
    +--------------------------------------+-------------------------+--------+---------------------+
    [stack@undercloud ~]$ ssh heat-admin@192.0.2.11
    $ sudo grep domain_specific /etc/keystone/keystone.conf
    #domain_specific_drivers_enabled = false
    domain_specific_drivers_enabled = True
    # if domain_specific_drivers_enabled is set to true. (string value)
    [heat-admin@overcloud-controller-0 ~]$ sudo grep domain_config_dir /etc/keystone/keystone.conf
    #domain_config_dir = /etc/keystone/domains
    domain_config_dir = /etc/keystone/domains
    

    Changes applied.

    by Adam Young at June 15, 2016 08:34 PM

    Major Hayden

    What’s Happening in OpenStack-Ansible (WHOA) – June 2016

    The world of OpenStack moves quickly. Each day brings new features, new bug fixes, and new ways of thinking. The OpenStack-Ansible community strives to understand these changes and make them easier for operators to implement.

    The OpenStack-Ansible project is a collection of playbooks and roles written by operators for operators. These playbooks make it easier to deploy, maintain, and upgrade an OpenStack cloud.

    Keeping up with the changes in the OpenStack-Ansible project is challenging. After reading Hugh Blemings’ Last Week in OpenStack Dev reports, I thought it would be useful to have a more focused newsletter on where OpenStack-Ansible has been recently and where it will go. My goal is to share this on a monthly cadence, but that may change over time.

    Without further ado, here is the inaugural WHOA report for June 2016!

    New releases

    The OpenStack-Ansible project has four active branches under various stages of development.

    Newton

    The Newton (master) branch is still under heavy development and will be released later in the year along with other OpenStack projects. For more details on the Newton development efforts, take a look at the Notable developments and On the horizon sections below.

    Mitaka

    The latest release in the Mitaka branch is 13.1.2 and it was released on June 2nd, 2016.

    This release contained several new backported features and fixes, such as:

    • Horizon: LBaaS v2 panels and IPv6 management support can be enabled
    • Swift: better handling for full disks
    • Security: fixes for audit logs filling disks and ssh configurations with Match stanzas

    More details are available in the full release notes for 13.1.2.

    The release also contains many updates for OpenStack services and related dependencies.

    Liberty

    The latest release in the Liberty branch is 12.0.14 and it was released on June 2nd 2016.

    This release contains many of the same fixes that appeared in the Mitaka release (see above).

    More details are available in the full release notes for 12.0.14.

    The release also contains many updates for OpenStack services and related dependencies.

    Kilo

    Although no releases have appeared in the Kilo branch in the last 30 days, work is being done for the final kilo release. This release will be tagged as 11.2.17 and should be available once the upstream OpenStack projects have completed their kilo-eol tags.

    There is a mailing list thread about the kilo-eol tagging efforts, with status updates specific to each OpenStack project.

    Notable discussions

    Want to discuss OpenStack-Ansible with the community? We want to hear from you!

    Feel free to join us in #openstack-ansible on Freenode or send email to openstack-dev@lists.openstack.org with [openstack-ansible] in the subject line. Our meeting times and logs from previous meetings are in the OpenStack-Ansible wiki.

    Newton mid-cycle

    The planning for the Newton mid-cycle is underway. It will be held at Rackspace’s headquarters in San Antonio, Texas from August 10th through the 12th. You can find lots of details about the venues and hotel arrangements in the etherpad.

    It’s possible that remote participation will work in the room, but that isn’t guaranteed at this time. Previous attempts at videoconferencing didn’t work terribly well, but we will give it our best try!

    Notable developments

    Many blueprints are in flight for the Newton release. I will touch on some of the most important and the most impactful ones here.

    Ubuntu 16.04 (Xenial) support

Many of the OpenStack-Ansible roles are compatible with Ubuntu 16.04 or are on the way to becoming compatible. All of the roles have non-voting gate jobs enabled for Ubuntu 16.04 to make it easier for developers to see how their patches work on multiple versions of Ubuntu.

    For the latest updates on which roles are compatible with Ubuntu 16.04, refer to the etherpad. The blueprint is on Launchpad and you can follow along with the latest patches by filtering the bp/support-ubuntu-1604 topic in Gerrit.

    Installation documentation overhaul

    At the Summit in Austin, many people mentioned that the OpenStack-Ansible installation documentation contains lots of great information, but it is very difficult to navigate. It can become even more challenging for newcomers.

    A new spec lays out the plans for the improvements in the Newton release and work is already underway.

    Here are some more helpful links if you want to follow the development or make improvements of your own:

    Ansible 2.1 support

    Ansible 2.1 is now the default Ansible version used in OpenStack-Ansible for the Newton release. Individual role cleanups to address deprecation warnings and bare variables are in progress.

    On the horizon

    The OpenStack-Ansible community is always looking for ways to improve OpenStack-Ansible. Some of these improvements allow the project to do something completely new, such as deploying a new OpenStack service. Others make the project easier to use or reduce the time required for deployment.

    The following work items are in various stages of development in the Newton release.

    Astara

    Work is underway to deploy Astara with OpenStack-Ansible. Astara provides a new way to handle layer 3 networking services, such as routers, load balancers, and VPN endpoints. Neutron would still handle the layer 2 networking (such as VLAN tagging), but Astara would run layer 3 services within service virtual machines that exist under an admin tenant.

    For more details, see the blueprint or the etherpad. Phil Hopkins is leading this effort and hopes to have a spec proposed soon.

    Magnum

The Magnum project allows users to deploy and manage Docker and Kubernetes environments via calls to the Magnum API.

    Many of the OpenStack-Ansible and Magnum developers have submitted patches to make the os_magnum role more mature and feature-rich.

    Multiple architectures and PowerVM

We welcomed some new community members from IBM who are working to implement PowerVM support in OpenStack-Ansible. Since most of our use cases have been on x86-based systems, this requires some rework in various places within the project. Some of the biggest changes will be made within the repo server since some python modules contain code that must be compiled on the Power architecture.

    Adam Reznechek from IBM is drafting a spec that provides details on the required changes. These changes should allow deployers to mix and match hosts within an OpenStack environment. This could allow for some interesting use cases:

    • Mixed deployments of x86 and Power hypervisors
    • x86 control plane with Power hypervisors
    • Power control plane with x86 hypervisors

    This flexible framework could allow for ARM development in later releases.

    Operator scripts

    A new repository called openstack-ansible-ops will contain resources for operators of OpenStack-Ansible clouds.

    One of the first proposed scripts is osa-differ.py. This script allows operators to understand the OpenStack changes that exist between two OpenStack-Ansible releases. If you’re waiting on a fix for a bug in an OpenStack service, you could use this script to see if the fix made it into a particular version of OpenStack-Ansible.
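As a rough sketch of the idea (this is not the osa-differ code itself), comparing two tags boils down to diffing the pinned service SHAs between them; the pin file path and the *_git_install_branch variable naming below are assumptions based on the project's layout:

    import subprocess

    import yaml

    # Assumed location of the service pins within an openstack-ansible checkout.
    PINS_FILE = "playbooks/defaults/repo_packages/openstack_services.yml"

    def pins_at(tag):
        """Return the pin variables defined at a given git tag."""
        raw = subprocess.check_output(["git", "show", "%s:%s" % (tag, PINS_FILE)])
        return yaml.safe_load(raw) or {}

    def diff_pins(old_tag, new_tag):
        """Print every service SHA pin that changed between two tags."""
        old, new = pins_at(old_tag), pins_at(new_tag)
        for key in sorted(set(old) & set(new)):
            if key.endswith("_git_install_branch") and old[key] != new[key]:
                print("%s: %s -> %s" % (key, old[key], new[key]))

    # Run from an openstack-ansible checkout, for example:
    # diff_pins("13.1.1", "13.1.2")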

    Sahara

    The Sahara project provisions services like Hadoop or Spark to process big chunks of data. OpenStack-Ansible can deploy Sahara now with the new os_sahara role.

    Xen

    Many large clouds use the Xen hypervisor and Antony Messerli is working to implement Xen hypervisor support with libvirt in OpenStack-Ansible. A new spec recently merged that explains the benefits behind the work as well as the changes that are required to make it work.

    Feedback?

The goal of this newsletter is threefold:

    • Keep OpenStack-Ansible developers updated with new changes
    • Inform operators about new features, fixes, and long-term goals
• Bring more people into the OpenStack-Ansible community to share their use cases, bugs, and code

    Please let me know if you spot any errors, areas for improvement, or items that I missed altogether. I’m mhayden on Freenode IRC and you can find me on Twitter anytime.

    The post What’s Happening in OpenStack-Ansible (WHOA) – June 2016 appeared first on major.io.

    by Major Hayden at June 15, 2016 07:58 PM

    Mirantis

    Infrastructure Software is Dead

    The post Infrastructure Software is Dead appeared first on Mirantis | The Pure Play OpenStack Company.

    It took me too long to realize it, but now I am certain. Infrastructure software is dead. And with it, numbered are the days of any company whose core business is pinned to selling licenses or subscriptions to infrastructure software bits. It feels as though the entire industry is stuck in a self-perpetuating cycle of trying to justify its own existence. And if we keep it up, we’ll be dead too.  

Let me take a step back and set the context. We came late to the OpenStack game. At that time, software startups like Piston and Cloudscaling, as well as established companies like Red Hat and HP, were well underway in their race for the “king of OpenStack software” title. Fast forward five years and we are behind the largest OpenStack deployments on the planet, able to win deals from a super competent rival with 20 years of open source history.

Now I’d love to tell you that it’s all because Mirantis OpenStack software is so much better than everybody else’s OpenStack software, but I’d be lying. Everybody’s OpenStack software is equally bad. It’s also as bad as all the other infrastructure software out there – software-defined networking, software-defined storage, cloud management platforms, platforms-as-a-service, container orchestrators, you name it. It’s all full of bugs, hard to upgrade and a nightmare to operate. It’s all bad.

    But none of this matters, because today customers don’t care about software. Customers care about outcomes. And the reason Mirantis has been successful is because, despite ourselves, outcomes are what we’ve been able to deliver to our customers by complementing crappy OpenStack software with hordes of talented infrastructure hackers that made up for the gaps. We didn’t win because of the software. We won because we’ve been shouldering the pain associated with turning OpenStack software into customer outcomes.

Seventeen years ago, Salesforce.com transformed the business application space by pioneering the SaaS model. Ten years ago AWS did the same for infrastructure. Neither of these transformations was about the software; they were about the software delivery model.

When Salesforce.com launched, Siebel was a 2 billion dollar software company. At the time, Siebel could not have beaten SFDC by writing a better version of its CRM software, because SFDC did not innovate on the software; it innovated on the delivery model that abstracted customers away from the pain of operating (forever crappy) enterprise CRM software.

    In hindsight it is obvious, right? Why is it then that after AWS disrupted the infrastructure delivery model a decade ago, today we still see infrastructure startups and veterans trying to capture market share by shipping a better version of infrastructure software? Would it not be the same as Siebel trying to beat SFDC by building “better” CRM, while continuing to ship it on CDs?

    Analysts constantly ask me: “Is OpenStack mature for the enterprise?” Just a couple more years and it will be mature, right? Wrong. OpenStack is quintessential enterprise software. No enterprise software delivered as packaged software bits will ever be mature for today’s enterprises that are hooked on the cloud delivery model. Especially not the cloud software itself.

At Mirantis we entered the market with a services-first approach, focused on helping customers build and operate OpenStack before ever releasing an OpenStack software distribution. To this day, many still don’t understand the business model. “Do you want to be a product company or a services company?” – we get asked. “Is AWS a product company or a services company?” – I reply. Cloud is about redefining the notion of “product.” Software is the product of yesterday. The product of today is a combination of software and, more importantly, a service to operate that software, getting the customer to their desired outcome. So when Mirantis is discounted as “primarily a services company”, I reply: “You bet we are!”

    The post Infrastructure Software is Dead appeared first on Mirantis | The Pure Play OpenStack Company.

    by Boris Renski at June 15, 2016 04:04 PM

    Opensource.com

    5 new OpenStack tutorials for cloud mastery

    Every month, Opensource.com compiles the best new OpenStack tutorials into a handy collection for your browsing pleasure.

    by Jason Baker at June 15, 2016 07:00 AM

    June 14, 2016

    RDO

    Community Central at Red Hat Summit

OpenStack swims in a larger ecosystem of community projects. At the upcoming Red Hat Summit in San Francisco, RDO will be sharing the Community Central section of the show floor with several of these projects.

We'll have a series of presentations featuring various projects running throughout the event, including OpenStack, oVirt, Ansible, Ceph, Gluster, CentOS, and more.

    For the full schedule of community talks, see the Red Hat Summit schedule.

    We hope to see you in San Francisco. If you haven't registered yet, use the discount code UG2016 for a $500 discount on your registration.

    by Rich Bowen at June 14, 2016 07:07 PM

    Ronald Bradford

    Utilizing OpenStack Trove DBaaS for deployment management

Trove is used for self-service provisioning and lifecycle management for relational and non-relational databases in an OpenStack cloud. Trove provides a RESTful API that is the same regardless of the type of database. CLI tools and a web UI via Horizon are also provided, wrapping Trove API requests.

In simple terms: you are a MySQL shop. You run a replication environment with daily backups and failover capabilities, which you test and verify regularly. You have defined DBA and user credential ACLs across dev, test, and prod environments. Now there is a request to use MongoDB or Cassandra; the engineering department has not decided, but they want to evaluate the capabilities. How long, as an operator, does it take to acquire the software, install, configure, set up replication, backups, and ACLs, and enable the engineering department to evaluate the products?

With Trove DBaaS this complexity is eliminated, because Trove provides a consistent interface for provisioning, configuration, HA, backup and restore, and ACL management across other products, the exact same way you perform these tasks for MySQL. This enables operations to be very proactive about changing technology requests, supporting digital transformation strategies.
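To make that concrete, here is a minimal sketch using the python-troveclient v1 API; the credentials, endpoint, flavor, and datastore versions are illustrative assumptions:

    from troveclient.v1 import client

    # All values below are illustrative, not defaults.
    trove = client.Client("admin", "secret",
                          project_id="demo",
                          auth_url="http://controller:5000/v2.0")

    # The same workflow provisions either datastore; only the datastore
    # arguments change, not the calls.
    for datastore, version in [("mysql", "5.6"), ("cassandra", "2.1")]:
        trove.instances.create(name="eval-%s" % datastore,
                               flavor_id="2",
                               volume={"size": 5},
                               datastore=datastore,
                               datastore_version=version)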

Enabling this capability is not an automatic approval of a new technology stack. It is important that strategic planning, support, and management are included in the business strategy so that the organization understands its DBaaS capability. Examples of operations due diligence include integrating these products into your monitoring, logging, and alerting systems; determining what additional disk storage may be needed; and testing, verifying, and timing recovery strategies.

Trove specifically leverages several other OpenStack services for source image and instance management. Each Trove guest image includes a base operating system, the applicable database software, and a database-specific Trove guest agent. This agent is the intelligence that knows the specific syntax and version requirements needed to perform its tasks, and it is also the communication mechanism between Trove and the running Nova instance.

Trove is a total solution manager for the instance running your chosen database. Instances have no SSH, Telnet, or other general access; the only access is SQL communication over the defined ports, e.g. 3306 for MySQL.

    The Trove lifecycle management covers the provisioning, management, security, configuration and tuning of your database. Amrith Kumar in a recent presentation at the NYC Postgres meetup provides a good description of the specifics.

Trove is capable of describing and supporting clustering and replication topologies for the various data stores. It can support backup and restore, failover, and resizing of clusters without the operator needing to know the specific syntax or complexities of a database product they are unfamiliar with.

A great example is the subtle difference in GTID-based replication management between MySQL and MariaDB (for example, CHANGE MASTER TO MASTER_AUTO_POSITION = 1 on MySQL versus CHANGE MASTER TO MASTER_USE_GTID = slave_pos on MariaDB). To the developer, the interaction with MySQL and MariaDB via SQL is the same; the management of a replication topology is not identical, but it is handled by the Trove guest agent. To the operator, the management is the same.

Also in his presentation, Kumar described Tesora, an enterprise-class Trove offering with a number of important additional features. Tesora supports additional database products, including Oracle and DB2 Express, as well as commercial versions of Oracle MySQL, EnterpriseDB, Couchbase, DataStax, and MongoDB. Its Horizon UI customizations, with pre-defined Trove instances, greatly reduce the work needed for operators and deployers to build their own.

    by ronald at June 14, 2016 06:18 PM

    OpenStack Blog

    Technical Committee Highlights June 13, 2016

    It has been a while since our last highlight post so this one is full of updates.

    New release tag: cycle-trailing

This is a new addition to the set of tags describing the release models. It allows specific projects to do their releases after the main OpenStack release has been cut, which is useful for projects that need to wait for the “final” OpenStack release to be out. Some examples of these projects are Kolla, TripleO, Ansible, etc.

    Reorganizing cross-project work coordination

The cross-project team is the reference team when it comes to reviewing cross-project specs. This resolution grants the cross-project team approval rights on such specs and therefore the ability to merge them without the Technical Committee’s intervention. This is a great step forward in the TC’s mission of enabling the community to be as autonomous as possible. The resolution recognizes reviewers of openstack-specs as a team.

    Project additions and removals

– Addition of OpenStack Salt: This project brings in SaltStack formulas for installing and operating OpenStack cloud deployments. The main focus of the project is to set up development, testing, and production OpenStack deployments in an easy, scalable, and predictable way.

    – Addition of OpenStack Vitrage: This project aims to organize, analyze and visualize OpenStack alarms & events, yield insights regarding the root cause of problems and deduce their existence before they are directly detected.

    – Addition of OpenStack Watcher: Watcher’s goal is to provide a flexible and scalable resource optimization service for multi-tenant OpenStack-based clouds.

    – Removal of OpenStack Cue: Cue’s project team activity has dropped below what is expected of an official OpenStack project team. It was therefore removed from the list of official projects.

    Recommendation on location of tests for DefCore verification

A new resolution has been merged recommending that the DefCore team use Tempest’s repository as the central repository for verification tests. During the summit, two different options were discussed as possible recommendations:

    • Use the tests within the Tempest git repository by themselves.
• Add to those Tempest tests by allowing projects to host tests in their tree using Tempest’s plugin feature (a sketch of such a plugin follows this list).
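
For reference, a minimal in-tree Tempest plugin looks roughly like the sketch below; it assumes the tempest.test_discover.plugins interface, and all the project and directory names are illustrative:

    import os

    from tempest.test_discover import plugins

    class MyServiceTempestPlugin(plugins.TempestPlugin):
        """Hypothetical plugin exposing a project's in-tree API tests."""

        def load_tests(self):
            # Point Tempest at the directory holding this project's tests.
            base_path = os.path.split(os.path.dirname(os.path.abspath(__file__)))[0]
            test_dir = "myservice_tempest_tests/tests"
            return os.path.join(base_path, test_dir), base_path

        def register_opts(self, conf):
            pass  # register any service-specific config options here

        def get_opt_lists(self):
            return []

The plugin is then advertised through a tempest.test_plugins setuptools entry point so Tempest can discover it.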

By recommending the Tempest repository, the community favors centralization of these tests, which will improve collaboration on DefCore matters as well as the consistency of the tests used for API verification.

    Mission Statements Updates

On one hand, Magnum has narrowed its mission statement after discussing it at the Austin summit. The team has decided Magnum should focus on managing container orchestration engines (COEs) rather than also managing the container lifecycle. On the other hand, Kuryr has expanded its mission statement to also include management of storage abstractions for containers.

    Expanding technology choices in OpenStack projects

    On the face of it, the request sounds simple. “Can we use golang in OpenStack?” asked of the TC in this governance patch review.

    It’s a yes or no question. It sets us up for black and white definitions, even though the cascading ramifications are many for either answer.

    Yes means less expertise sharing between projects as well as some isolation. Our hope is that certain technology decisions are made in the best interest of our community and The OpenStack way. We would trust projects to have a plan for all the operators and users who are affected by a technology choice. A Yes means trusting all our projects (over fifty-five currently) not to lose time by chasing the latest or doing useless rewrites, and believing that freedom of technology choice is more important than sharing common knowledge and expertise. For some, it means we are evolving and innovating as technologists.

A No vote here means that if you want to develop with another language, you should form your new language community outside of the OpenStack one. Even with a No vote, projects can still use our development infrastructure such as Mailing Lists, Gerrit, Zuul, and so on. A No vote on a language choice means that team’s deliverable is simply outside of the Technical Committee governance oversight, and not handled by our cross-project teams such as release, doc, quality. For the good of your user base, you should still address all the technology ramifications that a Yes vote would demand, but your team doesn’t need to work under TC oversight.

What about getting from No to Yes? It could mean that we would like you to remain in the OpenStack community, but to plug in the parts that were not built with the entire community in mind.

    We’ve discussed additional grey area answers. Here is the spectrum:

    • Yes, without limits.
    • Yes, but within limits outlined in our governance.
    • No, remember that it’s perfectly fine to have external dependencies written in other languages.
    • No, projects that don’t work within our technical standards don’t leverage the shared resources OpenStack offers so they can work outside of OpenStack.

We have dismissed the outer edge descriptions for Yes and No. We continued to discuss the inner Yes and inner No descriptions this week, with none of the options being really satisfactory. After lots of discussion, we came around to a No answer, abandoning the patch, while seeking input on getting to a Yes within limits.

Basically, our answer is about focusing on what we have in common, what defines us. It is in line with the big-tent approach of defining an “OpenStack project” as one developed by a coherent community using the OpenStack Way. It’s about sharing more things. We tolerate and even embrace difference where it is needed, but that doesn’t mean the project has to live within the tent. It can be a friendly neighbour rather than being inside the tent and breaking it into smaller sub-tents.

    by Anne Gentle at June 14, 2016 05:21 PM
