August 30, 2015

Aptira

Cat's Eye View


I'm Gemma, Aptira's Chief Feline Officer, and I'm disappointed. With Everything.

My Catherapist has suggested I need to get a few things off my chest. So here we go.

I hear a lot of things around the office. I'm not sure what to make of this exchange:

David: tents?

Roland: You need to learn about the big tent and openstack

Roland: Google: big tent openstack

David: TLDR. Governance blah blah.

David: I've got a quote to get out.

David: *Drops mic*


by Gemma The Cat at August 30, 2015 02:01 PM

August 28, 2015

OpenStack Blog

OpenStack Community Weekly Newsletter (Aug. 22 – 28)

IMPORTANT + TIME SENSITIVE:

Mastering containers with OpenStack: a white paper

As containers gain major ground, a new white paper from the OpenStack Foundation highlights how to succeed with them.

Dive in deep with a new book dedicated to Trove

Ahead of Trove Day, the authors of the first book on OpenStack’s database-as-a-service talk about common errors and getting started with contributions.

The Road to Tokyo 

Reports from Previous Events 

Deadlines and Contributors Notifications 

Security Advisories and Notices 

Tips ‘n Tricks 

Upcoming Events 

Other News 

The weekly newsletter is a way for the community to learn about all the various activities in the OpenStack world.

by Jay Fankhauser at August 28, 2015 07:42 PM

eNovance Engineering Teams

ZooKeeper part 2: building highly available applications, Ceilometer central agent unleashed

The Ceilometer project is in charge of collecting various measurements from the whole OpenStack infrastructure, at both the bare-metal and virtual levels. For instance, we can see the number of virtual machines running or the number of storage volumes.

One of the Ceilometer components is the central agent. Basically, it polls the other resources in order to gather measurements. As the name suggests, it's central, which implies some obvious drawbacks in terms of reliability and scalability.

In this article we will develop an application which mimics the central agent, and then we will study how to improve it with ZooKeeper.

You can download all the samples from here:

$ git clone https://github.com/ylamgarchal/zksamples.git

 

The breakable architecture

Historically, the central agent looked something like this:

[Diagram: a single central agent polling the OpenStack resources]

The architecture was pretty simple: a single agent periodically polls the resources to get the measurements of the OpenStack infrastructure. A resource is an OpenStack component – for instance, a compute node – exposing an API used to retrieve the information of interest.

The two obvious drawbacks of this architecture are:

  • We have a single point of failure: if the central agent fails, we cannot retrieve the measurements anymore.
  • We have a bottleneck: the agent works alone, so if the number of resources increases dramatically, the polling mechanism will slow down.

Let's implement this behavior in Python; the code below mimics a central agent:

# -*- coding: utf-8 -*-
import time

# The number of resources to create.
N = 6

# The resources of the Openstack infrastructure referenced by their names.
os_resources = ["resource %s" % i for i in xrange(0, N)]

class CentralAgent(object):
    """Mimic the Ceilometer central agent."""

    def poll_resource(self, resource):
        """The function in charge to poll a resource and save the result somewhere."""
        print("Send poll request to '%s'" % resource)

    def start(self):
        """Main loop sending the poll requests periodically."""
        while True:
            for resource in os_resources:
                self.poll_resource(resource)
            # Sleep inside the loop so that we poll every 3 seconds.
            time.sleep(3)

if __name__ == '__main__':
    central_agent = CentralAgent()
    central_agent.start()

The code is composed of a set of resources identified by their names and a main loop which periodically sends a poll request to each resource. For the sake of simplicity, the networking with the remote resources is dropped so that we can focus on improving the central agent.

It's worth noting that when performance is not an issue, we can still make the central agent highly available – without developing anything – by using a cluster manager like Pacemaker or Keepalived. Such a tool manages a cluster of machines and monitors the agent; when the agent or its machine fails, the failure is detected and the agent is restarted on another machine.

This architecture results in an active-passive cluster, because at any given time there is only one central agent running. Here is the documentation for such a setup with Ceilometer.

The improved architecture

The improved architecture should remove the two drawbacks: we no longer want a single agent running, but several agents cooperating as a team – let's call it the Central team :-).

In order to remove the single point of failure and the bottleneck, the Central team should be composed of several agents. Each agent is assigned a set of resources to poll, so that two agents poll two distinct sets of resources.

[Diagram: several agents, each polling a distinct set of resources]

Having each agent poll a unique set of resources is a requirement because, in the case of Ceilometer, we don't want to retrieve and store the same result several times in the database.
So far, so good, but how will the agents cooperate? Well, in order to implement the coordination we must answer two questions:

  • What happens if an agent leaves (gracefully or from a crash) or joins the Central team?
  • How do we make sure that each agent polls a unique set of resources?

We want a dynamic Central team which reacts when a new member joins the team or when a member leaves it.

More precisely, when a new member joins the team, a set of resources must be assigned to it so that every agent has the same number of resources to poll. This means the other agents should "give" some of their resources to the new one.

Conversely, when an agent leaves the team, the others should share its set of resources. The idea is that, at any time, all agents have roughly the same number of resources to poll. For example, with six resources, a lone agent polls all six; once a second agent joins, each should end up polling about three.

Okay, it sounds cool, but how is an agent notified that a member joined or left the team? This is where ZooKeeper comes into play 😉 !

Dynamic central team membership with ZooKeeper

Thanks to ZooKeeper we will be able to detect a new member or a departed one. The idea is to create a znode which represents the team, such as "/central_team", and each agent joins the team by creating an ephemeral znode under it.

If you forgot what an ephemeral znode is, go read part 1 of this article :-) !

Having each agent create an ephemeral znode is not sufficient. The agents must also listen to the events of the parent znode "/central_team", so that when an agent joins the team by creating its entry under "/central_team", ZooKeeper notifies the others.

When an agent leaves the team it just has to remove its znode; if the agent crashes, ZooKeeper will detect it (because we used an ephemeral znode ;-)) and remove its entry.

In all cases, when the number of znodes under "/central_team" changes, the whole team is notified.

Let’s see how to implement it on top of our little central agent:

# -*- coding: utf-8 -*-
import functools
import time
from kazoo import client as kz_client
import uuid

# The number of resources to create.
N = 6

# The resources of the Openstack infrastructure referenced by their names.
os_resources = ["resource %s" % i for i in xrange(0, N)]

class CentralAgent(object):
    """Mimic the improved Ceilometer central agent."""

    def __init__(self):
        self._my_client = kz_client.KazooClient(hosts='127.0.0.1:2181',
                                                timeout=5)
        self._my_client.add_listener(CentralAgent.my_listener)
        self._my_resources = []
        self._my_id = str(uuid.uuid4())
        print("Agent id: %s" % self._my_id)

    @staticmethod
    def my_listener(state):
        """Print a message when the client is connected to the ZK server."""
        if state == kz_client.KazooState.CONNECTED:
            print("Client connected !")

    def poll_resource(self, resource):
        """The function in charge to poll a resource and save the result somewhere."""
        print("Send poll request to '%s'" % resource)

    def _get_my_resources(self, children):
        return os_resources

    def _my_watcher(self, event):
        """Kazoo watcher for membership events."""
        if event.type == 'CHILD':
            my_watcher = functools.partial(self._my_watcher)
            children = self._my_client.get_children("/central_team", watch=my_watcher)
            print("Central team members: %s" % children)

    def _setup(self):
        """Ensure the central team group is created."""
        self._my_client.start(timeout=5)
        # Ensure that the "/central_team" znode is created.
        self._my_client.ensure_path("/central_team")

    def start(self):
        """Main loop for sending periodically the poll requests."""
        self._setup()
        self._my_client.create("/central_team/%s" % self._my_id, ephemeral=True)
        my_watcher = functools.partial(self._my_watcher)
        children = self._my_client.get_children("/central_team", watch=my_watcher)
        print("Central team members: %s" % children)
        self._my_resources = self._get_my_resources(children)
        print("My resources: %s" % self._my_resources)

        while True:
            for resource in self._my_resources:
                self.poll_resource(resource)
            time.sleep(3)

if __name__ == '__main__':
    central_agent = CentralAgent()
    central_agent.start()

The code is pretty straightforward. Since we now have a set of agents, we need to identify them, so we added a unique identifier per agent.

The function _setup() is in charge of starting the Kazoo client and creating the “/central_team” znode. Before polling the resources, each agent creates its ephemeral znode under “/central_team”.

Afterward, the agent retrieves the znodes under “/central_team” in order to get the current members of the team; at the same time it sets a watcher (the method _my_watcher()) on “/central_team” in order to be notified when an event occurs.

It is worth noting that when the watcher fires, we must set the watcher again on “/central_team”, because ZooKeeper watchers are one-shot triggers. The best place to do it is in the watcher itself ;-).
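By the way, if you would rather not re-register the watch by hand, kazoo also ships a higher-level recipe, ChildrenWatch, which re-arms itself after every event. Here is a minimal sketch of it (the host and group name are the same as above):

# -*- coding: utf-8 -*-
from kazoo import client as kz_client

my_client = kz_client.KazooClient(hosts='127.0.0.1:2181', timeout=5)
my_client.start(timeout=5)
my_client.ensure_path("/central_team")

# The decorated function is called with the current children list on
# every membership change; kazoo re-registers the watch for us.
@my_client.ChildrenWatch("/central_team")
def on_members_change(children):
    print("Central team members: %s" % children)

The article's code re-registers the watch manually so that the mechanism stays visible, but the recipe is handy in production code.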

Here is an example of the execution of two agents when agent 1 is run before agent 2:

$ python agent_step2.py
Agent id: fe439a7a-b371-4818-92e2-4fdb75cf9d02
Client connected !
Central team members: [u'fe439a7a-b371-4818-92e2-4fdb75cf9d02']
My resources: ['resource 0', 'resource 1', 'resource 2', 'resource 3', 'resource 4', 'resource 5']
Send poll request to 'resource 0'
Send poll request to 'resource 1'
Send poll request to 'resource 2'
Send poll request to 'resource 3'
Send poll request to 'resource 4'
Send poll request to 'resource 5'
Central team members: [u'ad86afbb-1720-4229-8e27-0811a9c70890', u'fe439a7a-b371-4818-92e2-4fdb75cf9d02']

$ python agent_step2.py
Agent id: ad86afbb-1720-4229-8e27-0811a9c70890
Client connected !
Central team members: [u'ad86afbb-1720-4229-8e27-0811a9c70890', u'fe439a7a-b371-4818-92e2-4fdb75cf9d02']
My resources: ['resource 0', 'resource 1', 'resource 2', 'resource 3', 'resource 4', 'resource 5']
Send poll request to 'resource 0'
Send poll request to 'resource 1'
Send poll request to 'resource 2'
Send poll request to 'resource 3'
Send poll request to 'resource 4'
Send poll request to 'resource 5'

We can see that agent 1 polls the whole set of resources and then receives a notification when agent 2 joins the team. I suggest you experiment with agents joining and leaving to see how it works.

Let's recap where we are: we have a team of agents that are notified when a member joins or leaves the team. But we can also see that agent 2 polls the whole set of resources, which is problematic because we want each agent to poll a distinct set. This is the last issue we need to fix :-) !

Dynamic resources partitioning

There are two possible solutions for assigning a unique set of resources to each agent:

  • Thanks to ZooKeeper we can elect a special agent from the team to be the leader, which will then be in charge of assigning resources to the others.
  • Or we can use a consistent hashing algorithm on the agent side…

In this article we will implement the second solution because this is what has been done in Ceilometer; the first solution is left as an exercise for the reader 😉 !

Using a consistent hashing algorithm to assign resources to the agents is an elegant solution because, given the team member list, each agent can independently derive its unique set of resources. Explaining consistent hashing is beyond the scope of this article, but you can take a look at this explanation.

The basic idea is to hash the id of each resource and the id of each agent; a resource is then assigned to the agent whose hash comes next after the resource's hash in the sorted ring (wrapping around at the end).

Let's see how it works in Python; here are the added lines:

# New imports needed for the hashing ring.
import bisect
import hashlib
…

# The resources of the Openstack infrastructure referenced by their names.
os_resources = ["resource %s" % i for i in xrange(0, N)]
os_resources_hash = {}

for resource in os_resources:
    md5_sum = hashlib.md5()
    md5_sum.update(resource)
    os_resources_hash[resource] = md5_sum.hexdigest()
…

class CentralAgent(object):
…
    @staticmethod
    def _get_ring(members):
        ring = {}
        for member in members:
            md5_sum = hashlib.md5()
            md5_sum.update(member)
            ring[md5_sum.hexdigest().encode()] = member
        return ring

    def _get_my_resources(self, children):
        ring = CentralAgent._get_ring(children)
        hash_members = ring.keys()
        hash_members.sort()

        my_resources = []
        for resource in os_resources:
            hash_resource = os_resources_hash[resource]
            member_index = bisect.bisect(hash_members, hash_resource) % len(hash_members)
            hash_member = hash_members[member_index]
            member = ring[hash_member]

            if member == self._my_id:
                my_resources.append(resource)
        return my_resources

    def _my_watcher(self, event):
        """Kazoo watcher for membership events."""
        if event.type == 'CHILD':
            my_watcher = functools.partial(self._my_watcher)
            children = self._my_client.get_children("/central_team", watch=my_watcher)
            print("Central team members: %s" % children)
            self._my_resources = self._get_my_resources(children)
            print("My resources: %s" % self._my_resources)

The most interesting part is the method _get_my_resources(), which returns the resources assigned to the current agent. It's a little bit tricky if you don't know about consistent hashing, but with some aspirin it will become clear 😉 !
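To see the assignment logic in isolation, here is a small standalone sketch of the same ring computation that you can run without ZooKeeper; the two member ids are made up for the example:

# -*- coding: utf-8 -*-
import bisect
import hashlib

members = ["agent-a", "agent-b"]  # hypothetical agent ids
resources = ["resource %s" % i for i in range(6)]

# Map each member's hash to its id; the sorted hashes form the ring.
ring = dict((hashlib.md5(m.encode()).hexdigest(), m) for m in members)
hash_members = sorted(ring)

for resource in resources:
    hash_resource = hashlib.md5(resource.encode()).hexdigest()
    # A resource belongs to the first member hash after its own hash,
    # wrapping around to the start of the ring.
    member_index = bisect.bisect(hash_members, hash_resource) % len(hash_members)
    print("%s -> %s" % (resource, ring[hash_members[member_index]]))

Note that this simple ring places a single point per member; real implementations often hash several replicas per member to smooth out the distribution.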

Let's see how it runs for two agents and six resources:

$ python agent_step3.py
Agent id: 898c4bc5-d56f-4a47-a3fa-70499a669078
Client connected !
Central team members: [u'898c4bc5-d56f-4a47-a3fa-70499a669078']
My resources: ['resource 0', 'resource 1', 'resource 2', 'resource 3', 'resource 4', 'resource 5']
Send poll request to 'resource 0'
Send poll request to 'resource 1'
Send poll request to 'resource 2'
Send poll request to 'resource 3'
Send poll request to 'resource 4'
Send poll request to 'resource 5'
Central team members: [u'898c4bc5-d56f-4a47-a3fa-70499a669078', u'e929a9dd-fbbe-4b93-9e5d-ee46fa1a517c']
My resources: ['resource 1', 'resource 2']
Send poll request to 'resource 1'
Send poll request to 'resource 2'

$ python agent_step3.py
Agent id: e929a9dd-fbbe-4b93-9e5d-ee46fa1a517c
Client connected !
Central team members: [u'898c4bc5-d56f-4a47-a3fa-70499a669078', u'e929a9dd-fbbe-4b93-9e5d-ee46fa1a517c']
My resources: ['resource 0', 'resource 3', 'resource 4', 'resource 5']
Send poll request to 'resource 0'
Send poll request to 'resource 3'
Send poll request to 'resource 4'
Send poll request to 'resource 5'

We can see that agent 1 is first assigned the whole set of resources. When agent 2 joins the team, it automatically gets a unique subset of them. You can run some tests with more resources and more agents to see what happens when an agent leaves or joins the team.

Thanks to the consistent hashing algorithm, the resource partitions will be nearly fair between the agents.

Conclusion

To sum up what has been done: we leveraged ZooKeeper to establish group membership between the set of agents, gaining the ability to react when an event happens. We also combined a consistent hashing algorithm with ZooKeeper to partition the resources among the agents.

In this way the Ceilometer central agent moved from a weak architecture to a highly available and scalable one. As I said in the previous article, the real Ceilometer code uses the Tooz API, but conceptually it acts in a similar manner to our little central agent (here is the real patch: https://review.openstack.org/#/c/113549).
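For a taste of what that looks like, here is a hedged sketch of group membership through Tooz (the backend URL and member id are placeholders; check the Tooz documentation for the exact calls):

# -*- coding: utf-8 -*-
from tooz import coordination

# Tooz hides the backend behind one API; here we point it at ZooKeeper.
coordinator = coordination.get_coordinator(
    'zookeeper://127.0.0.1:2181', b'agent-1')
coordinator.start()

# Group operations are asynchronous; get() waits for the result.
try:
    coordinator.create_group(b'central_team').get()
except coordination.GroupAlreadyExist:
    pass
coordinator.join_group(b'central_team').get()

members = coordinator.get_members(b'central_team').get()
print("Central team members: %s" % members)

The benefit over raw kazoo is that the same code can run against other backends (memcached, Redis, etc.) just by changing the URL.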

I hope you enjoyed this adventure with ZooKeeper :-) ! As an exercise, you can implement some real resources and use ZooKeeper to detect events (join, leave, or failure) so that the agents adjust their set of assigned resources dynamically.

by Yassine Lamgarchal at August 28, 2015 09:48 AM

August 27, 2015

OpenStack Superuser

Why bridging the OpenStack skills gap is critical

Here's the news from the OpenStack world you won't want to miss -- the musings, polemics and questions posed by the larger community. Got something you think we should highlight? Tweet, blog, or email us!

In case you missed it

Bridging the OpenStack Skills Gap

Analysts predict that a lack of training and resources will cause some 7 million cloud-related jobs to go unfilled, writes Tom Norton, vice president for HP Helion OpenStack. Interested in getting a job with OpenStack? Norton offers some ways to get yourself trained, pronto. And remember to check out the OpenStack Foundation's training marketplace for a wide range of bootcamps and classes.

The Death of the Distro: The Future of OpenStack

"The birth of OpenStack ushered in a necessity for the distribution; a faster, more reliable, and easier model of deployment. Many soon found the “romance of the distro” to be short-lived. In his keynote, Jesse detailed business needs, “solved problems” and remaining challenges inherent to distro-based environments and why he remains unconvinced of their place in the OpenStack market," says Jesse Proudman, CTO of Blue Box, an IBM Company, in a talk at OpenStack Day Seattle.

OpenStack Common Culture
"We started this journey with a pretty strong common culture. It was mostly oral tradition. We assumed that as OpenStack grew, our culture would naturally be assimilated by new members....

But we have doubled the number of project teams over the last year and our common culture did not naturally transmit to newcomers... It's time to hold the culture line, rather than time to further relax it," writes Thierry Carrez, technical committee chair and release management project team lead (PTL) of the OpenStack Foundation.

Industry watch

What you need to know before considering OpenStack

OpenStack's maturity and the prevalence of enterprise-focused services firms like Mirantis, who are ready to help integrate, mean more IT teams are seriously considering the leap. Before you start the journey to native public cloud, however, Jonathan Bryce has four tips to make your project a success, writes Margi Murphy of ComputerWorldUK.

In our favorite headline about Intel's massive cash injection into Mirantis:

OpenStack's newest sugar daddy looks to private cloud to save its bacon

"This funding doesn't say much about OpenStack's future, which remains cloudy so long as the project remains so cumbersome to use. Rather, the $100 million funding says everything about Intel's future," writes Matt Asay, vice president of mobile at Adobe, on Tech Republic.

IBM extends Spectrum storage line into the cloud

"IBM is beefing up its offerings in software-defined storage, which promises to let IT departments better deal with large amounts of storage by uncoupling the management software from its underlying hardware," writes Computer World's Joab Jackson.

Cloud chatter


We feature user conversations throughout the week, so tweet, blog, or email us!

Cover Photo by Nick Della Mora // CC BY NC

by Nicole Martinelli at August 27, 2015 10:09 PM

OpenStack Silicon Valley zeroes in on containers

OpenStack Silicon Valley (OpenStackSV) returned to the Computer History Museum this week for a two-day conference, with 700 attendees filling the sessions and grokking the event's unofficial theme: containers.

Jonathan Bryce, executive director of the OpenStack Foundation, kicked off day one by welcoming Amit Tank, principal architect at DirecTV, to discuss OpenStack as the platform for enablement and innovation. Before his recruiting-as-a-service plug (finding OpenStack talent being a common need among users and ecosystem members), Tank discussed OpenStack's ability to integrate with emerging technologies like containers.

"OpenStack gives you the path to production to solve problems like load times using containers," said Tank.

To meet the community and industry demand for information on integrating container technology with OpenStack, Bryce announced the availability of a containers white paper, as well as additional resources for application developers building apps on OpenStack.

Craig McLuckie, a product manager at Google, took the stage to discuss the current state of integrating containers with OpenStack, as well as how Kubernetes fits into the mix.

After becoming an OpenStack Foundation sponsor last month, Google has continued its commitment to the open source community by contributing container expertise to the OpenStack projects Magnum and Murano.

"Kubernetes and OpenStack are the path to cloud native, and now it's time to work together as a community," said McLuckie. "If you are not building open source, you are at a disadvantage to those who are."


To bring the Kubernetes and OpenStack story to life, Boris Renski, Mirantis co-founder and CMO, introduced Lachlan Evenson, team lead of the cloud platform engineering team at Lithium Technologies.

"You need to look at OpenStack as a platform to enable you to go forward," Evenson said. "It was really easy to answer our container story with OpenStack as a platform."

Lithium is working to transition from its VM-based application running on an OpenStack private cloud to a sleeker, container-based model, using Docker and Kubernetes container orchestration and clustering on top of OpenStack.

With any emerging technology, myths surface as popularity rises. Alex Polvi, CoreOS CEO, took the stage to debunk four myths related to containers:

  • Containers replace virtual machines (VMs)
  • Legacy applications won't work in containers
  • You can only run stateless applications
  • Containers are not secure

The container chatter continued with a barrage of tweets -- including what may be the next hot drinking game.

You can catch more of the conversation and the debates from this week by following #OSSV15 or #OpenStackSV on Twitter.


Cover Photo by Jed Sullivan // CC BY NC

by Allison Price at August 27, 2015 07:58 PM

Ravello Systems

Multi Node OpenStack Kilo lab on AWS and Google Cloud with Externally Accessible Guest Workload – How to configure OpenStack networking on Ravello Systems, Part 1


Last week we went into how to prep an image for Ravello/AWS/Google/ESXi. This week we're going to leapfrog ahead a bit and talk about networking and OpenStack.

OpenStack is highly complicated for a number of reasons, chief amongst them being that it seeks to replace a bunch of highly complex silos. Second, but not far behind, is that it does this via a collection of independently developed microservices.

OpenStack as an organization of projects has a consensus culture, not a strong central authority / command culture. Without a central authority laying down standards, everything is based on consensus, first within a project (e.g. Neutron) and then within the community of projects, with the project trumping the community. The most frequent manifestation of this is inconsistencies in command and API syntax between projects, but you’ll also find instances where someone has snuck a change into a not-really-related project because they couldn’t get it into the relevant one.

All that being said, for this week I've attempted to publish a one-click OpenStack blueprint in Ravello.

Get it on Repo

REPO, by Ravello Systems, is a library of public blueprints shared by experts in the infrastructure community.

With great hubris, I attempted to get a multi-node install setup where one could simply click "start instance" and have an externally accessible cirrOS. I couldn’t quite pull that off, but I got pretty close.

When you copy the blueprint and go to launch it, you’re going to get a blueprint error as follows:

[screenshot: the blueprint error message]

To resolve it, you can do one of two things: assign an elastic IP address or a public IP address to the secondary IP on the neutron node. This is done by selecting network, scrolling down to the first interface (eth0), and clicking advanced. You should see something like this.

[screenshot: advanced settings for the first interface (eth0)]

To resolve the error, you can either assign an elastic IP by clicking select or shift it to a public IP.

While we’re here, let’s also look at the security rule that delegates firewall rules to OpenStack:

[screenshot: the security rule delegating firewall rules to OpenStack]

Note this is mapped to the static IP above and allows everything in.

You can now publish the application. It’s going to take a bit to spin up, but when it does so, you can view the public IP on the neutron node under the general tab in the dropdown for eth0/1:

[screenshot: the public IP under the general tab for eth0/1]

90% of the time, this works every time. If everything goes well, you’ll have a horizon dashboard up at the public IP for the controller node and will be able to navigate to it via https. Log in as “user” / “openstack”, go to instances, and launch the instance. When it boots, you will be able to reach the cirrOS image on the public or elastic IP you assigned to it earlier via ssh / icmp (user: “cirros”, password: “cubswin:)” ):

[screenshot: ssh session to the cirrOS instance]

This works because a floating IP, 192.168.0.33, has been assigned to the instance. This is in private IP space, but because a public or elastic IP has been associated with that static address on the Ravello side, it gets mapped end to end:

[diagram: end-to-end mapping of the floating IP]

Unfortunately, the other 10% of the time does happen; it isn’t strictly deterministic. Horizon (the dashboard) sometimes just goes away for a bit (although it comes back). The instance also sometimes hangs on start (you’ll see it waiting on a virtual wire in the instance log) and requires a hard reboot through Horizon. I’ve found deploying to Amazon more reliable than Google, but this is such an edge case that that is probably just a feeling, not reality.

All of these little wrinkles, ironically, may make the lab the perfect intro to OpenStack. It works, but not quite perfectly, and sometimes needs a little intervention.

I’ll go more into general networking in a future post, including what the OpenStack network documentation images I have nicknamed the “arc of the covenant” images mean (somewhat):

[the OpenStack networking documentation diagrams in question]

Exciting huh?


by Nicholas Cammorato at August 27, 2015 01:33 PM

Thierry Carrez

OpenStack Common Culture

We are 5 years into the OpenStack ride (and 3 years into the OpenStack Foundation ride), and the challenges for our community are evolving. In this article I want to talk about what I consider the most significant threat for our open source community today: the loss of our common culture.

Over the past year we evolved the OpenStack project model to adopt an inclusive approach. Project teams which work on deliverables that help us achieve the OpenStack Mission, and which follow our development and community practices should generally be accepted under the "big tent". As we explained in this presentation in Vancouver, we moved from asking "is this OpenStack?" to asking "are you OpenStack?".

What does it mean to be OpenStack? We wrote down a set of principles, based on the original four opens that were defined at the very beginning of this journey. But "being OpenStack" goes beyond that. It is to be aligned on a common goal, be part of the same effort, be the same tribe. OpenStack relies on a number of individuals working cross-project (on infrastructure, QA, documentation, release processes, interoperability, user experience, API guidelines, vulnerability management, election organization...). It is because we belong to the same tribe that some people and organizations care enough about "OpenStack" as a whole to dedicate time to those essential cross-project efforts.

This is why we standardize on logged IRC channels as a communication medium, why we ask that every project change goes through Gerrit, and why we should very conservatively add new programming languages to the mix. Some people advocate letting OpenStack project teams pick whatever language they want, or letting them meet on that new trendy videoconferencing app, or letting them track bugs on separate JIRA instances. More freedom sounds good at first glance, but doing so would further fragment our community into specific silos that all behave differently. Doing so would make recruiting for those essential cross-project efforts even harder than it is today, while at the same time making the work of those cross-project efforts significantly more complex. Doing so would make our community crumble under its own weight.

We started this journey with a pretty strong common culture. It was mostly oral tradition. We assumed that as OpenStack grew, our culture would naturally be assimilated by new members. And it did, for quite some time. But today we are at a point where we dramatically expanded our community (we doubled the number of project teams over the last year) and our common culture did not naturally transmit to newcomers. Silos with local traditions have formed. Teams don't all behave in the same way anymore. Most team members only care about a single project team. We struggle to move from one project to another. We struggle to provide common solutions that work for everyone. We struggle to recruit for cross-project efforts more than we ever did. OpenStack's future as a community is at risk. It's time to hold the culture line, rather than time to further relax it.

It is also more than time that we document our common culture, so that it can be explicitly communicated to everyone in the OpenStack ecosystem (current and prospective members). We started a workgroup at the Technical Committee, held a virtual sprint to get a base version written, and now here it is: the first version of the OpenStack Project team guide. Read it, refer to it, communicate it to your OpenStack community fellows, propose changes to it. It is an essential tool for us to overcome this new challenge. It's certainly not the only tool, and I hope we'll be able to dedicate a cross-project session at the Mitaka Design Summit in Tokyo to further discuss this topic.

by Thierry Carrez at August 27, 2015 12:12 PM

Solinea

Solinea to Present Three Sessions at OpenStack Summit in Tokyo



Sincere thanks to everyone who voted for Solinea sessions to be presented at the OpenStack Summit in October. We look forward to seeing you in Tokyo. Please click the links below to add these to your summit schedule.

If you’re not going to be in Tokyo but want to see the presentations, please email info@solinea.com, and we’ll get in touch after the event.

by Irwin Soonachan (irwin@solinea.com) at August 27, 2015 12:29 AM

Cloudify Engineering

The OpenStack Interoperability Paradox and How to Bridge It

Last week I had the honor of moderating, with my co-presenter Sharone Zitzman, our fourth OpenStack & Beyond Podcast. This...

August 27, 2015 12:00 AM

August 26, 2015

OpenStack Superuser

Dive in deep with a new book dedicated to Trove

If you're ready to take a deep dive into Trove, OpenStack's database-as-a-service, things just got a little easier.

Two Active Technical Contributors (ATCs) to the project are here to help with the first book dedicated to it, titled "OpenStack Trove." The book aims to help you wade through downloading and installing Trove as well as plumbing the depths with APIs and orchestrations.

Launched ahead of Trove Day – a free mid-cycle marathon hosted in San Jose, California by Tesora – the 313-page paperback is available online or at Trove Day, with some proceeds to benefit Women Who Code.

Superuser talked to Amrith Kumar and Douglas Shelley, who both work at Tesora, about why you should put this book on your shelf, how to become an OpenStack contributor and why getting women into coding is important to them.


Who will this book help most?

Kumar: The book provides a description of database-as-a-service, explains how to install and use Trove, gives a very detailed architectural description of Trove, and closes with an in-depth discussion of Trove’s important features.

The book will be very useful for people who want to get started with Trove and get it running quickly. Later, the same people will be able to use the book to not only understand many of the nuances of how Trove works but also how to use some of the more advanced features and troubleshoot problems with Trove.

What's the one thing most people don't get about Trove at first?

Shelley: Most people think Trove is about provisioning databases and assume that Trove is nothing more than automation around provisioning. This is not the case at all; Trove is about simplifying the management and operation of database servers throughout their lifetime.

Database servers tend to be long-lived; they stay around much longer than other servers in an application stack. They also store vital application data. Therefore, managing a database server (or the database running on it) is a much more complex operation than managing a typical application server. When operating a large number of database servers, doing this manually can be a considerable burden, and highly error-prone.

What are some of the most common errors/pitfalls?

Kumar: Managing a database is a complex activity, and an IT organization is often responsible for hundreds of servers interconnected in complex topologies. Configuration skew and incorrect setup of capabilities like replication or clustering are two common areas where errors are extremely costly.

Taking the guesswork out of database management, and dramatically simplifying the provisioning process through common APIs that support most database workflows, make Trove an incredible asset to any IT organization.
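As a rough illustration of what that provisioning API looks like, here is a hedged sketch using python-troveclient; the credentials, flavor id, and sizes are placeholders, so check the client documentation for the exact signature:

# -*- coding: utf-8 -*-
from troveclient.v1 import client

# Placeholder credentials for an OpenStack cloud with Trove enabled.
trove = client.Client('demo', 'secret',
                      project_id='demo-project',
                      auth_url='http://controller:5000/v2.0')

# One call provisions a managed database instance with an attached
# volume, an initial database, and a user.
instance = trove.instances.create(
    'my-db', flavor_id='2', volume={'size': 5},
    databases=[{'name': 'appdb'}],
    users=[{'name': 'appuser', 'password': 'apppass',
            'databases': [{'name': 'appdb'}]}])
print("%s is %s" % (instance.id, instance.status))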

There are a lot of books about OpenStack - why is this one worth having on the shelf?

Kumar: That’s an easy question -- there are no other books on Trove!

Shelley: Trove is also a relatively new project (incubated in Havana, integrated in Icehouse) and a more complex project than the others in several ways. This book provides the reader with a lot of information that is not otherwise easily available.

Both of you are OpenStack ATCs, what advice do you have for people who want to contribute?

Kumar: I have only two words of advice for people who want to contribute: “Do It.” There’s no point in “wanting” to contribute, and it isn’t as though it is hard.

Contributions don’t have to be complex code, features or blueprints. Reviews are valuable and useful to the community and they are a great way to get to know the community, the code, and Trove itself. Read the documentation, use the product, report bugs, contribute short write-ups that could be used to improve the documentation, and share your own tips and tricks about using Trove with others. There are numerous ways in which one can contribute, and the investment and effort are not high. Most people I’ve met who “want to contribute” believe that it is hard, or that they have to write a lot of code. That is just not true.

Shelley: It is definitely intimidating making one's first contribution to an open source project, especially if you are new to open source and come from a closed-source background. However, it is easier to get started than I think folks new to OpenStack believe. There is a wealth of information available via openstack.org to help someone get started. The best way to get going is to work through building and testing a project – look for “low-hanging fruit” bugs that are relatively easy to fix and can get you through the entire process quickly. Code review contributions are also very valuable and a great way to contribute and learn the code base at the same time.

Why are some of the proceeds going to Women Who Code?

Shelley: I’ve been working in information technology and software development for over 25 years and software development, in particular, is a pretty male-dominated field. My expectations were that this would improve over time; certainly women are breaking the “glass ceiling” in other professions. It seems like we as software engineers need to focus on helping out in our corner of the world.

I have two teenage daughters and while I don’t know if they want to be technology professionals, I would hate to think they would be discouraged from entering a field that I’ve enjoyed and benefited from...It is my hope that not only will the funds from the book sales help the Women Who Code organization, but also that the focus we are putting on this can shine a light on this diversity challenge.

Kumar: For at least two generations, members of my family have given their lives to working on causes relating to women’s rights, and this contribution to [Women Who Code] is consistent with the values that I was brought up with...

I believe that there are many talented women who are not getting their fair share of opportunities in the workplace and through the work of organizations like Women Who Code, this injustice is being addressed. Read the recent interview with Victoria Martinez De La Cruz where she recounts her journey and how Outreachy changed her life. Victoria is a fellow member of the Trove Core team and is also on the core team of Zaqar.

Cover Photo by Wiredforsound23 // CC BY NC

by Nicole Martinelli at August 26, 2015 09:16 PM

Mastering containers with OpenStack: a white paper

Suddenly, everyone is getting their head around containers.

One recent survey projects that 69 percent of companies will embrace them in the next year for production environments.

To better understand why containers are gaining so much mindshare, OpenStack just published a paper you can download titled "Exploring Opportunities: Containers and OpenStack."

With insight contributed from experts at Mirantis, Rackspace and Cisco, the 18-page PDF details the value of container use within an OpenStack infrastructure and provides an overview of how to build a container-hosting environment with OpenStack Nova, the OpenStack Compute project, as well as use cases for containers today and tomorrow.

Take Lithium Technologies, for example. They power social-platforms-as-a-service for over 400 companies -- including Google, AT&T and Sephora -- running on an OpenStack private cloud. The paper outlines how they are transitioning to a sleeker, container-based model, using Docker and Kubernetes container orchestration and clustering.

Containers haven't gained this new ground without sparking controversy, however, leading some to question how containers will impact the need for OpenStack. Those taking the long view argue that containers will peacefully coexist with OpenStack as savvy firms create a hybrid environment with a mix of different technologies.

This enthusiasm, in part, led to a whole day blocked out for containers at the OpenStack Summit in Vancouver this past May. OpenStack Foundation COO Mark Collier talked a lot about containers in his keynote at the event, in which he explained that just as OpenStack excels in helping enterprises manage their VM deployments and virtualize their data centers, it is poised to do the same thing with container technologies.

“The important thing for us as a community is to think about OpenStack as an integration engine that’s agnostic,” Collier said. “That puts users in the best position for success. Just as we didn’t reinvent the wheel when it comes to computing, storing, and networking, we’ll do the same with containers.”

For the uninitiated, here are the basics: containers are portable and isolated environments that make it possible for developers to package applications with all the dependencies and libraries they require. While containers are cousins to virtual machines, there are important differences. For instance, containers on a host share resources such as the OS kernel rather than each carrying a full operating system; they also differ from virtual machines in how they keep applications and other services separated.

[Figure: OpenStack container-as-a-service support architecture, from the white paper.]

If you want to understand how OpenStack can help you power container efforts at your company, this new paper provides a thorough look into container management with OpenStack, as well as a peek into the various container-related services that are being built as first-class resources in current and upcoming OpenStack releases.

Here are the most compelling reasons to adopt containers today, according to the authors of the paper:

  • To obtain deterministic software packaging that would fit nicely with an immutable infrastructure model.
  • To encapsulate microservices.
  • To enable portability of containers on top of OpenStack virtual machines as well as bare-metal servers (Ironic, https://wiki.openstack.org/wiki/Ironic) using a single, lightweight image.

Last year, the OpenStack community decided that OpenStack was going to support containers and third-party container orchestrators such as Docker Swarm, Kubernetes and Mesos. OpenStack describes these technologies as Container Orchestration Engines (COEs), and all three of these COE systems are supported in the OpenStack Magnum containers service.

Today, OpenStack supports Linux Containers (LXC) and Virtuozzo system containers. Docker application containers and Docker Swarm, Kubernetes, and Mesos container orchestration are available with the Liberty release of Magnum. The paper also details what users can expect regarding containers and container management going forward, charting the continuous evolution toward full-fledged OpenStack container support.

And for those getting ready to do so, the paper provides highlights on how to build a container-hosting environment with OpenStack Compute.

Just as organizations needed a way to manage virtual machines and virtual machine sprawl, the same is true for containers – and many today are realizing that OpenStack is a viable option to provide the additional agility they need within their current architecture by using containers without having to create a separate container-specific infrastructure.

Cover Photo by Katsrcool // CC BY NC

by George V. Hulme at August 26, 2015 05:14 PM

Opensource.com

8 new tutorials for OpenStack users and developers

Every month, Opensource.com compiles the very best of recently published how-tos, guides, tutorials, and tips into one handy collection.

by Jason Baker at August 26, 2015 08:00 AM

OpenStack @ NetApp

Flocker, Docker, Cinder, & NetApp!

Being able to run roughly four to six times the number of server application instances on the same hardware, compared to traditional VMs, is one of the reasons the adoption rate of Docker containers has skyrocketed. Packaging applications with their dependencies and configurations not only yields portable, flexible applications that are fast and easy to deploy, update, and scale, but also helps cloud and database service providers cut costs through higher infrastructure utilization. Without the overhead of an additional OS, or the abstraction of a hypervisor, containers can be more densely placed on server resources, leading to higher resource utilization and greater efficiency.

One limitation of using containers is that their associated volumes, housing the application data, exist only as long as the containers do. This makes containers lose their value for applications that require data retention, such as a database application. Flocker, an open-source container data volume manager from ClusterHQ, tries to fill that gap by allowing a Flocker data volume, called a dataset, to be used with any container in the cluster. Unlike Docker volumes, which are tied to a single server, Flocker datasets are portable and can be easily moved between nodes in a cluster. You can learn more about Flocker and how to get started on the ClusterHQ website.

With containers quickly evolving into a new application deployment paradigm, it makes perfect sense to leverage their benefits with OpenStack. NetApp’s OpenStack journey begins with connecting the legendary NetApp experience to cloud storage through openness, community engagement, and absolute customer focus. NetApp brings proven enterprise-level storage solutions to OpenStack by providing improved performance, efficiency, and data protection and recovery – thus making deployments of cloud services simpler, faster, and more scalable.

Flocker provides support for multiple storage backends using plugins, including a driver for Cinder, the block storage provisioning and management framework for OpenStack. This backend driver can be used with the Flocker dataset agent nodes, deployed as virtual machines by OpenStack Nova. After installing the Flocker service on your control host and your nodes, you will need to create a configuration file on the nodes at /etc/flocker/agent.yml to start the Flocker agent and take advantage of the Cinder integration. The configuration file must include version and control-service items (sketched below); refer to the Flocker documentation for the full contents of those blocks. When configuring the Cinder driver, the dataset block, which houses the configuration parameters for the dataset backend, must be configured for your environment. It’s also important to note that all the nodes must be configured to use the same dataset backend.
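As a rough sketch of those two required items (the hostname is a placeholder and the port is the documented default for agent communication, so verify both against your Flocker release):

version: 1
control-service:
  hostname: "control.example.com"  # node running the Flocker control service
  port: 4524                       # default port for agent communication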

You can learn more about the configuration file on the ClusterHQ website, but here’s a sample of the dataset block in your agent.yml which uses the Cinder integration:

dataset:
 backend: openstack
 auth_plugin: password
 auth_url: <horizon auth url>
 region: <region name>
 tenant_name: <tenant>
 username: <tenant user>
 password: <password>

You can see a demonstration of Flocker being used in conjunction with NetApp’s Unified Cinder driver to manage container volumes in an OpenStack environment here:

Video: https://www.youtube.com/watch?v=SxF1-JYF1R0

Using Flocker with NetApp’s Cinder driver for OpenStack makes all our enterprise-level benefits available to you, bringing NetApp efficiency, security, and flexibility to various operational tasks:

  • Move databases to a different virtual machine, or bare metal
  • Minimal downtime while upgrading server hardware
  • Manually failover containers to a new machine, reattaching existing data volumes with no data loss
  • Speed up time-to-recovery by attaching a new database container to an existing data volume
  • Run containers on bare metal but with the manageability of VMs
  • Migrate databases from spinning disk to SSD (currently experimental)

NetApp believes that containers will drastically alter the way that applications are deployed, providing a simplified, scalable, and efficient manner in which to distribute and instantiate applications on demand. Empowering the community by combining Docker, Flocker, and the NetApp storage portfolio, including clustered Data ONTAP and the E-Series platform, means that our customers have a choice of storage which meets the needs of their application, and the business, while providing enterprise class reliability, availability, and serviceability.

If you’re interested in learning more about using Docker, Flocker, or other container technology with NetApp storage, please reach out to Brendan.Wolfe@netapp.com. We’d love to hear how you plan on using Flocker with containers, and how NetApp can evolve and be a better companion to help you go further, faster.

August 26, 2015 12:00 AM

August 25, 2015

OpenStack Superuser

OpenStack Taiwan Day: rapid growth ahead

The second annual OpenStack Day Taiwan was held at the Taipei International Convention Center, just a block away from Taiwan’s tallest building, Taipei 101.

It’s definitely the most anticipated annual IT event in Taiwan this year, drawing the highest level of attention from a wide range of industries, with over 1,700 people registered, 32 local and international speakers, and 20 event sponsors.


OpenStack Foundation COO Mark Collier and community manager Tom Fifield kicked things off with a giant cake to celebrate OpenStack's fifth birthday. Aptira CTO Kavit Munshi was invited to give talks at the event and was interviewed by local media, sharing his comprehensive community experience and deep insights into OpenStack technology. He was amazed by the active participation of the local community and the exponential growth in OpenStack interest in Taiwan.


Aptira has been engaged in nurturing the open community and promoting OpenStack technology in Taiwan since 2014. It seems not long ago that the very first OpenStack meetup took place in April 2014 with around 100 attendees. This August, we are happy to say we have reached the goal of community development at the national level through collaborative efforts with the local community.

Based on our initial survey results, OpenStack has diverse uses in Taiwan, and its users range from telcos to electronics and semiconductor giants. At least 200 companies in Taiwan have started proof-of-concept projects or are building dedicated teams to introduce OpenStack technologies, aiming to reduce IT infrastructure deployment costs and increase the efficiency and productivity of physical resources.

The following is a list of some users from our survey:

  • Video game industry
  • Global IT companies
  • Advertising tech companies
  • Mobile tech firms
  • Semiconductor companies
  • Electronic commerce companies
  • Telcos
  • Research laboratories
  • Government
  • Internet-based retailers
  • Electronics manufacturers
  • Travel agencies
  • …more

Overall, Taiwanese firms have been relatively conservative in their uptake of open source products due to concerns about the lack of enterprise solutions to support their needs. However, some of the most profitable companies in Taiwan, such as Foxconn and MediaTek, have been convinced to come aboard to expedite their innovation with OpenStack technology.


The Taiwanese market is ready for a change. It’s time for vendors to show clear proof to their customers, giving them the confidence to deploy OpenStack in real production settings.

This post first appeared on Aptira's blog. Superuser is always interested in how-tos and other contributions; get in touch at editor@superuser.org.

Cover Photo // CC BY NC

by Joanna Huang at August 25, 2015 10:45 PM

August 24, 2015

Gal Sagie

Kuryr - Bringing Containers Networking to OpenStack Neutron

In this post I am going to introduce a very interesting project I am involved with, Project Kuryr, which is part of the OpenStack Neutron "big stadium".

First, you must be wondering about the meaning of the name: Kuryr is named after the Czech word for courier. This is exactly what Kuryr is trying to be: it brings container networking, and Docker networking specifically, to Neutron, letting containers use and leverage Neutron solutions and services, and it closes the gaps needed to make this happen.

What is the problem Kuryr is trying to solve?

OpenStack and Neutron are no longer the new kid on the block. Neutron has matured, its popularity in OpenStack deployments over nova-network is increasing, and it has a very rich ecosystem of plugins and drivers which provide networking solutions and services (like LBaaS, VPNaaS, and FWaaS), all of which implement the Neutron abstractions and can hopefully be interchanged by cloud deployers.

What we noticed regarding container networking, specifically in environments that mix containers and OpenStack, is that every networking solution tries to reinvent and enable networking for containers, but this time against the Docker API (or some other abstraction). OpenStack Magnum, for example, has to introduce an abstraction layer for different libnetwork drivers depending on the Container Orchestration Engine used. It would be ideal if Kuryr could be the default for Magnum COEs.

The idea behind Kuryr is to leverage the abstractions and all the hard work that were put into Neutron and its plugins and services, and use them to provide production-grade networking for container use cases. Instead of each independent Neutron plugin or solution trying to find and close the gaps, we can concentrate the efforts and focus on one spot: Kuryr.

Kuryr aims to be the “integration bridge” between the two communities, Docker and Neutron, proposing and driving the changes needed in Neutron (or in Docker) to fulfill the use cases specific to container networking.

It is important to note that Kuryr is NOT a networking solution by itself, nor does it attempt to become one. The Kuryr effort is focused on being the courier that delivers Neutron networking and services to Docker.

Kuryr Architecture

Kuryr has a few parts which are already in progress and some which are being discussed and designed as I write these lines.

Map Docker libnetwork to Neutron API

The following diagram shows the basic concept of the Kuryr architecture: mapping the Docker and libnetwork networking model to the Neutron API.

Kuryr maps the libnetwork APIs and creates the appropriate objects in Neutron, which means that every solution that implements the Neutron API can now be used for container networking. All the additional features that Neutron provides can then be applied to container ports, for example security groups, NAT services, and floating IPs.
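To make the mapping concrete, here is a conceptual sketch (not Kuryr's actual code) of a libnetwork remote driver endpoint that turns Docker's CreateNetwork call into a Neutron network, using Flask and python-neutronclient; the credentials and endpoint are placeholders:

# -*- coding: utf-8 -*-
from flask import Flask, jsonify, request
from neutronclient.v2_0 import client as neutron_client

app = Flask(__name__)
neutron = neutron_client.Client(username='admin', password='secret',
                                tenant_name='admin',
                                auth_url='http://controller:5000/v2.0')

# Docker's libnetwork remote driver protocol is JSON over HTTP: when a
# user runs "docker network create", libnetwork POSTs to this endpoint.
@app.route('/NetworkDriver.CreateNetwork', methods=['POST'])
def create_network():
    docker_request = request.get_json(force=True)
    # Reuse the Docker network id as the Neutron network name so the
    # two objects can be correlated later.
    neutron.create_network(
        {'network': {'name': docker_request['NetworkID'],
                     'admin_state_up': True}})
    return jsonify({})

if __name__ == '__main__':
    app.run()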

But the potential of Kuryr doesn't stop at the core API and basic extensions. Kuryr can leverage the networking services and their features, for example LBaaS, to provide the abstraction for implementing Kubernetes services, and so on.

Kuryr is also going to close the gaps where the API mapping is not obvious and, if needed, drive changes in the Neutron community. Recent examples of this are the addition of tags to Neutron resources, to allow API clients like Kuryr to store mapping data, and port forwarding, to provide Docker-style port exposing. (Of course, all of this is still in the review and approval process.)

Provide generic VIF-Binding infrastructure

One of the common problems for Neutron solutions that want to support containers networking is that in these environments there is a lack of nova port binding infrastructure and no libvirt support.

Kuryr tries to provide a generic VIF binding mechanism for the various port types: it receives the namespace end from Docker and attaches it to the networking solution's infrastructure, depending on its type (or passes it along for the solution to finalize the binding).
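
To give a feel for what the binding boils down to, here is a hand-rolled sketch for an Open vSwitch backend. All the names here (the veth pair, the bridge, the container PID) are hypothetical, and a real driver does this programmatically rather than from the shell:

# Create a veth pair; one end stays on the host, the other goes to the container.
ip link add tap-kuryr0 type veth peer name ns-end0

# Hand the "namespace end" over to the container's network namespace...
ip link set ns-end0 netns $CONTAINER_PID

# ...and plug the host end into the networking solution, here an OVS bridge.
ovs-vsctl add-port br-int tap-kuryr0
ip link set tap-kuryr0 up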

You can read more about this work in the following blueprint and also check review process for Kuryr project here

VIF binding is also needed for containers running nested inside VMs, which is described in the next sections. These VMs are not managed directly by nova and hence don't have any OpenStack agent in them, which means there needs to be some mechanism to perform the VIF binding; it can be initiated by the local Docker remote driver calling a shim Kuryr layer. (But once again, this is something that is still being discussed.)

The following diagram depicts VIF binding with Kuryr.

Provide containerized images of Neutron plugins and Kolla Integration

Kuryr aims to provide containerized images of the various Neutron plugins, hopefully integrated with the Kolla build process as well. If you don't know what the OpenStack Kolla project is about, visit this link.

This means that we will have images for the various Neutron plugins, like the OVS L2 agent, Midonet and OVN, with the Kuryr layer included. Kolla integration will bring the much-needed ease of deployment that operators desire, without losing control over inter-service authentication or configurability.

Nested VMs and Magnum use cases

Another use case that Kuryr aims to solve is the common pattern of deploying containers on user-owned, OpenStack-created VMs, i.e. VM-containers. This deployment pattern provides an extra layer of security for container deployments by separating the user-owned container from the operations-owned Nova compute machine.

This use case is also consumed by OpenStack Magnum, an OpenStack API service developed by the OpenStack Containers Team that makes container orchestration engines such as Docker and Kubernetes available as first-class resources in OpenStack.

There are already Neutron plugins that support this use case and provide an elegant way to attach the nested containers to different logical networks than the network of the VM itself, and to apply Neutron features to them - for example, OVN. You can read more about this in my blog post about OVN and containers here.

Kuryr aims to provide the missing parts to support such solutions, for example defining a Neutron port and attaching sub-ports to it (done as part of the VLAN trunk VMs blueprint, which can be reviewed here).

Using Kuryr, OVN and other Neutron vendors like MidoNet can leverage a common infrastructure to interact with Docker and keep their focus on implementing Neutron APIs and driving networking forward.

I will dwell more on this topic in my next post on Kuryr's progress, as it's still too early to discuss concrete details (this part of Kuryr is supposed to land in the next milestone).

Summary

Kuryr's mission, in my eyes and I hope by now in yours too, is critical and important. There are some interesting challenges ahead, and we welcome any contribution/feedback/help that you can provide :)

Come and visit us at the weekly IRC meeting and share your ideas and comments. You can find details about the meeting time and place here.

Stay tuned for the next update on Kuryr...

August 24, 2015 11:25 PM

Opensource.com

Reports from mid-cycle meetups, NFV at scale, and more OpenStack news

Interested in keeping track of what is happening in the open source cloud? Opensource.com is your source for news in OpenStack, the open source cloud infrastructure project.

by Jason Baker at August 24, 2015 09:00 AM

Mirantis

A Cloud Oligopoly or a Vibrant Open Ecosystem?

Even as innovation drives today’s thriving cloud ecosystem, there are early warning signs of a “cloud” oligopoly. Though all of the public cloud providers combined run less than 10 percent of all enterprise workloads, the value of the cloud has largely been demonstrated by AWS and its competitors, and many pundits are beginning to question the viability of in-house alternatives. CIOs find themselves under increasing pressure to provide an alternative to the value proposition offered by the public cloud.

CIOs have two choices: build the cloud infrastructure in-house (using their own resources or those of a managed cloud partner) or send their workloads to the public cloud.

Everyone who has ever touched AWS knows how easy and user friendly it is, and Amazon and its competitors continue to destroy costs by innovating at every layer of the stack, taking advantage of the economies of scale and homogeneity of their infrastructure. This feels like a breath of fresh air when compared to the behavior of legacy vendors such as IBM and HP or traditional software providers such as VMware, Microsoft and Red Hat. These vendors have a disincentive to reduce costs, and they need every penny of their astronomically high margins to justify their lofty market capitalization.  

The trillion dollar question is: Where will the enterprise workloads run and who will own the cloud? A betting man would be sorely tempted to put his chips on the public cloud providers to win.

Unfortunately, while attractive on the surface, such a victory would come at a cost. If the public cloud providers win that bet, the cloud could easily be dominated by an oligopoly of three to five vendors operating a massive array of homogeneous computer farms. Such a world will have no place for traditional hardware vendors such as HP, IBM and EMC, or the software vendors such as VMware and Red Hat. Gone will be not just these traditional vendors, but also the vibrant and diverse community of innovators designing new storage, network and application infrastructure.

For those of us who do not want a dystopian oligopoly, what is a viable alternative?

An open source cloud platform is the only answer. A large open source community can pull together the cumulative resources sufficient to compete with the incumbents. The most successful ones use the right blend of leadership and anarchy. They attract the free thinkers that we need to drive innovation, yet have enough discipline and focus to deliver an open computing platform that can be easily consumed and operated. This allows for the whole world to integrate their innovations without worrying that a guard at the door will turn them away.

As utopian as it sounds, this is the reality of OpenStack. Forrester recently wrote that OpenStack is positioned to become the fifth major public cloud camp, against AWS, Azure, Google and VMware. With more than 170 companies and 2,300+ engineers contributing to the Kilo release, OpenStack is the only viable open platform that has the industry mindshare and the requisite capital investment to compete with the incumbent public cloud providers.

But where does OpenStack stand when it comes to leadership? For any organic movement to succeed, the motivation of the leadership is especially critical. The right leader is one who understands the purpose of the movement and has no agenda other than helping the movement realize its full potential.

In the last few weeks, the OpenStack ecosystem witnessed two important announcements, both featuring Intel as the protagonist: a collaboration with Rackspace to form an OpenStack Innovation Center and a co-development collaboration with Mirantis that injects another $100 million into OpenStack engineering to achieve enterprise readiness at scale.

It is good to see that Intel (which was recently voted a Platinum member of the OpenStack Foundation) is starting to commit substantial resources to the OpenStack ecosystem. By making OpenStack an integral part of the “Cloud for All” initiative, Intel (which owns 90+ percent of the server market worldwide) sends a strong message to enterprise buyers that OpenStack and Intel-based commodity hardware will constitute a natural union.

The choice of these two development partners is also quite telling. Rackspace is the original founder of OpenStack, and together with NASA, seeded the technology, started a community movement and kept it truly open by spinning it out to its own Foundation. Today Rackspace is using OpenStack as a platform for its public cloud and offers its customers a number of managed cloud offerings, some of which are based on OpenStack. It also remains a top-5 contributor to the OpenStack code base.  

Mirantis (which is currently a top-3 contributor to OpenStack) is the only major pure-play OpenStack vendor that is focused on delivering enterprise-grade OpenStack to its customers. Mirantis does not use OpenStack as a channel to sell other proprietary products, and its only motivation is to have upstream OpenStack work well out of the box.

Intel, Rackspace and Mirantis pulling together substantial human and financial resources to make upstream OpenStack a first-class cloud computing platform is a major step towards retaining the heterogeneity of computing innovation. Other companies who believe in choice and free innovation should step up as well.

Together, we can make cloud’s future the vibrant, innovative space we all know it can be.


(Photo by Jerry Wooster.)

The post A Cloud Oligopoly or a Vibrant Open Ecosystem? appeared first on Mirantis | The #1 Pure Play OpenStack Company.

by Alex Freedland at August 24, 2015 04:01 AM

August 22, 2015

Doug Hellmann

Migrating back to WordPress

I’ve migrated my personal blog from Tinkerer back to WordPress, which may introduce repeated articles into the various RSS feeds, since the URLs have changed. The primary reason I decided to change blogging tools is because with more than 500 posts, the site build time under Tinkerer was unacceptably long. It works great for someone … Continue reading Migrating back to WordPress

by doug at August 22, 2015 08:40 PM

August 21, 2015

Aptira

OpenStack Day Taiwan 2015: An Exciting Time to Witness OpenStack’s Rapid Growth in Taiwan

The 2nd annual OpenStack Day Taiwan was held in the Taipei International Convention Center, just a block away from Taiwan's tallest building, Taipei 101. It is definitely the most anticipated annual IT event in Taiwan this year, receiving the highest level of attention from a wide range of industries, with over 1,700 people registered, 32 local and international speakers, and 20 event sponsors.

Kavit Munshi OpenStack Taiwan Day 2015

Kavit Munshi, Aptira CTO, GM Greater Asia, and OpenStack Foundation Board member, blesses another OpenStack PoC.

The event was opened by OpenStack Foundation COO Mark Collier and community manager Tom Fifield with a giant cake to celebrate OpenStack's 5th birthday. Our CTO Kavit Munshi (a.k.a. Taiwanese tea lover) was invited to give talks at the event and was interviewed by local media, sharing his comprehensive community experience and deep insights into OpenStack technology. He was amazed by the active participation of the local community and the exponential growth of interest in OpenStack in Taiwan.

Aptira has been engaged in nourishing the open community and promoting OpenStack technology in Taiwan since 2014. It seems not long ago that the very first OpenStack meetup took place, in April 2014, with around 100 attendees. This August, we are happy to say we have reached the goal of community development at the national level, thanks to collaborative efforts with the local community. Based on our initial statistics, OpenStack has diverse uses in Taiwan, and its users range from the telecoms space to IT to electronics and semiconductor giants. At least 200 companies in Taiwan have started PoC projects or are building dedicated teams to introduce OpenStack technologies, helping to reduce the cost of IT infrastructure deployment and increase the efficiency and productivity of physical resources. The following is a sample of the user segments shown in our data:

  • Video game industry
  • Global IT companies
  • Advertising technology companies
  • Mobile technology firms
  • Semiconductor companies
  • Electronic commerce companies
  • Telecoms
  • Research laboratory
  • Government
  • Internet-based retailer
  • Electronics manufacturer
  • Travel agency
  • …more

From our long-term observation, Taiwanese firms overall have been relatively conservative in their uptake of open-source products due to concerns about the lack of enterprise solutions to support their needs. However, some of the most profitable companies in Taiwan, such as Foxconn and Mediatek, have been convinced to come aboard to expedite their innovation with OpenStack technology.

The Taiwan market is ready for a change. It's time for vendors to showcase clear proof points to their customers to give them the confidence to deploy OpenStack in real production settings.

Photos:

Joanna Huang chairs panel at OpenStack Taiwan Day 2015

Aptira GM for East Asia, Joanna Huang, chaired a popular panel discussion about how OpenStack has been utilised in R&D department, the innovation engine of an IT company, in Taiwan.

Mark Collier and the incredibly awesome Tom Fifield at OpenStack Day Taiwan 2015

Mark Collier and Tom Fifield celebrated the 5th OpenStack birthday in Taiwan with over 1,200 attendees in keynote session.

1,200 attendees squeezed into the OpenStack Day Taiwan 2015 keynote.

There were 1,700 registrants for OpenStack Day Taiwan 2015.

The post OpenStack Day Taiwan 2015: An Exciting Time to Witness OpenStack’s Rapid Growth in Taiwan appeared first on Aptira OpenStack Services in Australia Asia Europe.

by Joanna Huang at August 21, 2015 08:12 PM

OpenStack Blog

OpenStack Community Weekly Newsletter (Aug., 15 – 21)

Make your cloud sing with OpenStack’s Community App Catalog 

The Catalog can get you rocking with containers in just a few clicks – how it’s evolving and how your feedback can shape its future.

The Road to Tokyo 

Reports from Previous Events 

Deadlines and Contributors Notifications 

Security Advisories and Notices 

  • None this week

Tips ‘n Tricks 

Upcoming Events 

Other News 

The weekly newsletter is a way for the community to learn about all the various activities in the OpenStack world.

by Jay Fankhauser at August 21, 2015 06:25 PM

OpenStack Superuser

Takeaways from OpenStack’s Mid-Cycle Ops Meetup, Liberty edition

PALO ALTO, CA. -- We've had a fantastic couple of days at the OpenStack Operators Mid-Cycle Meeting. Two hundred operators, users, and developers came together to discuss how they're deploying and maintaining their OpenStack clusters.

The free meetup took place halfway through the development cycle for the upcoming Liberty OpenStack release. Sponsored by Hewlett-Packard Helion and GoDaddy, the event drew participants from across North America and as far away as Japan and the U.K. The meetup used a collaborative model, where a moderator led the room in a shared discussion about best practices, stumbling blocks and operations innovations.

The first day started with a discussion led by Joe Topjian in which operators shared their tips for getting the best performance out of their hypervisors. The conversation -- you can check out the proceedings on the session etherpad -- covered tuning disk, scheduling, memory, and kernel parameters, with tips for getting the best performance out of your OpenStack hypervisor based on your compute needs and underlying technology.

This talk was followed by a number of breakout sessions. The Ops Guide team planned an update to their excellent operations guide, refreshing the technical information to reflect the current state of OpenStack and adding new user stories. The next Operator Mid-Cycle will feature an extra day devoted to updating the Ops Guide in a face-to-face sprint.

The Logging Working Group continued their mission of refining and rationalizing logs. Working with Jim Blair, the Infra Project Team Lead (PTL), they covered a range of OpenStack log monitoring tools. The Infra team is working on repackaging their own logging tools into a new logging library meant for downstream consumption by operators. They worked on the request/return id spec and hashed out how to document log configuration. The working group meets weekly and encourages people interested in building out logging standards and analysis tools to join them.

The Upgrades Working Group met to share stories and best practices around migrating a production system from one version of OpenStack to another.

The consensus was that most of the "upgrade pain" is around API upgrades that can disrupt running services. Operators are using heavily customized home-grown tools and scripts to successfully manage software, service, and database migrations. They offered feedback to the development community about adding information to release notes about feature deprecation and dropping, critical bugs, and inter-dependency issues. They cited the Cinder Release Notes as a particularly good example. Cinder, along with Nova, was also cited as leading a successful charge in making database migrations compatible across neighboring versions of OpenStack.

The Large Deployments and Public Cloud Working Groups continued a conversation they started in Vancouver, offering blueprints and feedback to the Neutron developers about the needs of large deployments, including network segmentation. Kyle Mestery, the Neutron PTL, attended remotely to work with the operators on their needs and experiences. Other issues included the scale of clusters and the difficulty of naming things (for example, the meaning of the word "region" may differ between operators). The session wrapped up by choosing talking points for the Tokyo Summit. With such tremendous progress made and fruitful collaboration with the networking team, they now face the task of determining their new collective angst.

Working from a list brainstormed in the morning introductory session, the Burning Issues group covered a range of topics -- from "smoldering" to "tire fire." The lively session started with the state of Neutron, reaching a consensus that more hands-on work and tutorials are essential for understanding not only how to set up a dynamic network stack, but also how to debug and maintain it.

From there, the conversation moved to strategies for capacity management and monitoring. Anish from RabbitMQ spoke about the roadmap for the message queue, and Morgan Fainberg, Keystone PTL, also talked about the upcoming release, the need for more granular roles and how to scale Keystone to larger deployments. The session wrapped up with discussions about compliance and tricks to troubleshoot problems.

After lunch, a full session was dedicated to container-based deployments. Those using containers to manage their deployments praised the ability to control conflicting Python dependencies during upgrades through container isolation, scale out services, stage and test new systems, and build a single artifact for development, testing, and production. Containers aren't suitable for full deployments, though, and the suggestion was to go with more traditional deployment methods for things that are still difficult to do with containers.

Day One wound up with a set of lightning talks with stories about deployments, upgrades, the infra cloud, billing, testing, client libraries, and how the operator community has contributed back to the development community. A full list of the talks and their slides is available on the lightning talks etherpad.

Day Two kicked off with a session on integrating OpenStack deployments into configuration management databases, followed by deployment tips led by Matt Fischer of Time Warner Cable. Config management, orchestration, database configuration, message queue tuning and load balancing were just some of the topics covered.

The config management session naturally flowed into a full session devoted to networking led by Edgar Magana of Workday. The maturity of Neutron was on full display, with only one deployment of those surveyed still running nova-net (mainly because it meets their internal needs, and there's no pressure to upgrade yet). There's a wide variety of neutron deployments out there, using almost every type of network backend available. High availability with DVR still isn't widely adopted, but is one of the most eagerly-awaited features.

Next up was a session devoted to the work of the User Committee, the official group that reports to both the Board and the Technical Committee about the issues and needs facing the user community. Topics included updates to the user survey, product and working group feedback, and how to better recognize the contributions that Operators make to the OpenStack community that go beyond patches and reviews -- free Summit tickets and stickers, anyone?

The working sessions concluded with another round of breakout workshops. The Tags Working Group continued their analysis of how to contribute to the new project tagging process. During the session, they proposed a new tag: "containerizable," truly a sign of the times.

The Product Working Group made great progress on identifying and refining user stories, defining a persona taxonomy for consistent user experience evaluation, and drafting recommendations for future cross-project work, including a proposal to break the new Graffiti project out of Glance.

The Packaging Group shared how they manage packaging the source, system administration, and configurations for their OpenStack deployments. This included how to manage packages across multiple versions, testing, package lifecycles and external dependencies. They expressed a common goal of being able to manage a complex and ever-shifting dependency tree, as well as easily deploy bug fixes, security patches, and backports into running systems.

Matt Young of HP led the Tools and Monitoring session. Participants covered an impressive number of topics -- 24 in 90 minutes, or just under four minutes per topic. Capacity planning, live migration, metering, and testing were a few of the tools and techniques you can use to keep your OpenStack cloud healthy.

The meetup wound down with a feedback and planning session led by Anupriya Ramraj, also of HP. Thanks to everyone who attended and participated in the sessions. The Operators Meetup truly embodies the collaborative spirit of the OpenStack community. Special thanks goes out to the OpenStack Foundation staff, Tom Fifield and Allison Price, for organizing and running the meetup.

If you missed this one, you can get involved by signing up for the operators mailing list and sharing your own experiences with setting up and running your OpenStack cloud.

Cover Photo by Bigal101 // CC BY NC

by Chris Hoge at August 21, 2015 05:05 PM

August 20, 2015

Rackspace Developer Blog

Install OpenStack from source Part 6

This is the sixth and final installment in a series demonstrating how to install OpenStack from source, building on the five previous articles.

Previously we installed the Identity service (keystone), Image service (glance), Networking service (neutron), and the Compute service (nova) onto the controller node. We also installed neutron onto the network node and nova and neutron onto the compute node. In this section, we turn our attention to finishing up by installing the Volume service (cinder) and dashboard (horizon) onto the controller node.

As we finish up our OpenStack installation, we turn our attention to both cinder and horizon. We start with cinder and, as we have done before, create the cinder user and the directories that it needs:

mkdir /var/cache/cinder
chown cinder:cinder /var/cache/cinder/
chmod 700 /var/cache/cinder
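
Note that the chown above assumes the cinder system user exists; the useradd itself is not shown in this part of the series. If you need to create it, a typical invocation (our assumption, mirroring the horizon user created later in this article) would be:

useradd --system --shell /bin/false --home-dir /var/lib/cinder --create-home cinder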

Now clone the cinder repo and install cinder:

git clone https://github.com/openstack/cinder.git -b stable/kilo
cd cinder
pip install --requirement requirements.txt
python setup.py install
cd ~

Copy the cinder configuration files to the etc directory:

cp -R cinder/etc/cinder/* /etc/cinder
mv /etc/cinder/cinder.conf.sample /etc/cinder/cinder.conf

Set up sudo access for the newly created cinder user, again using the rootwrap sudo wrapper to help control sudo access:

cat > /etc/sudoers.d/cinder_sudoers << EOF
Defaults:cinder !requiretty

cinder ALL = (root) NOPASSWD: /usr/local/bin/cinder-rootwrap  /etc/cinder/rootwrap.conf *
EOF

chmod 440 /etc/sudoers.d/cinder_sudoers

Create the cinder configuration file:

cat > /etc/cinder/cinder.conf << EOF
[DEFAULT]
rpc_backend=rabbit
osapi_volume_listen=$MY_PRIVATE_IP
api_paste_config = /etc/cinder/api-paste.ini
rootwrap_config=/etc/cinder/rootwrap.conf
auth_strategy = keystone

[DATABASE]
connection = mysql://cinder:cinder@$MY_PRIVATE_IP/cinder?charset=utf8

[keystone_authtoken]
auth_uri = http://$MY_PRIVATE_IP:5000/v2.0
identity_uri = http://$MY_PRIVATE_IP:35357/
admin_user = cinder
admin_password = cinder
admin_tenant_name = service
signing_dir = /var/cache/cinder

[oslo_messaging_rabbit]
rabbit_host=$MY_PRIVATE_IP

EOF

Set proper ownership for the cinder configuration files:

chown cinder:cinder /etc/cinder/*.{conf,json,ini}

Create the database that cinder uses:

mysql -u root -pmysql -e 'CREATE DATABASE cinder;'
mysql -u root -pmysql -e "GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'cinder';"
mysql -u root -pmysql -e "GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'cinder';"

Create the cinder database structure:

cinder-manage db sync

The sample data script that we used to populate data into keystone does not insert data for the cinder user, so we do that here. Create the cinder service user in keystone:

keystone user-create --tenant service --name cinder --pass cinder

Grant the 'admin' role to the cinder service user:

keystone user-role-add --user cinder --tenant service --role admin

Add a cinder service and a v2 endpoint in keystone:

openstack service create --type volumev2 \
  --description "OpenStack Block Storage" cinderv2

openstack endpoint create \
  --publicurl http://10.0.1.4:8776/v2/%\(tenant_id\)s \
  --internalurl http://10.0.1.4:8776/v2/%\(tenant_id\)s \
  --adminurl http://10.0.1.4:8776/v2/%\(tenant_id\)s \
  --region RegionOne \
  volumev2

The back-end store for cinder that we use in this example is iSCSI. This installation assumes that a volume group named cinder-volumes has been created on the controller node for cinder to use as its actual store; cinder then creates logical volumes in it for the various cinder volumes that are requested. Install the open-iscsi and tgt packages:

apt-get install -y open-iscsi tgt

Configure tgt so that cinder can add volumes when they are requested to be created:

cat >> /etc/tgt/conf.d/cinder_tgt.conf << EOF
include /var/lib/cinder/volumes/*
EOF

Restart the open-iscsi and tgt services so that they see the configuration file changes:

service tgt restart
service open-iscsi restart

Create the cinder upstart scripts. First for the API service:

cat > /etc/init/cinder-api.conf << EOF
description "Cinder API"

start on runlevel [2345]
stop on runlevel [!2345]


chdir /var/run

pre-start script
        mkdir -p /var/run/cinder
        chown cinder:root /var/run/cinder/

        mkdir -p /var/lock/cinder
        chown cinder:root /var/lock/cinder/

end script

exec start-stop-daemon --start --chuid cinder --exec /usr/local/bin/cinder-api -- --config-file=/etc/cinder/cinder.conf --log-file=/var/log/cinder/api.log
EOF

Next for the cinder scheduler service:

cat > /etc/init/cinder-scheduler.conf << EOF
description "Cinder Scheduler"

start on runlevel [2345]
stop on runlevel [!2345]


chdir /var/run

pre-start script
        mkdir -p /var/run/cinder
        chown cinder:root /var/run/cinder/

        mkdir -p /var/lock/cinder
        chown cinder:root /var/lock/cinder/

end script

exec start-stop-daemon --start --chuid cinder --exec /usr/local/bin/cinder-scheduler -- --config-file=/etc/cinder/cinder.conf --log-file=/var/log/cinder/scheduler.log
EOF

And lastly for the cinder volume service:

cat > /etc/init/cinder-volume.conf << EOF
description "Cinder Volume"

start on runlevel [2345]
stop on runlevel [!2345]


chdir /var/run

pre-start script
        mkdir -p /var/run/cinder
        chown cinder:root /var/run/cinder/

        mkdir -p /var/lock/cinder
        chown cinder:root /var/lock/cinder/

end script

exec start-stop-daemon --start --chuid cinder --exec /usr/local/bin/cinder-volume -- --config-file=/etc/cinder/cinder.conf --log-file=/var/log/cinder/volume.log
EOF

At this point, we can start all the cinder services:

start cinder-api
start cinder-volume
start cinder-scheduler

Wait 20 to 30 seconds and verify that everything started:

ps aux|grep cinder

You should see something like:

root@controller:~# ps aux|grep cinder
cinder   14633  0.5  2.6 243092 55244 ?        Ss   Aug17 823:42 /usr/bin/python /usr/local/bin/cinder-api --config-file=/etc/cinder/cinder.conf --log-file=/var/log/cinder/api.log
cinder   14652  0.0  3.3 256064 69176 ?        S    Aug17   0:10 /usr/bin/python /usr/local/bin/cinder-api --config-file=/etc/cinder/cinder.conf --log-file=/var/log/cinder/api.log
cinder   14664  0.5  2.4 216680 49456 ?        Ss   Aug17 835:51 /usr/bin/python /usr/local/bin/cinder-volume --config-file=/etc/cinder/cinder.conf --log-file=/var/log/cinder/volume.log
cinder   14671  0.1  2.5 219392 52968 ?        S    Aug17 165:47 /usr/bin/python /usr/local/bin/cinder-volume --config-file=/etc/cinder/cinder.conf --log-file=/var/log/cinder/volume.log
cinder   14684  0.1  2.7 225120 56544 ?        Ss   Aug17 151:50 /usr/bin/python /usr/local/bin/cinder-scheduler --config-file=/etc/cinder/cinder.conf --log-file=/var/log/cinder/scheduler.log

If you don't see all the services started, use the following commands to determine why they didn't start:

sudo -u cinder cinder-api --config-file=/etc/cinder/cinder.conf --log-file=/var/log/cinder/cinder-api.log
sudo -u cinder cinder-scheduler --config-file=/etc/cinder/cinder.conf --log-file=/var/log/cinder/cinder-scheduler.log
sudo -u cinder cinder-volume --config-file=/etc/cinder/cinder.conf --log-file=/var/log/cinder/cinder-volume.log

Even though cinder is running, you won't be able to create cinder volumes without a volume group named cinder-volumes. If you want to use cinder, you need to create that volume group manually on the controller node. If you don't have an existing volume group with free space, you need to add a new disk partition to the controller node. Use the pvcreate and vgcreate commands to create a volume group with the correct name.
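
For example, assuming a spare partition on /dev/sdb1 (the device name is purely illustrative, substitute your own):

pvcreate /dev/sdb1
vgcreate cinder-volumes /dev/sdb1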

Start the horizon install by installing the package prerequisites for horizon:

apt-get install -y apache2 libapache2-mod-wsgi memcached python-memcache gettext

Add a horizon user to use for the Apache server:

useradd --home-dir "/usr/local/lib/python2.7/dist-packages/openstack_dashboard" \
        --create-home \
        --system \
        --shell /bin/false \
        horizon

Clone the horizon git repo:

git clone https://github.com/openstack/horizon.git -b stable/kilo
cd horizon

Now, use the python installer to install horizon:

python setup.py install

Create a directory where horizon can store lock files:

mkdir /var/lib/openstack-dashboard
chown horizon:horizon /var/lib/openstack-dashboard/

Make the openstack_dashboard configuration directory and copy the configuration file there:

mkdir /etc/openstack_dashboard
cp ~/horizon/openstack_dashboard/local/local_settings.py.example /etc/openstack_dashboard/local_settings.py
chown -R horizon:horizon /etc/openstack_dashboard/local_settings.py

Remove the OpenStack dashboard Python files that were installed earlier (they will be copied back in the next step):

rm -rf /usr/local/lib/python2.7/dist-packages/openstack_dashboard/*

Due to the way the Django WSGI file is written, create an openstack_dashboard subdirectory and copy the dashboard files there:

cp -r openstack_dashboard/ /usr/local/lib/python2.7/dist-packages/openstack_dashboard/
chown -R horizon:horizon /usr/local/lib/python2.7/dist-packages/openstack_dashboard/openstack_dashboard/static
ln -s /usr/local/lib/python2.7/dist-packages/openstack_dashboard/openstack_dashboard/static /usr/local/lib/python2.7/dist-packages/openstack_dashboard/static
ln -s /etc/openstack_dashboard/local_settings.py /usr/local/lib/python2.7/dist-packages/openstack_dashboard/openstack_dashboard/local/local_settings.py

Create the Apache configuration files for horizon:

cat >> /etc/apache2/sites-available/openstack.conf << EOF
<VirtualHost *:80>
    WSGIScriptAlias / /usr/local/lib/python2.7/dist-packages/openstack_dashboard/openstack_dashboard/wsgi/django.wsgi
    WSGIDaemonProcess horizon user=horizon group=horizon processes=3 threads=10 home=/usr/local/lib/python2.7/dist-packages/openstack_dashboard/openstack_dashboard display-name=%{GROUP}
    WSGIApplicationGroup %{GLOBAL}

    SetEnv APACHE_RUN_USER horizon
    SetEnv APACHE_RUN_GROUP horizon
    WSGIProcessGroup horizon

    DocumentRoot /usr/local/lib/python2.7/dist-packages/openstack_dashboard/openstack_dashboard/.blackhole/
    Alias /media /usr/local/lib/python2.7/dist-packages/openstack_dashboard/openstack_dashboard/static

    <Directory />
        Options FollowSymLinks
        AllowOverride None
    </Directory>

    <Directory /usr/local/lib/python2.7/dist-packages/openstack_dashboard/openstack_dashboard/>
        Options Indexes FollowSymLinks MultiViews
        AllowOverride None
        # Apache 2.4 uses mod_authz_host for access control now (instead of
        #  "Allow")
        <IfVersion < 2.4>
            Order allow,deny
            Allow from all
        </IfVersion>
        <IfVersion >= 2.4>
            Require all granted
        </IfVersion>
    </Directory>

    ErrorLog /var/log/apache2/horizon_error.log
    LogLevel warn
    CustomLog /var/log/apache2/horizon_access.log combined
</VirtualHost>
WSGISocketPrefix /var/run/apache2
EOF

Enable the newly created OpenStack horizon web virtual server and disable the default Apache site:

a2ensite openstack
a2dissite 000-default

Set Apache to run as the newly created horizon user:

sed -i 's/export APACHE_RUN_USER=www-data/export APACHE_RUN_USER=horizon/g' /etc/apache2/envvars
sed -i 's/export APACHE_RUN_GROUP=www-data/export APACHE_RUN_GROUP=horizon/g' /etc/apache2/envvars

Make some configuration settings in local_settings.py:

sed -i 's/DEBUG = True/DEBUG = False/g' /etc/openstack_dashboard/local_settings.py
sed -i "s/#ALLOWED_HOSTS = \['horizon.example.com', \]/#ALLOWED_HOSTS = \['horizon.example.com', \]\nALLOWED_HOSTS = \['*' \]/g" /etc/openstack_dashboard/local_settings.py

Look for the following lines in /etc/openstack_dashboard/local_settings.py and change them from:

SECRET_KEY = secret_key.generate_or_read_from_file(
os.path.join(LOCAL_PATH, '.secret_key_store'))

to:

SECRET_KEY = secret_key.generate_or_read_from_file('/var/lib/openstack-dashboard/secret_key')

Change the CACHES section to:

CACHES = {
    'default': {
        'BACKEND' : 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION' : '127.0.0.1:11211',
    }
}

#CACHES = {
#    'default': {
#        'BACKEND' : 'django.core.cache.backends.locmem.LocMemCache'
#    }
#}

The default Ubuntu Apache configuration uses /horizon as the application root, but we will use /. Configure the auth redirects:

sed -i "s|# LOGIN_URL = WEBROOT + 'auth/login/'|LOGIN_URL='/auth/login/|g" /etc/openstack_dashboard/local_settings.py
sed -i "s|# LOGOUT_URL = WEBROOT + 'auth/logout/'|LOGOUT_URL='/auth/logout/'|g" /etc/openstack_dashboard/local_settings.py
sed -i "s|# LOGIN_REDIRECT_URL = WEBROOT|LOGIN_REDIRECT_URL='/'|g" /etc/openstack_dashboard/local_settings.py

Download the novnc software so that we can use the dashboard to access the console for each VM:

git clone git://github.com/kanaka/noVNC

Now copy the novnc files to the location where horizon expects them:

mkdir /usr/share/novnc
cp -r noVNC/* /usr/share/novnc

Install novnc prerequisites:

apt-get install libjs-jquery libjs-sphinxdoc libjs-swfobject libjs-underscore

Generate the nova novnc proxy upstart script:

cat >> /etc/init/nova-novncproxy.conf << EOF
description "Nova novnc proxy worker"

start on runlevel [2345]
stop on runlevel [!2345]

chdir /var/run

pre-start script
        mkdir -p /var/run/nova
        chown nova:root /var/run/nova/

        mkdir -p /var/lock/nova
        chown nova:root /var/lock/nova/

        modprobe nbd
end script

exec start-stop-daemon --start --chuid nova --exec /usr/local/bin/nova-novncproxy -- --config-file=/etc/nova/nova.conf --log-file=/var/log/nova/nova-novnc.log
EOF

Start the nova-novncproxy service and restart Apache to read the configuration changes:

start nova-novncproxy
service apache2 restart

Verify that the nova-novncproxy service is running:

ps aux|grep nova-novncproxy

You should see output similar to:

root@controller:~# ps aux|grep nova-novncproxy
nova      2764  0.0  1.8 144432 37288 ?        Ss   Aug17   0:35 /usr/bin/python /usr/local/bin/nova-novncproxy --config-file=/etc/nova/nova.conf --log-file=/var/log/nova/nova-novnc.log

If the service didn't start, use the following command to test nova-novncproxy and get log output:

sudo -u nova nova-novncproxy --config-file=/etc/nova/nova.conf

That finishes up the OpenStack install. You can point your browser to the outside interface on the controller node and access the OpenStack dashboard. The login credentials are in the openrc file in the /root directory on the controller node. Good luck with your newly created OpenStack setup.

August 20, 2015 11:59 PM

Aptira

Aptira Appoints Expert Advisors to Capitalise on Global Growth Opportunity


IT industry veterans, Max McLaren and Ben Kepes, join Simon Anderson on Aptira’s advisory panel


SYDNEY, AUSTRALIA, AUGUST 19, 2015 – Australian managed hosting, public cloud infrastructure and enterprise private cloud solutions provider, Aptira, has appointed two additional advisors to help steer the company’s strategic direction as it moves to expand its operations from Asia-Pacific (APAC) into Europe and the US.

Founded in 2009, Aptira has established itself as a leading player in the global OpenStack market through the strong reputation of its leadership and its execution in delivering infrastructure, private cloud and virtualisation projects. While the OpenStack market has seen some global consolidation in recent months, Aptira has continued to expand its business and gain further credibility by providing a high level of service. Aptira is also deeply engaged with the OpenStack project and community, holding two seats on the OpenStack Foundation Board of Directors and employing three of the 12 OpenStack Ambassadors. Its customers include Red Hat, Cisco, Telstra and Singtel Optus.

IT industry heavyweights, Max McLaren, Regional Vice President and General Manager Australia and New Zealand (A/NZ) at Red Hat, and Ben Kepes, investor and industry commentator, recently joined Aptira’s advisory panel, to provide guidance on strategy, messaging and marketing as the company continues to extend its leadership in APAC, and now Europe.

McLaren is a senior executive with almost 30 years in the IT industry, including 12 years with IBM and Lotus. He now runs Red Hat’s A/NZ business, a role he earned in 2011 following a successful five-year stint as general manager for the region. McLaren’s experience in the Open Source community will support Aptira in remaining at the forefront of the OpenStack movement.

“Being part of Red Hat for the past 10 years has reinforced my belief in the power of community-based, collaborative development which is the cornerstone of innovative, reliable and cost-effective alternatives to the traditional way IT has been delivered in our industry,” said McLaren.

“With the increased connectivity that characterises the world in which we work, collaborate, consume, and entertain ourselves today, I am convinced Open Source and the cloud will revolutionise every organisation and every market. I look forward to supporting the Aptira team in leading this change, and partnering with them to continue helping Australian organisations successfully realise the opportunities these trends provide.”

Kepes is a globally recognised subject matter expert with an extensive following across multiple channels, whose commentary is regularly published in prominent titles and publications. He is also an active member of the Clouderati, a global group of thought leaders in cloud computing. Aptira will leverage Kepes' expertise in cloud, infrastructure and software to underpin its strategic decision-making and operational investments.

“I consider a few questions before becoming involved with a company: is the company in an important space, is the company doing really good things in that space and, most importantly, is the company full of truly good people?” said Kepes. “Aptira ticks all three boxes and I’m proud to be helping them with their journey.”

McLaren and Kepes join Simon Anderson, Chief Executive Officer, DreamHost, who has been an advisor to Aptira since 2013.

“During my time with Aptira, I’ve witnessed the company build a great reputation for OpenStack expertise and operational excellence in APAC and, more recently, Europe,” said Anderson. “I’m excited to be joined by Max and Ben to support the Aptira team through its work in driving the adoption of OpenStack for cloud computing worldwide.”

Tristan Goode, Chief Executive Officer, Aptira, said Anderson, McLaren and Kepes’ expertise will enable Aptira to identify and capitalise on industry opportunities as it continues to expand. Building upon its success as the leading provider of OpenStack in APAC, Aptira has opened its first European office and intends to further the use of Open Source solutions in other markets, including the US.

About Aptira

Aptira is the leading provider of OpenStack in Asia-Pacific, providing cloud solutions and technology consultancy to meet the most demanding technology specifications for a wide range of organisations in telecommunications, media, finance, retail, utilities and government. With offices in Australia, India, Taiwan and Hungary, Aptira is a growing global business as its reputation for high quality services expands. As the founder and prime motivator of the OpenStack community in Australia and India, the company is committed to the idea that what it is doing for its customers today will be mainstream tomorrow. For more information, please visit aptira.com or follow Aptira on Twitter: @aptira.

The post Aptira Appoints Expert Advisors to Capitalise on Global Growth Opportunity appeared first on Aptira OpenStack Services in Australia Asia Europe.

by Tristan at August 20, 2015 11:00 PM

OpenStack Superuser

Get the latest message on OpenStack's Zaqar service

OpenStack messaging service Zaqar has made vast improvements since the project launched in 2013 as Marconi.

Rich Bowen, RDO community liaison at Red Hat, spoke with Flavio Percoco, project team lead (PTL) of the Zaqar project about its improvements in Kilo and what's coming in Liberty.

I was hoping you could tell us what has been happening in the Kilo cycle, and what we can expect to see in Liberty.

Many things have happened in these last few years. We developed new APIs, we’ve added new features to the project.

At that time, we had version 1 of the API, and we were still figuring out what the project was supposed to be like, and what features we wanted to support, and after that we released a version 1.1 of the API, which was pretty much the same thing, but with a few changes, and a few things that would make consuming Zaqar easier for the final user.

Some other things changed. The community provided a lot of feedback to the project team. We’ve attempted to graduate two times, and then the Big Tent discussion happened, and we just fell into the category of projects that would be a good part of the community – of the Big Tent discussion. So we are now officially part of OpenStack. We’re part of this Big Tent group.

We changed the API a little bit. The impression that the old API gave was that it was a queuing service, whereas what we really wanted to do was a messaging service. There is a fundamental difference between the two. Our focus is to provide a messaging API for OpenStack that would not just allow users to send messages from one point to another, but it would also allow users to have notifications right away from that API. So we’ll take advantage of the common storage that we’ll use for both features, for different services living within the same service. That’s a big thing, and something we probably didn’t talk about back then.

The other thing is that in Kilo we dedicated a lot of time to work on these versions of the API and making sure that all of the feedback that we got from the community was taken care of and that we were improving the API based on that feedback, and those long discussions that we had on the mailing list.

In Liberty, we've dedicated time to integrating with other projects, as in, having other projects consume the API. So we're very excited to say that in Liberty a few patches have landed in Heat that rely on Zaqar for notifications, to send messages, and to communicate with other parts of the Heat service. This is very exciting for us, because we had some stories of production environments, but we didn't have stories of other projects consuming Zaqar, and this definitely puts us in a better position to improve the service and get more feedback from the community.

In terms of features for the Liberty cycle, we've dedicated time to improving the websocket transport, which we started in Kilo but didn't have enough time to complete there. This websocket transport allows persistent connections to be made against the Zaqar service, so you'll connect to the service once and keep that connection alive. This is ideal for several scenarios, and one of those is connecting to Zaqar from a browser and having JavaScript communicate directly with Zaqar, which is something we really want to have.

Another interesting feature that we implemented in Liberty is called pre-signed URLs. If folks are familiar with Swift temp URLs –

http://docs.openstack.org/kilo/config-reference/content/object-storage-tempurl.html

– this is something very similar. It generates a URL that can expire. You share that URL with people or services that don't have a username in Zaqar, so that they can connect to the service and still send messages. This URL is limited to a single tenant and a single queue, and it has privileges and policies attached to it so that we can protect all the data going through the service.

I believe those are the two features that excite me the most from the Liberty cycle. But what excites me the most about this cycle is that we have other services using Zaqar, and that will allow us to improve our service a lot.

Looking forward to the future, is there anything that you would like to see in the M cycle? What is the next big thing for Zaqar?

In the M cycle, I still see us working on having more projects consuming Zaqar. There’s several use cases that we’ve talked about that are not being taken care of in the community. For instance, talking to guest agents. We have several services that need to have an agent running in the instances. We can talk about Trove, we can talk about Sahara, and Murano. We are looking forward to address that use case, which is what we built pre-signed URLs for. I’m not sure we’re going to make it in Liberty, because we’re already on the last milestone of the cycle, but we’ll still try to make it in Liberty. If we can’t make it in Liberty, that’s definitely one of the topics we’ll need to dedicate time to in the M cycle.

But as a higher-level view, I would really like to see a better story for Zaqar in terms of operations support and deployment – make it very simple for people to go there and say they want Zaqar: this is all I need, I have my Puppet manifests, or Ansible playbooks, or whatever people are using now. We want to address that area that we haven't paid much attention to. There is already some effort in the Puppet community to create manifests for Zaqar, which is amazing. We want to complete that work, and we want to tell operations: hey, you don't have to struggle to make that happen, you don't have to struggle to run Zaqar, this is all you need.

And the second thing that I would like to see Zaqar doing in the future is to have a better opinion of what storage it wants to rely on. So far, we have support for two storages that are unicode based and there’s a proposal to support a third storage, but in reality what we would really like to do is have a more opinionated Zaqar instance of storage, so that we can build a better API, make it consistent, and make sure it is dependable, and provide specific features that are supported and that it doesn’t matter what storage you are using, it doesn’t matter how you deploy Zaqar, you’ll always get the same API, which is something that right now it’s not true. If you deploy Redis, for instance, you will not have support for FIFO queues, which are optional right now in the service. You won’t be able to have them because that’s something that’s related to the storage itself. You don’t get the same guarantees that you’d get with other storage. We want to have a single story that we can tell to users, regardless of what storage they are using. This doesn’t mean that ops cannot use their own storage. If you deploy Zaqar and you really want to use a different storage, that’s fine, we’re not going to remove plug-ability from the service. But in terms of support, I would like Zaqar to be more opinionated.

To get involved with Zaqar, you can subscribe to the mailing lists, chat with the community directly in the #openstack-zaqar channel on irc.freenode.org, or ask and answer questions on Ask OpenStack.

This post first appeared on Rich Bowen's blog. You can follow him on Twitter at @rbowen.

Superuser is always interested in how-tos and other contributions, get in touch at editor@superuser.org.

Cover Photo by Kevin Dooley // CC BY NC

by Rich Bowen at August 20, 2015 06:50 PM

OpenStack Days India reflects local tech boom: attendance up 150%

If the need for an extra job board and sessions with standing room only were any indication, the OpenStack community in India is thriving.

More than 300 people gathered in Bangalore, the Silicon Valley of India on August 8 for the third annual OpenStack Day India to learn more about OpenStack and share recent contributions.

Content spanned two days, beginning with a hands-on workshop on Friday, August 7, that detailed how to get started with an OpenStack deployment and getting your hands dirty with Devstack. Twenty sessions divided into two tracks filled the agenda the following day, with content ranging from writing an app in OpenStack to Getting Started with OpenStack Upstream Contribution.

Day one kicked off with a sold-out, hands-on workshop designed to teach attendees how to get started with Devstack and a multi-node OpenStack deployment.

Salman Memon, a cloud engineer at Aptira, led the workshop. Memon first got started with OpenStack after attending a workshop during college, then securing an internship with Aptira. A few short years later, he led the entire workshop himself.

<script async="async" charset="utf-8" src="http://platform.twitter.com/widgets.js"></script>

Memon encourages students who want to get involved with OpenStack to follow in his footsteps and reach out to the leaders within their local community and attend events like workshops, meetups and OpenStack Days India.

“They will guide you and let you know what’s going on and how to contribute to OpenStack. Students should always talk to any community members and use IRC to get a clear idea of what you want to do,” he said.

System integrators also came out in full force at the OpenStack India Day, with both WiPro and Infosys hosting sessions.

"At the center of open source, OpenStack plays an even stronger role as the binding glue for compute, storage and networking," said Vasanth Kumar, open source practice manager at WiPro, an Indian multinational IT consulting and system Integration services company. "That's fundamentally a transformational advantage for Indian heritage companies that have actually specialized in operational excellence."

To sustain this open source movement, WiPro has invested in OpenConnect, where they have 1,200 members contributing back to OpenStack projects including Ironic, Magnum, Manila and Sahara.

"Our goal was to host a community driven event which provides a platform for organizations and professionals new to OpenStack, to get to know the community and the ecosystem better," said Kavit Munshi, an organizer for the event, as well as a member of the OpenStack board of directors. "We have had an excellent response to our new job boards initiative this year and look forward to further engaging people and welcoming them to the OpenStack community."

Cover Photo by Subith Premdas // CC BY NC

by Allison Price at August 20, 2015 04:36 AM

August 19, 2015

Red Hat Stack

Scaling NFV to 213 Million Packets per Second with Red Hat Enterprise Linux, OpenStack, and DPDK

Written by: Andrew Theurer, Principal Software Engineer

There is a lot of talk about NFV and OpenStack, but frankly not much hard data showing how well OpenStack can perform with technologies like DPDK. We at Red Hat want to know, and I suspect many of you do as well. So we decided to see what RDO Kilo is capable of, by testing multiple Virtual Network Functions (VNFs), deployed and managed completely by OpenStack.

Creating the ultimate NFV compute node

In order to scale NFV performance to incredible levels, we need to start with a strong foundation: the hardware which makes up the compute nodes. An NFV compute node needs incredible I/O capability and very fast memory. We selected a server with 2 Intel Haswell-EP processors, 24 cores, 64GB of memory @ 2133 MHz, and seven available PCI gen3 slots. We populated six of these PCI slots with Intel dual-port 40Gb adapters - that's twelve 40Gb ports in one server!

Exploiting high performance hardware with Nova

The compute node we chose has the potential for amazing NFV performance, but only if it is configured properly. If you were not using OpenStack to deploy virtual machines, you would need to ensure your deployment process chooses resources correctly - from node-local CPU, memory and I/O, to backing VM memory with 1GB pages. All of these are essential to getting top performance from your VMs. The good news is that OpenStack can do this for you; no longer are you required to get this "right" by hand. The user only needs to prepare for PCI passthrough and then specify the resources via Nova flavor keys:

nova flavor-key pci-pass-40Gb set "hw:mem_page_size=1048576"

nova flavor-key pci-pass-40Gb set "pci_passthrough:alias"="XL710-40Gb-PF:2"

When creating a new instance with this flavor, Nova will then ensure that the resources are node-local and the VM is backed with 1GB huge pages.
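
Note that these flavor keys assume the pci-pass-40Gb flavor itself already exists and that the PCI device has been whitelisted and aliased as XL710-40Gb-PF in nova.conf. A plausible flavor definition matching the VM shape described below (3 vCPUs, 6GB of memory; the 20GB disk is our guess) would be:

nova flavor-create pci-pass-40Gb auto 6144 20 3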

The Network Function under test

We deployed six VMs, using RHEL 7.1 and DPDK 2.0, each of them performing a basic VNF role: forwarding of layer-2 packets. DPDK (the Data Plane Development Kit) is a set of libraries and drivers for incredibly fast packet processing. More information on DPDK is available here. Each VM includes 2 x 40Gb interfaces, 3 vCPUs, and 6GB of memory. Forwarding of network packets was enabled for both ports (in one port, out the other), in both directions. You can think of this network function as a bridge, or as the base function of a firewall, located somewhere between your computer and a destination:

[Diagram: a basic network function, with packet processing sitting between a device and its destination]

In this scenario, the "processing" we chose is packet forwarding, handled by the application "testpmd", which is included in the DPDK software. We chose this because we wanted to test the I/O throughput at the highest possible levels to confirm whether OpenStack Nova made the correct decisions regarding resource allocation. Once these VMs are provisioned, we have a compute node with:

[Diagram: the compute node hosting six DPDK forwarding VMs, each with two 40Gb ports]
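The forwarding inside each VM is handled by testpmd. As a rough sketch (exact options vary by DPDK version; the core and port masks here are illustrative assumptions):

# Use 3 cores (mask 0x7) and 4 memory channels; forward packets
# between the VM's two ports (port mask 0x3) in plain I/O mode
testpmd -c 0x7 -n 4 -- --portmask=0x3 --forward-mode=io --auto-start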

We use a second system to generate network traffic, which happens to have an identical hardware configuration to the compute node. This system acts as both the "computer/phone/device" and the "server" in our test scenario. For each VM, the packet generator sends traffic to both of the VM's ports, and it also receives the traffic that the VM forwards back. For our test metric, we count how many packets per second are transmitted, forwarded by the VM and finally returned to the packet generator system.

[Diagram: the NFV test bed, with the traffic generator connected to all twelve 40Gb ports of the compute node]

The test results

Note that we conduct this test with all six VMs processing packets at the same time. We used a packet size of 64 bytes in order to simulate the worst possible conditions for packet processing overhead. This allows us to drive to the highest levels of packets-per-second without prematurely hitting a bandwidth limit. In this scenario, we are able to achieve 213 million packets per second! OpenStack and DPDK are operating at nearly the theoretical maximum packet rate for these network adapters! In fact, when we tested these two systems without OpenStack or any virtualization, we observed 218 million packets per second. OpenStack with KVM achieves 97.7% of bare-metal performance!

One other important aspect to consider is how much CPU we are using for this test. Is there enough to spare for more advanced network functions? Could we scale to more network functions? Below is a graph of CPU usage as observed from the compute node:

[Graph: per-CPU utilization on the compute node during the test]

Although processing 213 million packets per second is an incredible feat, this compute node still has half of the system's CPU unused! Each VM is using 2 of its 3 vCPUs to perform packet forwarding, leaving 1 vCPU for more advanced packet processing. These VMs could also be provisioned with 4 vCPUs without over-committing host CPUs, providing even more compute resource to them.

Real results, and more to come

We will continue reporting performance tests like this, showing actual performance of NFV and OpenStack that we achieve in our tests. We are also working with groups like OPNFV to help standardize benchmarks like this, so stay tuned. We have a lot more to share!

by jeffja at August 19, 2015 07:39 PM

Kenneth Hui

The Easy Button For Using VMware vSphere With An OpenStack Cloud

The company I work for, Platform9, made two announcements this week. The first is that we have closed a Series B round of funding and the second is that support for VMware vSphere is now generally available (GA). Platform9 CEO and Co-founder, Sirish Raghuram, has blogged about the significance of the funding news. We also view the GA of vSphere support as an important milestone because of the number of customers who currently run vSphere in their data center, many of whom are looking to make their organizations more agile through the use of private clouds. While there are a number of notable private cloud technologies and vendors that integrate with the vSphere hypervisor, none of them offer the ease of management or support for existing vSphere infrastructures that is now being offered by Platform9.

 

You can read about what we are delivering now and in future releases on the Platform9 support page, which has a number of related articles and release notes. You can also read more about what we are delivering with the GA of vSphere support by reading my blog post on the announcement.

[Image: Infrastructure Discovery]


Filed under: Cloud, Cloud Computing, OpenStack, Private Cloud, Virtualization, VMware Tagged: Cloud, Cloud computing, OpenStack, Platform9, Private Cloud, VMware, VMware vSphere, vSphere

by kenhui at August 19, 2015 04:00 PM

Rackspace Developer Blog

Containers in the OpenStack Ecosystem

Container technology is evolving at a very rapid pace. The webinar in this post describes the current state of container technologies within the OpenStack ecosystem. Topics we cover include:

  • How OpenStack vendors and operators are using containers to create efficiencies in deployment of the control plane services
  • Approaches OpenStack consumers are taking to deploy container-based applications on OpenStack clouds

In this roughly one-hour webinar, we specifically discuss:

  • An overview of Docker
  • Native OpenStack container options
  • How containers differ from PaaS
  • Container architecture patterns
  • Container Service Registration & Discovery
  • Container Orchestration Engines
  • Container Networking

Link

Feel free to leave comments or questions below! Enjoy.

August 19, 2015 12:26 PM

Cloudify Engineering

OpenStack & Beyond Podcast - Episode 4 | Is OpenStack Really Ready for the Enterprise

We’re back with the newest episode of OpenStack & Beyond, the podcast that discusses everything OpenStack. We are really excited...

August 19, 2015 12:00 AM

Rackspace Developer Blog

Introducing Rackspace.NET

The Rackspace .NET SDK beta is now available! This is the first step towards improving the .NET experience for OpenStack and Rackspace developers. Rackspace.NET enables you to work with both Rackspace services, which are based on OpenStack, and unique Rackspace offerings, such as hybrid cloud. This is in the same spirit as the new Rack CLI which was announced last week.

OpenStack users will have a clean SDK dedicated to their needs and moving at the pace of OpenStack. Rackspace customers will have a native experience, seeing only functionality that is supported by Rackspace, using Rackspace terminology.

For more details on how this will improve OpenStack.NET, checkout Rackspace.NET and OpenStack.NET: Peas and Carrots.

Roadmap

Rackspace.NET is built on top of OpenStack.NET, because many of Rackspace's solutions use OpenStack. We are in the process of moving Rackspace specific solutions out of OpenStack.NET. When this migration is completed, OpenStack.NET v2.0 will be pure OpenStack and Rackspace.NET v1.0 pure Rackspace.

The project's beta milestones outline the full roadmap. Here's a peek at the first few releases:

  • v0.1 - Cloud Networks. This coincides with the release of OpenStack.NET v1.5.0 with support for OpenStack Networking v2.
  • v0.2 - RackConnect Public IPs
  • v0.3 - Cloud Servers
  • v0.4 - Cloud Identity

Cloud Networks Support

Rackspace Cloud Networks enable you to create isolated networks and provision server instances with Rackspace networks or the isolated networks that you created.

The following small example helps you to get started. The QuickStart for Rackspace Cloud Networks has a complete walk-through, and you can download the sample project from the Rackspace.NET repository.

// Create the network service client for a region, using an authenticated identity service
var networkService = new CloudNetworkService(identityService, region);

// Create an isolated network
var networkDefinition = new NetworkDefinition { Name = "{network-name}" };
var network = await networkService.CreateNetworkAsync(networkDefinition);

// Add an IPv4 subnet to the new network
var subnetDefinition = new SubnetCreateDefinition(network.Id, IPVersion.IPv4, "{cidr}");
await networkService.CreateSubnetAsync(subnetDefinition);

// Create a port on the network, ready to attach to a server instance
var portDefinition = new PortCreateDefinition(network.Id) { Name = "{port-name}" };
await networkService.CreatePortAsync(portDefinition);

August 19, 2015 12:00 AM

August 18, 2015

RDO

RDO blog roundup, week of August 17th

Here's what RDO enthusiasts have been writing about over the past week.

If you're writing about RDO, or about OpenStack on CentOS, Fedora or RHEL, and you're not on my list, please let me know!

Flavio Percoco, PTL of the Zaqar project by Rich Bowen

Zaqar (formerly called Marconi) is the messaging service in OpenStack. I recently had an opportunity to interview Flavio Percoco, who is the PTL (Project Technical Lead) of that project, about what’s new in Kilo, and what’s coming in Liberty.

... read more at http://tm3.org/1x

Tokenless Keystone by Adam Young

Keystone Tokens are bearer tokens, and bearer tokens are vulnerable to replay attacks. What if we wanted to get rid of them?

... read more at http://tm3.org/1y

Upgrades are dying, don’t die with them, by Maxime Payant-Chartier

We live in a world that has changed the way it consumes applications. The last few years have seen a rapid rise in the adoption of Software-as-a-Service (SaaS) and Platform-as-a-Service (PaaS). Much of this can be attributed to the broad success of Amazon Web Services (AWS), which is said to have grown revenue from $3.1B to $5B last year (Forbes). More and more people, enterprise customers included, are consuming applications and resources that require little to no maintenance. And any maintenance that does happen, now goes unnoticed by users. This leaves traditional software vendors contending to find a way to adapt their distribution models to make their software easier to consume. Lengthy, painful upgrades are no longer acceptable to users, forcing vendors to create a solution to this problem.

... read more at http://tm3.org/1z

The OpenStack Big Tent, by Rich Bowen

OpenStack is big and complicated. It’s composed of many moving parts, and it can be somewhat intimidating to figure out what all the bits do, what’s required, what’s optional, and how to put all the bits together.

... read more at http://tm3.org/1-

Provider external networks (in an appropriate amount of detail) by Lars Kellogg-Stedman

In Quantum in Too Much Detail, I discussed the architecture of a Neutron deployment in detail. Since that article was published, Neutron gained the ability to handle multiple external networks with a single L3 agent. While I wrote about that back in 2014, I covered the configuration side of it in much more detail than I discussed the underlying network architecture. This post addresses the architecture side.

... read more at http://tm3.org/20

Logging configuration in OpenContrail by Numan Siddique

We know that all software components and services generate log files. These log files are vital in troubleshooting and debugging problems. If the log files are not managed properly then it can be extremely difficult to get a good look into them.

... read more at http://tm3.org/21

Neutron in-tree integration tests by Assaf Muller

It’s time for OpenStack projects to take ownership of their quality. Introducing in-tree, whitebox multinode simulated integration testing. A lot of work went in over the last few months by a lot of people to make it happen.

... read more at http://tm3.org/22

Dims talks about the Oslo project, by Rich Bowen

This is the second in what I hope is a long-running series of interviews with the various OpenStack PTLs (Project Technical Leads), in an effort to better understand what the various projects do, what's new in the Kilo release, and what we can expect in Liberty, and beyond.

... read (and listen) more at http://tm3.org/23

Performance and Scaling your Red Hat Enterprise Linux OpenStack Platform Cloud by Joe Talerico

As OpenStack continues to grow into a mainstream Infrastructure-as-a-service (IaaS) platform, the industry seeks to learn more about its performance and scalability for use in production environments. As recently captured in this blog, common questions that typically arise are: “Is my hardware vendor working with my software vendor?”, “How much hardware would I actually need?”, and “What are the best practices for scaling out my OpenStack environment?”

... read more at http://tm3.org/24

by rbowen at August 18, 2015 06:31 PM

Miguel Ángel Ajo

Neutron QoS service plugin

Finally, I’ve been able to record a video showing how the QoS service plugin works.

If you want to deploy this follow the instructions under the video. (open in vimeo for better quality: https://vimeo.com/136295066)

<figure class="tmblr-embed tmblr-full" data-orig-height="338" data-orig-width="540" data-provider="vimeo" data-url="https%3A%2F%2Fvimeo.com%2F136295066"><iframe frameborder="0" height="338" src="https://player.vimeo.com/video/136295066?title=0&amp;byline=0&amp;portrait=0" title="Neutron QoS service plugin" width="540"></iframe></figure>

Deployment instructions:

# setup neutronclient #########################################

cd /opt/stack/

git clone https://github.com/openstack/python-neutronclient.git
cd python-neutronclient
git fetch https://review.openstack.org/openstack/python-neutronclient \
         refs/changes/77/198277/22 && git checkout FETCH_HEAD

# setup the server & agents ###################################

  • checkout or cherry pick this devstack patch: https://review.openstack.org/#/c/212453/ 
  • add to your devstack local.conf:

enable_service q-qos

  • stack

# now create rules to allow traffic to the VM port 22 & ICMP #######

source ~/devstack/accrc/demo/demo

neutron security-group-rule-create --direction ingress \
                                   --protocol tcp \
                                   --port-range-min 22 \
                                   --port-range-max 22 \
                                   default
neutron security-group-rule-create --protocol icmp \
                                   --direction ingress \
                                   default

nova net-list
nova boot --image cirros-0.3.4-x86_64-uec --flavor m1.tiny --nic net-id=<private-net-id> qos-cirros
# wait...
nova show qos-cirros  # look for the IP
neutron port-list # look for the IP

# in another console, packet pusher ##############################
ssh cirros@$THE_IP_ADDRESS 'dd if=/dev/zero bs=1M count=1000000000'

# given a port id 49d4a680-4236-4d0c-9feb-8b4990ac35b9, look for the ovs port:
$ sudo ovs-vsctl show | grep qvo49d4
       Port "qvo49d4a680-42"
           Interface "qvo49d4a680-42"

# in yet another console: monitor  #############################
$ nload qvo49d4a680-42

# it will be pushing at max rate (100Mbps in my VM)


# finally, testing the QoS rules themselves ###################

source ~/devstack/accrc/admin/admin

neutron qos-policy-create bw-limiter
neutron qos-bandwidth-limit-rule-create bw-limiter --max_kbps 3000 --max_burst_kbps 300
neutron port-update <port id> --qos-policy bw-limiter

# it will quickly go down to 3Mbps

neutron qos-bandwidth-limit-rule-update <rule id> bw-limiter --max_kbps 5000 --max_burst_kbps 500

# it will go up to 5Mbps

neutron port-update <port id> --no-qos-policy

August 18, 2015 12:51 PM

Opensource.com

How open source helped one woman break into the tech industry

When society pushed her toward a job as an accountant, Victoria Martinez de la Cruz decided to forge her own path and pursue a career in IT.

by thatdocslady at August 18, 2015 10:00 AM

Mirantis

Yes, containers need OpenStack


With this year’s OpenStack Silicon Valley focusing on the intersection between containers and OpenStack, it’s got me thinking about how they fit together. I remember when Docker burst on the scene, seemingly out of nowhere. The OpenStack community was used to being the darling of the vanguard set; what did this “new” paradigm mean? Even now, a year or so later, the conversations are still going on, as developers and architects try to decide on the “right” way to construct these applications — and whether and where OpenStack fits into the picture. Now that we’ve had time to look at the landscape, let’s get a reality check.

Containers, in case you’re not familiar, are these “self contained applications” that you can pack up and move here, there, and everywhere. Those of us old enough to hear “Write once, run anywhere” and think of Java without sneering will find this a familiar refrain. After all, being able to write a containerized application and move it around between machines easily is a powerful incentive; we developers want to write software, not engineer entire environments.

Google’s introduction of the Kubernetes container management orchestration system seemed, on the face of it, to make the situation even more muddled; after all, here was a way that you could manage all of those containers easily, moving them around, scaling them up and down, and so on.

“Tell us,” the skeptics said, “why do we need OpenStack again?”

Because you still needed computing resources on which to run those containers, that’s why.

“But can’t we just install Kubernetes or Docker Swarm or one of those other container management systems on our server and handle it that way?” the skeptics countered.

Well, you could have, but then not only would you have to worry about scaling — something OpenStack does well, but containers have yet to solidify — you would have to worry about the fact that every single container on that host has access to every other container on that host — whether it should or not.

You see, a container isn’t quite as “self-contained” as you might think.  For example, when you create a Docker volume, that’s an actual directory on the host that you can look at from outside the container. So far the container community hasn’t settled on how to manage this kind of security between different containers. Containers have the same issue when it comes to ports; any container can access any other container’s open ports.  Did you really want to share that environment between different apps and different developers, or even different users or entities?

Of course you didn’t.

That’s why so many people who are using containers are doing it in the context of virtual machines. A VM provides an opportunity to create a completely isolated environment for your container-based application, making it possible to provide the security and multi-tenancy you need in production applications. But how do you easily manage those VMs on today’s self-service-oriented data center environment?

With OpenStack, that’s how.

What’s more, it’s not just VMs that we’re talking about.  Any significantly advanced container-based application is going to need resources, such as databases, networking, and drive space. By keeping your applications in an OpenStack environment, you get the advantage of the Infrastructure as a Service capabilities it brings with it, such as being able to create storage volumes or networks on demand.

And we’re not just talking about resources directly consumed by the containers themselves, either. Containers are ideally suited to today’s microservices-based applications, which means that they will ideally be communicating with other resources, both container and non-container-based. That means a hybrid environment with a mix of different technologies.  In other words, the kind of environment where you’ll find OpenStack.

Plus, it’s a good thing that OpenStack is around, because containers are at their most useful when they’re combined, either with other containers or with other applications. And that means an orchestrator such as Kubernetes for purely containerized apps, or OpenStack Application Catalog (Murano) for compositing mixed applications. In fact, Murano makes it possible to deploy Kubernetes and add containerized applications with just a few clicks, bypassing all the pain of trying to manage it on your own — including provisioning resources.

For those who don’t want to use Murano, the OpenStack Magnum project is working to make container orchestration engines such as Kubernetes and Docker available as first class resources in OpenStack, deploying container-based resources on hypervisors, or even on bare metal resources, in much the same way it provisions VMs. Or you can run CoreOS, a stripped down Linux optimized for containers, as a VM. And you can do that today.

It’s been proposed that we should just skip all that OpenStack stuff and start over with a containers-only management system. “Sure,” these people said, “it doesn’t exist now, but we can build one!”  But this would skip over five years of development on a system that actually does what they want, in favor of starting over again and solving the problems that have already been solved. (And with a great deal of pain, I might add. There’s a reason that Joel Spolsky calls throwing out the code and starting again “the single worst strategic mistake that any software company can make”.)

It’s natural for a new technology to want to sweep everything away and start fresh.  Even OpenStack started that way, thinking it could replace virtualization giants such as VMware and public cloud behemoths such as Amazon Web Services. But eventually, when the giddiness wore off, we realized two things: first, that there was value in those approaches, and second, that enterprises were not going to suddenly get rid of every application they had in use and start over with OpenStack. So OpenStack adapted and found its place in the ecosystem as an integrator, taking advantage of what had come before.

Recently, the container folks have begun to realize that enterprises aren’t going to suddenly go all-container either; they need to look for a place in the existing ecosystem that will serve their needs — and OpenStack is where they’re going to find it.

Welcome, container folks, we’re glad to have you with us. It’s going to be a terrific ride.


Starting to take the intersection between containers and OpenStack seriously?  Head out to OpenStack Silicon Valley on August 26-27 to see luminaries in both fields, such as Google’s Craig McLuckie, CoreOS’s Alex Polvi, Battery Ventures’ Adrian Cockroft, and OpenStack’s Jonathan Bryce, talk about how things are coming together.

The post Yes, containers need OpenStack appeared first on Mirantis | The #1 Pure Play OpenStack Company.

by Nick Chase at August 18, 2015 04:55 AM

OpenStack:Now Podcast Ep 8: OpenStack Foundation’s Jonathan Bryce


Video: https://www.youtube.com/embed/mdZVrvsG_JQ

Nick Chase and John Jainschigg talk to Jonathan Bryce of the OpenStack Foundation about what companies really want from OpenStack, and whether containers really need OpenStack.

The post OpenStack:Now Podcast Ep 8: OpenStack Foundation’s Jonathan Bryce appeared first on Mirantis | The #1 Pure Play OpenStack Company.

by Nick Chase at August 18, 2015 03:59 AM

August 17, 2015

Red Hat Stack

Performance and Scaling your Red Hat Enterprise Linux OpenStack Platform Cloud

As OpenStack continues to grow into a mainstream Infrastructure-as-a-service (IaaS) platform, the industry seeks to learn more about its performance and scalability for use in production environments. As recently captured in this blog, common questions that typically arise are: “Is my hardware vendor working with my software vendor?”, “How much hardware would I actually need?”, and “What are the best practices for scaling out my OpenStack environment?”  

These common questions are often difficult to answer because they rely on environment specifics. With every environment being different, often composed of products from multiple vendors, how does one go about finding answers to these generic questions?

To aid in this process, Red Hat Engineering has developed a reference architecture capturing  Guidelines and Considerations for Performance and Scaling of Red Hat Enterprise Linux OpenStack Platform 6-based cloud. The reference architecture utilizes common benchmarks to generate a load on a RHEL OpenStack Platform environment to answer these exact questions.

Where do I start?

With the vast amount of features that OpenStack provides, it also brings a lot of complexities to the table. The first place to start is not by trying to find performance and scaling results on an already running OpenStack environment, but to step back and take a look at the underlying hardware that is in place to potentially run this OpenStack environment. This allows one to answer the questions "How much hardware do I need?" and "Is my hardware working as intended?" all while avoiding the complexities that can affect performance such as file systems, software configurations, and changes in the OS.

A tool to answer these questions is the Automatic Health Check (AHC). AHC is a framework developed by eNovance to capture, measure and report a system's overall performance by stress testing its CPU, memory, storage, and network. AHC's main objective is to provide an estimation of a server's capabilities and ensure its basic subsystems are running as intended. AHC uses tools such as sysbench, fio, and netperf and provides a series of benchmark tests that are fully automated to provide consistent results across multiple test runs. The test results are then captured and stored at a specified central location. AHC is useful when doing an initial evaluation of a potential OpenStack environment as well as post-deployment. If a specific server causes problems, the same AHC non-destructive benchmark tests can be run on that server and the outcome can be compared with the initial results captured prior to deploying OpenStack. AHC is a publicly available open source project on GitHub: https://github.com/enovance/edeploy.
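As a rough illustration, these are plain sysbench and fio invocations of the kind AHC wraps and automates (the option values are illustrative, not AHC defaults):

# CPU: prime-number benchmark (sysbench 0.4.x syntax)
sysbench --test=cpu --cpu-max-prime=20000 run

# Storage: 60 seconds of 4KB random reads against a 1GB test file
fio --name=randread --rw=randread --bs=4k --size=1G --runtime=60 --filename=/tmp/fio.test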

My hardware is optimal and ready, what’s next?

Deploy OpenStack! Once it is determined that the underlying hardware meets the specified requirements to drive an OpenStack environment, the next step is to go off and deploy OpenStack. While the installation of OpenStack itself can be complex, one of the keys to providing performance and scalability of the entire environment is to isolate network traffic to a specific NIC for maximum bandwidth. The more NICs available within a system, the better. If you have questions on how to deploy RHEL OpenStack Platform 6, please take a look at the Deploying Highly Available Red Hat Enterprise Linux OpenStack Platform 6 with Ceph Storage reference architecture.

Hardware optimal? Check. OpenStack installed? Check.

With hardware running optimally and OpenStack deployed, the focus turns towards validating the OpenStack environment using the open source tool Tempest.

Tempest is the tool of choice for this task as it contains a list of design principles for validating the OpenStack cloud by explicitly testing a number of scenarios to determine whether the OpenStack cloud is running as intended. The specifics on setting up Tempest can be found in this reference architecture.
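For example, once tempest.conf is configured, a run can be kicked off from a Tempest checkout with testr (a minimal sketch; the test filter is illustrative):

# Run the compute API tests in parallel
testr run --parallel tempest.api.compute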

Upon validating the OpenStack environment, the focus shifts to answering scalability and performance questions. The two benchmarking tools used to do that are Rally and Cloudbench (cbtool). Rally offers an assortment of actions to stress any OpenStack installation, and the aforementioned reference architecture has the details on how to use the benchmarking tools to test specific scenarios.
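As a minimal sketch of what a Rally task file looks like (the scenario, flavor, image and runner values here are illustrative):

{
    "NovaServers.boot_and_delete_server": [
        {
            "args": {
                "flavor": {"name": "m1.tiny"},
                "image": {"name": "cirros-0.3.4-x86_64-uec"}
            },
            "runner": {"type": "constant", "times": 10, "concurrency": 2}
        }
    ]
}

Saved as, say, boot-and-delete.json, it can then be run with "rally task start boot-and-delete.json".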

Cloudbench, cbtool, is a framework that automates IaaS cloud benchmarking by running a series of controlled experiments. An experiment is executed by virtue of deploying and running a set of Virtual Applications (VApps). Within our reference architecture, the workload VApp consists of two critical roles used for benchmarking: the orchestrator role and the workload role.

Rally and CloudBench complement each other by providing the ability to benchmark different aspects of the OpenStack cloud thus offering different views on what to expect once the OpenStack cloud goes into production.

Conclusion

To recap, when trying to determine the performance and scalability of a Red Hat Enterprise Linux OpenStack Platform installation make sure to follow these simple steps:

  1. Validate the underlying hardware performance using AHC
  2. Deploy Red Hat Enterprise Linux OpenStack Platform
  3. Validate the newly deployed infrastructure using Tempest
  4. Run Rally with specific scenarios that stress the control plane of OpenStack environment
  5. Run CloudBench (cbtool) experiments that stress applications running in virtual machines within OpenStack environment

In our next blog, we will take a look at specific Rally scenario and discuss how tweaking the OpenStack environment based upon Rally results  could allow us to achieve better performance. Stay tuned and check out our blog site often!

 

by Joe Talerico - Senior Performance Engineer at August 17, 2015 09:20 PM

OpenStack Superuser

Got ideas or opinions for Superuser? Bring them on!

Superuser is heading to the Mid-Cycle Operators Meetup, OpenStack Silicon Valley and the inaugural OpenStack Seattle Day and we want to hear from you.

Join the ranks of your fellow community members by contributing your take on events, killer how-tos, your opinions, or cool user stories from your clients.

Be on the lookout for me and Allison Price, though any of the OpenStack Foundation Staff will get word to us.

Last thing: if you aren't attending these events or if you don't catch us in person - we're always reachable on editor@openstack.org, too!

Cover Photo by Aaronth // CC BY NC

by Nicole Martinelli at August 17, 2015 08:14 PM

Make your cloud sing with OpenStack's Community App Catalog

OpenStack's Community App Catalog launched in May as a resource to help users put their clouds to work faster by deploying tools like big data, platform-as-a-service and container frameworks on OpenStack.

The Catalog is a place where community members can share apps and tools in the form of Glance images, Heat templates and Murano packages designed to integrate with OpenStack clouds.

With just a few clicks, you can experiment with emerging tools like Kubernetes and Docker by deploying packages that leverage the building blocks of OpenStack to handle authentication, networking, multi-tenant isolation and autoscaling.

To find out what's next, Superuser caught up with Christopher Aedo, a cloud architect at IBM and part of the team that built the beta version of the app catalog in just three weeks.


What do people need to know before getting started?

Honestly, they don't need to know anything to get started - just visit http://apps.openstack.org and check it out!

Since launching in May, are there any statistics on participation?

There was an initial flurry of activity around adding content right before launching at the Vancouver summit, but the number of new submissions has dropped since then. A big part of the reason is that the Community App Catalog is still a beta application, launched as a proof of concept to quickly showcase what folks can do with their OpenStack clouds. But that proof of concept had some limitations we knew we would have to resolve later (remember, we built it in just three weeks!)

What are the most used apps?

We are not sure, unfortunately! One of the things we knew we would be solving after launch was how to provide voting, feedback on entries, and easily expose statistics like how many times an individual entry was downloaded.

We are tracking access information (like how many times any page was rendered for a visitor), but that doesn't really translate to how many times an app was used, and doesn't reflect what's really popular.

Is anyone doing something cool that we can take a look at?

Yes! Kevin Fox from the Pacific Northwest National Laboratory has been working really hard on a Horizon plugin that not only allows browsing the App Catalog directly from Horizon, but also has the right hooks to give users a "one-click import" for the assets in the catalog. So you could use that panel to search for something like a Heat template to deploy an app and, with one click, bring that template into your environment and launch it.

The work is happening in the apps-catalog-ui repo (https://github.com/stackforge/apps-catalog-ui), and an early demo video can be found at: https://youtu.be/2UQ6xa6uDQY

How can people get involved?

At this phase, the thing we need most is feedback and support from some of the other projects. The App Catalog has the potential to benefit not just OpenStack as a whole, but multiple projects individually.

By helping us make the App Catalog the best place to share, find and deliver Apps (and application/service components), it becomes significantly easier for the potential consumers of those bits to find, retrieve and use them. The more OpenStack developers we can get engaged in this effort, the better it will be for the broader community.

What's next?

The next big steps for the Community App Catalog are all around shoring up the foundation to be sure it can comfortably grow both in terms of overall contents and in the size of the active user community. That includes the ability to rate assets and provide feedback with relative ease.

We're also starting to map out what an API would look like to make the catalog itself easier to interact with programmatically. We will also need that to add hooks for using the App Catalog from the command line interface - all of which is to say we are working to make this easily used with and from any OpenStack cloud!

Interested in getting more involved the Community App Catalog? Attend the weekly meetings scheduled for Thursdays at 17:00 UTC on #openstack-meeting-3

Cover Photo by Paul Hudson // CC BY NC

by Superuser at August 17, 2015 06:56 PM

Opensource.com

Upstream training, a commitment to interoperability, and more OpenStack news

Interested in keeping track of what's happening in the open source cloud? Opensource.com is your source for news in OpenStack, the open source cloud infrastructure project.

by Jason Baker at August 17, 2015 07:00 AM

August 14, 2015

OpenStack Superuser

Jumpstart your OpenStack know-how with Upstream Training

New to OpenStack? You’re in good company. Roughly 60 percent of attendees at Summits are newcomers, that’s why the OpenStack Foundation developed Upstream Training, a free course taught before the OpenStack Summits.

Students can register with this online form for the October 25-26 training in Tokyo ahead of the four-day Summit. The visa process for Japan is a lengthy one, so apply early if you are interested in attending the course.

What will you learn? Upstream Training schools newcomers in the tools used to contribute code to OpenStack, making sure you know Gerrit from Jenkins as you join a community of over 3,300 developers from 250 different companies worldwide.

On the first day, students pick a real bug to work on, set up a development environment, get online accounts for all the tools, sign the contributor license agreements and learn about the workflow release cycle.

Day two focuses on soft skills and planning contributions — in the form of a role-playing exercise where you build Legos with classmates.

Tim Freund, an operator and application developer, is coordinating this edition of the course with new and returning volunteers, including veteran free software developer Loïc Dachary who created the Lego role-playing game.

For newcomers to OpenStack, the training is a great way to get your feet wet.

“Upstream training was a very easy-going way to get introduced to Openstack's workflow as well as the community,” says Peter Tran, an Upstream student in Vancouver. “It gave me everything I needed to become a contributor and on top of that it was a great way to start off the summit. I got to meet really interesting developers from all over the world and network with some great companies.”

Upstream Training, Vancouver. Photo: Nicole Martinelli, OpenStack Foundation.

If you’re an OpenStack expert, organizers are always looking for assistants and mentors. These volunteers help students prepare for class, understand the material and mentor them through the contribution process after class. Here’s the form for mentor registration.

Organizer Freund — who was a student in Atlanta, an assistant in Paris, and a teacher in Vancouver — keeps coming back for more.

“I know OpenStack isn't conflict-free, but I get a tingly butterflies-in-my-stomach feeling at the Design Summits. It’s magical watching people from competing companies gather to design common components together,” he says. “That type of tight collaboration can even be rare within individual business units at proprietary software companies.”

Cover Photo: Vancouver Upstream Training by Nicole Martinelli for the OpenStack Foundation.

by Nicole Martinelli at August 14, 2015 11:46 PM

OpenStack Blog

OpenStack Community Weekly Newsletter (Aug., 8 – 14)

OpenStack continues to strengthen its commitment to interoperability

The latest on interoperability, Neutron, RefStack and how you can shape what’s next.

Jumpstart your OpenStack know-how with Upstream Training

Join experts for this free, fast-track course that combines work with play.

The Road to Tokyo

Reports from Previous Events

  • None this week

Deadlines and Contributors Notifications

Security Advisories and Notices

Tips ‘n Tricks

Upcoming Events

Other News

The weekly newsletter is a way for the community to learn about all the various activities in the OpenStack world. 

by Jay Fankhauser at August 14, 2015 06:38 PM

OpenStack Superuser

Getting hands-on with OpenStack

Here's the news from the OpenStack world you won't want to miss -- the musings, polemics and questions posed by the larger community. Got something you think we should highlight? Tweet, blog, or email us!

In case you missed it

The OpenStack community shared some great knowledge this week, here are some we wanted to highlight:

Full Stack Testing

"Time for OpenStack projects to take ownership of their quality. Introducing Neutron in-tree integration tests," says Assaf Muller, OpenStack Neutron team lead at Red Hat.

Getting started with CoreOS on OpenStack

Aptira's Alok Kumar shares the basics of CoreOS on an OpenStack infrastructure.

Total Newbie's Introduction to Heat Orchestration in OpenStack

"OpenStack is powerful, flexible, and is continuously developed. But best of all, it has a very rich API layer," Cisco says in this primer....

Top Ten Ways to Use OpenStack for Storage

Veteran IT journalist Drew Robb walks you through talking points in this part of a series of articles on OpenStack.

Interested in the back story on OpenStack and containers? Sean Michael Turner's got you covered with this interview with Adrian Otto, Magnum Project Team Lead (PTL).

Cloud … You’re Doing it Wrong!

And, in our favorite piece of punditry this week, Randy Bias, CEO of Cloudscaling, offers a great breakdown of what you need to know about cloud as you start your journey.

Industry watch

Here are some of the news items that crossed our radar this week....

HP snuggles up to OpenStack in cloud embrace

"Helion is vital for two reasons to understanding OpenStack as a whole. The first is that HP is the single largest contributor (by several metrics) to the latest OpenStack release, Liberty. HP's Helion needs to drive the code it creates, and HP is putting more effort into OpenStack than anyone else. The second reason that Helion is important is that it serves as a model for many other companies," The Register opines...

OpenStack is overkill for Docker

New tooling is necessary for effectively managing Docker at scale, explains Matt Asay, vice president of mobile at Adobe.

OpenStack is redefining the business model for data solutions

Proof of this tectonic shift is the acquisition of OpenStack-based companies by industry-leading vendors, writes Orlando Bayter, CEO of Ormuco.

<script async="async" charset="utf-8" src="http://platform.twitter.com/widgets.js"></script>

Getting started with CoreOS

<script async="async" charset="utf-8" src="http://platform.twitter.com/widgets.js"></script> <script async="async" charset="utf-8" src="http://platform.twitter.com/widgets.js"></script> <script async="async" charset="utf-8" src="http://platform.twitter.com/widgets.js"></script> <script async="async" charset="utf-8" src="http://platform.twitter.com/widgets.js"></script> <script async="async" charset="utf-8" src="http://platform.twitter.com/widgets.js"></script> <script async="async" charset="utf-8" src="http://platform.twitter.com/widgets.js"></script>

We feature user conversations throughout the week, so tweet, blog, or email us!

Cover Photo by Shenghung Lin // CC BY NC

by Superuser at August 14, 2015 04:52 PM

RDO

Dims talks about the Oslo project

This is the second in what I hope is a long-running series of interviews with the various OpenStack PTLs (Project Technical Leads), in an effort to better understand what the various projects do, what's new in the Kilo release, and what we can expect in Liberty, and beyond.

You can listen to the recording --> here <--, and the transcript is below.

Rich: Hi, this is Rich Bowen. I am the OpenStack Community Liaison at Red Hat, and continuing my series on Project Technical Leads (PTLs) at OpenStack, I'm talking with Davanum Srinivas, who I've known for a few years outside of the OpenStack context, and he is the PTL for the Oslo project.

Oslo is the OpenStack Commons Library.

Thanks for speaking with me Davanum.

Dims: Thanks, Rich. You can call me Dims. You know me by Dims.

R: Yeah, I know. laughs

R: Give us a little bit of background. How long has the Oslo project been around?

D: We were doing things differently - we have a really old history, though. Some of the initial effort was started back in release B.

R: Oh, that long ago.

D: Yeah. So, what we were doing ... why did Oslo come about? Oslo came about because way back when Nova started, we started splitting code from Nova into separate projects. But these projects were sharing code, so we were trying to figure out the best way to synchronize code between these sibling or child projects. So we ended up with a single repository of source code, called Oslo Incubator, where you would have the master copy, and everybody would sync from there, but what was happening was, everybody had their own sync schedule. Some people were contributing patches back, and it was becoming hard to maintain those patches. We decided we had to change the method the team worked. And we started releasing libraries, for specific purposes. What you saw in Kilo was a big bang explosion of a huge number of libraries from the Oslo team. Most of it was code in Oslo Incubator. We just had to cut the modules in a proper shape, sequence, with an API, with a correct set of dependencies, and that's what we ended up releasing, one by one. Other projects started using things like oslo.config, oslo.messaging, oslo.db, oslo.log, and all these different libraries.

So that's where we are today.

R: What is it that you'll be doing in coming releases? Is it just the effort of identifying duplication, or are you actively developing new libraries.

D: Yes, we are. In Liberty, we have 5 new libraries coming up. Three of them start with 'oslo.' - like oslo.cache, oslo.reports, oslo.service. The other two do not have 'oslo' in their names. One is called Automaton, the other is called Futurist.

Automaton is a library for building state machines and things like that. Futurist is picking up some of the work that is done in upstream futures and things like that and making it available to all of the projects in the OpenStack ecosystem.

So these two projects can be used outside of Oslo, and outside of OpenStack by other people. That's why they don't have the Oslo name in them.

R: Do you see a lot of projects outside of OpenStack using these?

D: We hope so. For example, there is a project called Debt Collector, which we think fits well with how you deprecate code and what primitives we can provide to make it easy to mark code as deprecated. A lot of people who work on Oslo also work in the overall Python ecosystem, so the hope is that if we design the libraries in such a way that they're reusable, other people will pick some of our stuff up. But that's a stretch goal. The real goal is to make sure that these libraries work well with the OpenStack projects.

And the other thing about these libraries is that they don't drag in the Oslo baggage. For example, if you take oslo.db or oslo.messaging, they pull in a lot of other Oslo libraries, and these little libraries are designed so that they don't drag in other Oslo libraries. So that's the other good thing about these.

The way the Oslo project has been for the last few cycles has been that we are doing a lot of experiments in Oslo, which have been rolled out to other projects in the OpenStack ecosystem. Oslo is slightly different from other projects in the sense that people don't work on it full time. So we have people work part time on it, but they focus mainly on other bigger projects. They come here when they need a feature or a fix, or things like that, and then they stay. We have a few cores who monitor reviews and bugs across all of the Oslo projects, but we also have people who specifically focus on individual little libraries, and they get core rights there.

People in the OpenStack ecosystem are experimenting with different structures, like for Neutron, they put everything into subrepos, and they experiment that way. And I think that what we are doing might be more useful to Nova, for example, and other projects, where they would like to keep a set of cores together, and also have subsystem maintainers and things like that.

Oslo is a good place to do this experimentation because the code base is not that huge, and the community is not that big as well. And the rate of churn, in terms of bugs and reviews, is not that high, as well. We are also experimenting with release versioning and things like that, and some of the things that you've seen recently driven by Doug, across the OpenStack ecosystem, we tested it here first, in terms of the versioning numbers, not having the Big Bang release, how do we do it, and things like that.

We lead the way.

The other big thing is, for example, Python 3.4 support. All the Oslo libraries have to be Python 3.4 compliant first before they can be used and the other projects can adopt them. So we end up being in the forefront trying to use libraries like websockify, or other libraries from the OpenStack ecosystem, which are not Python 3.4 compliant, and we work with them to get them compliant, and then use it in Oslo libraries, and then we roll it in. So we play an important role, I think, in the OpenStack ecosystem.

R: As PTL, is this a full time thing for you, or not? What are your responsibilities as PTL?

D: One of the most time-consuming pieces of work is getting the releases out on a weekly basis. We try to make it predictable. At least, this cycle, we have started to make it predictable. Earlier, we heard complaints, we don't know when you guys are releasing, so we were not ready, and things like that. So we have a good process this time around, where, over the weekend, we run a bunch of tests, outside of the CI, as well as inside our CI system, to make sure that the master of all Oslo libraries works well with Nova, Neutron, Glance, and things like that, and come Monday morning, we decide which projects need releases, based on what has changed in them for the last week or so. We follow the release management guidelines, working with Doug and Thierry, to generate the releases during the day on Monday.

After Tuesday you don't have to worry about Oslo releases breaking the CI or your code. That has helped a lot of projects, especially Nova, for example. If they know there is a break late on Monday evening, they know who to ping, and we can start triaging the issue, and by Tuesday they are back on their feet.

That's the worst-case scenario. Best-case scenario, no problem happens, and we are good to go. But there's always one test case, or one scenario here or there. We always try to test beforehand, but like somebody said, it's a living thing - the ecosystem, the CI system, is like an emergent behavior, it's a living thing. It's hard.

by rbowen at August 14, 2015 03:12 PM

OpenStack Superuser

OpenStack continues to strengthen its commitment to interoperability

Following the OpenStack board of directors meeting in Austin on July 28, the DefCore committee met for a couple of days to continue evolving the OpenStack interoperability program tests.

Interoperability has been a major focus and priority for the OpenStack community and Foundation since its inception, and we've hit some big milestones this year. At the OpenStack Summit in Vancouver, we rolled out the revised "OpenStack Powered" program for products and services containing OpenStack software.

To participate in the program, public clouds, distributions, appliances, and other products or services must pass interoperability tests. We made a lot of progress with this program in 2015, and, as of April 1, all new “OpenStack Powered” products are tested against an interoperability standard. You can now see which products and services meet these standards in the OpenStack Marketplace. At the Vancouver Summit, 19 products from 16 vendors passed the interoperability standards. Today, that number of products is 23 with more in the pipeline.

If you're new to OpenStack and interoperability, here's some quick background. The work for establishing, updating, and maintaining the interoperability guideline is carried out by the DefCore Committee, a board-backed and community-driven group formed in November 2013. The committee-defined guideline specifies the components and capabilities that a product must have. Components are defined by OpenStack code that must be present in the product, and capabilities are checked by API tests. In this way, the Foundation can use the OpenStack brand to ensure that the work of developers is preserved, and that operators and users can enjoy the benefits of a consistent API to deploy applications against.

At the latest board meeting, the Board of Directors approved the 2015.07 testing guideline, featuring two major changes:

  • a reorganization of the capabilities to better reflect what behavior the tests are checking for,
  • adding Keystone as a required component with tested capabilities.

After the board meeting, the DefCore Committee held its Liberty Sprint at the IBM campus in Austin. It was an incredibly productive two days during which we planned additions to the new standard, worked on solutions to existing procedural and testing issues, and refined how to best apply the trademark program to protect the interests of OpenStack developers and users.

Working with several project technical leaders, the committee decided to add Neutron networking as a required component (scheduled to arrive in 2016), and addressed existing interoperability issues. To encourage community involvement in evaluating the state of current OpenStack deployments, the committee also decided to adopt RefStack as a collection tool to which operators and users can report interoperability test results. RefStack provides an API and client for reporting test results, and a UI for analyzing and comparing results.

The biggest change to come out of the meeting was the addition of Networking as a required component for the Compute program. Under direction from the Board, we are focusing on Neutron as the only approved networking component. We had the opportunity to have a working session with Neutron project team lead (PTL) Kyle Mestery about the current and future APIs, and plan for ways to encourage future development that will work consistently across the variety of possible Neutron configurations. Later, we did the initial scoring on proposed Neutron capabilities. You can take a look at and participate in the preliminary scoring here. Networking capabilities will be advisory in the upcoming 2016.01 standard, and will be required for all Compute and Platform products in the 2016.07 standard.

We also collaborated with the Nova and Glance PTLs, John Garbutt and Nikhil Komawar, about how to manage the continued evolution of the APIs, focusing on cross-project dependencies and challenges. John Dickinson and Matthew Treinish, the Swift and Tempest PTLs, joined us for a conversation about how to expand API test coverage to take advantage of in-tree project testing expertise while providing a consistent testing interface through Tempest. Over the next few months, we will work on adding Swift tests as an external plugin to Tempest, merging multiple test suites under one framework.

On the testing side, Catherine Diep, PTL of the RefStack project, gave us a demonstration of the UI her team has developed. RefStack is a community project that runs tests, then collects and analyzes the results. The RefStack and Infra teams are working on a deployment to https://refstack.openstack.org, but you can access the site now at http://refstack.net. It is now the required site to report DefCore test results for trademark approval.

RefStack isn't just for reporting DefCore results, it's also a valuable resource for reporting all API test results as a way to identify widely deployed capabilities and compare them between vendors. We strongly encourage all OpenStack cloud operators to run the test suite and provide us with valuable and anonymous feedback on what capabilities are actively deployed right now. Those results will help us determine what APIs are most important for building interoperable applications on top of OpenStack.

During our sprint we continued to refine the administrative aspects of DefCore. We're settling into a six-month schedule to match the development and summit cycle. In 2015 we introduced four guidelines, but in 2016 and beyond we will settle into a slower six-month cadence with a well-defined timeline.

In time for the OpenStack Summit Tokyo this October we expect to have a solid 2016.01 draft ready for community review. You can view the current status of the draft here, which will reflect the new components, capabilities, and the status of flagged tests. (A flagged test is one in which a capability is removed from testing because it is not widely supported or has some testing or upstream bug that needs to be fixed.)

For public clouds, we are proposing an addition to the program where continuous testing can automatically extend a trademark license in accordance with the licensing agreement. The goal is to encourage vendors to verify against the latest standard in a rapidly evolving ecosystem, where deployed code changes frequently to add upstream features or address bugs.

I'd like to thank the participants who came to the DefCore meeting, in particular:

  • DefCore co-chairs Rob Hirschfeld and Egle Sigler for their commitment to this interoperability program.
  • Catherine Diep for her efforts in test scoring and running the RefStack project.
  • Mark Voelker for his sharp attention to detail, in particular with preparing the addition of Neutron as a capability and with subtle procedural details.
  • Van Lindberg for organizing the capabilities and for his insightful analysis.
  • Also, many thanks to Vince Brunssen, Catherine Diep, and Todd Moore for being our generous hosts at the IBM offices.

If you’d like to know more about the “OpenStack Powered” program and the interoperability efforts of the OpenStack Foundation, you can contact me directly at chris@openstack.org.

You can learn about the OpenStack Foundation Interoperability program at https://openstack.org/interop, and can participate in the DefCore committee by signing up for our mailing list at http://lists.openstack.org/cgi-bin/mailman/listinfo/defcore-committee, or find us on IRC in the #openstack-defcore room.

Cover Photo by Carsten Ullrich // CC BY NC

by Chris Hoge at August 14, 2015 02:10 PM

IBM OpenTech Team

Configuring Keystone with IBM’s Bluepages LDAP

In this post I’ll be configuring Keystone to have an LDAP identity backend. In this case, I’ll be using Bluepages, which is IBM’s internal LDAP. The majority of the steps should be the same for most organizations, a few LDAP specific configuration values will have to change. The post is divided up into four logical portions: 1) Setting up Keystone for LDAP, 2) Admin operations on LDAP, 3) Authenticating as an LDAP user, and 4) Logging in with Horizon.

Setting up Keystone for LDAP

To start, I launch DevStack to install the latest master version of Keystone and other OpenStack services. I do this simply because DevStack sets up the service accounts, endpoints and services for me. Though I’m using the master branch of OpenStack (and DevStack), this guide should work if you have a Kilo or Juno version of OpenStack, too. I should also mention that I am using OpenStackClient 1.6.0.

My local.conf config file for DevStack is pretty simple, just the essential services and some passwords:

RECLONE=yes
ENABLED_SERVICES=key,g-api,g-reg,n-api,n-crt,n-obj,n-cpu,n-net,n-cond,cinder,c-sch,c-api,c-vol,n-sch,n-cauth,horizon,mysql,rabbit
SERVICE_TOKEN=openstack
ADMIN_PASSWORD=openstack
MYSQL_PASSWORD=openstack
RABBIT_PASSWORD=openstack
SERVICE_PASSWORD=openstack
LOGFILE=/opt/stack/logs/stack.sh.log

Once DevStack completes, we have to install a few extra python libraries to use the LDAP functionality of Keystone; these are not installed by default:

$ sudo pip install python-ldap
$ sudo pip install ldappool

Now it’s time to configure our environment variables to work with version 3 of the Identity API and create LDAP specific entries:

$ env | grep OS
OS_IDENTITY_API_VERSION=3
OS_PASSWORD=openstack
OS_AUTH_URL=http://172.16.240.135:5000/v3
OS_USERNAME=admin
OS_PROJECT_NAME=admin

A brief interlude for background on the architecture…

Service accounts (like the admin account, or the nova and glance accounts) must exist for services to authenticate via keystonemiddleware, while User accounts will be stored in an LDAP. In most enterprise environments, adding Service accounts to the corporate directory is probably not advised. To resolve this issue, the recommended approach is to store Service accounts in the Default domain and create another domain for User accounts, creating a logical division between the two identity sources. Furthermore, each domain can have its own identity source and be backed by either SQL or LDAP. In this post, we will be storing the Service accounts in SQL and the User accounts in LDAP. I've tried to display this in the image below:

[Image: Service accounts in the SQL-backed Default domain; User accounts in the LDAP-backed domain]

As a reminder, the Identity backend handles Users and Groups, the Resource backend handles Domains and Projects, and lastly, the Assignment backend handles Roles.

And we’re back…

By default, DevStack creates Service accounts in the Default domain (backed by SQL). We simply need to create a new domain for our User accounts (backed by LDAP). In the block below we’ll create domain ibm and a project ibmcloud within the domain.

$ openstack domain create ibm
+---------+----------------------------------+
| Field   | Value                            |
+---------+----------------------------------+
| enabled | True                             |
| id      | 421370cd78234413bbeb5d7ce1c73077 |
| name    | ibm                              |
+---------+----------------------------------+

$ openstack project create ibmcloud --domain ibm
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description |                                  |
| domain_id   | 421370cd78234413bbeb5d7ce1c73077 |
| enabled     | True                             |
| id          | 95ae2a9c259b4012b8b7e8ad7dc9a939 |
| name        | ibmcloud                         |
| parent_id   | None                             |
+-------------+----------------------------------+

Next up, we need to update keystone.conf, to enable domain specific identity drivers:

[identity]
domain_specific_drivers_enabled = True
domain_config_dir = /etc/keystone/domains

The domain_config_dir specifies where to house configuration files that are specific to a domain. They must be named in the following manner: keystone.DOMAIN_NAME.conf. Create /etc/keystone/domains/keystone.ibm.conf and populate it with a whole bunch of LDAP-specific values (seen below). If you are having trouble finding these values for your LDAP, use tools such as ldapsearch or jxplorer (a quick python-ldap sanity check is also sketched after the config below):

[identity]
driver = keystone.identity.backends.ldap.Identity

[ldap]
url = ldap://bluepages.ibm.com
suffix = "ou=bluepages,o=ibm.com"
query_scope = sub

user_tree_dn = "ou=bluepages,o=ibm.com"
user_objectclass = ibmPerson
user_id_attribute = uid
user_name_attribute = mail
user_mail_attribute = mail
user_pass_attribute = userPassword
user_enabled_attribute = enabled

group_tree_dn = "ou=memberlist,ou=ibmgroups,o=ibm.com"
group_objectclass = groupOfUniqueNames
group_id_attribute = cn
group_name_attribute = cn
group_member_attribute = uniquemember
group_desc_attribute = description

user_allow_create = false
user_allow_update = false
user_allow_delete = false
project_allow_create = false
project_allow_update = false
project_allow_delete = false
role_allow_create = false
role_allow_update = false
role_allow_delete = false
group_allow_create = false
group_allow_update = false
group_allow_delete = false
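
Since python-ldap was installed earlier, you can also sanity-check these values with a short script before restarting Keystone. This is only a minimal sketch: it assumes the directory allows anonymous binds and reuses the attribute names from the config above, so adjust the bind and filter for your own LDAP.

# Minimal python-ldap check of the values above (illustrative; assumes
# anonymous binds are allowed; adjust for your directory).
import ldap

conn = ldap.initialize('ldap://bluepages.ibm.com')
conn.simple_bind_s()  # anonymous bind

# Search user_tree_dn with query_scope = sub, filtering on user_objectclass.
results = conn.search_s('ou=bluepages,o=ibm.com',
                        ldap.SCOPE_SUBTREE,
                        '(&(objectClass=ibmPerson)(mail=stevemar@ca.ibm.com))',
                        ['uid', 'mail'])
for dn, attrs in results:
    print(dn, attrs)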

The last step is to restart Keystone

$ sudo service apache2 restart

Admin operations on LDAP

With the same credentials we used previously, let's run a few administrative tasks to ensure the configuration is working: 1) finding a user, 2) finding groups the user is a member of, 3) finding a specific group, 4) finding all users of a group, and 5) assigning a role to a group:

# 1. Find a user
$ openstack user show stevemar@ca.ibm.com --domain ibm
+-----------+------------------------------------------------------------------+
| Field     | Value                                                            |
+-----------+------------------------------------------------------------------+
| domain_id | c3c24e60d57e4461ad64b372c14128b7                                 |
| email     | stevemar@ca.ibm.com                                              |
| id        | c29cac8f7003e6de36a47b85f306f137bea8026e3d55c35aed27dfbf09c8fb28 |
| name      | stevemar@ca.ibm.com                                              |
+-----------+------------------------------------------------------------------+

# 2. Find groups the user is a member of
$ openstack group list --user stevemar@ca.ibm.com --user-domain ibm
+------------------------------------------------------------------+--------------------------------------+
| ID                                                               | Name                                 |
+------------------------------------------------------------------+--------------------------------------+
| 178e5df37393dd6695c4d93382c3fe46a048f60b21b393373ff8360a191b74bb | SWG_Canada                           |
| 5f620ab5abe2f0b14e3112357dd81069a03ca3ac637e7416b7d562152f73e938 | Toronto_Lab_VPN                      |
| ed9f9cb5dbefd6a6e5808ca97921734da3b90b5d160e0a7308c199680c595a96 | HiPODS-OpenCloud                     |

# 3. Find a specific group
$ openstack group show HiPODS-OpenCloud --domain ibm
+-----------+------------------------------------------------------------------+
| Field     | Value                                                            |
+-----------+------------------------------------------------------------------+
| domain_id | c3c24e60d57e4461ad64b372c14128b7                                 |
| id        | ed9f9cb5dbefd6a6e5808ca97921734da3b90b5d160e0a7308c199680c595a96 |
| name      | HiPODS-OpenCloud                                                 |
+-----------+------------------------------------------------------------------+

# 4. Find all the users of a group
$ openstack user list --group ed9f9cb5dbefd6a6e5808ca97921734da3b90b5d160e0a7308c199680c595a96
+------------------------------------------------------------------+--------------------------------+
| ID                                                               | Name                           |
+------------------------------------------------------------------+--------------------------------+
| c29cac8f7003e6de36a47b85f306f137bea8026e3d55c35aed27dfbf09c8fb28 | stevemar@ca.ibm.com            |
| f3ca1de06fbe100e71577aad68ccd588f9d792686ae75812becfe43d5b4aa09c | topol@us.ibm.com               |

# Note, for the previous command I've submitted a patch to reference groups by domain name.
# The following will work in the next release of openstackclient:
# $ openstack user list --group "HiPODS-OpenCloud" --domain ibm

# 5. Assign a role of 'member' to users from the HiPODS group, to access the project 'ibmcloud'
$ openstack role add member --group "HiPODS-OpenCloud" --group-domain ibm --project ibmcloud --project-domain ibm

Note: if these list operations are not properly filtered, an exception will likely be raised, because Keystone will attempt to list ALL users or ALL groups and the LDAP connection will simply time out.
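
For example, against a large directory, listing every user in the domain will likely hang and then raise, while a group-scoped list (as used above) returns promptly:

# Unfiltered within the domain: Keystone tries to enumerate every user in the tree
$ openstack user list --domain ibm

# Filtered: scoped to a single group, returns promptly
$ openstack user list --group ed9f9cb5dbefd6a6e5808ca97921734da3b90b5d160e0a7308c199680c595a96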

Authenticating as an LDAP user

Now to double-check that everything works from a user's perspective. Before we issue commands, open a new terminal and change your environment variables. The main difference in these new values (aside from the obvious change in username and password) is the addition of OS_USER_DOMAIN_NAME and OS_PROJECT_DOMAIN_NAME, which are set to ibm:

$ env | grep OS
OS_IDENTITY_API_VERSION=3
OS_PASSWORD=My5uper5ecretPa$$word
OS_AUTH_URL=http://172.16.240.134:5000/v3
OS_USERNAME=stevemar@ca.ibm.com
OS_USER_DOMAIN_NAME=ibm
OS_PROJECT_NAME=ibmcloud
OS_PROJECT_DOMAIN_NAME=ibm

Let’s try a few operations: 1) getting a token, 2) listing images, 3) listing flavors, and 4) creating a new VM:

# 1. Get a token:
$ openstack token issue
+------------+------------------------------------------------------------------+
| Field      | Value                                                            |
+------------+------------------------------------------------------------------+
| expires    | 2015-08-13T21:54:15.411376Z                                      |
| id         | 7c0df68c6b734735a22b2365f241b9b8                                 |
| project_id | e68bd223fb8045f6b7b2b7aa926433c8                                 |
| user_id    | c29cac8f7003e6de36a47b85f306f137bea8026e3d55c35aed27dfbf09c8fb28 |
+------------+------------------------------------------------------------------+

# 2. List images
$ openstack image list
+--------------------------------------+---------------------------------+
| ID                                   | Name                            |
+--------------------------------------+---------------------------------+
| 1f8a0809-5aa5-454a-916b-6077c00e20e1 | cirros-0.3.4-x86_64-uec         |
| 1d181a59-7ab7-4b5c-a351-118383d0b58b | cirros-0.3.4-x86_64-uec-ramdisk |
| b8aa23ee-1791-4a0e-b42a-cac6f7607b21 | cirros-0.3.4-x86_64-uec-kernel  |
+--------------------------------------+---------------------------------+

# 3. List flavors
$ openstack flavor list
+----+-----------+-------+------+-----------+-------+-----------+
| ID | Name      |   RAM | Disk | Ephemeral | VCPUs | Is Public |
+----+-----------+-------+------+-----------+-------+-----------+
| 1  | m1.tiny   |   512 |    1 |         0 |     1 | True      |
| 2  | m1.small  |  2048 |   20 |         0 |     1 | True      |
| 3  | m1.medium |  4096 |   40 |         0 |     2 | True      |
| 4  | m1.large  |  8192 |   80 |         0 |     4 | True      |
| 5  | m1.xlarge | 16384 |  160 |         0 |     8 | True      |
+----+-----------+-------+------+-----------+-------+-----------+

# 4. Launch a VM
$ openstack server create myVM --image cirros-0.3.4-x86_64-uec --flavor 1 
+--------------------------------------+------------------------------------------------------------------+
| Field                                | Value                                                            |
+--------------------------------------+------------------------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                                           |
| OS-EXT-AZ:availability_zone          | nova                                                             |
| OS-EXT-STS:power_state               | 0                                                                |
| OS-EXT-STS:task_state                | scheduling                                                       |
| OS-EXT-STS:vm_state                  | building                                                         |
| OS-SRV-USG:launched_at               | None                                                             |
| OS-SRV-USG:terminated_at             | None                                                             |
| accessIPv4                           |                                                                  |
| accessIPv6                           |                                                                  |
| addresses                            |                                                                  |
| adminPass                            | PQDNiboGaNQ2                                                     |
| config_drive                         |                                                                  |
| created                              | 2015-08-14T07:35:36Z                                             |
| flavor                               | m1.tiny (1)                                                      |
| hostId                               |                                                                  |
| id                                   | fe2b4ab0-80cf-48fe-ad5b-3e8b9624edd1                             |
| image                                | cirros-0.3.4-x86_64-uec (1f8a0809-5aa5-454a-916b-6077c00e20e1)   |
| key_name                             | None                                                             |
| name                                 | myVM                                                             |
| os-extended-volumes:volumes_attached | []                                                               |
| progress                             | 0                                                                |
| project_id                           | e68bd223fb8045f6b7b2b7aa926433c8                                 |
| properties                           |                                                                  |
| security_groups                      | [{u'name': u'default'}]                                          |
| status                               | BUILD                                                            |
| updated                              | 2015-08-14T07:35:36Z                                             |
| user_id                              | c29cac8f7003e6de36a47b85f306f137bea8026e3d55c35aed27dfbf09c8fb28 |
+--------------------------------------+------------------------------------------------------------------+

Logging in with Horizon

For the last portion of this post, I’ll briefly show the changes necessary to Horizon to allow User accounts to authenticate.

Make the following changes to horizon/openstack_dashboard/local/local_settings.py:

OPENSTACK_API_VERSIONS = {
    "data-processing": 1.1,
    "identity": 3,
    "volume": 2,
}
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_KEYSTONE_URL="http://172.16.240.134:5000/v3"

Restart Horizon one last time…

$ sudo service apache2 restart

When refreshed, the landing page should now include a new field to log in with a domain of your choice and your LDAP credentials. Once logged in, your email should appear on the top right – Ta Da!

Landing page with new domain name option


Once logged in, your IBM email should identify the user at the top right

References:
1. Keystone Docs
2. Henry Nash’s Developer Works Article

The post Configuring Keystone with IBM’s Bluepages LDAP appeared first on IBM OpenTech.

by Steve Martinelli at August 14, 2015 06:10 AM

August 13, 2015

Assaf Muller

Neutron in-tree integration tests

It's time for OpenStack projects to take ownership of their quality. Introducing in-tree, whitebox, multinode simulated integration testing. A lot of people put in a lot of work over the last few months to make it happen.

http://docs.openstack.org/developer/neutron/devref/fullstack_testing.html

We plan on adding integration tests for many of the more evolved Neutron features over the coming months.


by assafmuller at August 13, 2015 08:47 PM

Kenneth Hui

Understanding How VMware vSphere Integrates With OpenStack


One of the more interesting developments that we are seeing within the Platform9 customer base and among the users we speak with regularly is the growing desire to leverage OpenStack as a “manager of managers” for different types of technologies such as server virtualization and containers. This lines up with the messaging that the OpenStack Foundation has been putting forth regarding the emergence of OpenStack as an Integration Engine. At Platform9, we see the potential for leveraging OpenStack to not only integrate new technologies but to integrate legacy with next-generation, brownfield with greenfield, hypervisors with containers, etc.

In particular, we see growing interest in using OpenStack to automate the management of new and existing VMware vSphere infrastructures. In some cases, users are looking to quickly “upscale” their current vSphere environment to an elastic, self-service infrastructure; in other cases, users are looking for a tool to help bridge their legacy infrastructures with their new cloud-native infrastructures. In many cases, Platform9 customers choose us because they want to accomplish these goals without having to invest in a great deal of professional services and bear the burden of managing a complex platform.

In this respect, the choice Platform9 made to leverage OpenStack as the underlying technology we use to provide Cloud Management-as-a-Service was a no-brainer. VMware and others in the ecosystem have put a great deal of effort into contributing code to make vSphere a first-class hypervisor option in OpenStack. In addition to gaining the benefits of a solution that has been embraced by the OpenStack ecosystem, Platform9 managed OpenStack customers also have access to the OpenStack APIs, which are the industry standard for private clouds.

However, there is still confusion about how vSphere integrates with OpenStack, particularly in terms of what is gained and what is lost. As an update to previous blog posts I've written on the subject, I just published a Platform9 blog post that provides a look under the hood of how vSphere integrates with OpenStack compute and the implications this has for architecting and operating an OpenStack powered cloud, both with vSphere as the single hypervisor and in a mixed-mode environment. The way this integration works is exactly the same regardless of the OpenStack distribution being used to manage vSphere; this is true for Platform9, Red Hat Enterprise Linux OpenStack Platform (RHELOSP), Mirantis OpenStack, and even for VMware's vSphere Integrated OpenStack (VIO). In future posts I will discuss additional capabilities Platform9 has added which we plan to contribute to the OpenStack project. You can find the latest blog post here.

vSphere with Nova Arch



by kenhui at August 13, 2015 04:00 PM

eNovance Engineering Teams

Logging configuration in OpenContrail

We know that all software components and services generate log files. These log files are vital in troubleshooting and debugging problems. If the log files are not managed properly, it can be extremely difficult to get useful information out of them.

Although system administrators cannot control the generation of logs, they can achieve some level of log management by:

  • having log rotators to get rid of the old log files.
  • using syslog to catch alerts.
  • archiving logs etc.

OpenContrail has several components, many of which generate logs and store them in log files. OpenContrail also provides a mechanism to configure logging, so that system administrators and DevOps can define the logging parameters to suit their own requirements.

In this blog post we will look at logging support in the OpenContrail components and the logging configuration mechanisms they provide.

OpenContrail uses the Sandesh protocol, which provides the mechanism to exchange messages between the various OpenContrail components. It also provides the functionality to log those messages into log files. You can read more about Sandesh in this great article.

Logging can be configured by :

  • choosing the log file
  • selecting the log file size
  • defining custom formatters/loggers
  • using syslog etc.

OpenContrail has mainly Python components and C++ components.

Python components of OpenContrail are :

  • contrail API server
  • schema transformer
  • SVC monitor
  • discovery server
  • analytics Op server

C++ components of OpenContrail are :

  •  contrail vrouter
  •  contrail controller
  •  Query engine
  •  contrail analytics server
  •  contrail DNS

C++ components of OpenContrail use log4cplus for logging and python components use python logging.

OpenContrail versions

The configuration mechanisms defined in this post are supported by the master version of OpenContrail.

You need to cherry-pick the patches below if you are using the R2.2 or R2.1 version, as these patches have not been merged yet.

OpenContrail R2.2

https://review.opencontrail.org/#/c/11106/

OpenContrail R2.1

https://review.opencontrail.org/#/c/11116/

https://review.opencontrail.org/#/c/11105/

Logging in OpenContrail python modules

First we will talk about logging in python components of OpenContrail. OpenContrail supports logging configuration for python components in three ways:

  1. Use the default logging provided by OpenContrail.
  2. Define your own log configuration file based on the python logging
  3. Define new logging mechanism by implementing a new logger or using other logging libraries like oslo.log

The configuration files of python components support the below logging parameters:

  • log_file
  • log_level
  • log_local
  • logging_conf
  • logger_class

In order to define custom logging configuration, we need to use the ‘logging_conf’ and ‘logger_class’ parameters. When these two parameters are defined, the other ones are ignored.

1. Use the default logging provided by OpenContrail.

You don’t have to do anything here. If you are not particular about logging configuration, then this is good enough.

2. Define your own log configuration file based on the python logging

You can define your own log configuration file. Please refer to the logging file format for more information on how to define the log config file for python logging.

Define the ‘logger_class’ and ‘logging_conf’ configuration parameters in the OpenContrail python component configuration files.

logger_class = pysandesh.sandesh_logger.SandeshConfigLogger
logging_conf = PATH TO THE LOG CONFIG FILE

Eg.

contrail-api.conf

[DEFAULT]
...
logger_class = pysandesh.sandesh_logger.SandeshConfigLogger
logging_conf = /etc/contrail/contrail-api-logger.conf

Format of the log configuration file

As mentioned above, the python logging documentation has all the details about defining the log configuration file.

Log configuration file should have three main sections defined – [loggers],[handlers] and [formatters].

Below is a sample log configuration file format. This sample file can be used for all the OpenContrail python components. You can define one configuration file per module as well.

[loggers]
keys=root,contrail_api,contrail_svc_monitor,contrail_schema,contrail_discovery_handler,contrail_analytics_api

[handlers]
keys=root_handler,contrail_api_handler,contrail_syslog_handler,contrail_svc_handler,contrail_schema_handler,contrail_analytics_handler

[formatters]
keys=contrail_formatter,contrail_syslog_formatter,svc_formatter

[logger_root]
level=NOTSET
handlers=root_handler

[logger_contrail_api]
level=NOTSET
handlers=contrail_api_handler,contrail_syslog_handler
qualname=contrail-api
propagate=0

[logger_contrail_svc_monitor]
level=NOTSET
handlers=contrail_svc_handler,contrail_syslog_handler
qualname=contrail-svc-monitor
propagate=0

[logger_contrail_schema]
level=NOTSET
handlers=contrail_schema_handler,contrail_syslog_handler
qualname=contrail-schema
propagate=0

[logger_contrail_discovery]
level=NOTSET
handlers=contrail_discovery_handler,contrail_syslog_handler
qualname=contrail-discovery
propagate=1

[handler_root_handler]
class=StreamHandler
level=NOTSET
formatter=contrail_formatter
args=()

[handler_contrail_api_handler]
class=handlers.RotatingFileHandler
args=('/var/log/contrail/contrail-api.log', 'a', 3000000, 10)
formatter=contrail_formatter

[handler_contrail_svc_handler]
class=handlers.RotatingFileHandler
args=('/var/log/contrail/svc-monitor.log', 'a', 3000000, 8)
formatter=svc_formatter

[handler_contrail_schema_handler]
class=handlers.RotatingFileHandler
args=('/var/log/contrail/contrail-schema.log', 'a', 2000000, 7)
formatter=contrail_formatter

[handler_contrail_discovery_handler]
class=handlers.RotatingFileHandler
args=('/var/log/contrail/contrail-discovery-conf.log', 'a', 3000000, 0)
formatter=contrail_formatter

[handler_contrail_analytics_handler]
class=handlers.RotatingFileHandler
args=('/var/log/contrail/contrail-analytics.log', 'a', 3000000, 0)
formatter=contrail_formatter


[handler_contrail_syslog_handler]
class=handlers.SysLogHandler
level=ERROR
formatter=contrail_syslog_formatter
args=('/dev/log', handlers.SysLogHandler.LOG_USER)

[formatter_contrail_formatter]
format= %(asctime)s [%(name)s]: %(message)s
datefmt=%m/%d/%Y %I:%M:%S %p
class=logging.Formatter

[formatter_contrail_syslog_formatter]
format=contrail : %(asctime)s [%(name)s]: %(message)s
datefmt=%m/%d/%Y %I:%M:%S %p
class=logging.Formatter

[formatter_svc_formatter]
format=SVC MON %(asctime)s [%(name)s]: %(message)s
datefmt=%m/%d/%Y %I:%M:%S %p
class=logging.Formatter

As you can see above, a logger is defined for each of the OpenContrail components.

[logger_contrail_api]
level=NOTSET
handlers=contrail_api_handler,contrail_syslog_handler
qualname=contrail-api
propagate=0

‘qualname’ should match the OpenContrail component name; otherwise the logger defined for that component will not take effect.

Below are the ‘qualname’ values for each of the OpenContrail components.

Component name           qualname
-----------------------  ----------------------
Contrail API server      contrail-api
SVC Monitor              contrail-svc-monitor
Schema Transformer       contrail-schema
Contrail Discovery       contrail-discovery
Contrail Analytics API   contrail-analytics-api

Defining your own logging configuration file gives you the flexibility to choose the logging parameters to match your requirements.
You can choose any of the logging handlers supported by python logging, such as RotatingFileHandler, TimedRotatingFileHandler, WatchedFileHandler, MemoryHandler, etc.

You can also choose a simple handler like FileHandler and use logrotate or another external log rotator to rotate the log files, as sketched below.
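
For example, with a plain FileHandler you could let logrotate rotate the contrail-api log with a snippet along these lines. This is only a sketch, assuming the default /var/log/contrail/contrail-api.log path; copytruncate is needed because the running process keeps the file open:

/var/log/contrail/contrail-api.log {
        daily
        rotate 7
        compress
        missingok
        notifempty
        copytruncate
}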

3. Define your own custom logging mechanism or use existing logging libraries.

If you're someone who likes to define your own logging mechanism, this can also be done.

In order to do this you need to first:

  • write your custom logging class
  • define the custom logging class in the ‘logger_class’ configuration parameter.

Make sure that your custom python class is loadable and that it is derived from 'sandesh_base_logger.SandeshBaseLogger'. A skeleton is sketched below.
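
This skeleton is illustrative only: the constructor signature and base-class behavior are assumptions here, so check sandesh_base_logger.py in your OpenContrail tree (or the Contrail Oslo Logger referenced below) for the exact interface.

# Illustrative skeleton only; the constructor signature is an assumption.
# Consult sandesh_base_logger.py or the Contrail Oslo Logger for the real API.
import logging

from pysandesh import sandesh_base_logger


class MyCustomLogger(sandesh_base_logger.SandeshBaseLogger):

    def __init__(self, generator, logger_config_file=None):
        super(MyCustomLogger, self).__init__(generator)
        # Wire up your own handlers/formatters here, or hand off to another
        # logging library (as the Contrail Oslo Logger does with oslo.log).
        self._logger = logging.getLogger(generator)
        self._logger.addHandler(logging.StreamHandler())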

Contrail Oslo Logger

You can find one custom logger – Contrail Oslo Logger here. Contrail Oslo logger uses the oslo.log and oslo.config modules of OpenStack.

You can define the log configuration options supported by oslo.log in a configuration file and provide the name of the file in the ‘logging_conf’ configuration parameter.

You can find the logging options supported by oslo.log here and here.

If you would like to implement your own logging mechanism, please see the code of the Contrail Oslo Logger as a reference.

Logging in OpenContrail C++ components

OpenContrail C++ components use log4cplus  for logging.

OpenContrail supports the below logging parameters in the component configuration files :

  • log_disable : Disable logging
  • log_file    : Name of the log file
  • log_property_file : Path of the log property file.
  • log_files_count : Maximum log file roll over index
  • log_file_size  : Maximum size of the log file
  • log_level  : Severity level for local logging of sandesh messages

Similar to the python logging configuration file, you can define a log configuration file for the C++ components and give the path of the configuration file in the ‘log_property_file’ configuration parameter. When ‘log_property_file’ is defined, other logging parameters are ignored by the OpenContrail C++ components. log4cplus uses the term property file for the log configuration file.

The log property file should be defined in the log4cplus property file format; you can refer to this link to understand the format.

Define ‘log_property_file’ in the DEFAULT section of the C++ component configuration files to use the log property file defined by you.

Eg. contrail-control.conf

[DEFAULT]
log_property_file=/etc/contrail/control-log.properties

Sample log property file

log4cplus.rootLogger = DEBUG, logfile, syslog

log4cplus.appender.logfile = log4cplus::FileAppender
log4cplus.appender.logfile.File = /var/log/contrail/contrail-collector.log
log4cplus.appender.logfile.Append = true
log4cplus.appender.logfile.ImmediateFlush = true

log4cplus.appender.logfile.layout = log4cplus::PatternLayout

log4cplus.appender.logfile.layout.ConversionPattern = %D{%Y-%m-%d %a %H:%M:%S:%Q %Z} %h [Thread %t, Pid %i]: %m%n

log4cplus.appender.syslog = log4cplus::SysLogAppender
log4cplus.appender.syslog.Threshold=ERROR
log4cplus.appender.syslog.FACILITY=USER
log4cplus.appender.syslog.layout = log4cplus::PatternLayout
log4cplus.appender.syslog.layout.ConversionPattern = %D{%Y-%m-%d %a %H:%M:%S:%Q %Z} %h [Thread %t, Pid %i]: %m%n

You can refer to the Appenders supported by log4cplus here.
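
If you would rather have log4cplus handle rotation itself instead of using an external rotator, a RollingFileAppender can be swapped in for the FileAppender above; for example (the size and backup count here are just illustrative):

log4cplus.appender.logfile = log4cplus::RollingFileAppender
log4cplus.appender.logfile.File = /var/log/contrail/contrail-collector.log
log4cplus.appender.logfile.MaxFileSize = 5MB
log4cplus.appender.logfile.MaxBackupIndex = 10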

Conclusion

You've now hopefully seen how logging is supported in OpenContrail and how you can define your own custom logging configuration files. With this knowledge, system admins and DevOps should be able to manage the log files properly, helping them troubleshoot problems quickly and efficiently.

 

by Numan Siddique at August 13, 2015 11:24 AM

Alessandro Pilotti

How to easily deploy OpenStack Kilo with Puppet – Part 1

There are plenty of online resources about Puppet and OpenStack, but after a quick search I noticed that none of them actually provides what people are likely looking for: a simple manifest to deploy the latest and greatest OpenStack (Kilo at the time of this writing). This post is meant to answer this precise request, starting with the easiest scenario: an “all in one” deployment (AiO).

All in One OpenStack on Ubuntu Server 14.04 LTS

OpenStack has a very modular architecture, with a lot of individual projects dealing with different aspects of a cloud, for example Keystone (identity), Nova (compute), Neutron (networking), Cinder (block storage) and so on. All in one simply means that all those components are deployed on a single host.

A detailed description of the OpenStack architecture goes beyond the scope of this post, but you can find all the documentation you need to get you started on the OpenStack foundation’s site.

The OpenStack community also provides official OpenStack Puppet modules.

What's missing are samples showing how to bring all the pieces together. To complicate things for the neophyte, there are tons of other Puppet modules with similar aims available on Stackforge or the Puppet Forge, so it's very easy to get lost when looking for a simple quickstart!

To put things straight, this post is not planning to showcase yet another Puppet module, but it’s rather focused on making good use of the existing official ones with a simple manifest, targeting the Kilo release and one of the most popular Linux OS choices: Ubuntu Server 14.04 LTS.

Let’s get started. Use git to clone your copy of the manifest:

git clone https://github.com/cloudbase/openstack-puppet-samples
cd openstack-puppet-samples/kilo

Install all the dependencies (note that version 6 of the OpenStack Puppet modules corresponds to the Kilo release):

sudo apt-get install puppet -y
sudo puppet module install openstack/keystone --version ">=6.0.0 <7.0.0"
sudo puppet module install openstack/glance --version ">=6.0.0 <7.0.0"
sudo puppet module install openstack/cinder --version ">=6.0.0 <7.0.0"
sudo puppet module install openstack/nova --version ">=6.0.0 <7.0.0"
sudo puppet module install openstack/neutron --version ">=6.0.0 <7.0.0"
sudo puppet module install example42/network
sudo puppet module install saz/memcached

Now, just edit openstack-aio-ubuntu-single-nic.pp and replace the following variable values based on your environment, in particular $public_subnet_gateway and $public_subnet_allocation_pools if you want to have external network access from your OpenStack instances:

$interface = 'eth0'
$ext_bridge_interface = 'br-ex'
$dns_nameservers = ['8.8.8.8', '8.8.4.4']
# This is the network to assign to your instances
$private_subnet_cidr = '10.0.0.0/24'
# This must match a network to which the host is connected
$public_subnet_cidr = '192.168.209.0/24'
# The gateway must be in the range defined in $public_subnet_cidr
$public_subnet_gateway = '192.168.209.2'
# Must be a subset of the $public_subnet_cidr range
$public_subnet_allocation_pools = ['start=192.168.209.30,end=192.168.209.50']

Next, if you have a host with a single network adapter configured with DHCP, the manifest will get the IP address from eth0 and do all the work for you, assigning a static address based on the discovered networking information (you may want to exclude this IP from your DHCP lease range afterwards). Alternatively, just assign the following variables:

$local_ip = "your host ip"
$gateway = "your gateway"

Note: this manifest expects that virtualization support is available and enabled on your host and KVM will be used as the hypervisor option in Nova. Although not recommended, this can be changed by setting “libvirt_virt_type” to “qemu“.

The basic configuration is done, let’s get started with the deployment:

sudo puppet apply --verbose openstack-aio-ubuntu-single-nic.pp

Access your OpenStack deployment

Your OpenStack dashboard is now accessible at http://<openstack_host>/horizon.

You can log in using one of the predefined users created by the manifest: admin or demo. All passwords are set to Passw0rd (this can be changed in the manifest, of course).

Alternatively, using the shell just source one of the following files to access your admin or demo environments:

source /root/keystonerc_demo
source /root/keystonerc_admin

Create a keypair if you don’t have one already:

test -d ~/.ssh || mkdir ~/.ssh
nova keypair-add key1 > ~/.ssh/id_rsa_key1
chmod 600 ~/.ssh/id_rsa_key1

You can now boot an instance:

NETID=`neutron net-show private | awk '{if (NR == 5) {print $4}}'`
nova boot --flavor m1.tiny --image "cirros-0.3.4-x86_64" --key-name key1 --nic net-id=$NETID vm1
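
Note that the awk expression above depends on the exact layout of the table output. If your python-neutronclient supports the machine-readable -f/-c output options, a less fragile alternative should work:

# Less position-dependent, assuming your client supports -f/-c:
NETID=$(neutron net-show private -f value -c id)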

What’s next?

This is the first post in a series about Puppet and OpenStack. Expect to see more complex multi-node configurations and how to add Hyper-V compute nodes to work side by side with KVM!

The post How to easily deploy OpenStack Kilo with Puppet – Part 1 appeared first on Cloudbase Solutions.

by Alessandro Pilotti at August 13, 2015 11:00 AM

Tesora Corp

Catch the Exclusive Preview of the State of DBaaS Survey Results

There's no denying the popularity of Database as a Service in the public cloud. When it was launched in 2012, Amazon's NoSQL DBaaS offering, DynamoDB, quickly became the fastest growing service in the history of AWS. That remained true until this year, when they reported that their data warehousing service, Redshift, had eclipsed DynamoDB as their fastest growing service. And of course we can't forget the biggest DBaaS business on Amazon, RDS, which Forrester Research reported was in use by a whopping 45% of AWS customers.

The exciting thing is that some early survey data seems to indicate that, as well as Amazon has done in rolling out database services on AWS, the impact of DBaaS in OpenStack may be even greater with many users preferring to keep their sensitive data inside their own data centers while still delivering an on-demand experience to their development users that compares to what they would get on a public cloud.

To explore this trend, Tesora has teamed up with 451 Research to poll enterprises on their current and planned use of DBaaS technologies, including use cases, expected benefits, plans for implementation, and challenges for adoption. The full report will be published in the fall, but we'll be sharing some of the early results at the upcoming OpenStack Trove Day in San Jose. If you're anywhere near Silicon Valley on August 25th (or can get there), you should definitely join us to learn more!

Get a sneak peek from our CTO, Amrith Kumar, of what to expect at OpenStack Trove Day!

The event is free and you can register here.

 

The post Catch the Exclusive Preview of the State of DBaaS Survey Results appeared first on Tesora.

by Ken Rugg at August 13, 2015 07:30 AM

Rackspace Developer Blog

Install OpenStack from source Part 5

This is the fifth installment in a series on installing OpenStack from source. The four previous articles can be found here:

We installed the Identity service (keystone), Image service (glance), Networking service (neutron), and the Compute service (nova) onto the controller node, and then we turned our attention to the network node to install the neutron agents supporting network layers two and three. Now, we turn our attention to the compute node to install both neutron and nova.

We are close to finishing our install of OpenStack - this article completes the basic install. By the end of it we will be able to create networks and start VMs, leaving only cinder and horizon for the last article.

Install the following packages, which are prerequisites for some of the pip packages installed next.

apt-get install -y git ipset keepalived conntrack conntrackd arping openvswitch-switch dnsmasq-utils dnsmasq libxml2-dev libxslt1-dev libmysqlclient-dev libffi-dev libssl-dev
apt-get install -y libvirt-bin qemu-kvm libpq-dev python-libvirt genisoimage kpartx parted vlan multipath-tools sg3-utils libguestfs0 python-guestfs python-dev sysfsutils pkg-config
pip install pbr

Set some shell variables that we use:

cat >> .bashrc << EOF
MY_IP=10.0.1.6
MY_PRIVATE_IP=10.0.1.4
MY_PUBLIC_IP=10.0.0.4
LOCAL_DATA_IP=10.0.2.6
EOF

Note: if your networking environment is different from this one, the IPs used above may have to be adjusted. The variables MY_PRIVATE_IP and MY_PUBLIC_IP refer to the interface with the API access on the controller node. The variable MY_IP is the interface on the compute node that is connected to the API interface on the controller node. The variable LOCAL_DATA_IP is the IP of the interface on the compute node over which tenant network traffic travels; it is connected to the corresponding interface on the network node. Refer to the graphic in the first article in the series.

Run the following command to set the variables in the current shell session:

source .bashrc

Like we have done on the controller and network nodes, we need to create users under which the associated services run. The following script creates these users, along with the directories that they need, and provides the configuration file to rotate the log files.

for SERVICE in neutron nova
do

useradd --home-dir "/var/lib/$SERVICE" \
        --create-home \
        --system \
        --shell /bin/false \
        $SERVICE
if [ "$SERVICE" == 'nova' ]
  then
    usermod -G libvirtd $SERVICE
fi

mkdir -p /var/log/$SERVICE
mkdir -p /var/lib/$SERVICE
mkdir -p /etc/$SERVICE

chown -R $SERVICE:$SERVICE /var/log/$SERVICE
chown -R $SERVICE:$SERVICE /var/lib/$SERVICE
chown $SERVICE:$SERVICE /etc/$SERVICE

if [ "$SERVICE" == 'neutron' ]
  then
    mkdir -p /etc/neutron/plugins/ml2
    mkdir -p /etc/neutron/rootwrap.d
fi

cat >> /etc/logrotate.d/$SERVICE << EOF
/var/log/$SERVICE/*.log {
        daily
        missingok
        rotate 7
        compress
        notifempty
        nocreate
}
EOF

done

Set the Ubuntu defaults file so that the upstart scripts pass the ML2 plugin config file when starting the neutron processes:

cat > /etc/default/neutron << EOF
--config-file=/etc/neutron/plugins/ml2/ml2_conf.ini
EOF

Create some additional needed directories for nova and set the proper permissions:

mkdir /var/lib/nova/keys
mkdir /var/lib/nova/locks
mkdir /var/lib/nova/instances
chown -R nova:nova /var/lib/nova

Now clone the neutron repo:

git clone https://github.com/openstack/neutron.git -b stable/kilo

Copy the downloaded configuration files from the cloned repo:

cp neutron/etc/* /etc/neutron/
cp -R neutron/etc/neutron/plugins/ml2/* /etc/neutron/plugins/ml2
cp -R neutron/etc/neutron/rootwrap.d/* /etc/neutron/rootwrap.d

Install the neutron Python scripts:

cd neutron
python setup.py install
cd ~

Give the neutron user sudo access, limited by rootwrap, to the commands that neutron needs root privileges to execute:

cat > /etc/sudoers.d/neutron_sudoers << EOF
Defaults:neutron !requiretty

neutron ALL = (root) NOPASSWD: /usr/local/bin/neutron-rootwrap  /etc/neutron/rootwrap.conf *
EOF
chmod 440 /etc/sudoers.d/neutron_sudoers

Now build the neutron.conf file. As we did on the controller and network nodes, we are not going to use the neutron.conf file that came with the cloned neutron repo. Instead, it is built from scratch (this one is much shorter than the ones on the controller and network nodes):

rm /etc/neutron/neutron.conf
cat > /etc/neutron/neutron.conf << EOF
[DEFAULT]
verbose = True
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True

[oslo_messaging_rabbit]
rabbit_host = $MY_PRIVATE_IP

[oslo_concurrency]
lock_path = /var/lock/neutron

[agent]
root_helper=sudo /usr/local/bin/neutron-rootwrap /etc/neutron/rootwrap.conf
EOF

Like we have done on the previous nodes, we configure the neutron ML2 plugin agent to use GRE tunnels for project network isolation. The ml2 plugin configuration file:

rm /etc/neutron/plugins/ml2/ml2_conf.ini
cat > /etc/neutron/plugins/ml2/ml2_conf.ini << EOF
[ml2]
type_drivers = gre
tenant_network_types = gre
mechanism_drivers = openvswitch

[ml2_type_gre]
tunnel_id_ranges = 1:1000

[securitygroup]
enable_security_group = True
enable_ipset = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

[ovs]
local_ip = $LOCAL_DATA_IP
enable_tunneling = True

[agent]
tunnel_types = gre
EOF

chown neutron:neutron /etc/neutron/*.{conf,json,ini}
chown -R neutron:neutron /etc/neutron/plugins

Lastly, for the compute node we create the neutron upstart script, for only the Open vSwitch agent:

cat > /etc/init/neutron-openvswitch.conf << EOF
# vim:set ft=upstart ts=2 et:

#start on runlevel [2345]
#stop on runlevel [!2345]

script
  [ -r /etc/default/neutron-server ] && . /etc/default/neutron-server
  [ -r "\$NEUTRON_PLUGIN_CONFIG" ] && CONF_ARG="--config-file \$NEUTRON_PLUGIN_CONFIG"
exec start-stop-daemon --start --chuid neutron --exec /usr/local/bin/neutron-openvswitch-agent -- --config-file=/etc/neutron/neutron.conf --config-file=/etc/neutron/plugins/ml2/ml2_conf.ini --log-file=/var/log/neutron/openvswitch-agent.log \$CONF_ARG
end script
EOF

With the agents configured and the startup scripts in place, let's start everything up:

start neutron-openvswitch

Wait 20 to 30 seconds and verify that everything started:

ps aux|grep neutron

If you don't see a line of output for the neutron-openvswitch-agent process, use the following command to start the process in the foreground. Its output provides information about why the neutron-openvswitch-agent is not running.

sudo -u neutron neutron-openvswitch-agent --config-file=/etc/neutron/neutron.conf --config-file=/etc/neutron/plugins/ml2/ml2_conf.ini --log-file=/var/log/neutron/openvswitch-agent.log

Next we turn our attention to installing nova on the compute node. We have already created the nova user and the major required directories. So clone the nova repo:

git clone https://github.com/openstack/nova.git -b stable/kilo

Copy the downloaded (cloned) configuration files to their proper location in the etc directory:

cd nova
cp -r etc/nova/* /etc/nova/

And install the nova Python scripts:

python setup.py install
cd ~

Give the nova user sudo access, limited by rootwrap, to the commands that nova needs root privileges to execute:

cat > /etc/sudoers.d/nova_sudoers << EOF
Defaults:nova !requiretty

nova ALL = (root) NOPASSWD: /usr/local/bin/nova-rootwrap  /etc/nova/rootwrap.conf *
EOF

chmod 440 /etc/sudoers.d/nova_sudoers

Now create the nova.conf file. Use the OpenStack Config Reference Guide to familiarize yourself with each of the parameters being set:

cat > /etc/nova/nova.conf << EOF
[DEFAULT]
#verbose = True
dhcpbridge_flagfile = /etc/nova/nova.conf
dhcpbridge = /usr/local/bin/nova-dhcpbridge
logdir = /var/log/nova
state_path = /var/lib/nova
lock_path = /var/lock/nova
force_dhcp_release = True
iscsi_helper = tgtadm
libvirt_use_virtio_for_bridges = True
connection_type = libvirt
root_helper = sudo /usr/local/bin/nova-rootwrap /etc/nova/rootwrap.conf
ec2_private_dns_show_ip = True
api_paste_config = /etc/nova/api-paste.ini
volumes_path = /var/lib/nova/volumes
enabled_apis = ec2,osapi_compute,metadata
network_api_class=nova.network.neutronv2.api.API
firewall_driver = nova.virt.firewall.NoopFirewallDriver
security_group_api = neutron
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
auth_strategy = keystone
force_config_drive = always
my_ip = $MY_PRIVATE_IP
fixed_ip_disassociate_timeout = 30
enable_instance_password = False
service_neutron_metadata_proxy = True
neutron_metadata_proxy_shared_secret = openstack
novncproxy_base_url = http://$MY_PUBLIC_IP:6080/vnc_auto.html
vncserver_proxyclient_address = $MY_PRIVATE_IP
vncserver_listen  = 0.0.0.0

[glance]
host = 10.0.1.4

[keystone_authtoken]
auth_uri = http://$MY_PRIVATE_IP:5000
auth_host = $MY_PRIVATE_IP
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = nova

[neutron]
url=http://10.0.1.4:9696
admin_username = neutron
admin_password = neutron
admin_tenant_name = service
admin_auth_url = http://10.0.1.4:5000/v2.0
auth_strategy = keystone

[oslo_concurrency]
lock_path = /var/lock/nova

[oslo_messaging_rabbit]
rabbit_host = 10.0.1.4

EOF

Since this is the compute node, we need to configure the nova-compute.conf file and set the proper permissions on it:

cat > /etc/nova/nova-compute.conf << EOF
[DEFAULT]
compute_driver=libvirt.LibvirtDriver
[libvirt]
virt_type=kvm
EOF

chown nova:nova /etc/nova/*.{conf,json,ini}

On the compute node, load the nbd module. The start script does this too, but we do it here to ensure that there are no problems loading the module (a quick check follows below):

modprobe nbd
depmod
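
You can confirm the module loaded before moving on; if it is present, lsmod prints a line for it:

lsmod | grep nbd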

And lastly, create the needed nova upstart script:

cat > /etc/init/nova-compute.conf << EOF
description "Nova compute worker"
author "Soren Hansen <soren@linux2go.dk>"

start on runlevel [2345]
stop on runlevel [!2345]


chdir /var/run

pre-start script
        mkdir -p /var/run/nova
        chown nova:root /var/run/nova/

        mkdir -p /var/lock/nova
        chown nova:root /var/lock/nova/

        modprobe nbd
end script

exec start-stop-daemon --start --chuid nova --exec /usr/local/bin/nova-compute -- --config-file=/etc/nova/nova.conf --config-file=/etc/nova/nova-compute.conf
EOF

Now start the nova-compute process and verify that it stays running:

start nova-compute

Wait 20 to 30 seconds and verify that everything started:

ps aux|grep nova

And you should see something like this:

root@compute:~# ps aux|grep nova
nova      2026  0.3 38.1 2830936 1546308 ?     Ssl  May04 525:17 /usr/bin/python /usr/local/bin/nova-compute --config-file=/etc/nova/nova.conf --config-file=/etc/nova/nova-compute.conf

If by chance the nova-compute service didn't start or stay running, use the following command to test and get log output to debug the reason for the failure:

sudo -u nova nova-compute --config-file=/etc/nova/nova.conf --config-file=/etc/nova/nova-compute.conf

With nova-compute running, let's test everything by booting a VM. Log into the controller node and source the admin credentials:

source adminrc

Create a network named private:

root@controller:~# neutron net-create private
Created a new network:
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | 90c0dea3-425a-47f5-8df1-8d3fa57067ba |
| name                      | private                              |
| provider:network_type     | vxlan                                |
| provider:physical_network |                                      |
| provider:segmentation_id  | 100                                  |
| router:external           | False                                |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tenant_id                 | 9d314f96330a4e459420623a922e2c09     |
+---------------------------+--------------------------------------+

On the network named private, attach a subnet named private-subnet with a CIDR 10.1.0.0/28:

root@controller:~#     neutron subnet-create --name private-subnet private 10.1.0.0/28
Created a new subnet:
+-------------------+-------------------------------------------+
| Field             | Value                                     |
+-------------------+-------------------------------------------+
| allocation_pools  | {"start": "10.1.0.2", "end": "10.1.0.14"} |
| cidr              | 10.1.0.0/28                               |
| dns_nameservers   |                                           |
| enable_dhcp       | True                                      |
| gateway_ip        | 10.1.0.1                                  |
| host_routes       |                                           |
| id                | 6f3f8445-e558-4bba-9521-90b2c0a8e850      |
| ip_version        | 4                                         |
| ipv6_address_mode |                                           |
| ipv6_ra_mode      |                                           |
| name              | private-subnet                            |
| network_id        | 90c0dea3-425a-47f5-8df1-8d3fa57067ba      |
| tenant_id         | 9d314f96330a4e459420623a922e2c09          |
+-------------------+-------------------------------------------+

And boot an instance named MyFirstInstance, using the previously loaded image named cirros-qcow2 and the flavor with id 1:

root@controller:~# nova boot --image cirros-qcow2 --flavor 1 MyFirstInstance
+--------------------------------------+-----------------------------------------------------+
| Property                             | Value                                               |
+--------------------------------------+-----------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                              |
| OS-EXT-AZ:availability_zone          | nova                                                |
| OS-EXT-SRV-ATTR:host                 | -                                                   |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | -                                                   |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000014                                   |
| OS-EXT-STS:power_state               | 0                                                   |
| OS-EXT-STS:task_state                | scheduling                                          |
| OS-EXT-STS:vm_state                  | building                                            |
| OS-SRV-USG:launched_at               | -                                                   |
| OS-SRV-USG:terminated_at             | -                                                   |
| accessIPv4                           |                                                     |
| accessIPv6                           |                                                     |
| adminPass                            | 3bt5L2Jr5uYv                                        |
| config_drive                         |                                                     |
| created                              | 2015-08-12T12:24:14Z                                |
| flavor                               | m1.tiny (1)                                         |
| hostId                               |                                                     |
| id                                   | 48c0066a-f16e-414d-89ed-4b93496d0d8f                |
| image                                | cirros-qcow2 (394aee69-53bc-4290-af3b-05fb4150023b) |
| key_name                             | -                                                   |
| metadata                             | {}                                                  |
| name                                 | MyFirstInstance                                     |
| os-extended-volumes:volumes_attached | []                                                  |
| progress                             | 0                                                   |
| security_groups                      | default                                             |
| status                               | BUILD                                               |
| tenant_id                            | 9d314f96330a4e459420623a922e2c09                    |
| updated                              | 2015-08-12T12:24:14Z                                |
| user_id                              | 2f9cf7c3b3674c0e9cff5143ea633a59                    |
+--------------------------------------+-----------------------------------------------------+

Finally, use the nova list command to verify that the image booted. You may have to run this several times, or wait a few seconds, to give your newly created OpenStack system time to boot its first VM.

root@controller:~# nova list
+--------------------------------------+-----------------+--------+------------+-------------+------------------+
| ID                                   | Name            | Status | Task State | Power State | Networks         |
+--------------------------------------+-----------------+--------+------------+-------------+------------------+
| 48c0066a-f16e-414d-89ed-4b93496d0d8f | MyFirstInstance | ACTIVE | -          | Running     | private=10.1.0.2 |
+--------------------------------------+-----------------+--------+------------+-------------+------------------+
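If you would rather script the wait than re-run nova list by hand, a minimal polling sketch with python-novaclient might look like this (the credentials and endpoint are placeholders standing in for your own environment):

# A minimal polling sketch (not part of the original walkthrough);
# replace the placeholder credentials and endpoint with your own.
import time

from novaclient import client

nova = client.Client("2", "admin", "secret", "demo",
                     "http://controller:5000/v2.0")

while True:
    server = nova.servers.find(name="MyFirstInstance")
    print("status: %s" % server.status)
    if server.status in ("ACTIVE", "ERROR"):
        break
    time.sleep(5)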

Congratulations, you have successfully gotten OpenStack running from a source install. Want to update a service with the latest patches? It's simple! Below, we update nova on the controller node. First, pull the latest source code:

cd nova
git pull

Now update the installed Python scripts:

python setup.py install

And lastly, restart the associated services:

restart nova-api
restart nova-cert
restart nova-consoleauth
restart nova-conductor
restart nova-scheduler

That was easy. In the next and concluding article of this series, we go to the controller node and install the Volume service (cinder) and the web-based dashboard (horizon).

August 13, 2015 06:59 AM

Aptira

Getting started with CoreOS on OpenStack

What is CoreOS and Why?

CoreOS is an open-source lightweight operating system based on the Linux kernel and is designed to provide infrastructure for clustered deployments.

Microservices architectures have their advantages. If you are building or managing your stack as containerized microservices, CoreOS is the perfect operating system. CoreOS provides only the minimal functionality required for deploying applications inside software containers, together with built-in mechanisms for service discovery and configuration sharing.

This blog post shares my experiences while learning the basics of CoreOS on an OpenStack infrastructure. On completion of the article, we will have a three node cluster comprised of one control node and two worker nodes. Provisioning of the cluster will be done using OpenStack Heat templates.

On the right is a simple graphic of our final cluster setup.

The code snippets used in the post are available in my public github account https://github.com/rajalokan/coreos-openstack-beginner for reference.

Prerequisites:

To follow this tutorial, we need to install some binaries on the local machine.

etcdctl

etcdctl is a command line client for etcd. CoreOS’s etcd is a distributed, consistent key-value store for shared configuration and service discovery.

Our control node will have the etcd service running, so on the local machine we need the etcd client installed in order to talk to the CoreOS cluster. We can install it with a few commands:

$ curl -L  https://github.com/coreos/etcd/releases/download/v2.1.1/etcd-v2.1.1-linux-amd64.tar.gz -o /tmp/etcd-v2.1.1-linux-amd64.tar.gz

$ tar xzvf /tmp/etcd-v2.1.1-linux-amd64.tar.gz -C /tmp/

$ mv /tmp/etcd-v2.1.1-linux-amd64/etcdctl /usr/local/bin && chmod +x /usr/local/bin/etcdctl

Make sure to add /usr/local/bin to the system PATH or call it directly using `/usr/local/bin/etcdctl`.

fleetctl

fleet ties together systemd and etcd into a simple distributed init system. Think of it as an extension of systemd that operates at the cluster level instead of at the machine level.

fleet provides a command-line tool called `fleetctl`. We will use this to communicate with our cluster. To install, run the following commands.

$ curl -L https://github.com/coreos/fleet/releases/download/v0.11.2/fleet-v0.11.2-linux-amd64.tar.gz -o /tmp/fleet-v0.11.2-linux-amd64.tar.gz

$ tar xzvf /tmp/fleet-v0.11.2-linux-amd64.tar.gz -C /tmp/

$ mv /tmp/fleet-v0.11.2-linux-amd64/fleetctl /usr/local/bin && chmod +x /usr/local/bin/fleetctl

python-heatclient & python-glanceclient

We will use the OpenStack heat client to spin up a VM and the OpenStack glance client to create a CoreOS image in our OpenStack infrastructure. We can install both of these client tools using pip:

$ pip install python-heatclient python-glanceclient

Next, we want to add the stable CoreOS image to glance:

$ wget http://stable.release.core-os.net/amd64-usr/current/coreos_production_openstack_image.img.bz2
$ bunzip2 coreos_production_openstack_image.img.bz2

$ glance image-create --name CoreOS --container-format bare --disk-format qcow2 --file coreos_production_openstack_image.img

$ glance image-list

Finally, we should verify that all the binaries are installed correctly. Each of the following commands `fleetctl --version`, `etcdctl --version`, `heat --version` & `glance --version` should return output with the version of the binary installed.

Start a single node cluster

Let's keep things simple and start by spinning up a single node cluster (only the control node). This will start a single CoreOS node with fleet and etcd running on it.

Start the cluster

Starting a cluster through HEAT requires three parameters.

Discovery token:

For a group of CoreOS machines to form a cluster, their etcd instances need to be connected. We create a discovery token for the single node, which helps connect the etcd instances together by storing a list of peer addresses, metadata and the initial size of the cluster under a unique address, known as the discovery URL.

In our example, we use CoreOS's hosted discovery service to generate the token (curl -q https://discovery.etcd.io/new?size=1), but we could just as well use our own mechanism.
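If you are scripting cluster creation, the token can also be fetched from Python; a small sketch using the requests library, equivalent to the curl command above:

# Fetch a new etcd discovery URL for a cluster of the given size
# (equivalent to `curl -q https://discovery.etcd.io/new?size=N`).
import requests

def new_discovery_url(size):
    resp = requests.get("https://discovery.etcd.io/new", params={"size": size})
    resp.raise_for_status()
    return resp.text

print(new_discovery_url(1))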

public network uuid

Pass the UUID of the public network in your OpenStack infrastructure. This is needed to create a router attached to the public network.

key_name

Also provide your nova keypair’s key_name for ssh access.

The exact command to start a CoreOS cluster is shown below:

alok@remote $ heat stack-create -f heat-template-control.yaml -P discovery_token_url=`curl -q https://discovery.etcd.io/new?size=1` -P public_network_uuid=87cb4819-33d4-4f2d-86d2-6970c11962da trycoreos

+--------------------------------------+------------+--------------------+----------------------+
| id                                   | stack_name | stack_status       | creation_time        |
+--------------------------------------+------------+--------------------+----------------------+
| 897f08ad-4beb-4000-a871-aaa0231ade90 | trycoreos  | CREATE_IN_PROGRESS | 2015-07-30T22:11:35Z |
+--------------------------------------+------------+--------------------+----------------------+

The initial ‘CREATE_IN_PROGRESS’ stack_status shows that VM provisioning has started. The stack status can be checked with heat stack-show trycoreos. A ‘CREATE_COMPLETE’ stack_status means that our cluster is up. To get the floating ip address of the control node use the command heat output-show trycoreos control_ip.

Check the cluster status

Now that we have our node ready, we can ssh to it directly by `ssh core@<ip_address>`. But let’s try using fleetctl instead to see the status of our cluster and to ssh into the VMs.

alok@remote $ FLEETCTL_TUNNEL="108.182.62.205" fleetctl list-machines

+--------------+---------------+--------------+
| MACHINE      | IP            | METADATA     |
+--------------+---------------+--------------+
| 0315e138...  | 192.168.222.2 | role=control |
+--------------+---------------+--------------+

This lists the nodes in our cluster. Currently we have a single node cluster and that node has the control role. fleetctl talks to the host running the fleet service through ssh and gets information about the cluster, so the host running the fleet service must be accessible remotely via ssh.

The FLEETCTL_TUNNEL parameter specifies that the fleet service is running on a remote server with ip 108.182.62.205 (our control node). Use the floating ip address from the last section for this parameter. More information about configuring fleetctl can be found in the fleet client documentation.

fleetctl can be used to monitor and to start/stop different services on our cluster. Note the machine ID above and use it to tell fleet to ssh to the control node:

alok@remote $ fleetctl ssh 0315e138

Let's set some keys and get their values. ETCDCTL_PEERS is a comma separated list of all etcd peers. Currently we have a single etcd server running on the standard port 2379, so we specify http://108.182.62.205:2379 below.

alok@remote $ ETCDCTL_PEERS="http://108.182.62.205:2379" etcdctl ls

alok@remote $ etcdctl ls --recursive

alok@remote $ etcdctl set topic coreos

alok@remote $ etcdctl get topic

coreos

We can use the --debug option to understand what API is being called. This gives an overview of etcd's RESTful API.

alok@remote $ etcdctl –debug get topic

Cluster-Endpoints: http://108.182.62.205:2379

Curl-Example: curl -X GET http://108.182.62.205:2379/v2/keys/topic?quorum=false&recursive=false&sorted=false

coreos

More information about the API can be found at coreos-api.
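Since it is all plain HTTP, the same get/set operations can also be driven from Python. Here is a minimal sketch using the python-etcd client library, assuming the control node's floating IP from above:

# Equivalent of `etcdctl set topic coreos` followed by `etcdctl get topic`,
# using the python-etcd client against etcd's v2 HTTP API.
import etcd

client = etcd.Client(host="108.182.62.205", port=2379)
client.write("/topic", "coreos")
print(client.read("/topic").value)  # -> coreos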

Insight

Let's look at the cluster setup and see how the CoreOS nodes talk to each other.

heat template
The heat template is pretty self-explanatory. It defines all the required resources and uses cloud-config to initialise data during the CoreOS bootstrap.

cloud-init
As part of heat-template-control.yaml, we provision a single node with the following cloud-config:

#cloud-config
coreos:
  fleet:
    etcd_servers: http://127.0.0.1:2379
    metadata: role=control
  etcd2:
    name: etcd2
    discovery: $token_url
    advertise-client-urls: http://$public_ipv4:2379
    initial-advertise-peer-urls: http://$public_ipv4:2380
    listen-client-urls: http://0.0.0.0:2379
    listen-peer-urls: http://0.0.0.0:2380
  units:
    - name: etcd2.service
      command: start
    - name: fleet.service
      command: start
  update:
    group: alpha
    reboot-strategy: reboot

This follows the standard CoreOS cloud-init guide to initialize a system. The fleet and etcd2 services are already present within the CoreOS alpha channel; we override their default configuration with custom parameters.

The first two sections configure fleet and etcd2. The fleet section tells the fleet daemon on the node about the etcd servers (a comma separated list) and the node's role. This configuration is written as a systemd drop-in at /run/systemd/system/fleet.service.d/20-cloud-init, overriding the defaults when fleet starts. Similarly, the etcd2 configuration is placed at /run/systemd/system/etcd2.service.d/20-cloud-init on the node to override the defaults during etcd startup.
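To see these drop-ins on a running node, here is a quick sketch (mine, not from the original post) that prints both files:

# Print the drop-ins generated from the cloud-config (paths from the text
# above). On the control node the fleet drop-in should contain Environment=
# lines for settings such as FLEET_ETCD_SERVERS and FLEET_METADATA -- an
# assumption based on fleet's environment-variable configuration.
for path in ("/run/systemd/system/fleet.service.d/20-cloud-init",
             "/run/systemd/system/etcd2.service.d/20-cloud-init"):
    print("==> %s" % path)
    with open(path) as f:
        print(f.read())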

In the units section, we pass the command to start both of these services.

CoreOS has an update strategy that consists of three channels: alpha, beta and stable. The alpha channel is the most recent release and closely tracks current development work; it is released very frequently. The beta channel consists of promoted alpha releases that have received more testing and is released less often than alpha. The stable channel should be used for production CoreOS clusters.

To see what channel is being used, look into /etc/coreos/update.conf.

Conclusion

This concludes the basic single node setup of a CoreOS cluster. It doesn't do much, but it gives us a brief understanding of the underlying concepts of CoreOS. We can verify the status of both services after sshing into the node, using systemctl status etcd2 and systemctl status fleet.

Start a multi node cluster

Now that we are comfortable with CoreOS cloud-init, systemd and HEAT templates, let's run a cluster with one control and two worker nodes.

Delete the old stack

# Delete old stack created with single node
alok@remote$ heat stack-delete trycoreos

Run stack-create for the new cluster setup:
# Create another stack for three nodes
alok@remote $ heat stack-create -f heat-template.yaml -P discovery_token_url=`curl -q https://discovery.etcd.io/new?size=3` -P public_network_uuid=87cb4819-33d4-4f2d-86d2-6970c11962da trycoreos
+--------------------------------------+------------+--------------------+----------------------+
| id                                   | stack_name | stack_status       | creation_time        |
+--------------------------------------+------------+--------------------+----------------------+
| 897f08ad-4beb-4000-a871-aaa0231ade90 | trycoreos  | CREATE_IN_PROGRESS | 2015-07-30T22:11:35Z |
+--------------------------------------+------------+--------------------+----------------------+

This provisions a three node cluster. Assuming the control node has the IP address 108.182.62.205, let's list the machines in the cluster:

alok@remote $ FLEETCTL_TUNNEL="108.182.62.205" fleetctl list-machines

The authenticity of host '108.182.62.205' can't be established.

RSA key fingerprint is 48:17:d4:4f:fe:33:0d:b5:44:b3:5b:11:fa:b0:e6:03.

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added '108.182.62.205' (RSA) to the list of known hosts.

MACHINE         IP              METADATA

93c797b0...     192.168.222.2   role=node

e7d9f87f...     192.168.222.4   role=control

ee9c8044...     192.168.222.5   role=node

Let's ssh to one of the nodes:

alok@remote $ fleetctl ssh ee9c8044

The authenticity of host '192.168.222.5' can't be established.

RSA key fingerprint is 95:59:c9:ed:ee:ae:4c:5d:b1:db:95:5a:5e:7a:f2:20.

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added '192.168.222.5' (RSA) to the list of known hosts.

CoreOS alpha (752.1.0)

Failed Units: 0

core@coreos-control $

Sharing/accessing keys across cluster

Let's use etcd to list keys, set a key, and read it from different machines.

alok@remote $ etcdctl ls --recursive

# We see there are no keys set as of now.

alok@remote $ etcdctl set coreos/network/config "192.168.3.0/24"

alok@remote $ etcdctl ls --recursive

/coreos

/coreos/network

/coreos/network/config

Let's verify that these are the same inside the control node. To do so we SSH into the control node and do a recursive list of keys.

alok@remote $ fleetctl ssh e7d9f87f

core@coreos-control ~ $ etcdctl ls --recursive

/coreos

/coreos/network

/coreos/network/config

core@coreos-control ~ $ etcdctl get /coreos/network/config

192.168.3.0/24

Conclusion:

This brings our very basic CoreOS cluster setup to a close, having shown how to talk to each node in the cluster. We can now use this CoreOS cluster to host applications inside docker containers and to manage them.

Credit: this post is inspired by the coreos-heat templates from https://github.com/sinner-/heat-coreos.

The post Getting started with CoreOS on OpenStack appeared first on Aptira OpenStack Services in Australia Asia Europe.

by Alok Kumar at August 13, 2015 04:21 AM

Lars Kellogg-Stedman

Provider external networks (in an appropriate amount of detail)

In Quantum in Too Much Detail, I discussed the architecture of a Neutron deployment in detail. Since that article was published, Neutron gained the ability to handle multiple external networks with a single L3 agent. While I wrote about that back in 2014, I covered the configuration side of it in much more detail than I discussed the underlying network architecture. This post addresses the architecture side.

The players

This document describes the architecture that results from a particular OpenStack configuration, specifically:

  • Neutron networking using VXLAN or GRE tunnels;
  • A dedicated network controller;
  • Two external networks

The lay of the land

This is a simplified architecture diagram of the network connectivity in this scenario:

Everything on the compute hosts is identical to my previous article, so I will only be discussing the network host here.

For the purposes of this article, we have two external networks and two internal networks defined:

$ neutron net-list
+--------------------------------------+-----------+----------...------------------+
| id                                   | name      | subnets  ...                  |
+--------------------------------------+-----------+----------...------------------+
| 6f0a5622-4d2b-4e4d-b34a-09b70cacf3f1 | net1      | beb767f8-... 192.168.101.0/24 |
| 972f2853-2ba6-474d-a4be-a400d4e3dc97 | net2      | f6d0ca0f-... 192.168.102.0/24 |
| 12136507-9bbe-406f-b68b-151d2a78582b | external2 | 106db3d6-... 172.24.5.224/28  |
| 973a6eb3-eaf8-4697-b90b-b30315b0e05d | external1 | fe8e8193-... 172.24.4.224/28  |
+--------------------------------------+-----------+----------...------------------+

And two routers:

$ neutron router-list
+--------------------------------------+---------+-----------------------...-------------------+...
| id                                   | name    | external_gateway_info ...                   |...
+--------------------------------------+---------+-----------------------...-------------------+...
| 1b19e179-5d67-4d80-8449-bab42119a4c5 | router2 | {"network_id": "121365... "172.24.5.226"}]} |...
| e2117de3-58ca-420d-9ac6-c4eccf5e7a53 | router1 | {"network_id": "973a6e... "172.24.4.227"}]} |...
+--------------------------------------+---------+-----------------------...-------------------+...

And our logical connectivity is:

+---------+    +----------+    +-------------+
|         |    |          |    |             |
|  net1   +----> router1  +---->  external1  |
|         |    |          |    |             |
+---------+    +----------+    +-------------+

+---------+    +----------+    +-------------+
|         |    |          |    |             |
|  net2   +----> router2  +---->  external2  |
|         |    |          |    |             |
+---------+    +----------+    +-------------+

Router attachments to integration bridge

In the legacy model, in which an L3 agent supported a single external network, the qrouter-... namespaces that implement Neutron routers were attached to both the integration bridge br-int and the external network bridge (the external_network_bridge configuration option from your l3_agent.ini, often named br-ex).

In the provider network model, both interfaces in a qrouter namespace are attached to the integration bridge. For the configuration we've described above, the configuration of the integration bridge ends up looking something like:

Bridge br-int
    fail_mode: secure
    Port "qvoc532d46c-33"
        tag: 3
        Interface "qvoc532d46c-33"
    Port br-int
        Interface br-int
            type: internal
    Port "qg-09e9da38-fb"
        tag: 4
        Interface "qg-09e9da38-fb"
            type: internal
    Port "qvo3ccea690-c2"
        tag: 2
        Interface "qvo3ccea690-c2"
    Port "int-br-ex2"
        Interface "int-br-ex2"
            type: patch
            options: {peer="phy-br-ex2"}
    Port "tapd2ff89e7-16"
        tag: 2
        Interface "tapd2ff89e7-16"
            type: internal
    Port patch-tun
        Interface patch-tun
            type: patch
            options: {peer=patch-int}
    Port "int-br-ex1"
        Interface "int-br-ex1"
            type: patch
            options: {peer="phy-br-ex1"}
    Port "qr-affdbcee-5c"
        tag: 3
        Interface "qr-affdbcee-5c"
            type: internal
    Port "qr-b37877cd-42"
        tag: 2
        Interface "qr-b37877cd-42"
            type: internal
    Port "qg-19250d3f-5c"
        tag: 1
        Interface "qg-19250d3f-5c"
            type: internal
    Port "tap0881edf5-e5"
        tag: 3
        Interface "tap0881edf5-e5"
            type: internal

The qr-... interface on each router is attached to an internal network. The VLAN tag associated with this interface is whatever VLAN Neutron has selected internally for the private network. In the above output, these ports are on the network named net1:

Port "qr-affdbcee-5c"
    tag: 3
    Interface "qr-affdbcee-5c"
        type: internal
Port "tap0881edf5-e5"
    tag: 3
    Interface "tap0881edf5-e5"
        type: internal

Where qr-affdbcee-5c is router1's interface on that network, and tap0881edf5-e5 is the port attached to a dhcp-... namespace. The same router is attached to the external1 network; this attachment is represented by:

Port "qg-09e9da38-fb"
    tag: 4
    Interface "qg-09e9da38-fb"
        type: internal

The external bridges are connected to the integration bridge using OVS "patch" interfaces (the int-br-ex1 interface on the integration bridge and the phy-br-ex1 interface on br-ex1).

From here to there

Connectivity between the qg-... interface and the appropriate external bridge (br-ex1 in this case) happens due to the VLAN tag assigned on egress by the qg-... interface and the following OpenFlow rules associated with br-ex1:

# ovs-ofctl dump-flows br-ex1
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=794.876s, table=0, n_packets=0, n_bytes=0, idle_age=794, priority=1 actions=NORMAL
 cookie=0x0, duration=785.833s, table=0, n_packets=0, n_bytes=0, idle_age=785, priority=4,in_port=3,dl_vlan=4 actions=strip_vlan,NORMAL
 cookie=0x0, duration=792.945s, table=0, n_packets=24, n_bytes=1896, idle_age=698, priority=2,in_port=3 actions=drop

Each of these rules contains some state information (like the packet/byte counts), some conditions (like priority=4,in_port=3,dl_vlan=4) and one or more actions (like actions=strip_vlan,NORMAL). So, the second rule there matches packets associated with VLAN tag 4 and strips the VLAN tag (after which the packet is delivered to any physical interfaces that are attached to this OVS bridge).
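If you want to explore this programmatically, here is a rough sketch (mine, not from the original article) that shells out to ovs-ofctl and splits each rule into the match portion and the actions described above; it assumes it runs on the network host with root privileges:

# List (match, actions) pairs for every flow on a bridge by parsing
# `ovs-ofctl dump-flows` output; the last comma-space separated field
# before "actions=" holds the match, e.g. "priority=4,in_port=3,dl_vlan=4".
import subprocess

def flow_rules(bridge):
    out = subprocess.check_output(["ovs-ofctl", "dump-flows", bridge],
                                  universal_newlines=True)
    for line in out.splitlines():
        line = line.strip()
        if "actions=" not in line:
            continue  # skip the NXST_FLOW header line
        fields, actions = line.split("actions=", 1)
        match = fields.strip().rstrip(",").split(", ")[-1]
        yield match, actions

for match, actions in flow_rules("br-ex1"):
    print("%-40s -> %s" % (match, actions))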

Putting this all together:

  1. An outbound packet from a Nova server running on a compute node enters via br-tun (H)

  2. Flow rules on br-tun translate the tunnel id into an internal VLAN tag.

  3. The packet gets delivered to the qr-... interface of the appropriate router. (O)

  4. The packet exits the qg-... interface of the router (where it is assigned the VLAN tag associated with the external network). (N)

  5. The packet is delivered to the external bridge, where a flow rule strips the VLAN tag. (P)

  6. The packet is sent out the physical interface associated with the bridge.

For the sake of completeness

The second private network, net2, is attached to router2 on the qr-b37877cd-42 interface. It exits on the qg-19250d3f-5c interface, where packets will be assigned to VLAN 1:

Port "qr-b37877cd-42"
    tag: 2
    Interface "qr-b37877cd-42"
        type: internal
Port "qg-19250d3f-5c"
    tag: 1
    Interface "qg-19250d3f-5c"
        type: internal

The network interface configuration in the associated router namespace looks like this:

# ip netns exec qrouter-1b19e179-5d67-4d80-8449-bab42119a4c5 ip a
30: qg-19250d3f-5c: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN 
    link/ether fa:16:3e:01:e9:e3 brd ff:ff:ff:ff:ff:ff
    inet 172.24.5.226/28 brd 172.24.5.239 scope global qg-19250d3f-5c
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe01:e9e3/64 scope link 
       valid_lft forever preferred_lft forever
37: qr-b37877cd-42: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN 
    link/ether fa:16:3e:4c:6c:f2 brd ff:ff:ff:ff:ff:ff
    inet 192.168.102.1/24 brd 192.168.102.255 scope global qr-b37877cd-42
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe4c:6cf2/64 scope link 
       valid_lft forever preferred_lft forever

OpenFlow rules attached to br-ex2 will match these packets:

# ovs-ofctl dump-flows br-ex2
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=3841.678s, table=0, n_packets=0, n_bytes=0, idle_age=3841, priority=1 actions=NORMAL
 cookie=0x0, duration=3831.396s, table=0, n_packets=0, n_bytes=0, idle_age=3831, priority=4,in_port=3,dl_vlan=1 actions=strip_vlan,NORMAL
 cookie=0x0, duration=3840.085s, table=0, n_packets=26, n_bytes=1980, idle_age=3742, priority=2,in_port=3 actions=drop

We can see that the second rule here will match traffic on VLAN 1 (priority=4,in_port=3,dl_vlan=1) and strip the VLAN tag, after which the packet will be delivered to any other interfaces attached to this bridge.

by Lars Kellogg-Stedman at August 13, 2015 04:00 AM

August 12, 2015

Matthew Treinish

Using subunit2sql with the gate

A little over a year ago I started writing subunit2sql, which is a project to collect test results into a SQL DB. [1] The theory behind the project is that you can extract a great deal of information about what's under test from the results of tests when you look at the trend data over a period of time. Over the past year the project has grown and matured quite a bit, so I figured this was a good point to share what you can do with the project today and where I'd like to head with it over the next year.

I won't go into the details too much about what subunit2sql is or some of the implementation details; I'll save that for a follow-on post. In the meantime you can read the docs at: http://docs.openstack.org/developer/subunit2sql/ I also gave a talk on subunit2sql earlier this year at the Developer, Testing, Release and Continuous Integration Automation miniconf at LCA: https://www.youtube.com/watch?v=rGyDlExOs94 (although some of the details are a bit dated, as things have evolved further since then)

At this point it's mostly still just me contributing to the project, which is fine because I enjoy it and find it interesting. But my time to experiment and work on this is limited, and I know other people would likely be interested in contributing. I also think having diversity in contributors really helps a project come into its own, just by having different ideas coming to the table. I always get a little concerned whenever I'm basically the sole contributor to something, so I figured I should share how the project is used today to try and drum up interest and fix this.

This is really a project I’m passionate about and if you have any interest in it or more questions, please feel free to reach out to me via email, the ML, or on irc.

[1] Note that while the CLI tooling and the name imply the DB will only work with subunit v2 as a protocol for communicating test results, there actually isn't anything inherent to subunit in the database schema or the library API provided by the package

subunit2sql in infra

Right now subunit2sql is actively doing only two things: collecting test results from tempest runs in the gate queue and injecting results into testrepository for each tempest run.

Collecting results

Since the final day of the Paris design summit we've been running a subunit2sql DB in openstack-infra that collects all the test results from tempest runs in the gate queue. The mechanism behind all this machinery is documented at: http://docs.openstack.org/infra/system-config/logstash.html It's under the logstash page because the way subunit streams are collected from the test runs and eventually stored in the database uses the same mechanism and architecture that Clark Boylan created to store the logs from test runs into logstash. I just added a different worker which uses subunit2sql to process the subunit streams and store them in a MySQL database. The basic overview diagram of how this works is:

flowchart

One thing to note with the data collection is that it only collects data from tempest runs in the gate queue. We don't collect results for check because it's really too noisy to be useful and it would generate exponentially more data. I expect we'll likely consider changing this at some point as the UI around the data improves. (and when we create a web interface for visualizing the data)

Injecting results into tempest runs

If you’ve looked at the console output of a tempest run in the gate at any point in the past 6-7 months you might have noticed something like:

2015-08-11 01:04:40.171 | Loading previous tempest runs subunit streams into testr
2015-08-11 01:04:40.171 | /opt/stack/new/tempest /opt/stack/new/devstack
2015-08-11 01:04:40.509 | Ran 92 tests in 193.700s
2015-08-11 01:04:40.509 | PASSED (id=0, skips=37)
2015-08-11 01:04:40.520 | /opt/stack/new/devstack
2015-08-11 01:04:40.520 | /opt/stack/new/tempest /opt/stack/new/devstack
2015-08-11 01:04:40.908 | Ran 92 tests in 180.853s (-12.847s)
2015-08-11 01:04:40.908 | PASSED (id=1, skips=32)
2015-08-11 01:04:40.922 | /opt/stack/new/devstack
2015-08-11 01:04:40.922 | /opt/stack/new/tempest /opt/stack/new/devstack
2015-08-11 01:04:41.502 | Ran 388 (+296) tests in 1579.986s (+1399.133s)
2015-08-11 01:04:41.502 | PASSED (id=2, skips=32)
2015-08-11 01:04:41.552 | /opt/stack/new/devstack
2015-08-11 01:04:41.552 | /opt/stack/new/tempest /opt/stack/new/devstack
2015-08-11 01:04:41.991 | Ran 130 (-258) tests in 295.809s (-1284.176s)
2015-08-11 01:04:41.991 | PASSED (id=3, skips=17)
2015-08-11 01:04:42.005 | /opt/stack/new/devstack
2015-08-11 01:04:42.005 | /opt/stack/new/tempest /opt/stack/new/devstack
2015-08-11 01:04:42.491 | Ran 130 tests in 304.811s (+9.002s)
2015-08-11 01:04:42.491 | PASSED (id=4, skips=17)
2015-08-11 01:04:42.505 | /opt/stack/new/devstack
2015-08-11 01:04:42.506 | /opt/stack/new/tempest /opt/stack/new/devstack
2015-08-11 01:04:42.792 | PASSED (id=5)
2015-08-11 01:04:42.801 | /opt/stack/new/devstack
2015-08-11 01:04:42.801 | /opt/stack/new/tempest /opt/stack/new/devstack
2015-08-11 01:04:43.193 | Ran 130 (+130) tests in 346.516s
2015-08-11 01:04:43.193 | PASSED (id=6, skips=22)
2015-08-11 01:04:43.204 | /opt/stack/new/devstack
2015-08-11 01:04:43.204 | /opt/stack/new/tempest /opt/stack/new/devstack
2015-08-11 01:04:43.695 | Ran 130 tests in 299.969s (-46.548s)
2015-08-11 01:04:43.695 | PASSED (id=7, skips=17)
2015-08-11 01:04:43.705 | /opt/stack/new/devstack
2015-08-11 01:04:43.705 | /opt/stack/new/tempest /opt/stack/new/devstack
2015-08-11 01:04:44.119 | Ran 130 tests in 337.152s (+37.183s)
2015-08-11 01:04:44.119 | PASSED (id=8, skips=22)
2015-08-11 01:04:44.129 | /opt/stack/new/devstack
2015-08-11 01:04:44.129 | /opt/stack/new/tempest /opt/stack/new/devstack
2015-08-11 01:04:44.509 | Ran 130 tests in 348.047s (+10.895s)
2015-08-11 01:04:44.509 | PASSED (id=9, skips=22)

in the console output from the run. This is devstack-gate loading the results from 10 previous runs into testrepository before we start executing tempest. This is done to give testr a chance to optimize the test grouping so the tests run on each worker are a bit more balanced. It's honestly a half feature, because testr's scheduler is limited and only stores the most recent execution of a test in its local timing dbm file, and we're unable to use the avg. execution times from the DB because of: https://bugs.launchpad.net/testrepository/+bug/1416512 To get around that bug in the meantime we just preload the nodepool images with the subunit streams of the 10 most recent runs in the database. (using sql2subunit) This doesn't really affect the performance of test runs or have any noticeable effect. Long term we'll hopefully improve this a bit and make it more useful.

Elastic-recheck

A future use case in openstack-infra for subunit2sql is also to use the subunit2sql DB as a secondary data source for elastic recheck. This would enable some additional filtering or checking in e-r queries based on test results, which might be useful in certain circumstances. I have a patch up that is starting work on this here: https://review.openstack.org/209712

Playing with things on your own

Talking to the gate’s DB

This is probably a terrible idea because it's just asking for a DOS attack on the “production” DB, which runs on a tiny trove node on RAX's public cloud, but to connect to the mysql server you can use the following public read-only credentials:

  • hostname: logstash.openstack.org
  • username: query
  • password: query
  • database: subunit2sql

All of the data we've collected about tests since Nov. of last year is stored there. As I said before, the only test data in the database is from tempest runs in the gate. (there was a brief period where the filtering wasn't perfect and some check queue jobs got into the DB)
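For example, here is a minimal read-only query from Python using SQLAlchemy; the mysql+pymysql URL assumes the PyMySQL driver is installed, and the runs table is the same one used in the manual-query examples later in this post:

# Connect read-only to the gate subunit2sql DB with the public credentials
# above and count the recorded runs.
from sqlalchemy import create_engine

engine = create_engine(
    "mysql+pymysql://query:query@logstash.openstack.org/subunit2sql")
with engine.connect() as conn:
    total = conn.execute("SELECT count(id) FROM runs").scalar()
print("total runs recorded: %d" % total)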

subunit2sql-graph

This is where we start to get into the fun part and generate some pretty pictures. subunit2sql-graph is a CLI tool for generating visualizations from the data in the DB. It's got a few different graph types right now that mainly use pandas and matplotlib to generate an image file of a graph from the data in the database. More details on the graph command can be found at the command's doc page: http://docs.openstack.org/developer/subunit2sql/graph.html#subunit2sql-graph I'll only cover a couple of different graphs right now, but you can play around with them all using the public DB credentials.

Test Run Time

This is the first type of graphing I added to subunit2sql. It lets you graph the run time of an individual test as a time series, like:
test-volume-boot-pattern

This is a graph I generated some time ago to test what is now the current output formatting. (although the x axis labels were still wrong back then) The light blue shading is the standard deviation of the test run time. I used this as an example because it shows off how variable performance is for things running in the gate.

Performance analysis

However, the real advantage of generating these graphs is that they let you identify performance regressions and also verify that they were fixed. Here is a real world example: the first run-time line graph I ever generated using subunit2sql (which is why the format is different):

total-times

Looking at this graph there is a clear bifurcation in the data. We can see that most of the time the test takes about 150 secs (with the normal gate variance), but a noticeable number of times it was running considerably faster, taking < 50 sec.

So I wrote a separate tool (although this predates the plugin interface, so it was just a script) that grouped the tests on arbitrary boundaries, graphed them separately, and then printed the metadata from the runs that made up each group:

split-times

The resulting graph shows a clear split between the bottom group and the top groups. What's more important is that the metadata showed the bottom group, which ran considerably faster, was completely comprised of jobs running on the stable/icehouse branch (which also ran on an older Ubuntu). This means that at some point during juno we introduced a change somewhere that made this test perform quite a bit slower. (or, less likely, there was a regression in Ubuntu between precise and trusty)

It turns out we were already tracking this issue in this bug, since we'd see test failures in certain cases where volume deletes were too slow. It was eventually fixed, which is actually where this gets really cool. If you run the graph command today on this test with:

$ subunit2sql-graph --start-date 2014-12-01 --database-connection="mysql://query:query@logstash.openstack.org/subunit2sql" --title "test_rescued_vm_detach_volume" --output perf-regression.png run_time 0291fc87-1a6d-4c6b-91d2-00a7bb5c63e6

You get:

perf-regression

The first part shows what the graphs above were showing: the run times clearly divided between two groups, averaging much slower. However, in March you can clearly see where that trend flipped on its head and the average runtime became much faster, with consistent outliers being slower. That turning point matches up perfectly with: http://git.openstack.org/cgit/openstack-dev/devstack/commit/?id=4bf861c76c220a98a3b3165eea5448411d000f3a which was the fix for this particular performance regression. The slow outliers after that commit are actually stable/juno runs, because the fix was never backported from kilo to juno.

Run Metadata Variance Graphing:

This is a feature still in progress here: https://review.openstack.org/#/c/210569/ but I figured I'd share it because it helps explore one of the themes from the previous section: one thing that subunit2sql enables you to show quantitatively is how variable performance can actually be, especially in the gate. This is honestly completely expected when you consider both OpenStack's architecture (a distributed, asynchronous message passing system) and the fact that the gate jobs are deploying and running clouds inside VMs in public clouds of various flavors. Looking at a single run (or even a small set of runs) in isolation won't ever give you real information about how things actually perform. I'm adding this graph to try and visualize how the variability in total run time differs depending on the run metadata. For example:

scater-test4

This was the result of running the graph command on about a full day's worth of runs. The missing y axis label is time in seconds, although the actual number is a bit misleading, since it's the sum of the individual test run times for the run. That by itself has two issues: it doesn't account for setUp or tearDown, which can be quite expensive in tempest, and it doesn't take into account that things are executed in parallel, so it's basically CPU time. But for showing the variance of runs these issues should not change the effectiveness of the graph. Also, I'm biased against box and whiskers, so I might try some other type of visualization before this merges. But at the very least this enables you to see, at a high level, how variable performance can be based on job type.

Make your own graphs

The other aspect of subunit2sql-graph is that it has a plugin interface. This lets you write your own plugins for subunit2sql-graph to generate whatever graphs you want. There are likely cases where it’s not really possible to make a graph generic enough to be considered for upstream inclusion. (like if it depends on specific contents of metadata) So having a plugin interface makes it easy to bake it into the same tooling as the other graphs being generated and share common configuration between them.

Manually querying data

Unfortunately, the CLI tooling for interacting with a subunit2sql DB isn't as mature as I'd really like, and the web interface to the data that I think would help a lot here is still pretty much nonexistent at this point. So you might have to manually query the DB to get the information you're looking for. For example, one of the things missing from the CLI is average run failure rates. (you can get per-test failure rates from subunit2sql-graph) You will have to query that information manually with:

$ mysql -u query --password=query --host logstash.openstack.org subunit2sql
MySQL [subunit2sql]> select count(id) from runs where fails>0;
+-----------+
| count(id) |
+-----------+
|      2482 |
+-----------+
1 row in set (0.09 sec)

MySQL [subunit2sql]> select count(id) from runs where passes>0 or fails>0;
+-----------+
| count(id) |
+-----------+
|    143127 |
+-----------+
1 row in set (0.21 sec)

so in the gate queue we've had an average tempest run failure rate of ~1.73%, which is actually higher than I thought it was. (also realize this only counts runs that failed tempest; devstack failures, other setup and infra failures, or any other test job failures don't factor into this number) It's also not really a fair measurement because some tempest jobs fail much more frequently than others. But it's a simple example of what you can do with manual querying.
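For the record, the arithmetic behind that number:

# Failure rate from the two counts above: runs with at least one failure
# divided by all runs that executed any tests.
failed, total = 2482, 143127
print("gate tempest failure rate: ~%.2f%%" % (100.0 * failed / total))
# -> gate tempest failure rate: ~1.73%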

Next steps for subunit2sql

There are a couple of things that I'd really like to finish over the next year with subunit2sql; some are code cleanup and others are use case and feature expansion.

UI/UX

So this is honestly what I think has been the biggest barrier to getting people excited about all of this. I freely admit this is an area I lack expertise in; a good interface for me is an ncurses application with vim key bindings. (which is probably not the best way to do data visualization) This is really why most of the interfaces to subunit2sql and the data are very raw. What we really need here is a web interface which shows all the things I've been doing manually, in a dynamic way, to enable people to just look at things and not have to think too much about the underlying data model.

We're running a gazillion tests all the time, and we aren't getting the full value from these test runs by just looking at them in a strictly binary pass/fail manner. We are definitely able to extract a lot more information about OpenStack as we're developing it. We've already got a very large amount of data collected in the database (107108401 individual test executions at one point while I was writing this post) and it could really provide a great deal of value to the development community. The perfect example is the run time graphs I showed above. But this is really the one aspect where I need the most help before I think the larger community would see any advantage to having this resource available.

testrepository integration

This has been a big priority ever since I started working on the project. Using subunit2sql as a repository type in testrepository makes a ton of sense. Once this is implemented I'm planning to work with Robert Collins to eventually make this the default repository type in testrepository. It'll enable leveraging all of the work being done here for anyone running tests with testr, but at a minimum it'll expose a much richer data store for testrepository to use for its own operations. I actually started hacking on an implementation for this a long time ago, but things in subunit2sql weren't really ready back then, so I abandoned it before I got too far. (FWIW, it doesn't look too difficult to do)

There are a couple of things still to accomplish before we can do this. The first, which should be fairly trivial, is to add support to sql2subunit (and the associated python APIs it exposes and consumes) for handling attachments in the output subunit stream. The other thing, which isn't technically a blocker to implementing this but would be a huge blocker to adoption, is support for sqlite in the migrations. Dropping sqlite was one of my early mistakes in the project, because at the time supporting it seemed like a burden and I didn't really think about the implications. My plan for this is basically to branch the DB migrations and compact them into a single migration which works with sqlite; this will enable new users to set up a database with sqlite. This will be the first step of the 1.x.x release series. (I have a WIP patch up for this here: https://review.openstack.org/203252)

Testing

So the irony isn't lost on me of being OpenStack's QA PTL and writing a project that has basically nonexistent test coverage. Honestly, it's something I hadn't put much thought into for a long time, since I was more interested in getting something together and using it. But this has definitely hurt the project in the long run, because it makes verifying new changes much more difficult. To really help the project grow we need to improve this so that we can verify changes without having to pull them locally and run them. There is a TODO file in tree which I try to keep up to date with work items that need doing. A good portion of them are testing related: https://github.com/openstack-infra/subunit2sql/blob/master/TODO.rst

by Matthew Treinish at August 12, 2015 10:01 PM

Rich Bowen

The OpenStack Big Tent

I’ll be giving a presentation at LinuxCon next week about the ‘Big Tent’ at OpenStack. It’ll go something like this …

The OpenStack Big Tent

OpenStack is big and complicated. It’s composed of many moving parts, and it can be somewhat intimidating to figure out what all the bits do, what’s required, what’s optional, and how to put all the bits together.

The Problem

In the attempt to tame this confusion, the OpenStack Technical Committee defined what’s part of the Integrated Release and what’s not, so that you, the consumer, know what’s in and what’s out. One of the unintended side effects of this was that new projects were treated as second class citizens, and had trouble getting resources, developers, and a seat at the table at the developer summit.

As OpenStack continues to grow, this became more and more of a problem.

With the Liberty cycle, the Technical Committee has taken another look at what makes a project part of OpenStack, to make things better for the projects, as well as for the consumers.

Are You OpenStack?

The question that has been asked all along about any project wanting to be part of OpenStack was, is this thing OpenStack? To answer this question, a number of criteria were applied, including interoperability with existing bits, maturity, diversity (i.e., is this thing entirely developed by one company, or does it have broader participation?), and other things. This process was called Incubation, and once a project graduated from Incubation, it could be part of the integrated release.

As the stack grew, these questions became harder to answer, and more projects were getting left out of the tent, to everyone’s detriment, and to the growing confusion of the folks trying to use the software.

So, in recent months, the Technical Committee (TC) has decided to turn the question around. Rather than asking “Is this thing OpenStack?” the new question is “Are You OpenStack?”

This changes how we look at making the determination on a few fronts.

OpenStack is People!

As Thierry Carrez and Sean Dague said in their Summit presentation, OpenStack is composed of teams of people working towards the betterment of the overall project. To that end, we'll now welcome everyone to the table, if they are OpenStack.

So … how’s this defined?

Something is OpenStack if it:

1) Aligns with the OpenStack Mission: to produce the ubiquitous Open Source Cloud Computing platform that will meet the needs of public and private clouds regardless of size, by being simple to implement and massively scalable.

2) Follows the OpenStack Way – Open Source, Open Community, Open Development, and Open Design. (More here)

3) Strives for interoperability with other things that are OpenStack.

4) Subjects itself to the governance of the Technical Committee

Tags

But while this solves one problem, it creates another. As a user of the OpenStack software, I really still need to know what’s in and what’s out.

There is no longer going to be a single release that is defined to be OpenStack, so how do I know which bits I need, and which bits I can live without?

To help sort this out, a system of community-defined tags will be applied to the various pieces of OpenStack, starting with "tc-approved-release", which will initially just reflect what was already the integrated release. These tags will indicate project maturity, as well as other considerations. Packagers, like the CentOS Cloud SIG, can then use those tags to determine what they want to include in distributions.

Who’s In

As a result of this change, we immediately have several new projects that are part of OpenStack that were previously held at arm's length.

What’s Next?

People are still going to expect a release, and exactly what that means going forward is a little unclear. Every six months there will be a release which will include stuff tagged ‘tc-approved-release’. It will be opt-in – that is, projects can participate, or not, as they like. Or they can release on their own cadence, as was discussed about a year ago.

There are still some details to be worked out, but the overall benefit to the community seems like it’s going to be huge, as we include more great ideas, and more passionate people, inside the Big Tent.

by rbowen at August 12, 2015 08:17 PM

Red Hat Stack

Upgrades are dying, don’t die with them

We live in a world that has changed the way it consumes applications. The last few years have seen a rapid rise in the adoption of Software-as-a-Service (SaaS) and Platform-as-a-Service (PaaS). Much of this can be attributed to the broad success of Amazon Web Services (AWS), which is said to have grown revenue from $3.1B to $5B last year (Forbes). More and more people, enterprise customers included, are consuming applications and resources that require little to no maintenance, and any maintenance that does happen now goes unnoticed by users. This leaves traditional software vendors scrambling to adapt their distribution models and make their software easier to consume. Lengthy, painful upgrades are no longer acceptable to users, forcing vendors to find a solution to this problem.

Let's face it, the impact of this on traditional software companies is starting to be felt. Their services and methods of doing business are now being compared to a newer, more efficient model, one that is not bogged down by the inefficiencies of the traditional approach. SaaS vendors have the advantage that the software runs in their datacenters, where they have easy access to it and control the hardware, the architecture, the configurations, and so on.

Open source initiatives that target the enterprise market, like OpenStack, have to look at what others are doing in order to appeal to their intended audience. The grueling release cycle of the OpenStack community (major releases every 6 months) can put undue pressure on enterprise IT teams to update, deploy, and maintain environments, often leaving them unable to keep up from one release to the next. Inevitably, they start falling behind. And in some cases, their attempts to update are slower than the software release cycle, resulting in them falling further behind with each release. This is a major hindrance to successful OpenStack adoption.

Solving only one side of the problem

Looking at today's best practices for upgrading, we can see that the technology hasn't quite matured yet. And although DevOps allows companies to deliver code to customers faster, it doesn't solve the problem of installing the new underlying infrastructure; faster is not enough. This situation is even more critical when considering your data security practices. The ability to patch quickly and efficiently is key for companies to deploy security updates when critical security issues are spotted.

Adding to this is the question of how businesses can shorten the feedback loop with development releases. Releasing an alpha or beta, waiting for people to test it and send relevant feedback, is a long process that causes delays for both the customer and the provider. Yet another friction point.

Efforts are currently being made with the community projects Tempest and Rally to provide better visibility into a cloud's stability and functionality. These two projects on their own are necessary steps in the right direction; however, they currently lack holistic integration and still only offer insight into a single cloud's performance. Additionally, they do not yet allow an OpenStack distribution provider to check whether their distribution's new versions work with specific configurations or hardware. Whatever the solution is, it has to compete with what is currently being offered in the “*aaS” world or it will be seen as outdated and risk losing users.

Automation: A way out

Continuous integration and continuous delivery (CI/CD) is all the rage these days and it might offer part of the solution. Automation has to play a key role if companies are to keep up. We need to look into ways of making the process repeatable, reliable, incrementally improving, and customizable. Developers can no longer claim it worked on their laptop, so companies cannot limit themselves to saying it worked (or didn’t work) on their infrastructure. Software providers have to get closer to their customers to share in the pain.

Every OpenStack deployment is a custom job these days. Not everyone is running the same hardware, the same configurations, and so on. This means we have to adapt to those customizations and provide a framework that allows people to test their specific use cases. Once unit testing, integration testing, and functional testing have happened inside the walls of the software provider, the software has to go out into the wild and survive real customer use cases. And just as important, feedback has to be received quickly in order for the next iterations to be smaller, which will ease the burden of identifying problems and fixing them as needed.

One of the concepts Red Hat is investigating is chaining different CI environments and managing the logs and log analysis from a “central CI”. We've been working with customers to validate this concept by testing it first on customer and partner equipment for those who have been able to set aside equipment for us. We want to deploy a new version, verify an update live on premise, and include this step in our gating process before merging code. We are not satisfied unless it can be deployed and proven to work in a real environment. This means that CI/CD isn't just about us anymore; it has to work on-site or a patch is not merged.

Currently in our testing, we receive status reports from different architectures, which allows us to identify if an issue is specific to a certain configuration, hardware, or environment. This also allows us to identify a more widespread issue that needs to be fixed in the release. Ideally, we envision a point where once a new version reaches a certain “acceptance threshold,” it is marked as ready for release. It's then automatically pushed out to update a customer's pre-production environment.

A workflow might look something like this:


Source (modified): https://en.wikipedia.org/wiki/Continuous_delivery#/media/File:Continuous_Delivery_process_diagram.png

This type of workflow could integrate well into existing tools like Red Hat Satellite. Updates would still be provided as usual, but additional options to test upgrades leveraging the capabilities of the cloud would be made available. This would provide system administrators with an added level of certainty before deploying packages to existing servers, including logs to troubleshoot before pushing to production environments, should anything go wrong.  

Red Hat is committed to delivering a better and smoother upgrade experience for our customers and partners. While there are many questions that remain to be answered, notably around security or proprietary code, there is no doubt in my mind that this is the way forward for software. Automation has to take over the busy work of testing and upgrading to free up critical IT staff members to spend more time delivering features to their customers or users.

by jeffja at August 12, 2015 03:47 PM

RDO

Flavio Percoco talks about the Zaqar project

Zaqar (formerly called Marconi) is the messaging service in OpenStack. I recently had an opportunity to interview Flavio Percoco, who is the PTL (Project Technical Lead) of that project, about what's new in Kilo, and what's coming in Liberty.

The recording is --> here <--, and the transcript follows below.


FlavioPercoco

R: This is Rich Bowen. I am the RDO community liaison at Red Hat, and I'm speaking with Flavio Percoco, who is the PTL of the Zaqar project. We spoke two years ago about the project, and at that time it had a different name. I was hoping you could tell us what has been happening in the Kilo cycle, and what we can expect to see in Liberty.

F: Thanks, Rich, for having me here. Yes, we spoke two years ago, back in Hong Kong, while the project was called Marconi. Many things have happened in these last few years. We developed new APIs, we've added new features to the project.

At that time, we had version 1 of the API, and we were still figuring out what the project was supposed to be like, and what features we wanted to support, and after that we released a version 1.1 of the API, which was pretty much the same thing, but with a few changes, and a few things that would make consuming Zaqar easier for the final user.

Some other things changed. The community provided a lot of feedback to the project team. We've attempted to graduate two times, and then the Big Tent discussion happened, and we just fell into the category of projects that would be a good part of the community - of the Big Tent discussion. So we are now officially part of OpenStack. We're part of this Big Tent group.

We changed the API a little bit. The impression that the old API gave was that it was a queueing service, whereas what we really wanted to do was a messaging service. There is a fundamental difference between the two. Our focus is to provide a messaging API for OpenStack that would not just allow users to send messages from one point to another, but it would also allow users to have notifications right away from that API. So we'll take advantage of the common storage that we'll use for both features, for different services living within the same service. That's a big thing, and something we probably didn't talk about back then.

The other thing is that in Kilo we dedicated a lot of time to work on these versions of the API and making sure that all of the feedback that we got from the community was taken care of and that we were improving the API based on that feedback, and those long discussions that we had on the mailing list.

In Liberty, we've dedicated time to integrating with other project, as in, having other projects consume the API. So we're very excited to say that in Liberty a few patches have landed in Heat that rely on Zaqar for having notifications, or to send messages, and communicate with other parts of the Heat service. This is very exciting for us, because we have some stories of production environments, but we didn't have stories of other projects consuming Zaqar, and this definitely puts us in a better position to improve the service, and get more feedback from the community.

In terms of features for the Liberty cycle, we've dedicated time to improving the websocket transport, which we started in Kilo but didn't have enough time to complete there. The websocket transport allows persistent connections to be made against the Zaqar service, so you connect to the service once and keep that connection alive. This is ideal for several scenarios, and one of those is connecting to Zaqar from a browser and having JavaScript communicate directly with Zaqar, which is something we really want to have.
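
As a rough illustration of what such a persistent connection looks like from a client, here is a sketch using the third-party websocket-client package. The ws://localhost:9000 endpoint and the JSON action/body framing are assumptions for illustration, not a definitive description of the transport:

# A sketch of talking to Zaqar's websocket transport. Endpoint, action
# names, and message framing here are assumptions; requires the
# third-party "websocket-client" package.
import json
import uuid

import websocket

# One persistent connection carries every request, so there is no
# per-request HTTP handshake.
ws = websocket.create_connection("ws://localhost:9000")  # assumed endpoint
headers = {"Client-ID": str(uuid.uuid4()), "X-Project-ID": "demo-project"}

# Create a queue, then post a message, over the same connection.
ws.send(json.dumps({"action": "queue_create",
                    "headers": headers,
                    "body": {"queue_name": "demo"}}))
print(ws.recv())

ws.send(json.dumps({"action": "message_post",
                    "headers": headers,
                    "body": {"queue_name": "demo",
                             "messages": [{"ttl": 300,
                                           "body": {"event": "hello"}}]}}))
print(ws.recv())
ws.close()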

Another interesting feature that we implemented in Liberty is called pre-signed URLs. If folks are familiar with Swift temp URLs - http://docs.openstack.org/kilo/config-reference/content/object-storage-tempurl.html - this is something very similar. It generates a URL that can expire. You can share that URL with people or services that don't have a username in Zaqar, so that they can connect to the service and still send messages. The URL is limited to a single tenant and a single queue, and it has privileges and policies attached to it so that we can protect all the data that goes through the service.
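
To show the general mechanism behind such expiring URLs, here is a conceptual sketch of HMAC-based pre-signing in the spirit of Swift temp URLs. The signing payload, parameter names, and key handling are illustrative assumptions, not Zaqar's actual format:

# A conceptual sketch of an expiring, tamper-evident URL built with an
# HMAC. Illustrates the mechanism only; Zaqar's real signature format
# and key handling are defined by the service, not by this snippet.
import hashlib
import hmac
import time

SECRET_KEY = b"shared-secret"  # assumed server-side signing key

def presign(method, path, lifetime=3600):
    """Return an expiring signed URL for a single queue path."""
    expires = int(time.time()) + lifetime
    payload = "%s\n%d\n%s" % (method, expires, path)
    sig = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return "%s?signature=%s&expires=%d" % (path, sig, expires)

def verify(method, path, sig, expires):
    """Recompute the signature and reject expired or forged URLs."""
    if int(expires) < time.time():
        return False
    payload = "%s\n%d\n%s" % (method, int(expires), path)
    expected = hmac.new(SECRET_KEY, payload.encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)

print(presign("POST", "/v1.1/queues/demo/messages"))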

I believe those are the two features that excite me the most from the Liberty cycle. But what excites me the most about this cycle is that we have other services using Zaqar, and that will allow us to improve our service a lot.

R: Looking forward to the future, is there anything that you would like to see in the M cycle? What is the next big thing for Zaqar?

F: In the M cycle, I still see us working on having more projects consume Zaqar. There are several use cases that we've talked about that are not yet taken care of in the community. For instance, talking to guest agents. We have several services that need to have an agent running in the instances - we can talk about Trove, we can talk about Sahara and Murano. We are looking forward to addressing that use case, which is what we built pre-signed URLs for. I'm not sure we're going to make it in Liberty, because we're already on the last milestone of the cycle, but we'll still try. If we can't make it in Liberty, that's definitely one of the topics we'll need to dedicate time to in the M cycle.

But at a higher level, I would really like to see a better story for Zaqar in terms of operations support and deployment - making it very simple for people to go there and say: I want Zaqar, this is all I need, I have my Puppet manifests, or Ansible playbooks, or whatever people are using now. We want to address that area, which we haven't paid much attention to. There is already some effort in the Puppet community to create manifests for Zaqar, which is amazing. We want to complete that work; we want to tell operators: hey, you don't have to struggle to make that happen, you don't have to struggle to run Zaqar, this is all you need.

And the second thing that I would like to see Zaqar doing in the future is to have a stronger opinion about the storage it relies on. So far we have support for two storages that are unicode based, and there's a proposal to support a third, but what we would really like is a more opinionated choice of storage, so that we can build a better API, make it consistent and dependable, and provide specific features that are always supported: it shouldn't matter what storage you are using or how you deploy Zaqar, you should always get the same API, which is something that is not true right now. If you deploy Redis, for instance, you will not have support for FIFO queues, which are optional right now in the service. You won't be able to have them, because that's something related to the storage itself, and you don't get the same guarantees that you'd get with other storage. We want a single story that we can tell users, regardless of what storage they are using. This doesn't mean that operators cannot use their own storage. If you deploy Zaqar and you really want to use a different storage, that's fine; we're not going to remove pluggability from the service. But in terms of support, I would like Zaqar to be more opinionated.

by rbowen at August 12, 2015 01:23 PM
