January 19, 2017

OpenStack Superuser

How a small cloud team supports big ideas

When science researchers at the University of Zurich need help with data analysis, a boost in computational power, more storage or help optimizing their workflow, they turn to the Service and Support for Science IT unit, or S3IT. The team also serves research groups and offers services for international projects.

S3IT adopts free and open-source solutions wherever possible, including Python, Linux and OpenStack. Their GitHub repositories are available at https://github.com/gc3-uzh-ch and https://github.com/uzh.

Superuser talked to Hanieh Rajabi, cloud engineer at S3IT, to find out more.

Tell us more about the team and where OpenStack comes in.

I started working on OpenStack after joining the S3IT group in April 2016.

S3IT supports University of Zurich researchers and research groups in using IT to support their research, from consultancy to app support and access to cutting-edge cloud (OpenStack-based), cluster and supercomputing systems.

Our OpenStack infrastructure is quite big: it offers 13,184 vCPUs and 64 TB of RAM. We’re a team of four running this infrastructure. As a cloud engineer, I run day-to-day operations on OpenStack.

I also do training for our OpenStack users. Each month we give a quick introduction session for new users joining the cloud, and every three months I run OpenStack training, teaching researchers how to use Horizon to manage their cloud resources.

The Barcelona Summit was your first — what’s your biggest takeaway?

It was really interesting for me. I was really impressed with how huge this community is and how many different topics and concurrent tracks there are. At the Summit it’s really easy to understand the value of OpenStack; it’s amazing how enterprises are investing in open-source software and in a community of open-source developers. When you start working with OpenStack in your office, you can’t imagine how big and alive the community behind the project is.

It was really difficult to choose which talks to attend, as there was often more than one interesting talk in the same time slot. I really liked the troubleshooting sessions, where I learned how to debug Neutron problems. The upgrades topic was important for me as I was engaged in upgrading to Mitaka at the time. Attending sessions with lots of tips about upgrades was really helpful.

Beyond routine operations, I was also interested in the Manila and Swift talks, including the one on Swift encryption using Barbican, and I enjoyed the Barbican workshop a lot.

To keep myself up to date on containers, I followed a lot of sessions about Kubernetes and using it as an OpenStack control plane.

Why do you think it’s important for women to get involved with OpenStack?

It is true that the number of men and women in OpenStack, and in IT in general, is unbalanced. We need more women to join the community. Women need not be afraid to show their talent for this cutting-edge technology.

I think that when the community is more balanced, it will be easier for people of any gender to join.

What obstacles do women face when getting involved in the OpenStack community?

They are always in the minority, and sometimes that just does not work. Being the only woman at your workplace sometimes makes you feel shy.

There are always debates in the OpenStack community. Which one is the most important now, in your opinion?

I think that IPv6 support for production clouds is still not completely there. This topic came up both at the round-table session and at some specific Neutron presentations. With IPv4, the network design is different, because the floating IP abstraction with NAT leads to little interaction between Neutron and the physical network gear.

With IPv6 routing, it’s absolutely necessary to have a common routing protocol between the physical network gear of the data center and the Neutron virtual routers. However, the discussion is still ongoing about how these IPv6 routing features are going to be implemented, and so far I have seen very little adoption of IPv6 in production clouds. Even people who are running IPv6 in production have to coordinate configuration between the physical network equipment of the data center and OpenStack Neutron to get routing working correctly.

How do you stay on top of things and/or learn more?

I try to read the OpenStack operators mailing list to be updated with the current discussions. I also try to be active in Switzerland, attending the OpenStack meetups. It’s great to attend the meetups to learn from other OpenStack operators. Last but not least, at the university we have quite a dynamic environment where learning and running new experiments/technology is always welcome.

This post is part of the Women of OpenStack interview series to spotlight women in various roles within our community who have helped make OpenStack successful. With each post, we learn more about each woman’s involvement in the community and how they see the future of OpenStack taking shape. If you’re interested in being featured, please email editor@openstack.org.

The post How a small cloud team supports big ideas appeared first on OpenStack Superuser.

by Nicole Martinelli at January 19, 2017 12:47 PM

January 18, 2017

Cloudwatt

5 Minutes Stacks, episode 47 : Drupal Commons

Episode 47 : Drupal Commons


Drupal Commons is an Enterprise Social Network allowing relationships between employees of a company inside communities.

Thanks to an active community of contributors, many plugins are available to add interactivity.

Its interface is friendly and usable by everybody without any difficulties.

Drupal Commons is fully developed in PHP and uses a MySQL database to store all its data.

Preparations

The Versions

  • CoreOS Stable 1185.5
  • Drupal Commons 7.x-3.40

The prerequisites to deploy this stack

These should be routine by now:

Size of the instance

By default, the stack deploys on an instance of type “Standard 1” (n1.cw.standard-1). A variety of other instance types exist to suit your various needs, allowing you to pay only for the services you need. Instances are charged by the minute and capped at their monthly price (you can find more details on the Pricing page on the Cloudwatt website).

Stack parameters, of course, are yours to tweak at your fancy.

By the way…

If you do not like command lines, you can go directly to the “run it through the console” section by clicking here

What will you find in the repository

Once you have cloned the GitHub repository, you will find the following in the blueprint-coreos-drupalcommons/ directory:

  • blueprint-coreos-drupalcommons.heat.yml: HEAT orchestration template. It will be used to deploy the necessary infrastructure.
  • stack-start.sh: Stack launching script. This is a small script that will save you some copy-paste.
  • stack-get-url.sh: Floating IP recovery script.

Start-up

Initialize the environment

Have your Cloudwatt credentials in hand and click HERE. If you are not logged in yet, you will go through the authentication screen, then the script download will start. Thanks to it, you will be able to initiate shell access towards the Cloudwatt APIs.

Source the downloaded file in your shell. Your password will be requested.

$ source COMPUTE-[...]-openrc.sh
Please enter your OpenStack Password:

Once this is done, the OpenStack command line tools can interact with your Cloudwatt user account.

Adjust the parameters

With the blueprint-coreos-drupalcommons.heat.yml file, you will find at the top a section named parameters. The sole mandatory parameter to adjust is the one called keypair_name. Its default value must contain a valid keypair with regards to your Cloudwatt user account. This is within this same file that you can adjust the instance size by playing with the flavor parameter.

heat_template_version: 2015-04-30


description: Blueprint Drupal Commons


parameters:
  keypair_name:
    description: Keypair to inject in instance
    label: SSH Keypair
    type: string

  flavor_name:
    default: n1.cw.standard-1
    description: Flavor to use for the deployed instance
    type: string
    label: Instance Type (Flavor)
    constraints:
      - allowed_values:
          - n1.cw.standard-1
          - n1.cw.standard-2
          - n1.cw.standard-4
          - n1.cw.standard-8
          - n1.cw.standard-12
          - n1.cw.standard-16

  sqlpass:
    description: password root sql
    type: string
    hidden: true

[...]

Start stack

In a shell, run the script stack-start.sh with its name as a parameter:

 ./stack-start.sh DrupalCommons
 +--------------------------------------+-----------------+--------------------+----------------------+
 | id                                   | stack_name      | stack_status       | creation_time        |
 +--------------------------------------+-----------------+--------------------+----------------------+
 | ee873a3a-a306-4127-8647-4bc80469cec4 | DrupalCommons   | CREATE_IN_PROGRESS | 2015-11-25T11:03:51Z |
 +--------------------------------------+-----------------+--------------------+----------------------+

Within 5 minutes the stack will be fully operational. (Use watch to see the status in real-time)

 $ watch heat resource-list DrupalCommons
 +------------------+-----------------------------------------------------+---------------------------------+-----------------+----------------------+
 | resource_name    | physical_resource_id                                | resource_type                   | resource_status | updated_time         |
 +------------------+-----------------------------------------------------+---------------------------------+-----------------+----------------------+
 | floating_ip      | 44dd841f-8570-4f02-a8cc-f21a125cc8aa                | OS::Neutron::FloatingIP         | CREATE_COMPLETE | 2015-11-25T11:03:51Z |
 | security_group   | efead2a2-c91b-470e-a234-58746da6ac22                | OS::Neutron::SecurityGroup      | CREATE_COMPLETE | 2015-11-25T11:03:52Z |
 | network          | 7e142d1b-f660-498d-961a-b03d0aee5cff                | OS::Neutron::Net                | CREATE_COMPLETE | 2015-11-25T11:03:56Z |
 | subnet           | 442b31bf-0d3e-406b-8d5f-7b1b6181a381                | OS::Neutron::Subnet             | CREATE_COMPLETE | 2015-11-25T11:03:57Z |
 | server           | f5b22d22-1cfe-41bb-9e30-4d089285e5e5                | OS::Nova::Server                | CREATE_COMPLETE | 2015-11-25T11:04:00Z |
 | floating_ip_link | 44dd841f-8570-4f02-a8cc-f21a125cc8aa-`floating IP`  | OS::Nova::FloatingIPAssociation | CREATE_COMPLETE | 2015-11-25T11:04:30Z |
 +------------------+-----------------------------------------------------+---------------------------------+-----------------+----------------------+

The stack-start.sh script takes care of running the API requests necessary to execute the normal Heat template (a rough manual equivalent is sketched after this list), which:

  • Starts a CoreOS-based instance with the Drupal Commons Docker container and the MySQL container
  • Exposes it on the Internet via a floating IP.
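
For reference, a rough manual equivalent of what the script runs (parameter names taken from the template above, values are placeholders) would be:

 $ heat stack-create DrupalCommons \
     -f blueprint-coreos-drupalcommons.heat.yml \
     -P keypair_name=your-keypair \
     -P flavor_name=n1.cw.standard-1 \
     -P sqlpass=your-database-password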

All of this is fine, but…

You do not have a way to create the stack from the console?

We do indeed! Using the console, you can deploy Drupal Commons:

  1. Go the Cloudwatt Github in the applications/blueprint-coreos-drupalcommons repository
  2. Click on the file named blueprint-coreos-drupalcommons.heat.yml
  3. Click on RAW, a web page will appear containing purely the template
  4. Save the file to your PC. You can use the default name proposed by your browser (just remove the .txt)
  5. Go to the « Stacks » section of the console
  6. Click on « Launch stack », then « Template file » and select the file you just saved to your PC, and finally click on « NEXT »
  7. Name your stack in the « Stack name » field
  8. Enter the name of your keypair in the « SSH Keypair » field
  9. Write a passphrase that will be used for the database drupalcommons user
  10. Choose your instance size using the « Instance Type » dropdown and click on « LAUNCH »

The stack will be automatically generated (you can see its progress by clicking on its name). When all modules become green, the creation is complete. You then have to wait about 5 minutes for the software to be ready. You can then go to the “Instances” menu to find the floating IP, or simply refresh the current page and check the Overview tab for a handy link.

If you’ve reached this point, you’re already done! Go enjoy Drupal Commons!

A one-click deployment sounds really nice…

… Good! Go to the Apps page on the Cloudwatt website, choose the app, press DEPLOY and follow the simple steps… 2 minutes later, a green button appears… ACCESS: you have your Enterprise Social Network.

Enjoy

Once all this is done, you can connect to your server via SSH using the keypair you downloaded earlier.
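
For example (the keypair path and floating IP are placeholders; the default user on this CoreOS image is core, as noted below):

 $ ssh -i ~/.ssh/your-keypair.pem core@<floating-ip>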

You are now in possession of Drupal Commons; you can reach it via the URL http://ip-floatingip. Your full URL is shown in your stack overview in the Horizon Cloudwatt console.

At your first connection you will be asked to provide information about your Enterprise Social Network and how to access the database. Complete the fields as shown below; the password is the one you chose when you created the stack.

(screenshot: first connection and database configuration form)

Now you have to set up the main information about your social network:

(screenshot: social network configuration form)

You can now set up your social network. It is hosted in France in a secure environment, so you can fully trust this product.

So watt?

The goal of this tutorial is to accelerate your start. At this point you are the master of the stack.

You now have an SSH access point to your virtual machine through the floating IP and your private keypair (default username: core).

  • You have access to the web interface via the address specified in your output stack in horizon console.

  • Here are some sites to learn more:

    • https://www.drupal.org/project/commons
    • https://docs.acquia.com/commons

Have fun. Hack in peace.

by Julien DEPLAIX at January 18, 2017 11:00 PM

Flavio Percoco

On communities: Trading off our values... Sometimes

Not long ago I wrote about how much emotions matter in every community. In that post I explained the importance of emotions, how they affect our work and why I believe they are relevant for pretty much everything we do. Emotions matter is a post quite focused on how we can affect, with our actions, other people's emotional state.

I've always considered myself an almost-thick skinned person. Things affect me but not in a way that would prevent me from moving forward. Most of the time, at least. I used to think this was a weakness; I used to think that letting these emotions through would slow me down. With time I came to accept it as a strength. Acknowledging this characteristic of mine has helped me to be more open about the relevance of emotions in our daily interactions and to be mindful of other folks that, like me, are almost-thick skinned or not even skinned at all. I've also come to question the real existence of the so-called thick-skinned people, and the more I interact with people, the more I'm convinced they don't really exist.

If you asked me what emotion hits me the most, I would probably say frustration. I'm often frustrated about things happening around me, especially about things that I am involved with. I don't spend time on things I can't change but rather try to focus on those that not only directly affect me but that I can also have a direct impact on.

At this point, you may be wondering why I'm saying all this and what it has to do with communities and with this post. Bear with me for a bit, I promise you this is relevant.

Culture (as explained in this post), emotions, personality and other factors drive our interactions with other team members. For some people, working in teams is easier than for others, although everyone claims they are awesome team mates (sarcasm intended, sorry). I believe, however, that one of the most difficult things about working with others is the constant evaluation of the things we value as team members, humans, professionals, etc.

There are no perfect teams and there are no perfect team mates. We weigh the relevance of our values every day, in every interaction we have with other people, in everything we do.

But, what values am I talking about here?

Anything, really. Anything that is important to us. Anything that we stand for and that has slowly become a principle for us, our modus operandi. Our values are our methods. Our values are those beliefs that silently tell us how to react under different circumstances. Our values tell us whether we should care about other people's emotions or not. Controversially, our values are the things that will and won't make us valuable in a team and/or community. Our values are not things we possess, they are things we are and believe. In other words, the things we value are the things we consider important, and they determine our behavior, our interaction with our environment and how the events happening around us affect us.

The constant trading off of our values is hard. It makes us question our own stances. What's even harder is putting other people's values on top of ours from time to time. This constant evaluation is not supposed to be easy; it's never been easy. Not for me, at least. Let's face it, we all like to be stubborn, it feels good when things go the way we like. It's easier to manage, it's easier to reason about things when they go our way.

Have you ever found yourself doing something that will eventually make someone else's work useless? If yes, did you do it without first talking with that person? How much value do you put into splitting the work and keeping other folks motivated instead of you doing most of it just to get it done? Do you think going faster is more important than having a motivated team? How do you measure your success? Do you base success on achieving a common goal or about your personal performance in the process?

Note that the questions above don't try to express an opinion. The answers to those questions can be two or more depending on your point of view, and that's fine. I don't even think there's a right answer to those questions. However, they do question our beliefs. Choosing one option over the other may go in favor of or against what we value. This is true for many areas of our life, not only our work environment. This applies to our social life, our family life, etc.

Some values are easier to question than others, but we should all spend more time thinking about them. I believe the time we spend weighing and re-evaluating our values allows us to adapt faster to new environments and to grow as individuals and communities. Your cultural values have a great influence on this process. Whether you come from an individualist culture or a collectivist one (listen to 'Customs of the World' for more info on this) will make you prefer one option over the other.

Of course, balance is the key. Giving up our beliefs every time is not the answer, but not giving them up ever is definitely frustrating for everyone and makes interactions with other cultures more difficult. There are things that cannot be traded and that's fine. That's understandable, that's human. That's how it should be. Nonetheless, there are more things that can be traded than there are things that you shouldn't give up. The reason I'm sure of this is that our world is extremely diverse and we wouldn't be where we are if we weren't able to give up some of our own beliefs from time to time.

I don't think we should give up who we are, I think we should constantly evaluate if our values are still relevant. It's not easy, though. No one said it was.

by Flavio Percoco at January 18, 2017 11:00 PM

Mirantis

Wind River and Mirantis Collaborate on OpenStack NFV Proof of Concept Project

The post Wind River and Mirantis Collaborate on OpenStack NFV Proof of Concept Project appeared first on Mirantis | The Pure Play OpenStack Company.

As part of both companies’ commitment to industry standards and interoperability, Wind River and Mirantis recently completed a joint Proof of Concept interoperability project at Wind River’s Network Functions Virtualization (NFV) lab in Santa Clara, California.

The goal of the project was modest: to demonstrate that Wind River’s Titanium Server Carrier Grade software virtualization platform could be deployed in federation with the latest, most advanced version of Mirantis Pure Play Web-Scale OpenStack distribution.

As expected, this goal was readily achieved, proving: A) the significance and importance of adhering to open, standard interfaces; B) the value of healthy ‘coopetition’ for our respective customers and the industry as a whole.

Here are some specifics surrounding the project:

  • Hardware Baseline: Dual socket, Intel Xeon E5 Servers (provided by several Titanium Cloud H/W partners)
  • Wind River Software Baseline: Titanium Server Release 3
  • Mirantis Software Baseline: Mirantis OpenStack 9.1 + Ubuntu 14.04
  • Project Configuration:
    • Mirantis OpenStack installed as the primary OpenStack Region across one set of servers
    • Titanium Server installed as a secondary OpenStack Region for high performance & high reliability workloads across a second, separate set of servers
    • The Mirantis Region hosted OpenStack Keystone services (user identities and credentials) which were shared with the Titanium Server Region
    • Once installed and operational, the Horizon dashboards of both systems were able to see and administer resources in either Region. For example, using the Mirantis dashboard, users could view and manage the Titanium Server virtual resources and workloads together with the native Mirantis Region virtual resources and workloads. Similarly, Titanium Server dashboard users could see and manage resources in either the Titanium Server Region or the Mirantis Region.

The results of this project are extremely important and powerful for the end user. Having the ability to manage an entire cloud, containing different types of workloads, from a single user interface is fantastic. By deploying and taking advantage of the shared services built into OpenStack and enabled through OpenStack Regions, users are able to choose the software platform which best meets the needs and SLAs of their applications and services, without sacrificing ease of use and manageability.
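
Purely as an illustration of the mechanism (the region name and URL below are invented, not taken from the PoC), registering a secondary region's compute endpoint in a shared Keystone looks roughly like this:

 $ openstack endpoint create --region TitaniumRegion \
     compute public http://titanium-controller.example.com:8774/v2.1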

Technical accomplishments aside, this project has shown that together, Wind River and Mirantis have the willingness and capability to leverage their respective strengths to the benefit of their customers. This is part of the original promise of NFV, and it is impressive to actually see it put into practice!

(Originally published on the Wind River blog.)

The post Wind River and Mirantis Collaborate on OpenStack NFV Proof of Concept Project appeared first on Mirantis | The Pure Play OpenStack Company.

by Guest Post at January 18, 2017 06:06 PM

NFVPE @ Red Hat

Bootstrap a kpm registry to run a kpm registry

Yo dawg… I heard you like kpm-registries. So I bootstrapped a kpm-registry so you can deploy a kpm-registry from a kpm-registry.

by Doug Smith at January 18, 2017 02:28 PM

OpenStack Superuser

Here’s what you need to know about OpenStack’s Stewardship Working Group

Stewardship is defined as the careful and responsible management of something entrusted to one’s care. OpenStack Foundation community members formed a Stewardship Working Group (SWG) to give “people at the bottom and the boundaries of the organization choice over how to serve a customer, a citizen, a community.”

The group grew out of what Colette Alexander calls “sticky people problems” and the conversations she had with OpenStack leadership around them. Some of the common sticky bits will be familiar to anyone with an online life — such as flame wars and shutting the door on discussions to put them out.

Alexander, named “hero of the people” at the most recent Summit, is part of the SWG and presented it to the larger community in Barcelona on a panel that included Monty Taylor, Thierry Carrez and Doug Hellmann.

Here she breaks down the SWG mission and how you can get involved.

You were given an OpenStack Contributor medal in Barcelona – tell us a bit about how you manage the dual action of pushing projects forward (training etc.) and yet keeping a light tone

That panel talk was so much fun to do! I think that’s really the key, honestly, to a light tone – remembering that this is about talking and working with a pretty awesome group of people, and about having fun.  Even though some of the stuff we talk about is pretty thorny (and not always even “solvable” in the sense that engineers like to experience solutions) the subject matter – helping people to communicate better, plan better and feel better about their work – necessitates that we approach it with a positive attitude, and with care for everyone involved.

What’s the mission of the stewardship working group?

It’s probably helpful to read the resolution that created the SWG over here: https://governance.openstack.org/tc/resolutions/20160705-stewardship.html

But the tl;dr is that the OpenStack Technical Committee saw a need for improving leadership and communication tools and practices across the community and established the SWG as a way of examining, vetting, and providing recommendations to them, as well as resources to the community related to those themes.

Who should get involved?

Everyone. (I mean that, seriously – Users! Product people! Executives! Managers of OpenStack developers! And of course, the developers themselves!)

What are the most pressing “people problems” in the OpenStack community?

They’re the same pressing “people problems” everywhere, I think – communicating clearly, resolving conflict, and providing information, resources, and generally available help to anyone who wants to join the community or step into a new role in it – these are difficult problems everywhere.

I talked a little bit in Barcelona about how sometimes the community can be very conflict avoidant. The question I asked the audience was: “Who here has walked away from a code review or a mailing list post feeling uneasy about something or disagreeing with something, but without actually saying anything or addressing the problem?” We all raised our hands (including me!). I think starting to have conversations that seem uncomfortable or potentially involve conflict can be really difficult if we’re already in the habit of avoiding them.


What are some of the solutions you’re working on?

Well, we’ve already advised the TC on a couple of pieces of work they’ve produced: the goals for the release cycle was already on their radar before our formation, but some of the refinement of the goals process and work has happened with the help of the SWG. That’s also true with the discussion and writing of OpenStack Principles.

It should absolutely be noted that both of those actions resulted in quite a clamor on the mailing list and within the community. Some folks even came to the SWG cross project session in Barcelona to discuss their angst about those two things. The TC and the SWG were very receptive to feedback in those sessions (and still welcome it!) – and I think we all learned quite a bit during those initial months of work on how we can improve things moving forward.

What are some of the resources you’re collecting for how to help people be better PTLs minus the burnout?

One of the things that’s been on our backlog for a while now is the idea of an OpenStack leadership passport – a kind of checklist of activities or recommended reading that can help folks as they transition into different kinds of leadership positions in the community.

You can check off or “stamp” your passport as you complete things, and also have a place to write and reflect on what you’ve learned. The idea of a passport is to make explicit what sorts of activities really make a difference to effectively working within the community and leading a group of people here.

What contributions or involvement do you need most right now from the community?

We need meeting attendance! And participation in conversation on the mailing list! And on our IRC channel: #openstack-swg

More than anything, we want to hear what people think we should work on and solve, and what people are interested in working on themselves, even if they just have a spare half hour a week. Some items, like the passport I mentioned above, are going to take some research with various projects and leaders across the community to compile and put forward, so anyone who’s willing to have conversations and do some research and work on this is very much welcome and appreciated!

What are your upcoming plans for the Summit or this release cycle?

Well, one of the things we’re going to get to work on, by request of the TC, is facilitating the creation of a technical vision for OpenStack. Per my intro email about the SWG and Thierry’s response (you can see that here: http://lists.openstack.org/pipermail/openstack-dev/2016-December/108662.html ), it seems like that’s the most pressing thing the TC would like some assistance on.

I’m also working on organizing another leadership training, sponsored by the Foundation, at ZingTrain. Tentatively, we’re looking at the week of April 10th for that and I’m looking forward to seeing TC members who haven’t participated in training yet, as well as many other members of the community there.

We’re still putting together a vision of what we’d like to see by the time of the PTG in Atlanta, and we’ll be talking about work for both the PTG and the Boston Summit in the coming month at our bi-weekly meetings: https://wiki.openstack.org/wiki/Meetings/SWGMeeting

Anything else you want people to know?

I think OpenStack has a really amazing opportunity to lead in the open source community space by providing an environment that fosters leadership and stewardship among its members. There aren’t that many other communities with such democratic principles in their governance models and structures, and that means that we have a lot of really cool strengths to play on.

The post Here’s what you need to know about OpenStack’s Stewardship Working Group appeared first on OpenStack Superuser.

by Nicole Martinelli at January 18, 2017 02:10 PM

Galera Cluster by Codership

Taking Full Advantage of Galera Multi-Master Replication: Galera Cluster Resources Updated

Last year Codership produced a lot of valuable content to help Galera users get started with Galera and manage Galera. We have gathered the resources on our website.

The Taking Full Advantage of Galera Multi-Master Replication video can be watched here.

We have also uploaded many new presentations to Slideshare. Check them out!

The best source of multi-master Galera Cluster help, the Galera Cluster documentation, is being updated constantly.

by Sakari Keskitalo at January 18, 2017 02:10 PM

Red Hat Stack

9 tips to properly configure your OpenStack Instance

In OpenStack jargon, an Instance is a Virtual Machine, the guest workload. It boots from an operating system image, and it is configured with a certain amount of CPU, RAM and disk space, amongst other parameters such as networking or security settings.

In this blog post, kindly contributed by Marko Myllynen, we’ll explore nine configuration and optimization options that will help you achieve the required performance, reliability and security for your workloads.

Some of the optimizations can be done inside a guest regardless of what the OpenStack Cloud Administrator has enabled in your cloud. However, more advanced options require prior enablement and, possibly, special host capabilities. This means many of the options described here will depend on how the Administrator configured the cloud, or may not be available for some tenants as they are reserved for certain groups. More information about this subject can be found on the Red Hat Documentation Portal and its comprehensive guide on the OpenStack Image Service. Similarly, the upstream OpenStack documentation has some extra guidelines available.

The following configurations should be evaluated for any VM running on any OpenStack environment. These changes have no side effects and are typically safe to enable even if unused.


1) Image Format: QCOW or RAW?

OpenStack storage configuration is an implementation choice by the Cloud Administrator, often not fully visible to the tenant. Storage configuration may also change over time without explicit notification by the Administrator, as he/she adds capacity with different specs.

When creating a new instance on OpenStack, it is based on a Glance image. The two most prevalent and recommended image formats are QCOW2 and RAW. QCOW2 images (from QEMU Copy On Write) are typically smaller in size. For instance, for a server with a 100 GB disk, where the image is 100 GB in RAW format, the image might be only 10 GB when converted into QCOW2. Regardless of the format, it is a good idea to process images before uploading them to Glance with virt-sysprep(1) and virt-sparsify(1).

The performance of QCOW2 depends on both the hypervisor kernel and the format version, the latest being QCOW2v3 (sometimes referred to as QCOW3) which has better performance than the earlier QCOW2, almost as good as RAW format. In general we assume RAW has better overall performance despite the operational drawbacks (like the lack of snapshots) or the increase in time it takes to upload or boot (due to its bigger size). Our latest versions of Red Hat OpenStack Platform automatically use the newer QCOW2v3 format (thanks to the recent RHEL versions) and it is possible to check and also convert between RAW and older/newer QCOW2 images with qemu-img(1).
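
As a rough sketch (file names are placeholders), checking an image's format and converting between RAW and QCOW2v3 could be done with:

qemu-img info server.qcow2                                              # shows format and compat level
qemu-img convert -f raw -O qcow2 -o compat=1.1 server.raw server.qcow2  # RAW to QCOW2v3
qemu-img convert -f qcow2 -O raw server.qcow2 server.raw                # QCOW2 back to RAW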

OpenStack instances can either boot from a local image or from a remote volume. That means

  • Image-backed instances benefit significantly by the performance difference between older QCOW2 vs QCOW2v3 vs RAW.
  • Volume-backed instances can be created either from QCOW2 or RAW Glance images. However, as Cinder backends are vendor-specific (Ceph, 3PAR, EMC, etc.), they may not use QCOW2 or RAW. They may have their own mechanisms, like dedup, thin provisioning or copy-on-write. On a particular note, using QCOW2 in Glance with Ceph is not supported (see the Ceph documentation and BZ#1383014).


As a general rule of thumb, rarely used images should be stored in Glance as QCOW2, but for an image that is constantly used to create new (locally stored) instances, or for any volume-backed instances, RAW should provide better performance despite the sometimes longer initial boot time (except on Ceph-backed systems, thanks to their copy-on-write approach). In the end, any actual recommendation will depend on the OpenStack storage configuration chosen by the Cloud Administrator.

2) Performance Tweaks via Image Extra Properties

Since the Mitaka version, OpenStack allows Nova to automatically optimize certain libvirt and KVM properties on the Compute host to better execute a particular OS in the guest. To provide the guest OS information to Nova, just define the following Glance image properties:

  • os_type=linux # Generic name, like linux or windows
  • os_distro=rhel7.1 # Use osinfo-query os to list supported variants

Additionally, at least for the time being (see BZ#1397120), in order to make sure the newer and more scalable virtio-scsi para-virtualized SCSI controller is used instead of the older virt-blk, the following properties need to be set explicitly:

  • hw_scsi_model=virtio-scsi
  • hw_disk_bus=scsi

All the supported image properties are listed at the Red Hat Documentation portal, as well as other CLI options.
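
For illustration (the image name below is a placeholder), these properties could be set on an existing Glance image with the OpenStack client:

openstack image set \
  --property os_type=linux \
  --property os_distro=rhel7.1 \
  --property hw_scsi_model=virtio-scsi \
  --property hw_disk_bus=scsi \
  my-rhel7-image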

3) Prepare for Cloud-init

“Cloud-init” is a package used for early initialization of cloud instances, to configure basics like partition / filesystem size and SSH keys.

Ensure that you have installed the cloud-init and cloud-utils-growpart packages in your Glance image, and that the related services will be executed on boot, to allow the execution of “cloud-init” configurations to the OpenStack VM.

In many cases the default configuration is acceptable but there are lots of customization options available, for details please refer to the cloud-init documentation.
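
As a minimal illustrative sketch (all values below are placeholders, not part of the original post), a cloud-config user-data snippet passed at instance boot could look like this:

#cloud-config
ssh_authorized_keys:
  - ssh-rsa AAAAB3Nza... user@example.com
growpart:
  mode: auto
  devices: ['/']
package_update: true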

4) Enable the QEMU Guest Agent

On Linux hosts, it is recommended to install and enable the QEMU guest agent which allows graceful guest shutdown and (in the future) automatic freezing of guest filesystems when snapshots are requested, which is a necessary operation for consistent backups (see BZ#1385748):

  • yum install qemu-guest-agent
  • systemctl enable qemu-guest-agent

In order to provide the needed virtual devices and use the filesystem freezing functionality when needed, the following properties need to be defined for Glance images (see also BZ#1391992):

  • hw_qemu_guest_agent=yes # Create the needed device to allow the guest agent to run
  • os_require_quiesce=yes # Accept requests to freeze/thaw filesystems

5) Just in case: how to recover from guest failure

Comprehensive instance fault recovery, high availability, and service monitoring requires a layered approach which as a whole is out of scope for this document. In the paragraphs below we show the options that can be applicable purely inside a guest (which can be thought as being the innermost layer). The most frequently used fault recovery mechanisms for an instance are:

  • recovery from kernel crashes
  • recovery from guest hangs (which do not necessarily involve kernel crash/panic)

In the rare case the guest kernel crashes, kexec/kdump will capture a kernel vmcore for further analysis and reboot the guest. In case the vmcore is not wanted, the kernel can be instructed to reboot after a kernel crash by setting the panic kernel parameter, for example “panic=1”.
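
On RHEL/CentOS guests, one way to add this parameter to all installed kernels is with grubby (a sketch, assuming grubby is available in the guest image):

grubby --update-kernel=ALL --args="panic=1"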

In order to reboot an instance after other unexpected behavior, for example high load over a certain threshold or a complete system lockup without a kernel panic, the watchdog service can be utilized. Other actions than “reboot” can be found here. The following property needs to be defined for Glance images or Nova flavors.

  • hw_watchdog_action=reset

Next, install the watchdog package inside the guest, configure the watchdog device, and finally enable the service:

  • yum install watchdog
  • vi /etc/watchdog.conf
  • systemctl enable watchdog

By default watchdog detects kernel crashes and complete system lockups. See the watchdog.conf(5) man page for more information, e.g., how to add guest health-monitoring scripts as part of watchdog functionality checks.
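
As an illustrative sketch (the values below are examples, not recommendations), /etc/watchdog.conf could contain:

# use the virtual watchdog device created thanks to hw_watchdog_action
watchdog-device = /dev/watchdog
# seconds between checks
interval = 10
# reboot if the 1-minute load average exceeds this value
max-load-1 = 24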

6) Tune the Kernel

The simplest way to tune a Linux node is to use the “tuned” facility. It’s a service which configures dozens of system parameters according to the selected profile, which in the OpenStack case is “virtual-guest”. For NFV workloads, Red Hat provides a set of NFV tuned profiles to simplify the tuning of network-intensive VMs.

In your Glance image, it is recommended to install the required package, enable the service on boot, and activate the preferred profile. You can do it by editing the image before uploading to Glance, or as part of your cloud-init recipe:

  • yum install tuned
  • systemctl enable tuned
  • tuned-adm profile virtual-guest

7) Improve networking via VirtIO Multiqueuing

Guest kernel virtio drivers are part of the standard RHEL/Linux kernel package and enabled automatically without any further configuration as needed. Windows guests should also use the official virtio drivers for their particular Windows version, greatly improving network and disk IO performance.

However, recent advances in network packet processing, both in the Linux kernel and in user-space components, have created a myriad of extra options to tune or bypass the virtio drivers. Below you’ll find an illustration of the virtio device model (from the RHEL Virtualization guide).

Network multiqueuing, or virtio-net multi-queue, is an approach that enables parallel packet processing to scale linearly with the number of available vCPUs of a guest, often providing notable improvement to transfer speeds especially with vhost-user.

Provided that the OpenStack Administrator has provisioned the virtualization hosts with the supporting components installed (at least OVS 2.5 / DPDK 2.2), this functionality can be enabled by the OpenStack tenant with the following property on those Glance images where we want network multiqueuing:

  • hw_vif_multiqueue_enabled=true

Inside a guest instantiated from such an image, the NIC channel setup can be checked and changed as needed with the commands below:

  • ethtool -l eth0 #to see the current number of queues
  • ethtool -L eth0 combined <nr-of-queues> # to set the number of queues. Should match the number of vCPUs

There is an open RFE to implement multi-queue activation by default in the kernel, see BZ#1396578.

(Illustration: the virtio device model, from the RHEL Virtualization guide)

8) Other Miscellaneous Tuning for Guests

It should go without saying that right-sized instances should contain only the minimum amount of installed packages and run only the services needed. Of particular note, it is probably a good idea to install and enable the irqbalance service: although not absolutely necessary in all scenarios, its overhead is minimal and it should be used, for example, in SR-IOV setups (this way the same image can be used regardless of such lower level details).

Even though implicitly set on KVM, it is a good idea to explicitly add the kernel parameter no_timer_check to prevent issues with timing devices. Enabling persistent DHCP client and disabling zeroconf route in network configuration with PERSISTENT_DHCLIENT=yes and NOZEROCONF=yes, respectively, helps to avoid networking corner case issues.
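
For example, on a RHEL/CentOS guest the network settings above would typically go into the interface configuration file (an illustrative excerpt, assuming the device is eth0):

# /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
BOOTPROTO=dhcp
ONBOOT=yes
PERSISTENT_DHCLIENT=yes
NOZEROCONF=yes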

Guest MTU settings are usually adjusted correctly by default, but having a proper MTU in use on all levels of the stack is crucial to achieve maximum network performance. In environments with 10G (and faster) NICs this typically means the use of Jumbo Frames with MTU up to 9000, taking possible VXLAN encapsulation into account. For further MTU discussion, see the upstream guidelines for MTU or the Red Hat OpenStack Networking Guide.

9) Improving the way you access your instances

Although some purists may consider running SSH inside truly cloud-native instances an anti-pattern, especially in auto-scaling production workloads, most of us will still rely on good old SSH to perform configuration tasks (via Ansible, for instance) as well as maintenance and troubleshooting (e.g., to fetch logs after a software failure).

The SSH daemon should avoid DNS lookups to speed up establishing SSH connections. For this, consider using UseDNS no in /etc/ssh/sshd_config and adding OPTIONS=-u0 to /etc/sysconfig/sshd (see sshd_config(5) for details on these). Setting GSSAPIAuthentication no could be considered if Kerberos is not in use. In case instances frequently connect to each other, the ControlPersist / ControlMaster options might be considered as well.
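
Put together, the SSH-related settings above look roughly like this (illustrative excerpts of the two files mentioned):

# /etc/ssh/sshd_config
UseDNS no
GSSAPIAuthentication no

# /etc/sysconfig/sshd -- extra options passed to the sshd daemon
OPTIONS=-u0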

Typically remote SSH access and console access via Horizon are enough for most use cases. During the development phase, direct console access from the Nova compute host may also be helpful. For this to work, enable serial-getty@ttyS1.service, allow root access via ttyS1 if needed by adding ttyS1 to /etc/securetty, and then access the guest console from the Nova compute host with virsh console <instance-id> --devname serial1.


We hope this blog post has helped you discover new ways to improve the performance of your OpenStack instances. If you need more information, remember we have tons of documents in our OpenStack Documentation Portal and that we offer the best OpenStack courses in the industry, starting with the free of charge CL010 Introduction to OpenStack course.

by Marko Myllynen at January 18, 2017 02:00 PM

Sean Roberts

TODO Open Source Presentation 17 January 2017

The primary purpose of the TODO group is to bring together companies who run open source programs. They believe there are a number of challenges for companies who want to run open source projects...

by sarob at January 18, 2017 12:00 AM

January 17, 2017

OpenStack Superuser

How to get a job working with OpenStack

Specialists still rule the IT world, but generalists with the emotional intelligence to learn new things are in increasing demand.

That’s one of the takeaways from Michael Apostol, software engineer director at the OpenStack Innovation Center. In an interview with Superuser TV, he talks about what he looks for as a hiring manager and how potential candidates can stand out. He also examines the role of certifications and training programs and how people looking jobs in OpenStack can take advantage of them.

(Video: https://www.youtube.com/embed/_KrlCa2bnJw)

For more information on OpenStack training courses, please visit:

The post How to get a job working with OpenStack appeared first on OpenStack Superuser.

by Superuser at January 17, 2017 12:56 PM

Sean Roberts

Public Product Management Transforms Private at Walmart

Those of you who know me know that I am more of an engineer than a product guy. Even so, I was hired on to transform the Walmart Platform Products using my engineering...

by sarob at January 17, 2017 12:00 AM

January 16, 2017

RDO

Writing RPM macro for OpenStack

An RPM macro is a short string, always prefixed by % and generally surrounded by curly brackets ({}), which RPM will convert to a different and usually longer string. Some macros can take arguments and some can be quite complex.

In RHEL, CentOS and Fedora, macros are provided by the rpm package and by redhat-rpm-config.

In RDO, OpenStack macros are provided by openstack-macros, which comes from the upstream rpm-packaging project.

You can find the list of all macros under the /usr/lib/rpm/macros.d/ directory.

To see the list of all available macros on your system:

$ rpm --showrc

For example, %{_bindir} is an RPM macro which points to the binary directory where executables are usually stored.

To evaluate an rpm macro:

$ rpm --eval %{_bindir}

%py_build is a commonly used RPM macro in RDO OpenStack packages which points to the python setup.py build process.

$ rpm --eval %py_build

Motivation behind writing a new RPM macro for OpenStack packages

Currently, Tempest provides an external test plugin interface which enables anyone to integrate an external test suite as part of a Tempest run. Each service's Tempest plugin has an entrypoint defined in setup.cfg through which Tempest discovers and lists the plugin. For example:

tempest.test_plugins =
    heat_tests = heat_integrationtests.plugin:HeatTempestPlugin

In RDO OpenStack service RPM packages, in-tree Tempest plugins are provided by the openstack-{service}-tests subpackage, but the Tempest plugin entrypoint is provided by the main package openstack-%{service}. So if you have a working OpenStack environment with Tempest installed but without the tests subpackage, running Tempest commands would fail with an error like "No module heat_integrationtests.plugin found", and you would end up installing a lot of packages to fix it. The basic reason for the error is that the Tempest plugin entry point is installed by the main OpenStack package, but the files it points to are not found.

To fix this issue, we decided to separate the Tempest plugin entrypoint from the main package and move it to the openstack-{service}-tests subpackage during the rpmbuild process, by creating a fake Tempest plugin entry point for all RDO service packages. Since it is a massive and similar change affecting all OpenStack service packages, I created the %py2_entrypoint macro, which is available in the OpenStack Ocata release.

Here is the macro definition of %py2_entrypoint:

# Create a fake tempest plugin entry point which will
# reside under %{python2_sitelib}/%{service}_tests.egg-info.
# The prefix is %py2_entrypoint %{modulename} %{service}
# where service is the name of the openstack-service or the modulename
# It should be used under the %install section
# the generated %{python2_sitelib}/%{service}_tests.egg-info
# will go under %files section of tempest plugin subpackage
# Example: %py2_entrypoint %{modulename} %{service}
# In most of the cases %{service} is same as %{modulename}
# but in case of neutron plugins it is different
# like servicename is neutron-lbaas and modulename is neutron_lbass
%py2_entrypoint() \
egg_path=%{buildroot}%{python2_sitelib}/%{1}-*.egg-info \
tempest_egg_path=%{buildroot}%{python2_sitelib}/%{1}_tests.egg-info \
mkdir $tempest_egg_path \
grep "tempest\\|Tempest" %{1}.egg-info/entry_points.txt >$tempest_egg_path/entry_points.txt \
sed -i "/tempest\\|Tempest/d" $egg_path/entry_points.txt \
cp -r $egg_path/PKG-INFO $tempest_egg_path \
sed -i "s/%{2}/%{1}_tests/g" $tempest_egg_path/PKG-INFO \
%nil
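
A hypothetical spec file excerpt (package and module names are placeholders) showing where the macro is invoked and where the generated egg-info gets packaged could look like this:

%install
%py2_install
# split the Tempest plugin entry point out of the main egg-info
%py2_entrypoint %{modulename} %{service}

%files -n openstack-%{service}-tests
%{python2_sitelib}/%{modulename}_tests.egg-info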

Here is the list of tempest-plugin-entrypoint reviews.

Some learnings from the above macro:

[1.] You can use shell script or the Lua language to write macros.

[2.] %define <macroname> is used to define a macro in a spec file, or you can directly place the macro in /usr/lib/rpm/macros.d/macros.openstack-rdo to consume it during the rpmbuild process.

[3.] Use %nil to mark the end of the macro.

[4.] Use %{1} to %{6} to pass arguments to macros.

The above is a temporary solution. We are working upstream to separate the Tempest plugins from the OpenStack projects into new repos for easier management and packaging in the Pike release: https://review.openstack.org/#/c/405416/.

Thanks to Daniel, Alan, Haikel and many others on the #rdo channel for getting the work done. It was a great learning experience.

by chandankumar at January 16, 2017 04:36 PM

Carlos Camacho

TripleO deep dive session #7 (Undercloud - TripleO UI)

This is the seventh release of the TripleO “Deep Dive” sessions.

In this session Liz Blanchard and Ana Krivokapic will give us some bits about how to contribute to the TripleO UI project. After checking this session, you will have a general overview of the project’s history, properties, architecture and contribution steps.

So please, check the full session content on the TripleO YouTube channel.

(Video: https://www.youtube.com/embed/9TseONVfLR8)

Here you will be able to see a quick overview about how to install the UI as a development environment.

(Video: https://www.youtube.com/embed/1puSvUqTKzw)

The summarized steps are also available in this blog post.

Sessions index:

    * TripleO deep dive #1 (Quickstart deployment)

    * TripleO deep dive #2 (TripleO Heat Templates)

    * TripleO deep dive #3 (Overcloud deployment debugging)

    * TripleO deep dive #4 (Puppet modules)

    * TripleO deep dive #5 (Undercloud - Under the hood)

    * TripleO deep dive #6 (Overcloud - Physical network)

    * TripleO deep dive #7 (Undercloud - TripleO UI)

by Carlos Camacho at January 16, 2017 04:00 PM

OpenStack Superuser

Why you should hire upstream

Transforming a team to work on open source software is a complex task. Not particularly because of licensing or knowing the underlying technology a given community is using, but due to social and cultural barriers that can be difficult to overcome if everyone in the team is new to this environment. That’s why you need to plan for upstream, first.

Why isn’t it enough to send people to training courses and provide them the necessary time and tools for the work?

The challenge that your developers are facing is similar to when you hire a new team member who needs to learn how to be part of this new group. In an open-source community like OpenStack, the size of the core teams can be similar to proprietary environments, but when someone expresses their thoughts and ideas, thousands of people outside the company will see it and can react to it. This can easily scare people from participating in open discussions, sending mails to mailing lists or uploading new code or a documentation snippet for review.

How can hiring experts help your teams and organization?

Having mentors on your teams is one of the best ways to help people new to OpenStack take their first steps in a supervised way. The best mentors will be people who are already experienced with the tools and social norms of the community. You can also be sure that these experts know the underlying technology as well, since they are already trusted and respected members of OpenStack.

By knowing the pace and the people, they will be able to help not just the work the teams are doing, but with planning and business strategy too. They can better predict what to expect in an upcoming release, how fast the development of a feature is going, where to invest more time and effort and what to do another way. All in all, it’s a great boost to your organization from people who know best!

These OpenStack veterans will also be able to influence the community better with the visibility they already have. They can use their insights to coach and mentor the people who need to work upstream, to give them a better community experience and help them engage in the best way to get the most out of their open-source efforts. They can review ideas before introducing them to people to ensure they are clear and understandable, giving a vital once-over from someone who knows what the community looks for.

By knowing the dynamics of the community better, they can also help your team to introduce features at the right time and help them understand the processes, timing and right way to plan downstream.

What else can they help with?

If you haven’t heard about InnerSource, now is the time to get familiar with the concept. It’s a strategic effort to take what you’ve learned about open-source software development and consider using it as part of your proprietary software development flow. You can experiment how to make your tools and processes more efficient while giving your team and organization an easier way and better environment to contribute their changes and ideas upstream.

Having experts embedded in your organization can help you find the starting points for all the transformational steps above. They already have the knowledge from participating and working upstream in OpenStack, and they have the internal view and deep understanding of this lively ecosystem, which will benefit your business and development efforts.

When you plan your headcount for the year, be strategic: look for expertise and experience to jump-start your OpenStack efforts! You can post your open positions on the following web site: https://www.openstack.org/community/jobs/

Some final reminders about why it makes sense to hire upstream first:

Have experts join your team

They are:

  • experienced with the technology and the community as well
  • able to better predict what’s next, thanks to their experience of the past
  • connected and visible in the community
  • able to teach/coach your team (embedded/internal mentors can be more available to you than outside/volunteer mentors)
  • able to help out with your upstream processes

Keep the Inner Source in mind

  • Consider using processes and technologies that are present in open source
  • Aiming for efficiency, better and faster integration
  • Having experts on board can help revolutionize your internal ways of working

The post Why you should hire upstream appeared first on OpenStack Superuser.

by ildiko Vancsa at January 16, 2017 01:48 PM

Alessandro Pilotti

OpenStack Newton Benchmarking – Scenario 3 (Hadoop)

We have seen how to setup the benchmarking environment and two simple scenarios (Scenario 1, Scenario 2) in the previous blogposts of this series. This time we are going to add Hadoop to the mix to test a workload with significant memory, CPU and I/O impact.

 

Scenario 3

 

The image used for this test is an Ubuntu Xenial 16.04.1 LTS (cloud image), which is available for download here. Hadoop has been configured to run in standalone mode (Single Node Cluster), with commands being sent via SSH from Rally over a tenant network. The official documentation on how to set up an Apache Hadoop 2.7.2 cluster (the latest stable version) can be found here.

 

This test consists of:

  • booting a VM from an image where Hadoop is already installed and configured. The flavor being used for each VM has 4096MB RAM, 40GB disk and 2 vCPUs.
  • waiting until the VM becomes active and is ready to process Hadoop jobs
  • executing three different Hadoop jobs:
    • TeraGen -> a map/reduce program to generate the data
    • TeraSort -> samples the input data generated by TeraGen and uses map/reduce to sort the data
    • TeraValidate -> a map/reduce program that validates that the output from TeraSort is properly sorted
  • deleting the VM

 

Part 1 – One VM in parallel

 

We started with one VM and the size of the input data for TeraGen set to 10.000.000 (the number of 100-byte rows).

  1. Results for KVM with Xenial Ubuntu 16.04.1 LTS (default kernel version 4.4.0-45-generic) as host operating system:
    [results chart: kvm-3-1]
  2. Results for Hyper-V with Windows Server 2012 R2 as host operating system:
    [results chart: hyperv-2012r2-3-1]
  3. Results for Hyper-V with Windows Server 2016 as host operating system:
    [results chart: hyperv-2016-3-1]

Remarks on the results obtained so far: the average time is approximately the same for both Hyper-V versions, with KVM being slightly slower than Hyper-V.

 

Part 2 – 10 VMs in parallel

 

The following results have been obtained by running the tests with a load of 10 VMs in parallel.

The size of the input data for TeraGen is still set at 10.000.000 (number of 100-byte rows):

  1. Results for KVM with Xenial Ubuntu 16.04.1 LTS (default kernel version 4.4.0-45-generic) as host operating system:
    [results chart: kvm-3-2]
  2. Results for Hyper-V with Windows Server 2012 R2 as host operating system:
    [results chart: hyperv-2012r2-3-2]
  3. Results for Hyper-V with Windows Server 2016 as host operating system:
    [results chart: hyperv-2016-3-2]

After increasing the workload on the compute nodes by adding more parallel iterations to the test, we can see that Hyper-V is slightly faster than KVM on average time. The differences in this case are, however, quite negligible. Time for some conclusions? Yes, in the next blog post!

 

 

The post OpenStack Newton Benchmarking – Scenario 3 (Hadoop) appeared first on Cloudbase Solutions.

by Alin Băluțoiu at January 16, 2017 01:00 PM

Thierry Carrez

So you want to create a new official OpenStack project...

OpenStack development is organized around a mission, a governance model and a set of principles. Project teams apply for inclusion, and the Technical Committee (TC), elected by all OpenStack contributors, judges whether that team work helps with the OpenStack mission and follows the OpenStack development principles. If it does, the team is considered part of the OpenStack development community, and its work is considered an official OpenStack project.

The main effect of being official is that it places the team work under the oversight of the Technical Committee. In exchange, recent contributors to that team are considered Active Technical Contributors (ATCs), which means they can participate in the vote to elect the Technical Committee.

Why ?

When you want to create a new official OpenStack project, the first thing to check is whether you're doing it for the right reasons. In particular, there is no need to be an official OpenStack project to benefit from our outstanding project infrastructure (git repositories, Gerrit code reviews, cloud-powered testing and gating). There is also no need to place your project under the OpenStack Technical Committee oversight to be allowed to work on something related to OpenStack. And the ATC status no longer brings additional benefits, beyond the TC election voting rights.

From a development infrastructure standpoint, OpenStack provides the governance, the systems and the neutral asset lock to create open collaboration grounds. On those grounds multiple organizations and individuals can cooperate on a level playing field, without one organization in particular owning a given project.

So if you are not interested in having new organizations contribute to your project, or would prefer to retain full control over it, it probably makes sense not to ask to become an official OpenStack project. The same applies if you want to follow slightly different principles, or want to relax certain community rules, or generally would like to behave a lot differently than other OpenStack projects.

What ?

Still with me ? So... What would be a good project team to propose for inclusion ? The most important aspect is that the topic you're working on must help further the OpenStack Mission, which is to produce a ubiquitous Open Source Cloud Computing platform that is easy to use, simple to implement, interoperable between deployments, works well at all scales, and meets the needs of users and operators of both public and private clouds.

It is also very important that the team seamlessly merges into the OpenStack Community. It must adhere to the 4 Opens and follow the OpenStack principles. The Technical Committee made a number of choices to avoid fragmenting the community into several distinct silos. All projects use Gerrit to propose changes, IRC to communicate, a set of approved programming languages... Those rules are not set in stone, but we are unlikely to change them just to facilitate the addition of one given new project team. All those requirements are summarized in the new project requirements document.

The new team must also know its way around our various systems, development tools and processes. Ideally the team would be formed from existing OpenStack community members; if not, the Project Team Guide is there to help you get up to speed.

Where ?

OK, you're now ready to take the plunge. One question you may ask yourself is whether you should contribute your project to an existing project team, or ask to become a new official project team.

Since the recent project structure reform (a.k.a. the "big tent"), work in OpenStack is organized around groups of people, rather than the general topic of your work. So you don't have to ask the Neutron team to adopt your project just because it is about networking. The real question is more... is it the same team working on both projects ? Does the existing team feel like they can vouch for this new work, and/or are willing to adapt their team scope to include it ? Having two different groups under a single team and PTL only creates extra governance problems. So if the teams working on it are distinct enough, then the new project should probably be filed separately.

Another question you may ask yourself is whether alternate implementations of the same functionality are OK. Is competition allowed between official projects ? On one hand competition means dilution of effort, so you want to minimize it. On the other you don't want to close evolutionary paths, so you need to let alternate solutions grow. The Technical Committee answer to that is: alternate solutions are allowed, as long as they are not gratuitously competing. Competition must be between two different technical approaches, not two different organizations or egos. Cooperation must be considered first. This is all the more important the deeper you go in the stack: it is obviously a lot easier to justify competition on an OpenStack installer (which consumes all other projects), than on AuthN/AuthZ (which all other projects rely on).

How ?

Let's do this ! How to proceed ? The first and hardest part is to pick a name. We want to avoid having to rename the project later due to trademark infringement, once it has built some name recognition. A good rule of thumb is that if the name sounds good, it's probably already used somewhere. Obscure made-up names or word combinations are less likely to be a registered trademark than dictionary words (or person names). Online searches can help weed out the worst candidates. Please be good citizens and also avoid collisions with other open source project names, even if they are not trademarked.

Step 2, you need to create the project on OpenStack infrastructure. See the Infra manual for instructions, and reach out on the #openstack-infra IRC channel if you need help.

The final step is to propose a change to the openstack/governance repository, to add your project team to the reference/projects.yaml file. That will serve as the official request to the Technical Committee, so be sure to include a very informative commit message detailing how well you fulfill the new project requirements. Good examples of that would be this change or this one.

When ?

The timing of the request is important. In order to be able to assess whether the new team behaves like the rest of the OpenStack community, the Technical Committee usually requires that the new team operates on OpenStack infrastructure (and collaborates on IRC and the mailing-list) for a few months.

We also tend to freeze new team applications during the second part of the development cycles, as we start preparing for the release and the PTG. So the optimal timing would be to set up your project on OpenStack infrastructure around the middle of one cycle, and propose for official inclusion at the start of the next cycle (before the first development milestone). Release schedules are published here.

That's it !

I hope this article will help you avoid the most obvious traps in your way to become an official OpenStack project. Feel free to reach out to me (or any other Technical Committee member) if you have questions or would like extra advice !

by Thierry Carrez at January 16, 2017 01:00 PM

Rackspace Developer Blog

Getting Started with Bandit

One of the many benefits of using and working with Python is its ability to introspect itself. This empowers us to write and use tools to analyze the projects we use and write. Tools written in Python can use the built-in ast module to parse and analyze other Python code into an "Abstract Syntax Tree". Perhaps you've heard of Flake8, PyFlakes, PyLint, Radon, or another tool that provides style checking, lint discovery, or complexity computation? They all use the AST to provide that functionality. There's also a tool called Bandit that uses the AST to provide static security analysis of Python programs.
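
To get a sense of what these tools do under the hood, here is a minimal sketch of the general approach (an illustration only, not Bandit's actual implementation): parse a snippet of source code with the ast module and walk the tree looking for suspicious calls such as yaml.load.

import ast

SOURCE = """
import yaml

def from_yaml(yaml_str):
    return yaml.load(yaml_str)
"""

tree = ast.parse(SOURCE)
for node in ast.walk(tree):
    # Flag attribute-style calls such as yaml.load(...)
    if isinstance(node, ast.Call) and isinstance(node.func, ast.Attribute):
        if node.func.attr == "load":
            print("possible unsafe load on line {}".format(node.lineno))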

Starting out with Bandit is simple

$ pip install bandit
$ bandit path/to/code/to/check/*.py

Bandit scans our files for any known vulnerabilities and then provides us with explicit feedback about what it found, the severity of the problem, and how confident it is in its discovery. Let's take a look at an example. Bandit knows about PyYAML and some of its past security vulnerabilities, so let's give it some example code that I wrote:

# blog_ex.py
import yaml


def to_yaml(object):
    return yaml.dump(object)


def from_yaml(yaml_str):
    return yaml.load(yaml_str)


yaml_str = to_yaml({
    # Yes, this is some metadata about this blog ;)
    'layout': 'post',
    'title': 'Getting Started with Bandit',
    'date': '2017-01-16 10:00',
    'author': 'Ian Cordasco',
})
parsed_yaml = from_yaml(yaml_str)

Running Bandit on this file results in:

~/o/bandit ❯❯❯ bandit blog_ex.py
[main]    INFO    profile include tests: None
[main]    INFO    profile exclude tests: None
[main]    INFO    cli include tests: None
[main]    INFO    cli exclude tests: None
[main]    INFO    running on Python 2.7.12
[node_visitor]    INFO    Unable to find qualified name for module: blog_ex.py
Run started:2017-01-11 20:47:39.901651

Test results:
>> Issue: [B506:yaml_load] Use of unsafe yaml load. Allows instantiation of arbitrary objects. Consider yaml.safe_load().
   Severity: Medium   Confidence: High
   Location: blog_ex.py:8
7    def from_yaml(yaml_str):
8        return yaml.load(yaml_str)
9

--------------------------------------------------

Code scanned:
    Total lines of code: 12
    Total lines skipped (#nosec): 0

Run metrics:
    Total issues (by severity):
        Undefined: 0
        Low: 0
        Medium: 1
        High: 0
    Total issues (by confidence):
        Undefined: 0
        Low: 0
        Medium: 0
        High: 1
Files skipped (0):

Let's look specifically at the Test results section. We see here that there's an issue labeled B506 and named yaml_load. The message then tells us what the specific issue is and a potential way to fix it:

Use of unsafe yaml load. Allows instantiation of arbitrary objects. Consider
yaml.safe_load().

After that message, we are given information about:

  1. How severe the issue is - Medium in this case
  2. How confident Bandit is that there's a problem - High
  3. Where the issue is - in blog_ex.py on line number 8
  4. And the code in question, complete with line numbers.
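
The most direct way to deal with this particular finding is to apply Bandit's suggestion rather than suppress it. A minimal fix, assuming the YAML we parse never needs to instantiate arbitrary Python objects, is to switch to yaml.safe_load:

def from_yaml(yaml_str):
    # safe_load only constructs basic Python types (dicts, lists, strings, ...),
    # so untrusted input can no longer instantiate arbitrary objects
    return yaml.safe_load(yaml_str)

With that change in place, Bandit should no longer report B506 for this file.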

If we were certain we wanted to disable this check, we could then "skip" it by its ID (B506) explicitly:

~/o/bandit ❯❯❯ bandit -s B506 blog_ex.py
[main]  INFO    profile include tests: None
[main]  INFO    profile exclude tests: None
[main]  INFO    cli include tests: None
[main]  INFO    cli exclude tests: B506
[main]  INFO    running on Python 2.7.12
[node_visitor]  INFO    Unable to find qualified name for module: blog_ex.py
Run started:2017-01-11 20:55:05.987581

Test results:
    No issues identified.

Code scanned:
    Total lines of code: 12
    Total lines skipped (#nosec): 0

Run metrics:
    Total issues (by severity):
        Undefined: 0
        Low: 0
        Medium: 0
        High: 0
    Total issues (by confidence):
        Undefined: 0
        Low: 0
        Medium: 0
        High: 0
Files skipped (0):

Further, we're able to store this configuration in a file. If we're using tox for our project, then we can add the following to our tox.ini file:

# tox.ini

[bandit]
skips = B506

And run

~/o/bandit ❯❯❯ bandit --ini tox.ini blog_ex.py
[main]    INFO    Using .bandit arg for skipped tests
[main]    INFO    profile include tests: None
[main]    INFO    profile exclude tests: None
[main]    INFO    cli include tests: None
[main]    INFO    cli exclude tests: B506
[main]    INFO    running on Python 2.7.12
[node_visitor]    INFO    Unable to find qualified name for module: blog_ex.py
Run started:2017-01-11 20:59:08.793653

Test results:
    No issues identified.

Code scanned:
    Total lines of code: 12
    Total lines skipped (#nosec): 0

Run metrics:
    Total issues (by severity):
        Undefined: 0
        Low: 0
        Medium: 0
        High: 0
    Total issues (by confidence):
        Undefined: 0
        Low: 0
        Medium: 0
        High: 0
Files skipped (0):

Finally, if our first run of Bandit is giving us a lot of noise, we can filter by severity and confidence. It's very reasonable for us to start addressing only the issues that are the highest severity and the highest confidence. To see only those, we can run:

$ bandit -lll -iii /path/to/code/*.py

Providing -l raises the minimum severity level reported, and it can be repeated in a single flag (-lll) rather than specified individually. Likewise, -i raises the minimum confidence level. Specifying each of them three times means we will only see the issues with HIGH severity and confidence.
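
Bandit also honors inline # nosec comments, which is what the "Total lines skipped (#nosec)" counter in the output above refers to. Annotating a line this way tells Bandit to skip it entirely, so it should be used sparingly and only where the risk has genuinely been reviewed, for example:

def from_yaml(trusted_yaml_str):
    # reviewed: this is only ever called with YAML we generate ourselves
    return yaml.load(trusted_yaml_str)  # nosec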

Lastly, it is important to note that Bandit should be installed and run with the version of Python we're writing for. If our code uses features available only in Python 3, Bandit should be installed on Python 3 and run from Python 3, otherwise it may not be able to fully detect problems because it cannot parse the files.

January 16, 2017 10:00 AM

Opensource.com

Tips for contributors, managing containers at CERN, and more OpenStack news

Explore what's happening this week in OpenStack, the open source cloud computing project.

by Jason Baker at January 16, 2017 06:00 AM

Hugh Blemings

Lwood-20170115

Introduction

Welcome to Last week on OpenStack Dev (“Lwood”) for the week just past. For more background on Lwood, please refer here.

Basic Stats for the week 9 to 15 January for openstack-dev:

  • ~382 Messages (down about 33% relative to the long term average)
  • ~145 Unique threads (down about 19% relative to the long term average)

Traffic is once again quiet, but the trajectory is positive after the break. A reminder that I’ve changed the reporting slightly to be against the long term average (since 22 June 2015); a pretty graph to follow one week soon, truly!

Notable Discussions – openstack-dev

Improving Vendor Driver Discoverability

Mike Perez puts forward a proposal for improving the availability and accuracy of vendor-specific driver information across OpenStack projects. This has been the purview of the driverlog project, which in turn, amongst other things, provides data that is used in the OpenStack Marketplace.

One of the difficulties put forward has been keeping this information current, residing as it currently does in a central JSON file – many projects would prefer to maintain this information within their own repositories, which is somewhat tricky to do if it’s in a common file.

Among other suggestions for the process, Mike volunteers to create files for each project involved to bootstrap the process and then turn them over to be maintained by the projects thereafter. Data in these per-project files will be aggregated to produce the final results used by the Marketplace and elsewhere.

List of all Pike PTG Etherpads

A quick email from Thierry Carrez noting that there is now a list of Etherpads for the various projects present at the Pike PTG.

End of Week Wrap-ups

Two this week from Ruby Loo and Richard Jones for Ironic and Horizon respectively.

Notable Discussions – other OpenStack lists

No traffic particularly stood out on the other lists this last week.

People and Projects

PTL nominations & Changes

Core nominations & changes

  • [Kolla] Adding Jeffrey Zhang (jeffrey4l) to kolla-kubernetes-core because he is release liaison – Steve Dake
  • [Kuryr] Ocata cycle ending and proposing new people as Kuryr cores (Liping Mao and Ilya Chukhnakov) – Toni Segura Puimedon

Miscellanea

Further reading

Don’t forget these excellent sources of OpenStack news – most recent ones linked in each case

Credits

Attending sessions at linux.conf.au this week so no tunes, but many a good talk :)

Last but by no means least, thanks, as always, to Rackspace :)

 

by hugh at January 16, 2017 05:27 AM

January 15, 2017

Stefano Maffulli

Why the OpenStack community voting process fails and how to fix it

Open source communities offer a lot of democratic participation. The idea that you contribute to a project and have a say in its governance is a powerful one. When it doesn’t work, those same projects turn their backs on active contributors and discourage newcomers. The most recent OpenStack election of Individual members to the Board of Directors is a strong example of how community voting fails and how to fix it.

This time I watched the election from a distance, as much of an outsider as I have ever been. Now that the results are in, I’m very disappointed to see confirmed four Individual board members — half of the total — whose inaction during 2016 should not have earned them reconfirmation.

It’s also extremely sad to see that more than a few very active individuals were not elected. Of the eight elected, only one works for a smaller company and only one is mostly an OpenStack user (seven are primarily OpenStack vendors). The Individual members of the OpenStack Foundation were added to the bylaws to keep large corporate interests in check, and clearly this doesn’t seem to be working.

The OpenStack community has a huge problem here: good behavior and personal investments to improve the project don’t get rewarded. On the contrary, affiliation with large companies, spammy promotion, and geographic proximity seem to be more effective at granting a seat on the board. This has discouraged participation already, as some backchannel conversations have confirmed. You have to ask yourself, why would someone like Edgar Magana do what he has done for the short time he’s been on the board, when almost-inactive members get the votes?

A couple of immediate actions can be taken to improve the situation. First, acknowledge that an issue exists at the Board level, where too few small organizations and users are represented. Then, stop tolerating abuse of community resources like planet.openstack.org and misuse of the OpenStack logo. Actions should have followed OpenStack Foundation COO Mark Collier’s reminder to be nice during the campaign. Mark wrote:

With respect to local user groups and web channels, I think they should remain neutral ground that are open to all local community members.

But then the OpenStack logo (that’s another problem, known and unsolved for many years) was used to promote a single candidate as ‘our APAC’ candidate. Who is ‘we’, exactly? OpenStack AU with the logo of the OpenStack Foundation? From my cursory glance at the candidates, there were others in the APAC region, but I bet those candidates didn’t have direct access to the OpenStack AU Twitter feed.

[Image: OpenStack Australia suggesting a vote for Kavit as 'our APAC candidate']

And the shared Planet OpenStack (which is syndicated to Reddit and other places) was inundated for a week by the same advertising. My attempt to limit the damage to the community and send a signal was blocked.

One more thing: make the candidates’ pages more meaningful. They’re too wordy now: they start with a generic bio and offer no link to hard facts. How many of the 3,000 voters actually read those pages (I bet they don’t, and we could easily find out with Google Analytics)? Member pages should show facts, not generic intentions to make the world a better place. I always had the vision of collecting data from analytics tools and showing it on the individual member pages. Stackalytics already offers a pretty comprehensive view of what each person does, not just code-related contributions but emails, translations and work on bugs, and I bet more could be added from the OpenStack Groups portal.

Communities need constant supervision and nurturing; they can’t be left unattended because, as quickly as they form, they fall apart or at the very least lose critical focus. OpenStack is under tremendous pressure and now more than ever needs dedicated contributors to keep the project at the center of attention.


by smaffulli at January 15, 2017 10:50 PM

Ramon Acedo

Deploying Ironic in OpenStack Newton with TripleO


Introduction

This post describes the process to enable Ironic in the Overcloud in a multi-controller deployment with director or TripleO, a new feature introduced in Red Hat OpenStack Platform 10 (Newton).

The process should work with any working OpenStack Newton (or later) platform deployed with TripleO; even an already deployed environment, updated with the configuration templates described here, should work.

The workflow is based on the upstream documentation.

Architecture Setup

With this setup we can have virtual instances and instances on baremetal nodes in the same environment. In this architecture I’m using floating IPs with VMs and a provisioning network with the baremetal nodes.

To be able to test this setup in a lab with virtual machines, we use Libvirt+KVM, with VMs for all the nodes, in an all-in-one lab. The network topology is described in the diagram below.

Ideally, we would have more networks, for example a dedicated network for cleaning the disks and another one for provisioning the baremetal nodes from the Overcloud, and even an extra one as the tenant network for the baremetal nodes in the Overcloud. For simplicity though, in this lab I reused the Undercloud’s provisioning network for these four network roles:

  • Provisioning from the Undercloud
  • Provisioning from the Overcloud
  • Cleaning the baremetal nodes’ disks
  • Baremetal tenant network for the Overcloud nodes

[Diagram: OVS Libvirt VLANs]

Virtual environment configuration

In order to test root_device hints on the nodes (Libvirt VMs) that we want to use as baremetal nodes, we define the first disk in Libvirt with a SCSI bus and a wwn ID:

<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/var/lib/virtual-machines/overcloud-2-node4-disk1.qcow2'/>
  <target dev='sda' bus='scsi'/>
  <wwn>0x0000000000000001</wwn>
</disk>

To verify the hints we can optionally introspect the node in the Undercloud (as currently there’s no introspection in the Overcloud). This is what we can see after introspection of the node in the Undercloud:

$ openstack baremetal introspection data save 7740e442-96a6-496c-9bb2-7cac89b6a8e7|jq '.inventory.disks'
[
  {
    "size": 64424509440,
    "rotational": true,
    "vendor": "QEMU",
    "name": "/dev/sda",
    "wwn_vendor_extension": null,
    "wwn_with_extension": "0x0000000000000001",
    "model": "QEMU HARDDISK",
    "wwn": "0x0000000000000001",
    "serial": "0000000000000001"
  },
  {
    "size": 64424509440,
    "rotational": true,
    "vendor": "0x1af4",
    "name": "/dev/vda",
    "wwn_vendor_extension": null,
    "wwn_with_extension": null,
    "model": "",
    "wwn": null,
    "serial": null
  },
  {
    "size": 64424509440,
    "rotational": true,
    "vendor": "0x1af4",
    "name": "/dev/vdb",
    "wwn_vendor_extension": null,
    "wwn_with_extension": null,
    "model": "",
    "wwn": null,
    "serial": null
  },
  {
    "size": 64424509440,
    "rotational": true,
    "vendor": "0x1af4",
    "name": "/dev/vdc",
    "wwn_vendor_extension": null,
    "wwn_with_extension": null,
    "model": "",
    "wwn": null,
    "serial": null
  }
]

Undercloud templates

The following templates contain all the changes needed to configure Ironic and to adapt the NIC config to have a dedicated OVS bridge for Ironic as required.

Ironic configuration

~/templates/ironic.yaml

parameter_defaults:
    IronicEnabledDrivers:
        - pxe_ssh
    NovaSchedulerDefaultFilters:
        - RetryFilter
        - AggregateInstanceExtraSpecsFilter
        - AvailabilityZoneFilter
        - RamFilter
        - DiskFilter
        - ComputeFilter
        - ComputeCapabilitiesFilter
        - ImagePropertiesFilter
    IronicCleaningDiskErase: metadata
    IronicIPXEEnabled: true
    ControllerExtraConfig:
        ironic::drivers::ssh::libvirt_uri: 'qemu:///system'

Network configuration

First we map an extra bridge called br-baremetal which will be used by Ironic:

~/templates/network-environment.yaml:

[...]
parameter_defaults:
[...]
  NeutronBridgeMappings: datacentre:br-ex,baremetal:br-baremetal
  NeutronFlatNetworks: datacentre,baremetal

This bridge will be configured in the provisioning network (control plane) of the controllers as we will reuse this network as the Ironic network later. If we wanted to add a dedicated network we would do the same config.

It is important to mention that this Ironic network used for provisioning can’t be VLAN tagged, which is yet another reason to justify using the Undercloud’s provisioning network for this lab:

~/templates/nic-configs/controller.yaml:

[...]
          network_config:
            -
              type: ovs_bridge
              name: br-baremetal
              use_dhcp: false
              members:
                 -
                   type: interface
                   name: eth0
              addresses:
                -
                  ip_netmask:
                    list_join:
                      - '/'
                      - - {get_param: ControlPlaneIp}
                        - {get_param: ControlPlaneSubnetCidr}
              routes:
                -
                  ip_netmask: 169.254.169.254/32
                  next_hop: {get_param: EC2MetadataIp}
[...]

Deployment

This is the deployment script I’ve used. Note there’s a roles_data.yaml template to add a composable role (a new feature in OSP 10) that I used for the deployment of an Operational Tools server (Sensu and Fluentd). The deployment also includes 3 Ceph nodes. These are irrelevant for the purpose of this setup but I wanted to test it all together in an advanced and more realistic architecture.

Red Hat’s documentation contains the details for configuring these advanced options and the base configuration with director.

~/deployment-scripts/ironic-ha-net-isol-deployment-dupa.sh:

openstack overcloud deploy \
--templates \
-r ~/templates/roles_data.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml \
-e ~/templates/network-environment.yaml \
-e ~/templates/ceph-storage.yaml \
-e ~/templates/parameters.yaml \
-e ~/templates/firstboot/firstboot.yaml \
-e ~/templates/ips-from-pool-all.yaml \
-e ~/templates/fluentd-client.yaml \
-e ~/templates/sensu-client.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/services/ironic.yaml \
-e ~/templates/ironic.yaml \
--control-scale 3 \
--compute-scale 1 \
--ceph-storage-scale 3 \
--compute-flavor compute \
--control-flavor control \
--ceph-storage-flavor ceph-storage \
--timeout 60 \
--libvirt-type kvm

Post-deployment configuration

Verifications

After the deployment completes successfully, we should see that the controllers have the compute service enabled:

$ . overcloudrc
$ openstack compute service list -c Binary -c Host -c State
+------------------+------------------------------------+-------+
| Binary           | Host                               | State |
+------------------+------------------------------------+-------+
| nova-consoleauth | overcloud-controller-1.localdomain | up    |
| nova-scheduler   | overcloud-controller-1.localdomain | up    |
| nova-conductor   | overcloud-controller-1.localdomain | up    |
| nova-compute     | overcloud-controller-1.localdomain | up    |
| nova-consoleauth | overcloud-controller-0.localdomain | up    |
| nova-consoleauth | overcloud-controller-2.localdomain | up    |
| nova-scheduler   | overcloud-controller-0.localdomain | up    |
| nova-scheduler   | overcloud-controller-2.localdomain | up    |
| nova-conductor   | overcloud-controller-0.localdomain | up    |
| nova-conductor   | overcloud-controller-2.localdomain | up    |
| nova-compute     | overcloud-controller-0.localdomain | up    |
| nova-compute     | overcloud-controller-2.localdomain | up    |
| nova-compute     | overcloud-compute-0.localdomain    | up    |
+------------------+------------------------------------+-------+

And the driver we passed with IronicEnabledDrivers is also enabled:

$ openstack baremetal driver list
+---------------------+------------------------------------------------------------------------------------------------------------+
| Supported driver(s) | Active host(s)                                                                                             |
+---------------------+------------------------------------------------------------------------------------------------------------+
| pxe_ssh             | overcloud-controller-0.localdomain, overcloud-controller-1.localdomain, overcloud-controller-2.localdomain |
+---------------------+------------------------------------------------------------------------------------------------------------+

Baremetal network

This network will be:

  • The provisioning network for the Overcloud’s Ironic.
  • The cleaning network for wiping the baremetal node’s disks.
  • The tenant network for the Overcloud’s Ironic instances.

Create the baremetal network in the Overcloud with the same subnet and gateway as the Undercloud’s ctlplane, but using a different range:

$ . overcloudrc
$ openstack network create \
--share \
--provider-network-type flat \
--provider-physical-network baremetal \
--external \
baremetal
$ openstack subnet create \
--network baremetal \
--subnet-range 192.168.3.0/24 \
--gateway 192.168.3.1 \
--allocation-pool start=192.168.3.150,end=192.168.3.170 \
baremetal-subnet

Then, we need to configure each controller’s /etc/ironic/ironic.conf to use this network to clean the nodes’ disks at registration time and also before tenants use them as baremetal instances:

$ openstack network show baremetal -f value -c id
f7af39df-2576-4042-87c0-14c395ca19b4
$ ssh heat-admin@$CONTROLLER_IP
$ sudo vi /etc/ironic/ironic.conf   # set the following option
cleaning_network_uuid=f7af39df-2576-4042-87c0-14c395ca19b4
$ sudo systemctl restart openstack-ironic-conductor

We should also leave it ready to be included in our next update by adding it to the ControllerExtraConfig section in the ironic.yaml template:

parameter_defaults:
  ControllerExtraConfig:
    ironic::conductor::cleaning_network_uuid: f7af39df-2576-4042-87c0-14c395ca19b4

Baremetal deployment images

We can use the same deployment images we use in the Undercloud:

$ openstack image create --public --container-format aki --disk-format aki --file ~/images/ironic-python-agent.kernel deploy-kernel
$ openstack image create --public --container-format ari --disk-format ari --file ~/images/ironic-python-agent.initramfs deploy-ramdisk

We could also create them using the CoreOS images. For example, if we wanted to troubleshoot the deployment, we could use the CoreOS images and enable debug output in the Ironic Python Agent, or add our SSH key so we can access the node during the deployment of the image.

Baremetal instance images

Again, for simplicity, we can use the overcloud-full image we use in the Undercloud:

$ KERNEL_ID=$(openstack image create --file ~/images/overcloud-full.vmlinuz --public --container-format aki --disk-format aki -f value -c id overcloud-full.vmlinuz)
$ RAMDISK_ID=$(openstack image create --file ~/images/overcloud-full.initrd --public --container-format ari --disk-format ari -f value -c id overcloud-full.initrd)
$ openstack image create --file ~/images/overcloud-full.qcow2 --public --container-format bare --disk-format qcow2 --property kernel_id=$KERNEL_ID --property ramdisk_id=$RAMDISK_ID overcloud-full

Note that it uses kernel and ramdisk images, as the Overcloud default image is a partition image.

Create flavors

We create two flavors to start with, one for the baremetal instances and another one for the virtual instances.

$ openstack flavor create --ram 1024 --disk 20 --vcpus 1 baremetal
$ openstack flavor create --disk 20 m1.small

Baremetal instances flavor

Then, we set a boolean property called baremetal in the newly created flavor, which will also be set on the host aggregates (see below) to differentiate nodes for baremetal instances from nodes for virtual instances.

And, as by default the boot_option is netboot, we set it to local (and later we will do the same when we create the baremetal node):

$ openstack flavor set baremetal --property baremetal=true
$ openstack flavor set baremetal --property capabilities:boot_option="local"

Virtual instances flavor

Lastly, we set the flavor for virtual instances with the boolean property set to false:

$ openstack flavor set m1.small --property baremetal=false

Create host aggregates

To have OpenStack differentiate between baremetal and virtual instances, we can create host aggregates so that the nova-compute service running on the controllers is used just for Ironic and the one on the compute nodes is used for virtual instances:

$ openstack aggregate create --property baremetal=true baremetal-hosts
$ openstack aggregate create --property baremetal=false virtual-hosts
$ for compute in $(openstack hypervisor list -f value -c "Hypervisor Hostname" | grep compute); do openstack aggregate add host virtual-hosts $compute; done
$ openstack aggregate add host baremetal-hosts overcloud-controller-0.localdomain
$ openstack aggregate add host baremetal-hosts overcloud-controller-1.localdomain
$ openstack aggregate add host baremetal-hosts overcloud-controller-2.localdomain

Register the nodes in Ironic

The nodes can be registered with the command openstack baremetal create and a YAML template where the node is defined. In this example I register only one node named overcloud-2-node4, which I had previously registered in the Undercloud for introspection (and later deleted from it or set to maintenance mode to avoid conflicts between the two Ironic services).

The section root_device contains commented examples of the hints we could use. Remember that when configuring the Libvirt XML file for the node above, we added a wwn ID section, which is the one we’ll use in this example.

This template is like the instackenv.json one in the Undercloud but in YAML.

$ cat overcloud-2-node4.yaml
nodes:
    - name: overcloud-2-node4
      driver: pxe_ssh
      driver_info:
        ssh_username: stack
        ssh_key_contents:  |
          -----BEGIN RSA PRIVATE KEY-----
          MIIEogIBAAKCAQEAxc0a2u18EgTy5y9JvaExDXP2pWuE8Ebyo24AOo1iQoWR7D5n
          fNjkgCeKZRbABhsdoMBmbDMtn0PO3lzI2HnZQBB4BdBZprAiQ1NwKKotUv9puTeY
          [..]
          7DsSKAL4EDqjufY3h+4fRwOcD+EFqlUTDG1sjsSDKjdiHyYMzjcrg8nbaj/M9kAs
          xXnSm9686KxUiCDXO5FWKun204B18mPH1UP20aYw098t6aAQwm4=
          -----END RSA PRIVATE KEY-----
        ssh_virt_type: virsh
        ssh_address: 10.0.0.1
      properties:
        cpus: 4
        memory_mb: 12288
        local_gb: 60
        #boot_option: local (it doesn't set 'capabilities')
        root_device:
          # vendor: "0x1af4"
          # model: "QEMU HARDDISK"
          # size: 64424509440
          wwn: "0x0000000000000001"
          # serial: "0000000000000001"
          # vendor: QEMU
          # name: /dev/sda
      ports:
        - address: 52:54:00:a0:af:da

We create the node using the above template:

$ openstack baremetal create overcloud-2-node4.yaml

Then we have to specify the deployment kernel and ramdisk for the node:

$ DEPLOY_KERNEL=$(openstack image show deploy-kernel -f value -c id)
$ DEPLOY_RAMDISK=$(openstack image show deploy-ramdisk -f value -c id)
$ openstack baremetal node set $(openstack baremetal node show overcloud-2-node4 -f value -c uuid) \
--driver-info deploy_kernel=$DEPLOY_KERNEL \
--driver-info deploy_ramdisk=$DEPLOY_RAMDISK

And lastly, just like we do in the Undercloud, we set the node to available:

$ openstack baremetal node manage $(openstack baremetal node show overcloud-2-node4 -f value -c uuid)
$ openstack baremetal node provide $(openstack baremetal node show overcloud-2-node4 -f value -c uuid)

You can have all of this in a script and run it together every time you register a node.

If everything has gone well, the node will be registered and Ironic will clean its disk metadata (as per above configuration):

$ openstack baremetal node list -c Name -c "Power State" -c "Provisioning State"
+-------------------+-------------+--------------------+
| Name              | Power State | Provisioning State |
+-------------------+-------------+--------------------+
| overcloud-2-node4 | power off   | cleaning           |
+-------------------+-------------+--------------------+

Wait until the cleaning process has finished and then set the boot_option to local:

$ openstack baremetal node set $(openstack baremetal node show overcloud-2-node4 -f value -c uuid) --property 'capabilities=boot_option:local'

Start a baremetal instance

Just as with virtual instances, we’ll use an SSH key, and then we’ll start the instance with Ironic:

$ openstack keypair create --public-key ~/.ssh/id_rsa.pub stack-key

Then we make sure that the cleaning process has finished (Provisioning State is available):

$ openstack baremetal node list -c Name -c "Power State" -c "Provisioning State"
+-------------------+-------------+--------------------+
| Name              | Power State | Provisioning State |
+-------------------+-------------+--------------------+
| overcloud-2-node4 | power off   | available          |
+-------------------+-------------+--------------------+

and we start the baremetal instance:

$ openstack server create \
--image overcloud-full \
--flavor baremetal \
--nic net-id=$(openstack network show baremetal -f value -c id) \
--key-name stack-key \
bm-instance-0

Now check its IP and access the newly created machine:

$ openstack server list -c Name -c Status -c Networks
+---------------+--------+-------------------------+
| Name          | Status | Networks                |
+---------------+--------+-------------------------+
| bm-instance-0 | ACTIVE | baremetal=192.168.3.157 |
+---------------+--------+-------------------------+
$ ssh cloud-user@192.168.3.157
Warning: Permanently added '192.168.3.157' (ECDSA) to the list of known hosts.
Last login: Sun Jan 15 07:49:37 2017 from gateway
[cloud-user@bm-instance-0 ~]$

Start a virtual instance

Optionally, we start a virtual instance to test that virtual and baremetal instances can reach each other.

As I need to create public and private networks, an image, a router, a security group, a floating IP, etc., I’ll use a Heat template that does it all for me, including creating the virtual instance, so I can skip the details of doing this:

$ openstack stack create -e overcloud-env.yaml -t overcloud-template.yaml overcloud-stack

Check that the networks and the instance have been created:

$ openstack network list -c Name
+----------------------------------------------------+
| Name                                               |
+----------------------------------------------------+
| public                                             |
| baremetal                                          |
| HA network tenant 1e6a7de837ad488d8beed626c86a6dfe |
| private-net                                        |
+----------------------------------------------------+
$ openstack server list -c Name -c Networks
+----------------------------------------+------------------------------------+
| Name                                   | Networks                           |
+----------------------------------------+------------------------------------+
| overcloud-stack-instance0-2thafsncdgli | private-net=172.16.2.6, 10.0.0.168 |
| bm-instance-0                          | baremetal=192.168.3.157            |
+----------------------------------------+------------------------------------+

We now have both instances and they can communicate over the network:

$ ssh cirros@10.0.0.168
Warning: Permanently added '10.0.0.168' (RSA) to the list of known hosts.
$ ping 192.168.3.157
PING 192.168.3.157 (192.168.3.157): 56 data bytes
64 bytes from 192.168.3.157: seq=0 ttl=62 time=1.573 ms
64 bytes from 192.168.3.157: seq=1 ttl=62 time=0.914 ms
64 bytes from 192.168.3.157: seq=2 ttl=62 time=1.064 ms
^C
--- 192.168.3.157 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.914/1.183/1.573 ms

by ramonacedo at January 15, 2017 08:07 PM

Major Hayden

systemd-networkd on Ubuntu 16.04 LTS (Xenial)

My OpenStack cloud depends on Ubuntu, and the latest release of OpenStack-Ansible (what I use to deploy OpenStack) requires Ubuntu 16.04 at a minimum. I tried upgrading the servers in place from Ubuntu 14.04 to 16.04, but that didn’t work so well. Those servers wouldn’t boot and the only recourse was a re-install.

Once I finished re-installing them (and wrestling with several installer bugs in Ubuntu 16.04), it was time to set up networking. The traditional network configurations in /etc/network/interfaces are fine, but they weren’t working the same way they were in 14.04. The VLAN configuration syntax appears to be different now.

But wait — 16.04 has systemd 229! I can use systemd-networkd to configure the network in a way that is a lot more familiar to me. I’ve made posts about systemd-networkd before and the simplicity of its configurations.

I started with some simple configurations:

root@hydrogen:~# cd /etc/systemd/network
root@hydrogen:/etc/systemd/network# cat enp3s0.network 
[Match]
Name=enp3s0

[Network]
VLAN=vlan10
root@hydrogen:/etc/systemd/network# cat vlan10.netdev 
[NetDev]
Name=vlan10
Kind=vlan

[VLAN]
Id=10
root@hydrogen:/etc/systemd/network# cat vlan10.network 
[Match]
Name=vlan10

[Network]
Bridge=br-mgmt
root@hydrogen:/etc/systemd/network# cat br-mgmt.netdev 
[NetDev]
Name=br-mgmt
Kind=bridge
root@hydrogen:/etc/systemd/network# cat br-mgmt.network 
[Match]
Name=br-mgmt

[Network]
Address=172.29.236.21/22

Here’s a summary of the configurations:

  • Physical network interface is enp3s0
  • VLAN 10 is trunked down from a switch to that interface
  • Bridge br-mgmt should be on VLAN 10 (only send/receive traffic tagged with VLAN 10)

Once that was done, I restarted systemd-networkd to put the change into effect:

# systemctl restart systemd-networkd

Great! Let’s check our work:

root@hydrogen:~# brctl show
bridge name bridge id       STP enabled interfaces
br-mgmt     8000.0a30a9a949d9   no      
root@hydrogen:~# networkctl
IDX LINK             TYPE               OPERATIONAL SETUP     
  1 lo               loopback           carrier     unmanaged 
  2 enp2s0           ether              routable    configured
  3 enp3s0           ether              degraded    configured
  4 enp4s0           ether              off         unmanaged 
  5 enp5s0           ether              off         unmanaged 
  6 br-mgmt          ether              no-carrier  configuring
  7 vlan10           ether              degraded    unmanaged 

7 links listed.

So the bridge has no interfaces and it’s in a no-carrier status. Why? Let’s check the journal:

# journalctl --boot -u systemd-networkd
Jan 15 09:16:46 hydrogen systemd[1]: Started Network Service.
Jan 15 09:16:46 hydrogen systemd-networkd[1903]: br-mgmt: netdev exists, using existing without changing its parameters
Jan 15 09:16:46 hydrogen systemd-networkd[1903]: br-mgmt: Could not append VLANs: Operation not permitted
Jan 15 09:16:46 hydrogen systemd-networkd[1903]: br-mgmt: Failed to assign VLANs to bridge port: Operation not permitted
Jan 15 09:16:46 hydrogen systemd-networkd[1903]: br-mgmt: Could not set bridge vlan: Operation not permitted
Jan 15 09:16:59 hydrogen systemd-networkd[1903]: enp3s0: Configured
Jan 15 09:16:59 hydrogen systemd-networkd[1903]: enp2s0: Configured

The Could not append VLANs: Operation not permitted error is puzzling. After some searching on Google, I found a thread from Lennart:

> After an upgrade, systemd-networkd is broken, exactly the way descibed
> in this issue #3876[0]

Please upgrade to 231, where this should be fixed.

Lennart

But Ubuntu 16.04 has systemd 229:

# dpkg -l | grep systemd
ii  libpam-systemd:amd64                229-4ubuntu13                      amd64        system and service manager - PAM module
ii  libsystemd0:amd64                   229-4ubuntu13                      amd64        systemd utility library
ii  python3-systemd                     231-2build1                        amd64        Python 3 bindings for systemd
ii  systemd                             229-4ubuntu13                      amd64        system and service manager
ii  systemd-sysv                        229-4ubuntu13                      amd64        system and service manager - SysV links

I haven’t found a solution for this quite yet. Keep an eye on this post and I’ll update it once I know more!

The post systemd-networkd on Ubuntu 16.04 LTS (Xenial) appeared first on major.io.

by Major Hayden at January 15, 2017 03:24 PM

January 14, 2017

OpenStack Blog

OpenStack Developer Mailing List Digest January 7-13

SuccessBot Says

  • dims 1: Rally running against Glance (Both Rally and Glance using py3.5).
  • AJaegar 2: docs.openstack.org is served from the new Infra file server that is AFS based.
  • jd 3: Gnocchi 3.1 will be shipped with an empty /etc and will work without any config file by default.
  • cdent 4 : edleafe found and narrowed down an important bug in gabbi.
  • Tell us yours via OpenStack IRC channels with message “#success <message>”
  • All

Return of the Architecture Working Group

  • Meeting times alternate: even weeks Thursday at 20:00 UTC, odd weeks Thursday at 01:00 UTC
  • Currently two proposals:
    • The “Base Services” proposal 5 recognizes external services whose features OpenStack components can assume will be present. Two kinds:
      • Local (like a hypervisor on a compute node)
      • Global (like a database)
    • The “Nova Compute API” proposal 6 would break nova-compute out of Nova itself.
  • Full thread

Restarting Service-types-authority / service catalog work

  • In anticipation of having a productive time in Atlanta for the PTG, various patches have been refreshed 7.
  • Two base IaaS services aren’t in the list yet because of issues:
    • Neutron / network – discrepancy between the common use of “network” and “networking” in the API reference URL. Other services in the list have the same name for the service-type and the API reference URL.
    • Cinder / volume – Moving forward from using volumev2 and volumev3 in devstack.
  • Full thread

Feedback From Driver Maintainers About Future of Driver Projects

  • Major observations
    • Yes drivers are an important part of OpenStack.
    • Discoverability of drivers needs to be fixed immediately.
    • It’s important to have visibility in a central place of the status of each driver.
    • Both driver developer and a high level person at a company should feel they’re part of something.
    • Give drivers access to publish to docs.openstack.org.
    • The definition of what constitutes a project was never written with drivers in mind. Drivers are part of the project. Driver developers contribute to OpenStack by creating drivers.
  • Discoverability:
    • Consensus: it is currently all over the place 8 9 10.
    • There should be CI results available.
    • Discoverability can be fixed independently of governance changes.
  • Driver projects official or not?
    • Out-of-tree vendors have a desire to become “official” OpenStack projects.
    • Opinion: let driver projects become official without CI requirements.
    • Opinion: Do not allow drivers projects to become official, that doesn’t mean they shouldn’t easily be discoverable.
    • Opinion: We don’t need to open the flood gates of allowing vendors to be teams in the OpenStack governance to make vendor developers happy.
    • Fact: This implies being placed under the TC oversight. It is a significant move that could have unintended side-effects, it is hard to reverse (kicking out teams we accepted is worse than not including them in the first place), and our community is divided on the way forward. So we need to give that question our full attention and not rush the answer.
    • Opinion: Consider driverlog 11 an official OpenStack project to be listed under governance with a PTL, weekly meetings, and all that is required to allow the team to be effective in their mission of keeping the marketplace a trustworthy resource for learning about the OpenStack driver ecosystem.
  • Driver Developers:
    • Opinion: A driver developer that ONLY contributes to vendor-specific driver code should not have the same influence as other OpenStack developers, such as voting for PTL, TC, and ATC status.
    • Opinion: PTLs should leverage the extra-atcs option in the governance repo.
  • In-tree VS out-of-tree
    • Cinder has in-tree drivers, but also has out-of-tree drivers when their CI is not maintained or when minimum feature requirements are not met. They are marked as ‘not supported’ and have a single release to get things working before being moved out-of-tree.
    • Ironic has a single out-of-tree repo 12 — But also in-tree 13
    • Neutron has all drivers out-of-tree, with project names like: ‘networking-cisco’.
    • Many opinions on the “stick-based” approach the cinder team took.
    • Opinion: The in-tree vs out-of-tree argument is developer focused. Out-of-tree drivers have obvious benefits (develop quickly, maintain their own team, no need for a core to review the patch). But a vendor that is looking to make sure a driver is supported will not be searching git repos (goes back to discoverability).
    • Opinion: It may be worth handling projects that keep supported drivers in-tree differently than we handle projects that have everything out-of-tree.
  • Full thread

POST /api-wg/news

  • Guidelines currently under review:
    • Add guidelines on usage of state vs. status 14
    • Add guidelines for boolean names 15
    • Clarify the status values in versions 16
    • Define pagination guidelines 17
    • Add API capabilities discovery guideline 18
    • Add guideline for invalid query parameters 19
  • Full thread

New Deadline for PTG Travel Support Program

  • Help contributors that are not otherwise funded to join their project team gathering 20
  • Originally the application acceptance was set to close January 15, but it’s now extended to the end-of-day Tuesday January 17th.
  • Apply now if you need it! 21
  • Submissions will be evaluated next week and grantees will be notified by Friday, January 20th.
  • Register for the event 22 if you haven’t yet. Prices will increase on January 24 and February 14.
  • If you haven’t booked your hotel yet, do so ASAP at the event hotel itself using the PTG room block. This helps us keep costs under control and helps share the most time with the other event participants.
    • Closes January 27
    • Book now 23
  • Full thread

Release Countdown For Week R-5

  • Focus:
    • Feature work and major refactoring should be starting to wrap up as we approach the third milestone.
  • Release Tasks:
    • stable/ocata branches will be created and configured with a small subset of the core review team. Release liaisons should ensure that these groups exist and the membership is correct.
  • General Notes:
    • We will start the soft string freeze during R-4 (Jan 23-27) 24
    • Subscribe to the release calendar with your favorite calendaring software 25
  • Important Dates:
    • Final release for non-client libraries: January 19
    • Ocata 3 milestone with feature and requirements freeze: January 26
    • Ocata release schedule 26
  • Full thread

 

[1] – http://eavesdrop.openstack.org/irclogs/%23openstack-glance/%23openstack-glance.2017-01-09.log.html

[2] – http://eavesdrop.openstack.org/irclogs/%23openstack-infra/%23openstack-infra.2017-01-10.log.html

[3] – http://eavesdrop.openstack.org/irclogs/%23openstack-telemetry/%23openstack-telemetry.2017-01-11.log.html

[4] – http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2017-01-12.log.html

[5] – http://git.openstack.org/cgit/openstack/arch-wg/tree/proposals/base-services.rst

[6] – https://review.openstack.org/#/c/411527/1

[7] – https://review.openstack.org/#/c/286089/

[8] – http://docs.openstack.org/developer/cinder/drivers.html

[9] – http://docs.openstack.org/developer/nova/support-matrix.html

[10] – http://stackalytics.openstack.org/report/driverlog

[11] – http://git.openstack.org/cgit/openstack/driverlog

[12] – https://git.openstack.org/cgit/openstack/ironic-staging-drivers

[13] – http://git.openstack.org/cgit/openstack/ironic/tree/ironic/drivers

[14] – https://review.openstack.org/#/c/411528/

[15] – https://review.openstack.org/#/c/411529/

[16] – https://review.openstack.org/#/c/411849/

[17] – https://review.openstack.org/#/c/390973/

[18] – https://review.openstack.org/#/c/386555/

[19] – https://review.openstack.org/417441

[20] – http://www.openstack.org/ptg#tab_travel

[21] – https://openstackfoundation.formstack.com/forms/travelsupportptg_atlanta

[22] – https://pikeptg.eventbrite.com/

[23] – https://www.starwoodmeeting.com/events/start.action?id=1609140999&key=381BF4AA

[24] – https://releases.openstack.org/ocata/schedule.html#o-soft-sf

[25] – https://releases.openstack.org/schedule.ics

[26] – http://releases.openstack.org/ocata/schedule.html

by Mike Perez at January 14, 2017 06:38 PM

Anne Gentle

Server automation for documentation deployment

When you treat docs like code, you want to deploy that “code,” such as doc source files, so that you can see how the doc looks on a web site. I have been practicing these deployment techniques while working on OpenStack, which offers open source cloud computing services. I needed to practice so I could get better, and also practice was the best way to learn this type of technical problem-solving—by doing.

The way I approached the practice effort was to:

  1. Find credentials for a cloud (or two).
  2. Determine which web services to install on the cloud servers I launch there.
  3. Find deployment orchestration templates that launch the right combination of web services to make the site I wanted, deploying Ruby, Jekyll, and NGINX, using Ansible.
  4. Test, test, test. Test some more.
  5. Try out Docker locally, then finally get the cloud server working. This step took a while, as I worked out the Linux user permissions needed to install a compatible version of Ruby.
  6. Set up the cloud server as a git remote, then push the website to the git remote, building the HTML and copying the files to the web server.


Hear me talk about my excitement trying out docs deployment in this video clip from the original on thenewstack.io. In it, I talk to Alex Williams, founder of TheNewStack.io, about my adventures at the OpenStack Summit in Barcelona.  Thanks to Alex for the permission to re-post and for asking about my latest work.

[Video: http://www.youtube.com/embed/lErtObwuadw]

Resources

The deck is available on Slideshare.

The Ansible code is on GitHub.

The Jekyll theme, so-simple, is on GitHub.

The content repo is on GitHub.

This demo shows pushing the site to the git remote to update the content.

Video: https://www.youtube.com/watch?v=oSmKEk8UAMI

by annegentle at January 14, 2017 01:12 PM

January 13, 2017

Aptira

OpenStack Election 2017: Vote for Kavit – Make OpenStack Great Again

Video: https://vimeo.com/199256670

Today is the final day of the OpenStack election, and we have one last reason why Kavit deserves your vote.

Vote for Kavit – Make OpenStack great again.

OpenStack’s success is critical for the industry as a whole to prevent monopolies and ensure that the users own their infrastructure and have options. Kavit has always used and backed open source and open standards, so OpenStack’s success is of paramount importance to him.

To help make OpenStack great again, please vote for Kavit in the 2017 OpenStack election.

The post OpenStack Election 2017: Vote for Kavit – Make OpenStack Great Again appeared first on Aptira Cloud Solutions.

by Jessica Field at January 13, 2017 06:57 PM

OpenStack Superuser

How to manage Hyper-V on Open vSwitch

OVS VXLAN setup on Hyper-V without OpenStack

In the previous post we explained how to deploy Open vSwitch (OVS) on Hyper-V and integrate it into an OpenStack environment.

In this second part we will explain how to manually configure a VXLAN tunnel between VMs running on Hyper-V and KVM hosts.

KVM OVS configuration

In this example, KVM1 provides a VXLAN tunnel with local endpoint 14.14.14.1:

  • vxlan-0e0e0e02 connected to Hyper-V (14.14.14.2) through br-eth3
ubuntu@ubuntu:~$ sudo ovs-vsctl show
82585eef-349c-4573-8d77-91f9602bb535
    Bridge br-int
        fail_mode: secure
        Port br-int
            Interface br-int
                type: internal
        Port "vm1"
            Interface "vm1"
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
    Bridge "br-eth3"
        Port "eth3"
            Interface "eth3"
        Port "br-eth3"
            Interface "br-eth3"
                type: internal
    Bridge br-tun
        fail_mode: secure
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port "vxlan-0e0e0e02"
            Interface "vxlan-0e0e0e02"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="14.14.14.1", out_key=flow, remote_ip="14.14.14.2"}
        Port br-tun
            Interface br-tun
                type: internal
    ovs_version: "2.5.1" 
ubuntu@ubuntu:~$ ifconfig eth3
eth3      Link encap:Ethernet  HWaddr 00:0c:29:25:db:8c  
          inet6 addr: fe80::20c:29ff:fe25:db8c/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:40051 errors:0 dropped:0 overruns:0 frame:0
          TX packets:51087 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:6907123 (6.9 MB)  TX bytes:81805610 (81.8 MB)
ubuntu@ubuntu:~$ ifconfig br-eth3
br-eth3   Link encap:Ethernet  HWaddr 00:0c:29:25:db:8c  
          inet addr:14.14.14.1  Bcast:14.14.14.255  Mask:255.255.255.0
          inet6 addr: fe80::d413:1fff:fe62:cdd8/64 Scope:Link
          UP BROADCAST RUNNING  MTU:1500  Metric:1
          RX packets:1377 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1573 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:315330 (315.3 KB)  TX bytes:283030 (283.0 KB)
ubuntu@ubuntu:~$ ifconfig vm1
vm1       Link encap:Ethernet  HWaddr 6a:d6:1b:77:2d:95  
          inet addr:10.0.0.1  Bcast:10.0.0.255  Mask:255.255.255.0
          inet6 addr: fe80::68d6:1bff:fe77:2d95/64 Scope:Link
          UP BROADCAST RUNNING  MTU:1450  Metric:1
          RX packets:506 errors:0 dropped:0 overruns:0 frame:0
          TX packets:768 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:39788 (39.7 KB)  TX bytes:61932 (61.9 KB)

Please note the MTU value on vm1 is set to 1450, leaving room for the VXLAN encapsulation overhead on the 1500-byte physical MTU.

Hyper-V OVS configuration

Let us presume that you have a Hyper-V virtual switch named vSwitch, bound to the interface port1.

The following commands will create an IP-able device, add the physical NIC to the bridge, enable the device, set the IP address 14.14.14.2 on the device, add a bridge to which we will attach the VMs, and create another bridge carrying the tunneling information on its port.

ovs-vsctl.exe add-br br-port1
ovs-vsctl.exe add-port br-port1 port1
Enable-NetAdapter br-port1
New-NetIpAddress -IpAddress 14.14.14.2 -PrefixLength 24 -InterfaceAlias br-port1
ovs-vsctl.exe add-br br-int
ovs-vsctl.exe add-port br-int patch-tun -- set interface patch-tun type=patch options:peer=patch-int
ovs-vsctl.exe add-br br-tun
ovs-vsctl.exe add-port br-tun patch-int -- set interface patch-int type=patch options:peer=patch-tun
ovs-vsctl.exe add-port br-tun vxlan-1 -- set interface vxlan-1 type=vxlan options:local_ip=14.14.14.2 options:remote_ip=14.14.14.1 options:in_key=flow options:out_key=flow

As you can see, all the commands are very familiar if you are used to OVS on Linux.

As introduced before, the main area where the Hyper-V implementation differs from its Linux counterpart is in how virtual machines are attached to a given OVS port. This is easily accomplished by using the Set-VMNetworkAdapterOVSPort PowerShell cmdlet provided with the installer (please refer to part 1 for details on installing OVS).

Let’s say that we have a Hyper-V virtual machine called “instance-00000003,” and we want to connect it to the Hyper-V OVS switch. What we have to do for each VM network adapter is connect it to the Hyper-V Virtual Switch (vSwitch) as you would normally do, assign it to a given OVS port and create the corresponding ports in OVS:

$vnic = Get-VMNetworkAdapter instance-00000003
Connect-VMNetworkAdapter -VMNetworkAdapter $vnic -SwitchName vSwitch
$vnic | Set-VMNetworkAdapterOVSPort -OVSPortName vm2
ovs-vsctl.exe add-port br-int vm2

Here is what the resulting OVS configuration looks like on Hyper-V:

PS C:\> ovs-vsctl.exe show
a81a54fc-0a3c-4152-9a0d-f3cbf4abc3ca
    Bridge br-tun
        Port br-tun
            Interface br-tun
                type: internal
        Port "vxlan-1"
            Interface "vxlan-1"
                type: vxlan
                options: {in_key=flow, local_ip="14.14.14.2", out_key=flow, remote_ip="14.14.14.1"}
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    Bridge br-int
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "vm2"
            Interface "vm2"
        Port br-int
            Interface br-int
                type: internal
    Bridge "br-port1"
        Port "port1"
            Interface "port1"
        Port "br-port1"
            Interface "br-port1"
                type: internal

Further control can be accomplished by applying flow rules.
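
For example, you can inspect and adjust the OpenFlow tables on the Hyper-V bridges with ovs-ofctl.exe, exactly as on Linux. The rule below is only an illustration and is not part of the setup above; the OpenFlow port number is hypothetical, so check the real numbers with "show" first.

ovs-ofctl.exe show br-int
ovs-ofctl.exe dump-flows br-tun
ovs-ofctl.exe add-flow br-int "priority=100,in_port=2,actions=drop"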

OVS based networking is now fully functional between KVM and Hyper-V hosted virtual machines!
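
As a quick sanity check (a sketch only; it assumes the guest behind the vm2 port on Hyper-V has been given 10.0.0.2/24, an address not shown above), you can ping across the VXLAN tunnel from the KVM host's vm1 interface:

ubuntu@ubuntu:~$ ping -c 3 10.0.0.2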

This post first appeared on the Cloudbase Solutions blog. Superuser is always interested in community content, email: editor@superuser.org.

Cover Photo // CC BY NC

The post How to manage Hyper-V on Open vSwitch appeared first on OpenStack Superuser.

by Superuser at January 13, 2017 01:26 PM

Carlos Camacho

Installing the TripleO UI

This is a brief recipe to install and use the TripleO UI on the Undercloud.

First, once the Undercloud is installed, the TripleO UI is already available on port 3000.

Let’s assume you have the root passwords for both your development environment and the Undercloud node.

The TripleO UI queries the endpoints (e.g. Keystone) directly from your browser, so traffic for the 192.168.24.0/24 network needs to be forwarded from your workstation to the Undercloud node in order to reach all required ports (6385, 5000, 8004, 8080, 9000, 8989, 3000, 13385, 13000, 13004, 13808, 9000, 13989 and 443).

Let’s install sshuttle in your workstation.

sudo yum install -y sshuttle

Now, let’s get the Undercloud IP and configure SSH with a ProxyCommand.

instack_mac=`ssh root@labserver "tripleo get-vm-mac instack"`
undercloudIp=`ssh root@labserver "sudo virsh domifaddr instack" | grep $instack_mac | awk '{print $4}' | sed 's/\/.*$//'`

cat << EOF >> ~/.ssh/config
Host lab
  Hostname labserver
  User root
Host uc
  Hostname $undercloudIp
  User root
  ProxyCommand ssh -vvvv -W %h:%p root@lab
EOF

sshuttle will ask you for your hypervisor and Undercloud root passwords.

To start forwarding the traffic execute:

sshuttle -e "ssh -vvv" -r root@uc -vvvv 192.168.24.0/24
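
Before switching to the browser, you can verify from another terminal that the forwarded network is reachable. This is just a quick sanity check, not part of the original recipe; Keystone on port 5000 is one of the forwarded services listed above:

curl -I http://192.168.24.1:5000/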

Once you have done this, open http://192.168.24.1:3000/ in your browser and the TripleO UI should load correctly.

If you need a TripleO UI development environment, follow these steps.

The first step will be to install the TripleO UI and all the dependencies.

  cd
  sudo yum install -y nodejs npm tmux
  git clone https://github.com/openstack/tripleo-ui.git
  cd tripleo-ui
  npm install

Now, we need to update all the TripleO UI config files:

  cd
  cp ~/tripleo-ui/dist/tripleo_ui_config.js.sample ~/tripleo-ui/dist/tripleo_ui_config.js
  echo "Changing the default IP"
  export ENDPOINT_ADDR=$(cat stackrc | grep OS_AUTH_URL= | awk -F':' '{print $2}'| tr -d /)
  sed -i "s/[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}/$ENDPOINT_ADDR/g" ~/tripleo-ui/dist/tripleo_ui_config.js

  echo "Removing comments for the keystone URI"
  sed -i '/^  \/\/ \"keystone\"\:/s/^  \/\///' ~/tripleo-ui/dist/tripleo_ui_config.js

  echo "Removing comments for the zaqar_default_queue"
  sed -i '/^  \/\/ \"zaqar_default_queue\"\:/s/^  \/\///' ~/tripleo-ui/dist/tripleo_ui_config.js

  # Uncomment all the parameters
  # sed -i '/^  \/\/ \".*\"\:/s/^  \/\///' ~/tripleo-ui/dist/tripleo_ui_config.js

  echo "Changing listening port for the dev server, 3000 already used"
  sed -i '/port: 3000/s/3000/33000/' ~/tripleo-ui/webpack.config.js

In the following step we will use tmux to keep the service running for debugging purposes.

  cd
  tmux new -s tripleo-ui
  cd ~/tripleo-ui/
  npm start

At this stage your node server should be up and running (port 33000).

As with the default installation from the first step, you can now log in to the TripleO UI, this time at: http://localhost:33000/

Happy TripleOing!

Updated 2017/01/13: First version.

Updated 2017/01/14: Add default TripleO UI info. Still getting 'Connection to Keystone is not available' even though the config params are correct; investigating...

Updated 2017/01/17: Forwarded all the required ports using sshuttle.

by Carlos Camacho at January 13, 2017 12:00 AM

January 12, 2017

Cloudwatt

5 Minutes Stacks, episode 46: OpenStack CLI

Episode 46: OpenStack CLI

This stack helps you control the different modules of the Cloudwatt OpenStack infrastructure. It starts from a Debian Jessie image with the openstack client installed, plus your credentials, so that you can access the Cloudwatt APIs from the instance's shell.

Preparations

The version

  • openstackclient 3.6.0

The prerequisites to deploy this stack

Size of the instance

By default, the stack deploys on an instance of type “Tiny” (t1.cw.tiny). A variety of other instance types exist to suit your various needs, allowing you to pay only for the services you need. Instances are charged by the minute and capped at their monthly price (you can find more details on the Tarifs page on the Cloudwatt website).

Other stack parameters, of course, are yours to tweak at your fancy.

Start-up

But is there no way to create the stack from the console?

There is indeed! Using the console, you can deploy an OpenStack CLI server:

  1. Go to the Cloudwatt GitHub in the applications/Jestart repository
  2. Click on the file named ‘bundle-jessie-openstack-cli.heat.yml’
  3. Click on RAW, a web page will appear containing purely the template
  4. Save the file to your PC. You can use the default name proposed by your browser (just remove the .txt)
  5. Go to the « Stacks » section of the console
  6. Click on « Launch stack », then « Template file » and select the file you just saved to your PC, and finally click on « NEXT »
  7. Name your stack in the « Stack name » field and click “LAUNCH”
  8. Enter the name of your keypair in the « SSH Keypair » field
  9. Choose your instance size using the « Instance Type » dropdown and click on « LAUNCH »
  10. Enter the name of your network_name in the « network_name » field
  11. Enter the name of your os_auth_url in the « os_auth_url » field
  12. Choose the region (fr1 or fr2) From the drop-down menu « os_region_name »
  13. Enter the name of your os_tenant_name in the « os_tenant_name » field
  14. Enter the name of your os_username in the « os_username » field
  15. Enter the name of your os_password in the « os_password » field then click « LAUNCH »

The stack will be automatically generated (you can see its progress by clicking on its name). When all modules become green, the creation will be complete. If you’ve reached this point, you’re already done!

A one-click deployment sounds really nice…

… Good! Go to the Apps page on the Cloudwatt website, choose the app, press DEPLOYER and follow the simple steps… 2 minutes later, a green button appears… ACCEDER: you have your stack!

Enjoy

Here are some examples of using the openstack command.

You now have an SSH access point on your virtual machine (through the floating IP and your private keypair, with the default user name cloud).

To display the instances that are on your tenant:

$ openstack server list

To display the images that are on your tenant:

$ openstack image list

To display the networks that are on your tenant:

$ openstack network list

To create a stack via a Heat template:

$ openstack stack create MYSTACK --template server_console.yaml

To list the resources of your stack:

$ openstack stack resource list MYSTACK

To display how to use the openstack command:

$ openstack help

The environment variables are in the /home/cloud/.bashrc file; to display them:

$ env | grep OS

OS_REGION_NAME=fr1
OS_PASSWORD=xxxxxxxxxxxxxxxxxxxxx
OS_AUTH_URL=https://identity.fr1.cloudwatt.com/v2.0
OS_USERNAME=your_username
OS_TENANT_NAME=xxxxxxxxxxxxxxxxxx
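
These are the standard variables the openstack client reads. If you want to point the same instance at the other region for one session (a small aside, not part of the stack itself), you can simply override the region before running commands:

$ export OS_REGION_NAME=fr2
$ openstack server list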

Other resources you could be interested in:

Have fun. Hack in peace.

by Julien DEPLAIX at January 12, 2017 11:00 PM

Aptira

OpenStack Election 2017: Vote for Kavit – The Voice of Users & Operators

Video: https://vimeo.com/198980605

We’re now at day 4 of the OpenStack election and it’s time to amplify the voice of real world OpenStack users – who better to do this than someone with real world experience? Kavit has worked on over 30 different production sites in the last five years and understands the end user journey.

Vote for Kavit to support users, operators and user groups.

Kavit has worked hard to give voice to the needs of the users and operators over his two terms at the OpenStack Foundation Board. Kavit understands what the users and operators of OpenStack need and has strongly supported the empowerment of the User Committee and the various user groups during his tenure.

To help Kavit promote the voice for real world users, please vote for Kavit in the OpenStack election!

The post OpenStack Election 2017: Vote for Kavit – The Voice of Users & Operators appeared first on Aptira Cloud Solutions.

by Jessica Field at January 12, 2017 09:56 PM

Amrith Kumar

Automating OpenStack’s gerrit commands with a CLI

Every OpenStack developer has to interact with the gerrit code review system. Reviewers and core reviewers have to do this even more, and PTL’s do a lot of this. The web-based interface is not conducive to many of the more common things that one has to do while managing a project and early on, I … Continue reading "Automating OpenStack’s gerrit commands with a CLI"

by amrith at January 12, 2017 08:49 PM

Sean Roberts

Open Source First

This is a manifesto that any private organization can use to frame their collaboration transformation. Take a read. Let me know what you think. I will be giving a talk at the Linux TODO...

by sarob at January 12, 2017 07:26 PM

SUSE Conversations

Pi Aims for Large Slice of the Cloud Market with OpenStack

Just in case anyone thinks there’s a typo in the title, I’m referring to “Pi” as in the 16th letter of Greek alphabet or the mathematical constant. Not “Pie” as in apple pie. More specifically, this is about Pi DATACENTERS, a cloud service provider in India who is aiming to capture a large slice of …

+read more

The post Pi Aims for Large Slice of the Cloud Market with OpenStack appeared first on SUSE Blog.

by Mark_Smith at January 12, 2017 06:54 PM

Aptira

Intentions & Contributions for the 2017 OpenStack Election

The voice of users and operators - OpenStack Election 2017

Before the OpenStack election draws to a close, I’d like to take this opportunity to remind voters of my intentions for 2017:

  • More events, including bringing an OpenStack Summit to India
  • Build and solidify the OpenStack brand
  • Create an inclusive and harmonious ecosystem
  • Be the voice for real world users of OpenStack

If successful at this election, I will continue to build on the long history of contributions that Aptira has made to OpenStack at the Board level:

  • Championing travel support to bring more members to events
  • Calling for certification of training services and outcomes
  • Suggesting ops meetups
  • Arguing for the importance of diversity
  • Placing a greater importance on user feedback

So please vote for me during the OpenStack election 2017 so I can continue to align the goals of the OpenStack Foundation with real world users of OpenStack.

The post Intentions & Contributions for the 2017 OpenStack Election appeared first on Aptira Cloud Solutions.

by Kavit Munshi at January 12, 2017 06:23 PM

OpenStack Superuser

How Fujitsu powers ‘human-centric’ artificial intelligence

Imagine a world where the person driving next to you while multitasking (eating, checking texts, reading) pays more for insurance than you do, because you keep your eyes on the road.

That’s one of the areas where Fujitsu Ltd. is concentrating its artificial intelligence (AI) research efforts, along with signature analysis as-a-service and 3D image modeling. If right now it seems futuristic to control the heat in your house or check the charge of your Tesla from your phone, you’d better hang on.

“We’re living in a hyper-connected world, where every person and every device is connected,” says Jason Daniels, Fujitsu Hybrid IT CTO. “That’s pretty cool, though it’s not cool if you’re an enterprise,” he adds, noting that large corporations are big, monolithic and move at a glacial pace in the breakneck digital era.

Recognizing the challenge, Fujitsu stepped up the pace by fueling R&D into artificial intelligence over 30 years ago, leading to some nifty use cases, including a crowd-congestion app that could help you figure out when to hit the streets for Black Friday or the optimal time to go to the stadium. More recently, artificial intelligence teams built MetaArc, a cloud-based digital business platform offering a bundle of services including internet of things, blockchain technologies, big data and AI. The Japanese multinational IT services provider will rev things up by more than doubling AI-related staff to 1,500 by 2018.

At the OpenStack Summit in Barcelona, Daniels and colleague Roger Menday, principal researcher at Fujitsu Laboratories Europe, demonstrated what they call “Human-centric AI Zenrai,” and how it’s powered by Cloud Foundry, Docker and OpenStack.

For the car example, Menday showed a video where a driver wearing a wristband tracking movements is “busted” while playing with his phone or eating while at the wheel. The simulation screen highlights different colors tracking the activities; the program essentially generates a time series, the images changing in visual recognition of particular patterns.

“Rather than getting into the hairy work of tuning your neural network for every single domain…we’re working like a human brain does to recognize patterns in pictures,” Menday says. This “imagification” process works similarly for signatures (recognizing inliers and outliers) and for 3D shapes, where it can pick out comparable parts for machinery. Menday says the underlying concept is to turn data problems into image problems by using the source data and training the network accordingly.

These great technologies are fueled by Cloud K5, the service at the center of MetaArc, Daniels says. “OpenStack will power [the] digital business platform, digital transformation [will] power our future, putting people at the heart of what we do,” Daniels says. K5 currently operates in Japan and several regions in the U.K.; a total of 24+ availability zones will arrive at the end of the deployment for this large-scale public cloud offering.

OpenStack offers an API-first approach, Menday says. For example, in the case of object storage with Swift, “we can do this from our applications in terms of putting up and building a machine learning deployment, we use Heat, which is great — just one click and the whole infrastructure is there and ready to go.” The API side runs through Cloud Foundry, Menday says, adding that the team likes it very much. “It’s quick and easy and offers very nice rapid deployment cycles.” The machine learning environment is prepared with Docker, pairing it side-by-side with Cloud Foundry, he notes.

“It’s a very human-centric way about thinking about AI and engineering AI and then deploying it to the cloud. This is a journey that we’re currently in the middle of,” Menday says. “It’s how we use OpenStack, how we can leverage these services to get high-availability robustness.”

Putting resources into artificial intelligence to future-proof business is on a lot of people’s minds. Although only one other person in the audience was working on AI when Daniels asked for a show of hands, investment in the sector is booming. Spending on AI technologies is expected to grow to $47 billion by 2020.

You can catch the entire 32-minute session here or below. Check out who will be speaking at the next OpenStack Summit in Boston here.

Video: https://www.youtube.com/watch?v=Dp6u3HcdyCg

The post How Fujitsu powers ‘human-centric’ artificial intelligence appeared first on OpenStack Superuser.

by Nicole Martinelli at January 12, 2017 02:25 PM

Dougal Matthews

Calling Ansible from Mistral Workflows

I have spoken with a few people that were interested in calling Ansible from Mistral Workflows.

I finally got around to trying to make this happen. All that was needed was a very small and simple custom action that I put together, uploaded to GitHub and also published to PyPI.

Here is an example of a simple Mistral Workflow that makes use of these new actions.

---
version: 2.0

run_ansible_playbook:
  type: direct
  tasks:
    run_playbook:
      action: ansible-playbook
      input:
        playbook: path/to/playbook.yaml

Installing and getting started with this action is fairly simple. This is how I did it on my TripleO undercloud.

sudo pip install mistral-ansible-actions;
sudo mistral-db-manage populate;
sudo systemctl restart openstack-mistral*;

There is one gotcha that might be confusing. The Mistral Workflow runs as the mistral user, which means that user needs permission to access the Ansible playbook files.

After you have installed the custom actions, you can test them with the Mistral CLI. The first command should work without any extra setup; the second requires you to create a playbook somewhere and give the mistral user access to it.

mistral run-action ansible '{"hosts": "localhost", "module": "setup"}'
mistral run-action ansible-playbook '{"playbook": "path/to/playbook.yaml"}'
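
To run the example workflow from the top of this post rather than an ad hoc action, you can register it and start an execution with the Mistral CLI. This is only a sketch; the file name ansible_workflow.yaml is hypothetical and should be whatever you saved the workflow definition as:

mistral workflow-create ansible_workflow.yaml
mistral execution-create run_ansible_playbook
mistral execution-list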

The action supports a few other input parameters, they are all listed for now in the README in the git repo. This is a very young project, but I am curious to know if people find it useful and what other features it would need.

If you want to write custom actions, check out the Mistral documentation.

by Dougal Matthews at January 12, 2017 02:20 PM

January 11, 2017

IBM OpenTech Team

The OpenStack Interoperability Challenge Update: Phase Two Progress

brad_don_team

Overview

In 2016, the OpenStack Interoperability Challenge was announced by IBM GM Don Rippert at the OpenStack Austin Summit. This effort was the first initiative to use the deployment and execution of enterprise workloads with automated deployment tools as the means of proving interoperability across OpenStack cloud environments. The first phase of the OpenStack Interoperability Challenge culminated with a Barcelona Summit keynote demo comprising 16 vendors all running the same enterprise workload and automation tools, to illustrate that OpenStack enables workload portability across public and private OpenStack clouds. Here is a short trip down memory lane:
[Photos: Keynote Rehearsal AV check; Interop Challenge Demo Team Huddles at Rehearsal; Interop Challenge Demo Team Backstage Trophy Presentation]

Interoperability Challenge Momentum Continues: Phase Two

Building upon the momentum generated at the OpenStack Summit in Barcelona, the Interoperability Challenge team decided to move forward with a second phase of the challenge. For the second phase, we needed to decide on what new workloads to use to drive interoperability improvements. We received lots of great suggestions. In December 2016, a Doodle poll was opened to the full OpenStack community to determine which workloads should be prioritized. The results of the Doodle poll are provided below.
[Image: Interop Challenge Phase Two Doodle Poll results]

Based on the results of the Doodle poll, the Interoperability Challenge team will be focusing on a Kubernetes workload and an NFV workload for phase two. The creation of these workloads is currently in progress.
A New Interoperability Challenge Project Repository

After all the great progress in Barcelona, the Interoperability Challenge team was rewarded with its own new project repository for storing our workloads. Our new repository is now located at https://github.com/openstack/interop-workloads

New Bug Tracker

We also have a new bug tracker. You can now open bugs on our interoperability workloads using the following link: https://bugs.launchpad.net/interop-workloads

Please come join us: How to get involved

The OpenStack Interoperability Challenge Team is always looking for greater participation from the OpenStack Community. Here are some details on how you can join this effort:

Weekly Meeting

Our team has a weekly IRC meeting on Wednesdays at 1400 UTC, in the #openstack-meeting-5 channel (on freenode). Logs from past meetings can be found here. If you would like to receive a reminder invite to this meeting, please email Tong Li at litong01@us.ibm.com.

Half Day Session at OpenStack PTG

We are planning a half-day, face-to-face meeting at the OpenStack Project Teams Gathering (PTG). The agenda for this meeting can be found on our etherpad. If you plan to attend, please add your name to the attendee list section of the etherpad.

Interoperability Challenge Wiki Page

Our Interoperability Challenge Wiki Page has pretty much all the information you need to get involved. Please come join us! If you have any questions about this initiative, please contact Brad Topol at btopol@us.ibm.com.

The post The OpenStack Interoperability Challenge Update: Phase Two Progress appeared first on IBM OpenTech.

by Brad Topol at January 11, 2017 09:27 PM

SUSE Conversations

SUSE Expert Days: rejoignez-nous à Paris

For the past six years, we have been organizing a day in Paris entirely dedicated to your challenges and to how SUSE products can help you. No marketing speeches, but roadmaps, new features, live technical demos and talks from experts… In short: answers! As the name suggests, the Expert Days are presented by experts, …

+read more

The post SUSE Expert Days: rejoignez-nous à Paris appeared first on SUSE Blog.

by eportefaix at January 11, 2017 03:36 PM

Aptira

OpenStack Election 2017: Vote for Kavit – Inclusive Ecosystem

Video: https://vimeo.com/198906058

Today is day 3 of the OpenStack election, and we have another great reason why Kavit deserves your vote.

Vote for Kavit for an inclusive ecosystem!

OpenStack needs to embrace inclusiveness, both in membership and software. Kavit supports and pushes for OpenStack to become the de facto open source cloud platform. Kavit has also pushed for the OpenStack project and Foundation to work closely with other related or seemingly competing projects to ensure harmony and inclusiveness.

For a harmonious OpenStack ecosystem filled with sunshine and rainbows, please vote for Kavit in the OpenStack election.

The post OpenStack Election 2017: Vote for Kavit – Inclusive Ecosystem appeared first on Aptira Cloud Solutions.

by Jessica Field at January 11, 2017 01:56 PM

Mirantis

Introduction to YAML: Creating a Kubernetes deployment

In previous articles, we’ve been talking about how to use Kubernetes to spin up resources. So far, we’ve been working exclusively on the command line, but there’s an easier and more useful way to do it: creating configuration files using YAML. In this article, we’ll look at how YAML works and use it to define first a Kubernetes Pod, and then a Kubernetes Deployment.

YAML Basics

It’s difficult to escape YAML if you’re doing anything related to many software fields — particularly Kubernetes, SDN, and OpenStack. YAML, which stands for Yet Another Markup Language, or YAML Ain’t Markup Language (depending on who you ask), is a human-readable, text-based format for specifying configuration-type information. In this article, we’ll pick apart the YAML definitions for creating first a Pod, and then a Deployment.

Using YAML for K8s definitions gives you a number of advantages, including:

  • Convenience: You’ll no longer have to add all of your parameters to the command line
  • Maintenance: YAML files can be added to source control, so you can track changes
  • Flexibility: You’ll be able to create much more complex structures using YAML than you can on the command line

YAML is a superset of JSON, which means that any valid JSON file is also a valid YAML file. So on the one hand, if you know JSON and you’re only ever going to write your own YAML (as opposed to reading other people’s) you’re all set.  On the other hand, that’s not very likely, unfortunately.  Even if you’re only trying to find examples on the web, they’re most likely in (non-JSON) YAML, so we might as well get used to it.  Still, there may be situations where the JSON format is more convenient, so it’s good to know that it’s available to you.

Fortunately, there are only two types of structures you need to know about in YAML:

  • Lists
  • Maps

That’s it. You might have maps of lists and lists of maps, and so on, but if you’ve got those two structures down, you’re all set. That’s not to say there aren’t more complex things you can do, but in general, this is all you need to get started.

YAML Maps

Let’s start by looking at YAML maps.  Maps let you associate name-value pairs, which of course is convenient when you’re trying to set up configuration information.  For example, you might have a config file that starts like this:

---
apiVersion: v1
kind: Pod

The first line is a separator, and is optional unless you’re trying to define multiple structures in a single file. From there, as you can see, we have two values, v1 and Pod, mapped to two keys, apiVersion and kind.

This kind of thing is pretty simple, of course, and you can think of it in terms of its JSON equivalent:

{
   "apiVersion": "v1",
   "kind": "Pod"
}

Notice that in our YAML version, the quotation marks are optional; the processor can tell that you’re looking at a string based on the formatting.

You can also specify more complicated structures by creating a key that maps to another map, rather than a string, as in:

---
apiVersion: v1
kind: Pod
metadata:
  name: rss-site
  labels:
    app: web

In this case, we have a key, metadata, that has as its value a map with 2 more keys, name and labels. The labels key itself has a map as its value. You can nest these as far as you want to.

The YAML processor knows how all of these pieces relate to each other because we’ve indented the lines. In this example I’ve used 2 spaces for readability, but the number of spaces doesn’t matter — as long as it’s at least 1, and as long as you’re CONSISTENT.  For example, name and labels are at the same indentation level, so the processor knows they’re both part of the same map; it knows that app is a value for labels because it’s indented further.

Quick note: NEVER use tabs in a YAML file.

So if we were to translate this to JSON, it would look like this:

{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
               "name": "rss-site",
               "labels": {
                          "app": "web"
                         }
              }
}

Now let’s look at lists.

YAML lists

YAML lists are literally a sequence of objects.  For example:

args:
  - sleep
  - "1000"
  - message
  - "Bring back Firefly!"

As you can see here, you can have virtually any number of items in a list, which is defined as items that start with a dash (-) indented from the parent.  So in JSON, this would be:

{
   "args": ["sleep", "1000", "message", "Bring back Firefly!"]
}

And of course, members of the list can also be maps:

---
apiVersion: v1
kind: Pod
metadata:
  name: rss-site
  labels:
    app: web
spec:
  containers:
    - name: front-end
      image: nginx
      ports:
        - containerPort: 80
    - name: rss-reader
      image: nickchase/rss-php-nginx:v1
      ports:
        - containerPort: 88

So as you can see here, we have a list of containers “objects”, each of which consists of a name, an image, and a list of ports.  Each list item under ports is itself a map that lists the containerPort and its value.

For completeness, let’s quickly look at the JSON equivalent:

{
   "apiVersion": "v1",
   "kind": "Pod",
   "metadata": {
                 "name": "rss-site",
                 "labels": {
                             "app": "web"
                           }
               },
    "spec": {
       "containers": [{
                       "name": "front-end",
                       "image": "nginx",
                       "ports": [{
                                  "containerPort": "80"
                                 }]
                      }, 
                      {
                       "name": "rss-reader",
                       "image": "nickchase/rss-php-nginx:v1",
                       "ports": [{
                                  "containerPort": "88"
                                 }]
                      }]
            }
}

As you can see, we’re starting to get pretty complex, and we haven’t even gotten into anything particularly complicated! No wonder YAML is replacing JSON so fast.

So let’s review.  We have:

  • maps, which are groups of name-value pairs
  • lists, which are sequences of individual items
  • maps of maps
  • maps of lists
  • lists of lists
  • lists of maps

Basically, whatever structure you want to put together, you can do it with those two structures.  

Creating a Pod using YAML

OK, so now that we’ve got the basics out of the way, let’s look at putting this to use. We’re going to first create a Pod, then a Deployment, using YAML.

If you haven’t set up your cluster and kubectl, go ahead and check out this article series on setting up Kubernetes before you go on.  It’s OK, we’ll wait….

Back already?  Great!  Let’s start with a Pod.

Creating the pod file

In our previous example, we described a simple Pod using YAML:


apiVersion: v1
kind: Pod
metadata:
 name: rss-site
 labels:
   app: web
spec:
 containers:
   - name: front-end
     image: nginx
     ports:
       - containerPort: 80
   - name: rss-reader
     image: nickchase/rss-php-nginx:v1
     ports:
       - containerPort: 88

Taking it apart one piece at a time, we start with the API version; here it’s just v1. (When we get to deployments, we’ll have to specify a different version because Deployments don’t exist in v1.)

Next, we’re specifying that we want to create a Pod; we might specify instead a Deployment, Job, Service, and so on, depending on what we’re trying to achieve.

Next we specify the metadata. Here we’re specifying the name of the Pod, as well as the label we’ll use to identify the pod to Kubernetes.

Finally, we’ll specify the actual objects that make up the pod. The spec property includes any containers, storage volumes, or other pieces that Kubernetes needs to know about, as well as properties such as whether to restart the container if it fails. You can find a complete list of Kubernetes Pod properties in the Kubernetes API specification, but let’s take a closer look at a typical container definition:


spec:
 containers:
   - name: front-end
     image: nginx
     ports:
       - containerPort: 80
   - name: rss-reader

In this case, we have a simple, fairly minimal definition: a name (front-end), the image on which it’s based (nginx), and one port on which the container will listen internally (80).  Of these, only the name is really required, but in general, if you want it to do anything useful, you’ll need more information.

You can also specify more complex properties, such as a command to run when the container starts, arguments it should use, a working directory, or whether to pull a new copy of the image every time it’s instantiated.  You can also specify even deeper information, such as the location of the container’s exit log.  Here are the properties you can set for a Container:

  • name
  • image
  • command
  • args
  • workingDir
  • ports
  • env
  • resources
  • volumeMounts
  • livenessProbe
  • readinessProbe
  • lifecycle
  • terminationMessagePath
  • imagePullPolicy
  • securityContext
  • stdin
  • stdinOnce
  • tty

Now let’s go ahead and actually create the pod.

Creating the pod using the YAML file

The first step, of course, is to go ahead and create a text file.   Call it pod.yaml and add the following text, just as we specified it earlier:


apiVersion: v1
kind: Pod
metadata:
 name: rss-site
 labels:
   app: web
spec:
 containers:
   - name: front-end
     image: nginx
     ports:
       - containerPort: 80
   - name: rss-reader
     image: nickchase/rss-php-nginx:v1
     ports:
       - containerPort: 88

Save the file, and tell Kubernetes to create its contents:

> kubectl create -f pod.yaml
pod "rss-site" created

As you can see, K8s references the name we gave the Pod.  You can see that if you ask for a list of the pods:

> kubectl get pods
NAME       READY     STATUS              RESTARTS   AGE
rss-site   0/2       ContainerCreating   0          6s

If you check early enough, you can see that the pod is still being created.  After a few seconds, you should see the containers running:

> kubectl get pods
NAME       READY     STATUS    RESTARTS   AGE
rss-site   2/2       Running   0          14s
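
One quick way to poke at the running Pod (a small aside that wasn't in the original walkthrough; it assumes the nginx front-end container from the spec above) is to forward a local port to it and request the page from a second terminal:

> kubectl port-forward rss-site 8080:80
> curl http://localhost:8080/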

From here, you can test out the Pod (just as we did in the previous article), but ultimately we want to create a Deployment, so let’s go ahead and delete it so there aren’t any name conflicts:

> kubectl delete pod rss-site
pod "rss-site" deleted

Troubleshooting pod creation

Sometimes, of course, things don’t go as you expect. Maybe you’ve got a networking issue, or you’ve mistyped something in your YAML file.  You might see an error like this:

> kubectl get pods
NAME       READY     STATUS         RESTARTS   AGE
rss-site   1/2       ErrImagePull   0          9s

In this case, we can see that one of our containers started up just fine, but there was a problem with the other.  To track down the problem, we can ask Kubernetes for more information on the Pod:

> kubectl describe pod rss-site
Name:           rss-site
Namespace:      default
Node:           10.0.10.7/10.0.10.7
Start Time:     Sun, 08 Jan 2017 08:36:47 +0000
Labels:         app=web
Status:         Pending
IP:             10.200.18.2
Controllers:    <none>
Containers:
  front-end:
    Container ID:               docker://a42edaa6dfbfdf161f3df5bc6af05e740b97fd9ac3d35317a6dcda77b0310759
    Image:                      nginx
    Image ID:                   docker://sha256:01f818af747d88b4ebca7cdabd0c581e406e0e790be72678d257735fad84a15f
    Port:                       80/TCP
    State:                      Running
      Started:                  Sun, 08 Jan 2017 08:36:49 +0000
    Ready:                      True
    Restart Count:              0
    Environment Variables:      <none>
  rss-reader:
    Container ID:
    Image:                      nickchase/rss-php-nginx
    Image ID:
    Port:                       88/TCP
    State:                      Waiting
      Reason:                   ErrImagePull
    Ready:                      False
    Restart Count:              0
    Environment Variables:      <none>
Conditions:
  Type          Status
  Initialized   True
  Ready         False
  PodScheduled  True
No volumes.
QoS Tier:       BestEffort
Events:
  FirstSeen     LastSeen        Count   From                    SubobjectPath  Type             Reason                  Message
  ---------     --------        -----   ----                    -------------  -------- ------                  -------
  45s           45s             1       {default-scheduler }                   Normal           Scheduled               Successfully assigned rss-site to 10.0.10.7
  44s           44s             1       {kubelet 10.0.10.7}     spec.containers{front-end}      Normal          Pulling                 pulling image "nginx"
  45s           43s             2       {kubelet 10.0.10.7}                    Warning          MissingClusterDNS       kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. Falling back to DNSDefault policy.
  43s           43s             1       {kubelet 10.0.10.7}     spec.containers{front-end}      Normal          Pulled                  Successfully pulled image "nginx"
  43s           43s             1       {kubelet 10.0.10.7}     spec.containers{front-end}      Normal          Created                 Created container with docker id a42edaa6dfbf
  43s           43s             1       {kubelet 10.0.10.7}     spec.containers{front-end}      Normal          Started                 Started container with docker id a42edaa6dfbf
  43s           29s             2       {kubelet 10.0.10.7}     spec.containers{rss-reader}     Normal          Pulling                 pulling image "nickchase/rss-php-nginx"
  42s           26s             2       {kubelet 10.0.10.7}     spec.containers{rss-reader}     Warning         Failed                  Failed to pull image "nickchase/rss-php-nginx": Tag latest not found in repository docker.io/nickchase/rss-php-nginx
  42s           26s             2       {kubelet 10.0.10.7}                    Warning          FailedSync              Error syncing pod, skipping: failed to "StartContainer" for "rss-reader" with ErrImagePull: "Tag latest not found in repository docker.io/nickchase/rss-php-nginx"


  41s   12s     2       {kubelet 10.0.10.7}     spec.containers{rss-reader}    Normal   BackOff         Back-off pulling image "nickchase/rss-php-nginx"
  41s   12s     2       {kubelet 10.0.10.7}                                    Warning  FailedSync      Error syncing pod, skipping: failed to "StartContainer" for "rss-reader" with ImagePullBackOff: "Back-off pulling image \"nickchase/rss-php-nginx\""

As you can see, there’s a lot of information here, but we’re most interested in the Events — specifically, once the warnings and errors start showing up.  From here I was able to quickly see that I’d forgotten to add the :v1 tag to my image, so it was looking for the :latest tag, which didn’t exist.  

To fix the problem, I first deleted the Pod, then fixed the YAML file and started again. Instead, I could have fixed the repo so that Kubernetes could find what it was looking for, and it would have continued on as though nothing had happened.

Now that we’ve successfully gotten a Pod running, let’s look at doing the same for a Deployment.

Creating a Deployment using YAML

Finally, we’re down to creating the actual Deployment.  Before we do that, though, it’s worth understanding what it is we’re actually doing.

K8s, remember, manages container-based resources. In the case of a Deployment, you’re creating a set of resources to be managed. For example, where we created a single instance of the Pod in the previous example, we might create a Deployment to tell Kubernetes to manage a set of replicas of that Pod — literally, a ReplicaSet — to make sure that a certain number of them are always available.  So we might start our Deployment definition like this:


apiVersion: extensions/v1beta1
kind: Deployment
metadata:
 name: rss-site
spec:
 replicas: 2

Here we’re specifying the apiVersion as extensions/v1beta1 — remember, Deployments aren’t in v1, as Pods were — and that we want a Deployment. Next we specify the name. We can also specify any other metadata we want, but let’s keep things simple for now.

Finally, we get into the spec. In the Pod spec, we gave information about what actually went into the Pod; we’ll do the same thing here with the Deployment. We’ll start, in this case, by saying that whatever Pods we deploy, we always want to have 2 replicas. You can set this number however you like, of course, and you can also set properties such as the selector that defines the Pods affected by this Deployment, or the minimum number of seconds a pod must be up without any errors before it’s considered “ready”. You can find a full list of the Deployment specification properties in the Kubernetes v1beta1 API reference.

OK, so now that we know we want 2 replicas, we need to answer the question: “Replicas of what?”  They’re defined by templates:


apiVersion: extensions/v1beta1
kind: Deployment
metadata:
 name: rss-site
spec:
 replicas: 2
 template:
   metadata:
     labels:
       app: web
   spec:
     containers:
       - name: front-end
         image: nginx
         ports:
           - containerPort: 80
       - name: rss-reader
         image: nickchase/rss-php-nginx:v1
         ports:
           - containerPort: 88

Look familiar?  It should; it’s virtually identical to the Pod definition in the previous section, and that’s by design. Templates are simply definitions of objects to be replicated — objects that might, in other circumstances, be created on their own.

Now let’s go ahead and create the deployment.  Add the YAML to a file called deployment.yaml and point Kubernetes at it:

> kubectl create -f deployment.yaml
deployment "rss-site" created

To see how it’s doing, we can check on the deployments list:

> kubectl get deployments
NAME       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
rss-site   2         2         2            1           7s

As you can see, Kubernetes has started both replicas, but only one is available. You can check the event log by describing the Deployment, as before:

> kubectl describe deployment rss-site
Name:                   rss-site
Namespace:              default
CreationTimestamp:      Mon, 09 Jan 2017 17:42:14 +0000
Labels:                 app=web
Selector:               app=web
Replicas:               2 updated | 2 total | 1 available | 1 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  1 max unavailable, 1 max surge
OldReplicaSets:         <none>
NewReplicaSet:          rss-site-4056856218 (2/2 replicas created)
Events:
  FirstSeen     LastSeen        Count   From                            SubobjectPath   Type            Reason                  Message
  ---------     --------        -----   ----                            -------------   --------        ------                  -------
  46s           46s             1       {deployment-controller }               Normal           ScalingReplicaSet       Scaled up replica set rss-site-4056856218 to 2

As you can see here, there’s no problem, it just hasn’t finished scaling up yet. Another few seconds, and we can see that both Pods are running:

> kubectl get deployments
NAME       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
rss-site   2         2         2            2           1m
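
If you later decide that two replicas aren't enough, you don't have to delete and re-create anything; one way to change the replica count on a live Deployment (a small aside, not covered in the original article) is:

> kubectl scale deployment rss-site --replicas=3

You could also edit replicas in deployment.yaml and apply the updated file.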

What we’ve seen so far

OK, so let’s review. We’ve basically covered three topics:

  1. YAML is a human-readable, text-based format that lets you easily specify configuration-type information by using a combination of maps of name-value pairs and lists of items (and nested versions of each).
  2. YAML is the most convenient way to work with Kubernetes objects, and in this article we looked at creating Pods and Deployments.
  3. You can get more information on running (or should-be-running) objects by asking Kubernetes to describe them.

So that’s our basic YAML tutorial. We’re going to be tackling a great deal of Kubernetes-related content in the coming months, so if there’s something specific you want to hear about, let us know in the comments, or tweet us at @MirantisIT.

The post Introduction to YAML: Creating a Kubernetes deployment appeared first on Mirantis | The Pure Play OpenStack Company.

by Nick Chase at January 11, 2017 01:45 PM

Aptira

The OpenStack Election – Representing YOUR interests

Representing your interests - OpenStack Election 2017

I would like to remind voters that this OpenStack Election is an election of directors selected from the individual members. Most companies that support the Foundation have already voted, or are in the process of voting, along company lines to represent their interests.

I would urge the voters to not vote for their employers but to vote in true democratic fashion: for who they think would best represent the interest of the individual members and their region.

The Board plays a very important role in the OpenStack ecosystem. Apart from providing guidance and direction to the project at the management level, the Board ensures that the interests of all the different varieties of participants are looked after. The Board also needs to ensure that the project remains relevant and keeps growing in the right direction. The Board needs to ensure that OpenStack stays competitive with other emerging technologies. The balance between the needs of the users and the technical goals of the projects also needs to be maintained by the Board.

The Board is one of the bodies in the OpenStack world that provides a voice for stakeholders around the world. I aim to keep working on increasing the voice of OpenStack customers, developers and operators worldwide, particularly in Asia, through my candidacy.
Please vote for me in the OpenStack Election so I can continue to represent YOU.

The post The OpenStack Election – Representing YOUR interests appeared first on Aptira Cloud Solutions.

by Kavit Munshi at January 11, 2017 01:32 PM

OpenStack Superuser

Cloud Kindergarten preps students for OpenStack careers

Cloud Kindergarten began this year to offer students a chance to learn about OpenStack and how to work with it. The students taking part in this program have access to Devstack so that they can learn about different commands and how to best utilize them in practice. Students are also able to create a tenant or network with routers and host an application like WordPress with databases and web servers.

At the OpenStack Summit in Barcelona, Janika Schafer, Adriano Perri and Oliver Klippel, apprentices at Deutsche Telekom through Cloud Kindergarten, introduced the program to the OpenStack community and shared their work.

After presenting, Schafer and Perri spoke to Superuser TV about what they learned in this virtual sandbox. The pair also described what skills they’re taking with them and what immersing themselves in the cloud community was like.

Video: https://www.youtube.com/watch?v=P-Gx-rkLhi0

The post Cloud Kindergarten preps students for OpenStack careers appeared first on OpenStack Superuser.

by Allison Price at January 11, 2017 12:03 PM

January 10, 2017

Aptira

OpenStack Election 2017: Vote for Kavit – Solidify the OpenStack Brand

Video: https://vimeo.com/198657198

Day 2 of the OpenStack election is now upon us. Let’s help Kavit build OpenStack around the globe:

Vote for Kavit to solidify the OpenStack brand.

Kavit is an OpenStack Ambassador and has worked hard to evangelise the OpenStack project in his region. He has introduced a lot of enterprises, startups and developers to OpenStack and has worked tirelessly to promote OpenStack around the globe.

To help Kavit support OpenStack in the Asia Pacific region, please vote for Kavit!

The post OpenStack Election 2017: Vote for Kavit – Solidify the OpenStack Brand appeared first on Aptira Cloud Solutions.

by Jessica Field at January 10, 2017 02:16 PM

Alessandro Pilotti

OpenStack Newton Benchmarking – Scenario 2

In the previous parts we covered setting up the benchmarking environment and a simple OpenStack scenario. In this part we continue by adding more complexity to the use case being tested.

 

Scenario 2


In the second scenario, a few intermediate steps have been added to increase the complexity of the test (a rough CLI equivalent of one iteration is sketched after the list). The steps are now:

  • nova boot VM
  • wait until nova reports VM as active
  • nova associate floating IP to the VM
  • wait until a SSH connection is available to the VM
  • nova delete floating IP
  • nova delete VM
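
One iteration of this scenario roughly corresponds to the following nova CLI calls (the image, flavor, network and floating IP values are placeholders; the benchmark itself drives the same steps through the OpenStack APIs):

$ nova boot --image cirros --flavor m1.small --nic net-id=<net-uuid> bench-vm
$ nova show bench-vm | grep status               # repeat until the status is ACTIVE
$ nova floating-ip-associate bench-vm <floating-ip>
$ ssh cirros@<floating-ip> true                  # repeat until the SSH connection succeeds
$ nova floating-ip-disassociate bench-vm <floating-ip>
$ nova delete bench-vm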

The following results have been obtained by running the test with 50 VMs in parallel on a single hypervisor and with a total of 200 VMs.

  1. Results for KVM with Ubuntu 16.04.1 LTS (Xenial, default kernel version 4.4.0-45-generic) as the host operating system:
    [chart: kvm-2]
  2. Results for Hyper-V with Windows Server 2012 R2 as the host operating system:
    [chart: hyperv-2012r2-2]
  3. Results for Hyper-V with Windows Server 2016 as the host operating system:
    [chart: hyperv-2016-2]

Remarks for the second scenario: as explained before, each iteration sequence number corresponds to one run of the steps described at the beginning of this post.

On average KVM is ~30% slower than Hyper-V on Windows Server 2012 R2 and almost 42% slower than Hyper-V on Windows Server 2016.

The difference between the two Windows versions is fairly small, with Windows Server 2016 being slightly faster. We nevertheless recommend the latter, since it includes many new features.

The post OpenStack Newton Benchmarking – Scenario 2 appeared first on Cloudbase Solutions.

by Alin Băluțoiu at January 10, 2017 01:00 PM

OpenStack Superuser

Navigating OpenStack: community, release cycles and events

Hopefully last week we piqued your interest in a career path in OpenStack. Like any other open source project, if you’re going to use it—professionally or personally—it’s important to understand its community and design/release patterns.

The OpenStack community

OpenStack has an international community of more than 60,700 individual members. Not all of these members contribute code, but they are all involved in developing, evangelizing or supporting the project.

Individuals have contributed over 20 million lines of code to OpenStack since its inception in 2010. OpenStack’s latest release, Newton, arrived in the first week of October and has more than 2,500 individual contributors. You need not be an expert in infrastructure development to contribute—OpenStack project teams include groups like the internationalization team (which helps translate the dashboard and docs into dozens of languages) and the Docs team—work that’s equally important to OpenStack’s success.

You can find a full list of projects here, ranging from core services like compute and storage to emerging solutions like container networking.

The release cycle

OpenStack releases run on a six-month schedule. Releases are named in alphabetical order after a landmark near the location of that release cycle’s Summit, the big event where the community gathers to plan the next release. The first release was named Austin; the current release is Newton and the upcoming release is Ocata.

Release planning will soon undergo a change in response to community feedback. In the A-N releases, developers and contributors met with users to gather requirements and worked on solutions in their teams at the OpenStack Summit (an event we’ll talk about momentarily—you don’t want to miss it!). This started to become too large a task for the allotted time.

Starting in 2017, the community will try something new: what used to be the Design Summit will be split into two events. More strategic planning conversations and requirements gathering will continue to happen at the Summit in an event to be called the “Forum.” More detailed implementation discussions will happen among contributors at the smaller Project Teams Gathering (PTG), which will occur in between the Summits.

If you’re a developer or administrator working professionally on OpenStack, you might find yourself attending the Forum to give your input on the upcoming release, or becoming a contributor on a project team and attending the PTG!

Summits, Hackathons and everything in between

With such a large and active community, there’s no shortage of ways to meet up with other Stackers. The biggest, mark-your-calendar-don’t-miss-it event is the OpenStack Summit. The Summit is a bi-annual gathering of community members, IT leaders, developers and ecosystem supporters. Each year one Summit is held in North America and one Summit rotates between APAC and EMEA. In April 2016, the Austin, Texas Summit brought together more than 7,800 Stackers. From May 8-11, 2017, the community heads to Boston for a week of hands-on workshops, intensive trainings, stories and advice from real OpenStack users, and discussions submitted and voted on by the community.

In between Summits, community members host OpenStack Days—one or two day gatherings that draw everyone from active contributors to business leaders interested in learning more about OpenStack. Topics are determined by community organizers, and often reflect the challenges pertinent to that community as well as the local industries’ specialties.

The newest OpenStack events for cloud app developers are OpenStack App Hackathons, another community-led event. Ever wondered what you could build if you had 48 hours, unlimited cloud resources and a bunch of pizza? Taiwan hosted the first Hackathon, where the winning team created a tool that helped rehabilitate damaged neuromuscular connections. It was followed by Guadalajara, where the winning team created an app that gave users storage of and access to their healthcare records, a major problem in the team’s local community.

And of course, there’s no shortage of local user groups and Meetups to explore around the world.

Getting started

In the subsequent pieces in this series, we’ll discuss the tools and resources available for sharpening your OpenStack skills and developing the necessary experience to work on OpenStack professionally. But if you’re ready to start exploring, the community has multiple low-risk options to start getting involved.

If you’re interested in development, DevStack is a full OpenStack deployment that you can run on your laptop. If you’re interested in building apps on OpenStack or playing with a public cloud-like environment, you can use TryStack, a free testing sandbox. There is also a plethora of OpenStack Powered clouds in the OpenStack Marketplace.
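
For example, a minimal DevStack bring-up on a fresh Ubuntu virtual machine looks roughly like this (the passwords are placeholders):

$ git clone https://git.openstack.org/openstack-dev/devstack
$ cd devstack
$ cat > local.conf <<EOF
[[local|localrc]]
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=secret
RABBIT_PASSWORD=secret
SERVICE_PASSWORD=secret
EOF
$ ./stack.sh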

As you’re exploring OpenStack, keep ask.openstack.org handy—it’s the OpenStack-only version of Stack Overflow.

Common concerns

You’ve seen the job market, you’ve gotten the community layout and surely you have more questions. In our final installment, we’ll address the experience it takes to get hired to work on OpenStack and share the resources you can use to help get you there. If you have a question you want answered, tweet us at @OpenStack.

Want to learn the basics of OpenStack? Take the new, free online course from The Linux Foundation and EdX. Register Now!

The OpenStack Summit is the most important gathering of IT leaders, telco operators, cloud administrators, app developers and OpenStack contributors building the future of cloud computing. Hear business cases and operational experience directly from users, learn about new products in the ecosystem and build your skills at OpenStack Summit, May 8-11 2017 in Boston. Register Now!

This post first appeared on the Linux blog. Superuser is always interested in community content, email: editor@superuser.org.

Cover Photo // CC BY NC

The post Navigating OpenStack: community, release cycles and events appeared first on OpenStack Superuser.

by Anne Bertucio at January 10, 2017 12:11 PM

Aptira

Supporting the APAC region in the OpenStack Election 2017

Support the APAC region - OpenStack Election 2017

The community has been kind enough to nominate me again as a candidate for this year's Board of Directors election, and thanks to your support, I have been able to highlight several key issues at the Board and with the Foundation.

  • Diversity: I have played a critical role in ensuring that diversity and inclusivity are given due importance. I hope to continue to represent the user community on the Board.
  • Governance: My experience in working with the Board for the past two years has taught me a lot about the need for open and transparent governance.
  • Inclusivity: There is an unwavering need for inclusivity within the ecosystem. I find that the project stands at a crossroads and the right vision and leadership are required to ensure that OpenStack remains relevant in the ever-changing IT ecosystem. I hope the community places their trust in me again to provide that leadership.

Please vote for me so I can continue to support the APAC region during 2017.

The post Supporting the APAC region in the OpenStack Election 2017 appeared first on Aptira Cloud Solutions.

by Kavit Munshi at January 10, 2017 06:56 AM

January 09, 2017

Cloudwatt

Blueprint: Kubernetes HA multi-tenant multi-region

Kubernetes

Kubernetes is an open-source platform for automating deployment, scaling, and operations of application containers across clusters of hosts, providing container-centric infrastructure. With Kubernetes, you are able to quickly and efficiently respond to customer demand:

  • Deploy your applications quickly and predictably.
  • Scale your applications on the fly.
  • Seamlessly roll out new features.
  • Optimize use of your hardware by using only the resources you need.

Our goal is to foster an ecosystem of components and tools that relieve the burden of running applications in public and private clouds.

This stack allows you to deploy a multi-tenant, multi-region HA Kubernetes cluster in a few clicks.

Preparations

Versions

  • CoreOS Stable 1010.6
  • Docker 1.10.3
  • Kubernetes 1.5.1
  • Ceph 10

The prerequisites

This should be a routine now:

Instance Size

By default, the script proposes a deployment on an instance of type “standard-1” (n1.cw.standard-1). There are a variety of other types of instances for meeting your multiple needs. The instances are invoiced per minute, allowing you to pay only for the services you have consumed and capped at their monthly price (you will find more details on the rates page of the Cloudwatt website).

You can adjust the stack parameters to suit your taste.

By the way…

If you do not like the command lines, you can go directly to the version “I launch in 1-click” or “I launch with the console” by clicking on this link

A guided tour

Once the repository is cloned, you will find the blueprint-kubernetes-ha/ directory, which contains:

  • stack-fr1.yml: HEAT orchestration template for region FR1, it will be used to deploy the necessary infrastructure.
  • stack-fr2.yml: HEAT orchestration template for region FR2, it will be used to deploy the necessary infrastructure.
  • stack-start.sh: Script to launch the stack, which simplifies the input of the parameters.

Start-up

Initializing the environment

Have your Cloudwatt credentials at hand and click HERE. If you are not logged in yet, you will go through the authentication screen and then the script download will start. This script will let you set up shell access to the Cloudwatt APIs.

Source the downloaded file in your shell. Your password will be requested.

$ source COMPUTE-[...]-openrc.sh
Please enter your OpenStack Password:

Once this is done, the OpenStack command-line tools can interact with your Cloudwatt user account.

Launch the stack

In a shell, run the stack-start.sh script:

 $ ./stack-start.sh

The script will ask you several questions; once the stack is created, it will display four lines:

scale_dn_url: ...
scale_up_url: ...
scale_storage_dn_url: ...
scale_storage_up_url: ...

scale_dn_url is a URL that you can call to decrease the capacity of your cluster.

scale_up_url is a URL that you can call to increase the capacity of your cluster.

scale_storage_up_url is a URL that you can call to increase the capacity of the Ceph cluster.

scale_storage_dn_url is a URL that you can call to decrease the capacity of the Ceph cluster; for this scenario, please look at the FAQ.

And then?

Each node has a public and a private IP.

The cluster will take about ten minutes to initialize. Once this time has elapsed, you can connect through SSH to the public IP of one of the nodes.

Then, to list the state of the Kubernetes components, you can execute this command:

$ fleetctl list-units

It should show you this:

UNIT                       MACHINE                  ACTIVE SUB
pidalio-apiserver.service  62bf699b.../84.39.36.87  active running
pidalio-controller.service b8cc10ee.../84.39.35.207 active running
pidalio-node.service       4f723b52.../84.39.36.13  active running
pidalio-node.service       62bf699b.../84.39.36.87  active running
pidalio-node.service       b8cc10ee.../84.39.35.207 active running
pidalio-proxy.service      4f723b52.../84.39.36.13  active running
pidalio-proxy.service      62bf699b.../84.39.36.87  active running
pidalio-proxy.service      b8cc10ee.../84.39.35.207 active running
pidalio-scheduler.service  4f723b52.../84.39.36.13  active running
pidalio.service            4f723b52.../84.39.36.13  active running

Pidalio is a utility to easily bootstrap a Kubernetes cluster.

It is composed of six parts:

- pidalio: It makes available all the certificates and resources necessary for the operation of the cluster.
- pidalio-apiserver: corresponds to the Kubernetes API Server component
- pidalio-controller: corresponds to the Controller Manager component of Kubernetes, it takes care of your Pods
- pidalio-scheduler: corresponds to the Scheduler component, it distributes the pods in your cluster
- pidalio-proxy: corresponds to the Kube Proxy component; it manages iptables rules to automatically route Kubernetes service IPs to the correct pods
- pidalio-node: corresponds to the Kubelet, the Kubernetes agent on each node.

You can use the Kubernetes client from any node.

We will use it to run an nginx server in our cluster:

kubectl run --image=nginx --port=80 nginx

Then we will make this server available on the internet:

kubectl expose deployment nginx --type=NodePort
kubectl describe service nginx

This last command will show you the details about the nginx service:

Name:                      nginx
Namespace:                 default
Labels:                    run=nginx
Selector:                  run=nginx
Type:                      NodePort
IP:                        10.18.203.177
Port:                      <unset>    80/TCP
NodePort:                  <unset>    24466/TCP
Endpoints:                 10.40.0.2:80
Session Affinity:          None
No events.

Look at the NodePort: it is the port you can use to access this service through any public IP of your cluster. Be careful to open the port in the cluster security group.

To access nginx, you can go to any public IP in your cluster on port 24466.

I would like to persist my data

It is sometimes useful to persist container data but the task is often far from easy.

That’s why the stack gives you a Ceph cluster out-of-the-box.

Type this command to list the volumes:

rbd ls

First, run this command to create a volume of 10 GB:

rbd create db --size=10G

We will now launch a MariaDB database with an attached volume.

cat <<EOF | kubectl create -f -
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mariadb
  labels:
    app: mariadb
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: mariadb
    spec:
      containers:
        - image: mariadb
          name: mariadb
          env:
            - name: MYSQL_ALLOW_EMPTY_PASSWORD
              value: "true"
          volumeMounts:
            - name: mariadb-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
         - name: mariadb-persistent-storage
           rbd:
             monitors:
               - ceph-mon.ceph:6789
             user: admin
             image: db
             pool: rbd
             secretRef:
               name: ceph-admin-key
EOF

Since Kubernetes 1.5, you can also use volume auto-provisioning.

Example:

cat <<EOF | kubectl create -f -
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
   name: ceph
provisioner: kubernetes.io/rbd
parameters:
  monitors: ceph-mon.ceph:6789
  adminId: admin
  adminSecretName: ceph-admin-key
  adminSecretNamespace: ceph
  userId: admin
  userSecretName: ceph-admin-key
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db
  annotations:
    "volume.beta.kubernetes.io/storage-class": ceph
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 30Gi
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mariadb
  labels:
    app: mariadb
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: mariadb
    spec:
      containers:
        - image: mariadb
          name: mariadb
          env:
            - name: MYSQL_ALLOW_EMPTY_PASSWORD
              value: "true"
          volumeMounts:
            - name: mariadb-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mariadb-persistent-storage
          persistentVolumeClaim:
            claimName: db
EOF

Monitoring

It is very important to monitor the status of your cluster, so if you checked the Monitoring option during the creation of the stack, a Grafana instance is automatically available on any machine on port 31000.

You will get a list of different dashboards by clicking on the Home menu:

[Screenshot: list of Grafana dashboards]

For example, click on Kubernetes resources usage monitoring (via Prometheus) for detailed monitoring of your Kubernetes cluster.

You should get this screen:

[Screenshot: Kubernetes resources usage monitoring dashboard]

And high availability in all this?

Nothing could be simpler: run the stack-start.sh script again, but in a different region from the first one, and choose Join mode. Once the stack is created, the two clusters will join together to form one. Simple, isn't it?

It’s magical but how does it work?

Each node connects securely to a Weave virtual network; in this way, all containers can communicate with each other regardless of their location.

Once interconnected, Fleet takes over to dispatch the various Kubernetes components through the cluster and Pidalio provides them with everything they need to function properly.

There you go!

A little diagram?

[Diagram: network architecture]

All of this is fine, but…

A one-click deployment sounds really nice…

… Good! Go to the Apps page on the Cloudwatt website, choose the app, press DEPLOY and follow the simple steps… 2 minutes later, a green button appears… ACCESS: you have your Kubernetes cluster!

Houston, we have a problem!

Cluster fails to launch correctly

If your cluster does not launch properly, try to rebuild the stack.

You have lost a Ceph node, how to properly remove it

When you add a storage node, your Ceph cluster grows automatically. But when a node goes down or is removed, we cannot know whether it will come back someday, which is why it is not automatically removed from Ceph.

Before you delete your node, determine which OSD to remove:

echo $(kubectl --namespace=ceph get pods -o json | jq -r '.items[] | select(.metadata.labels.daemon=="osd") | select(.spec.nodeName=="machine_ip") | .metadata.name')

This will give you the name of one of the OSD pods, for example: ceph-osd-5mi7g

Then you have to find the number of this OSD:

echo $(ceph osd crush tree | jq '.[].items[] | select(.name=="ceph-osd-5mi7g") | .items[].id')

We will now take this OSD out of the cluster:

ceph osd out osd_number

Then wait for Ceph to finish moving the data; you can check the progress with the command:

ceph -s

When the cluster is back in a normal state (HEALTH_OK), you can go on:

ceph osd crush remove osd_name
ceph auth del osd.osd_number
ceph osd rm osd_number

There you go! You can now delete the machine.

Some Ceph volumes are locked

Sometimes a container keeps a lock on a Ceph volume. To remove the lock, first list the locks on the volume:

rbd lock list volume_name

This will show the lock ID and the locker; you can then remove the lock:

rbd lock rm volume_name lock_id locker

Example:

rbd lock rm grafana kubelet_lock_magic_to-hfw3u7-e3pnkzd34lhp-22iuiamqx2s4-node-f644cpr26t7l.novalocal client.14105

So watt?

This tutorial is intended to accelerate your start-up. At this stage, you are the master on board.

You have an SSH entry point into your virtual machines via the exposed floating IP and your private key (the default user is core).

Have fun. Hack in peace.

by The CAT at January 09, 2017 11:00 PM

Graham Hayes

Non Candidacy for Designate PTL

https://photos.smugmug.com/Designate-Mid-Cycle/i-2McNPwq/0/X3/IMG_3854-X3.jpg

Non Candidacy for Designate PTL - Pike

Happy new year!

As you may have guessed from the title, I have decided that the time has come to step aside as PTL for the upcoming cycle. It is unfortunate, but my work has pivoted in a different direction over the last year (containers all the way down man - but hey, I got part of my wish to write Golang, just not on the project I envisaged :) ).

As a result, I have been trying to PTL out of hours for the last cycle and a half. Unfortunately, this has had a bad impact on this cycle, and I don't think we should repeat the pattern.

We have done some great work over the last year or so - Worker Model, the s/Domain/Zone work, the new dashboard, being one of the first projects to have an external tempest plugin and getting lost in the west of Ireland in the aftermath of the flooding.

https://photos.smugmug.com/Designate-Mid-Cycle/i-h4gGtxX/0/X3/IMG_3869-X3.jpg

I can honestly say, I have enjoyed my entire time with this team, from our first meeting in Austin, back in the beginning of 2014, the whole way through to today. We have always been a small team, but when I think back to what we have produced over the last few years, I am incredibly proud.

Change is healthy; I have been in a leadership position in Designate longer than most, and no project should rely on one person, or a few people, in order to continue to exist.

I will stick around on IRC, and still remain a member of the core review team, as a lot of the roadmap is still in the heads of myself and 2 or 3 others, but my main aim will be to document the roadmap in a single place, and not just in thousands of etherpads.

It has been a fun journey - I have gotten to work with some great people, see some amazing places, work on really interesting problems and contribute to a project that was close to my heart.

This is not an easy thing to do, but I think the time is right for the project and me to let someone else make their stamp on the project, and bring it to the next level.

Nominations close soon [0], so please start thinking about whether you would like to run. If anyone has any questions about the role, please drop me an email [1] or ping me [2] on IRC.

Thank you for this opportunity to serve the community for so long, it is not something I will forget.

[0]Election Schedule
[1]graham.hayes (a) hpe.com
[2]mugsie

by Graham Hayes at January 09, 2017 10:16 PM

RDO

Recent blog posts

I've been out for a few weeks, but the blog posts from the community kept coming.

Containers on the CERN cloud by Tim Bell

We have recently made the Container-Engine-as-a-Service (Magnum) available in production at CERN as part of the CERN IT department services for the LHC experiments and other CERN communities. This gives the OpenStack cloud users Kubernetes, Mesos and Docker Swarm on demand within the accounting, quota and project permissions structures already implemented for virtual machines. We shared the latest news on the service with the CERN technical staff (link). This is the follow-up on the tests presented at the OpenStack Barcelona Summit (link) and covered in the blog from IBM.

Read more at http://tm3.org/d6

ANNOUNCE: New libvirt project Go XML parser model by Daniel Berrange

Shortly before Christmas, I announced the availability of new Go bindings for the libvirt API. This post announces a companion package for dealing with XML parsing/formatting in Go. The master repository is available on the libvirt GIT server, but it is expected that Go projects will consume it via an import of the github mirror, since the Go ecosystem is heavily github-focused (e.g. godoc.org can’t produce docs for stuff hosted on libvirt.org git)

Read more at http://tm3.org/d7

Red Hat OpenStack Platform 10 is here! So what’s new? by Marcos Garcia - Principal Technical Marketing Manager

It’s that time of the year. We all look back at 2016, think about the good and bad things, and wish that Santa brings us the gifts we deserve. We, at Red Hat, are really proud to bring you a present for this holiday season: a new version of Red Hat OpenStack Platform, version 10 (press release and release notes). This is our best release ever, so we’ve named it our first Long Life release (up to 5 years support), and this blog post will show you why this will be the perfect gift for your private cloud project.

Read more at http://tm3.org/d8

Comparing OpenStack Neutron ML2+OVS and OVN – Control Plane by russellbryant

We have done a lot of performance testing of OVN over time, but one major thing missing has been an apples-to-apples comparison with the current OVS-based OpenStack Neutron backend (ML2+OVS).  I’ve been working with a group of people to compare the two OpenStack Neutron backends.  This is the first piece of those results: the control plane.  Later posts will discuss data plane performance.

Read more at http://tm3.org/d9

by Rich Bowen at January 09, 2017 07:41 PM

OpenStack in Production

Containers on the CERN cloud

We have recently made the Container-Engine-as-a-Service (Magnum) available in production at CERN as part of the CERN IT department services for the LHC experiments and other CERN communities. This gives the OpenStack cloud users Kubernetes, Mesos and Docker Swarm on demand within the accounting, quota and project permissions structures already implemented for virtual machines.

We shared the latest news on the service with the CERN technical staff (link). This is the follow up on the tests presented at the OpenStack Barcelona (link) and covered in the blog from IBM. The work has been helped by collaborations with Rackspace in the framework of the CERN openlab and the European Union Horizon 2020 Indigo Datacloud project.

    Performance

    At the Barcelona summit, we presented with Rackspace and IBM on our additional performance tests following the previous blog post. We expanded beyond 2M requests/s to reach around 7M, where some network infrastructure issues unrelated to OpenStack limited further scaling.

    As we created the clusters, the deployment time increased only slightly with the number of nodes, as most of the work is done in parallel. But for clusters of 128 nodes or larger, the increase in time started to scale almost linearly. At the Barcelona summit, the Heat and Magnum teams worked together to develop proposals for how to improve further in future releases, although a 1000-node cluster in 23 minutes is still a good result.

    Cluster Size (Nodes) | Concurrency | Deployment Time (min)
    ---------------------|-------------|----------------------
                       2 |          50 |                   2.5
                      16 |          10 |                     4
                      32 |          10 |                     4
                     128 |           5 |                   5.5
                     512 |           1 |                    14
                    1000 |           1 |                    23
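
    As a rough illustration (not the exact commands used at CERN), creating one of these clusters with the Newton-era Magnum client looks something like the following; the names are placeholders and the flags vary between client versions:

    $ magnum cluster-create --name perf-test \
          --cluster-template kubernetes-atomic \
          --node-count 128
    $ magnum cluster-show perf-test    # poll until the status reaches CREATE_COMPLETE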

    Storage

    With the LHC producing nearly 50PB this year, High Energy Physics has some custom storage technologies for specific purposes: EOS for physics data, and CVMFS for read-only, highly replicated storage such as application software.

    One of the features of providing a private cloud service to the CERN users is to combine the functionality of open source community software such as OpenStack with the specific needs for high energy physics. For these to work, some careful driver work is needed to ensure appropriate access while ensuring user rights. In particular,
    • EOS provides a disk-based storage system providing high-capacity and low-latency access for users at CERN. Typical use cases are where scientists are analysing data from the experiments.
    • CVMFS is used as a scalable, reliable and low-maintenance system for read-only data such as software.
    There are also other storage solutions we use at CERN such as
    • HDFS for long term archiving of data using Hadoop which uses an HDFS driver within the container.  HDFS works in user space, so no particular integration was required to use it from inside (unprivileged) containers
    • Cinder provides additional disk space using volumes if the basic flavor does not have sufficient. This Cinder integration is offered by upstream Magnum, and work was done in the last OpenStack cycle to improve security by adding support for Keystone trusts.
    CVMFS was more straightforward as there is no need to authenticate the user. The data is read-only and can be exposed to any container. The access to the file system is provided using a driver (link) which has been adapted to run inside a container. This saves having to run additional software inside the VM hosting the container.

    EOS requires authentication through mechanisms such as Kerberos to identify the user and thus determine what files they have access to. Here a container is run per user so that there is no risk of credential sharing. The details are in the driver (link).

    Service Model

    One interesting question that came up during the discussions of the container service was how to deliver the service to the end users. There are several scenarios:
    1. The end user launches a container engine with their specifications but they rely on the IT department to maintain the engine availability. This implies that the VMs running the container engine are not accessible to the end user.
    2. The end user launches the engine within a project that they administer. While the IT department maintains the templates and basic functions such as the Fedora Atomic images, the end user is in control of the upgrades and availability.
    3. A variation of option 2, where the nodes running containers are reachable and managed by the end user, but the container engine master nodes are managed by the IT department. This is similar to the current offer from Google Container Engine and requires some coordination and policies regarding upgrades.
    Currently, the default Magnum model is the second option, and adding option 3 is something we could do in the near future. As users become more interested in consuming containers, we may investigate the first option further.

    Applications

    Many applications in use at CERN are being reworked for a microservices-based architecture. A choice of different container engines is attractive for the software developer. One example of this is the file transfer service which ensures that the network to other high energy physics sites is kept busy but not overloaded with data transfers. The work to containerise this application was described at the recent CHEP 2016 FTS poster.

    While deploying containers is an area of great interest for the software community, the key value comes from the physics applications exploiting containers to deliver a new way of working. The Swan project provides a tool for running ROOT, the High Energy Physics application framework, in a browser with easy access to the storage outlined above. A set of examples can be found at https://swan.web.cern.ch/notebook-galleries. With the academic paper, the programs used and the data available from the notebook, this allows easy sharing with other physicists during the review process using CERNBox, CERN's owncloud based file sharing solution.



    Another application being studied is http://opendata.cern.ch/?ln=en which allows the general public to run analyses on LHC open data. Typical applications are Citizen Science and outreach for schools.

    Ongoing Work

    There are a few major items where we are working with the upstream community:
    • Cluster upgrades will allow us to upgrade the container software. Examples of this would be a new version of Fedora Atomic, Docker or the container engine. With a load balancer, this can be performed without downtime (spec)
    • Heterogeneous cluster support will allow nodes to have different flavors (cpu vs gpu, different i/o patterns, different AZs for improved failure scenarios). This is done by splitting the cluster nodes into node groups (blueprint)
    • Cluster monitoring to deploy Prometheus and cAdvisor with Grafana dashboards for easy monitoring of a Magnum cluster (blueprint).


    by Tim Bell (noreply@blogger.com) at January 09, 2017 06:08 PM

    Aptira

    OpenStack Election 2017: Vote for Kavit – OpenStack Events

    Video: https://player.vimeo.com/video/198611305 (Kavit OpenStack Election 2017 - More Events)

    The OpenStack election for 2017 opens today and we think Kavit deserves your vote! One of Kavit’s priorities for 2017 is to focus on events – including bringing an OpenStack Summit to India.

    Vote for Kavit for more OpenStack events: OpenStack Meetups, OpenStack Days, OpenStack Summits, OpenStack Everything!

    Kavit has been a driving force behind the Indian OpenStack community and has helped organise numerous OpenStack events in India and around the world. Kavit has been instrumental in creating one of the largest OpenStack regional communities, the Indian OpenStack User Group and has also been heavily involved with evangelising OpenStack in emerging markets.

    If you’d like to see more OpenStack events, please vote for Kavit in the 2017 OpenStack election!


    The post OpenStack Election 2017: Vote for Kavit – OpenStack Events appeared first on Aptira Cloud Solutions.

    by Jessica Field at January 09, 2017 03:51 PM

    StackHPC Team Blog

    Managing BIOS and RAID in the Hyperscale Era

    Have you ever had the nuisance of configuring a server BIOS? How about a rack full of servers? Or an aisle, a hall, an entire facility even? It gets to be tedious toil even before the second server, and it also becomes increasingly unreliable to apply a consistent configuration with increasing scale.

    In this post we describe how we apply some modern tools from the cloud toolbox (Ansible, Ironic and Python) to tackle this age-old problem.

    Server Management in the 21st Century

    Baseboard management controllers (BMCs) are a valuable tool for easing the inconvenience of hardware management. By using a BMC we can configure our firmware using remote access, avoiding a trip to the data centre and stepping from server to server with a crash cart. This is already a big win.

    However, BMCs are still pretty slow to apply changes, and are manipulated individually. Through automation, we could address these shortcomings.

    I've seen some pretty hairy early efforts at automation, for example playing out timed keystroke macros across a hundred terminals of BMC sessions. This might work, but it's a desperate hack. Using the tools created for configuration management we can do so much better.

    A Quick Tour of OpenStack Server Hardware Management

    OpenStack deployment usually draws upon some hardware inventory management intelligence. In our recent project with the University of Cambridge this was Red Hat OSP Director. The heart of OSP Director is TripleO and the heart of TripleO is OpenStack Ironic.

    Ironic is OpenStack's bare metal manager. It masquerades as a virtualisation driver for OpenStack Nova, and provisions bare metal hardware when a user asks for a compute instance to be created. TripleO uses this capability to good effect to create OpenStack-on-OpenStack (OoO), in which the servers of the OpenStack control plane are instances created within another OpenStack layer beneath.

    Our new tools fit neatly into the TripleO process between registration and introspection of undercloud nodes, and are complementary to the existing functionality offered by TripleO.

    iDRAC: Dell's Server Management Toolkit

    The system at Cambridge makes extensive use of Dell server hardware, including:

    • R630 servers for OpenStack controllers.
    • C6320 servers for high-density compute nodes.
    • R730 servers for high performance storage.

    Deploying a diverse range of servers in a diverse range of roles requires flexible (but consistent) management of firmware configuration.

    These Dell server models feature Dell's proprietary BMC, the integrated Dell Remote Access Controller (iDRAC). This is what we use for remote configuration of our Dell server hardware.

    A Cloud-centric Approach to Firmware Configuration Management

    OpenStack Ironic tracks hardware state for every server in an OpenStack deployment.

    A simple overview can be seen with ironic node-list:

    +--------------------------------------+----------+--------------------------------------+-------------+--------------------+-------------+
    | UUID                                 | Name     | Instance UUID                        | Power State | Provisioning State | Maintenance |
    +--------------------------------------+----------+--------------------------------------+-------------+--------------------+-------------+
    | 415c254f-3e82-446d-a63b-232af5816e4e | control1 | 3d27b7d2-729c-467c-a21b-74649f1b1203 | power on    | active             | False       |
    | 2646ece4-a24e-4547-bbe8-786eca16da82 | control2 | 8a066c7e-36ec-4c45-9e1b-5d0c5635f256 | power on    | active             | False       |
    | 2412f0ef-dedb-49c8-a923-778db36a57d9 | control3 | 6a62936f-40ec-49e7-a820-6f3329e5bb0c | power on    | active             | False       |
    | 81676b2d-9c37-4111-a32a-456a9f933e57 | compute0 | aac2866c-7d16-4089-9d94-611bfc38467e | power on    | active             | False       |
    | c6a5fbe7-566a-447e-a806-9e33676be5ea | compute1 | 619476ae-fec4-42c6-b3f5-3a4f5296d3bc | power on    | active             | False       |
    | c7f27dd4-67a7-42b9-93ab-2e444802c5c2 | compute2 | a074c3f8-eb87-46d6-89c8-f360fbf2a3df | power on    | active             | False       |
    | 025d84dc-a590-46c5-a456-211d5c1e8f1a | compute3 | 11524318-2ecf-4880-a1cf-76cd62935b00 | power on    | active             | False       |
    +--------------------------------------+----------+--------------------------------------+-------------+--------------------+-------------+
    

    Ironic's node data includes how to access the BMC of every server in the node inventory.

    We extract the data from Ironic's inventory to generate a dynamic inventory for use with Ansible. Instead of a file of hostnames, or a list of command line parameters, a dynamic inventory is the output from an executed command. A dynamic inventory executable accepts a few simple arguments and emits node inventory data in JSON format. Using Python and the ironicclient module simplifies the implementation.
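
    To give a flavour of the approach, here is a heavily simplified sketch of such a dynamic inventory script. It is not the script shipped in the examples repository (a real one also handles the --list and --host arguments), and the driver_info keys and host variable names shown are assumptions that depend on the Ironic driver in use:

    #!/usr/bin/env python
    # Minimal Ansible dynamic inventory built from Ironic's node list.
    import json
    import os

    from ironicclient import client


    def main():
        ironic = client.get_client(
            1,
            os_username=os.environ['OS_USERNAME'],
            os_password=os.environ['OS_PASSWORD'],
            os_tenant_name=os.environ['OS_TENANT_NAME'],
            os_auth_url=os.environ['OS_AUTH_URL'])
        inventory = {'all': {'hosts': []}, '_meta': {'hostvars': {}}}
        for node in ironic.node.list(detail=True):
            name = node.name or node.uuid
            inventory['all']['hosts'].append(name)
            # Pass the BMC address and credentials through as host variables
            # (key names here are illustrative and driver-dependent).
            inventory['_meta']['hostvars'][name] = {
                'drac_address': node.driver_info.get('drac_host'),
                'drac_username': node.driver_info.get('drac_username'),
                'drac_password': node.driver_info.get('drac_password'),
            }
        print(json.dumps(inventory))


    if __name__ == '__main__':
        main()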

    To perform fact gathering and configuration, two new Ansible roles were developed and published on Ansible Galaxy.

    DRAC configuration
    Provides the drac Ansible module for configuration of BIOS settings and RAID controllers. A single task is provided to execute the module. The role is available on Ansible Galaxy as stackhpc.drac and the source code is available on Github as stackhpc/drac.
    DRAC fact gathering
    Provides the drac_facts Ansible module for gathering facts from a DRAC card. The module is not executed by this role but is available to subsequent tasks and roles. The role is available on Ansible Galaxy as stackhpc.drac-facts and the source code is available on Github as stackhpc/drac-facts.

    We use the python-dracclient module as a high-level interface for querying and configuring the DRAC via the WSMAN protocol. This module was developed by the Ironic team to support the DRAC family of controllers. The module provides a useful level of abstraction for these Ansible modules, hiding the complexities of the WSMAN protocol.

    Example Playbooks

    The source code for all of the following examples is available on Github at stackhpc/ansible-drac-examples. The playbooks are not large, and we encourage you to read through them.

    A Docker image providing all dependencies has also been created and made available on Dockerhub at the stackhpc/ansible-drac-examples repository. To use this image, run:

    $ docker run --name ansible-drac-examples -it --rm docker.io/stackhpc/ansible-drac-examples
    

    This will start a Bash shell in the /ansible-drac-examples directory where there is a checkout of the ansible-drac-examples repository. The stackhpc.drac and stackhpc.drac-facts roles are installed under /etc/ansible/roles/. Once the shell is exited the container will be removed.

    Ironic Inventory

    In the example repository, the inventory script is inventory/ironic_inventory.py. We need to provide this script with the following environment variables to allow it to communicate with Ironic: OS_USERNAME, OS_PASSWORD, OS_TENANT_NAME and OS_AUTH_URL. For the remainder of this article we will assume that a file, cloudrc, is available and exports these variables. To see the output of the inventory script:

    $ source cloudrc
    $ ./inventory/ironic_inventory.py --list
    

    To use this dynamic inventory with ansible-playbook, use the -i argument:

    $ source cloudrc
    $ ansible-playbook -i inventory ...
    

    The inventory will contain all Ironic nodes, named by their UUID. For convenience, an Ansible group is created for each named node using its name with a prefix of node_.

    The inventory also contains groupings for servers in Ironic maintenance mode, and for servers in different states in Ironic's hardware state machine. Groups are also created for each server profile defined by TripleO: controller, compute, block-storage, etc..

    In the following examples, the playbooks will execute against all Ironic nodes discovered by the inventory script. To limit the hosts against which a play is executed, use the --limit argument to ansible-playbook.

    If you would rather not make any changes to the systems in the inventory, use the --check argument to ansible-playbook. This will display the changes that would have been made if the --check argument were not passed.
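
    For example, to preview what the BIOS playbook from the examples below would change on a single node (using the node_control1 group generated from the node named control1), without actually applying anything:

    $ source cloudrc
    $ ansible-playbook -i inventory --limit node_control1 --check -e numlock=On drac-bios-numlock.yml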

    Example 1: Gather and Display Facts About Firmware Configuration

    The drac-facts.yml playbook shows how the stackhpc.drac-facts role can be used to query the DRAC module of each node in the inventory. It also displays the results. Run the following command to execute the playbook:

    $ source cloudrc
    $ ansible-playbook -i inventory drac-facts.yml
    

    Example 2: Configure the NumLock BIOS Setting

    NOTE: This example may make changes to systems in the inventory.

    The drac-bios-numlock.yml playbook demonstrates how the stackhpc.drac role can be used to configure BIOS settings. It sets the NumLock BIOS setting to either On or Off.

    The playbook specifies the drac_reboot variable as False, so the setting will not be applied immediately. A reboot of the system is required for this pending setting to be applied. The drac_facts module provides information on any pending BIOS configuration changes, as may be seen in the first example.

    Run the following command to execute the playbook and configure the setting:

    $ source cloudrc
    $ ansible-playbook -i inventory -e numlock=<value> drac-bios-numlock.yml
    

    Set the numlock variable to the required value (On or Off). The drac_result variable is registered by the role and contains the results returned by the drac module. The playbook displays this variable after the role is executed. Of particular interest is the reboot_required variable which indicates whether a reboot is required to apply the changes. If a reboot is required, this must be performed before making further BIOS configuration changes.
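
    For orientation, a stripped-down playbook along these lines might look as follows. The drac_reboot and drac_result variables are described above, but the other variable names are illustrative assumptions; check the stackhpc.drac role documentation for its exact interface:

    ---
    - hosts: all
      gather_facts: false
      # Assumption: the drac module talks to the BMC over WSMAN from the control host.
      connection: local
      roles:
        - role: stackhpc.drac
          # Assumed variable name for the BIOS settings to apply.
          drac_bios_config:
            NumLock: "{{ numlock }}"
          # Stage the change only; a reboot is needed before it takes effect.
          drac_reboot: false
      tasks:
        - name: Show the result registered by the drac module
          debug:
            var: drac_result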

    Example 3: Configure a RAID-1 Virtual Disk

    NOTE: This example may make changes to systems in the inventory.

    The drac-raid1.yml playbook shows how the stackhpc.drac role can be used to configure RAID controllers. In this example we configure a RAID1 virtual disk.

    Ensure that raid_pdisk1 and raid_pdisk2 are set to the IDs of two physical disks in the system that are attached to the same RAID controller and not already part of another virtual disk. The facts gathered in the first example may be useful here. This time we specify the drac_reboot variable as True. This means that if required, the drac module will reboot the system to apply changes.

    Run the following command to execute the playbook and configure the system. The task will likely take a long time to execute if the virtual disk configuration is not already as requested, as the system will need to be rebooted:

    $ source cloudrc
    $ ansible-playbook -i inventory -e raid_pdisk1=<pdisk1> -e raid_pdisk2=<pdisk2> drac-raid1.yml
    

    Under The Hood

    The vast majority of the useful code provided by these roles takes the form of python Ansible modules. This takes advantage of the capability of Ansible roles to contain modules under a library directory, and means that no python code needs to be installed on the system or included with the core or extra Ansible modules.

    The drac_facts Module

    The drac_facts module is relatively simple. It queries the state of BIOS settings, RAID controllers and the DRAC job queues. The results are translated to a JSON-friendly format and returned as facts.

    The drac Module

    The drac module is more complex than the drac_facts module. The DRAC API provides a split-phase execution model, allowing changes to be staged before either committing or aborting them. Committed changes are applied by rebooting the system. To further complicate matters, the BIOS settings and each of the RAID controllers represent separate configuration channels. Upon execution of the drac module these channels may have uncommitted or committed pending changes. We must therefore determine a minimal sequence of steps to realise the requested configuration from an arbitrary initial state, which may affect more than one of these channels.

    The python-dracclient module provided almost all of the necessary input data with one exception. When querying the virtual disks, the returned objects did not contain the list of physical disks that each virtual disk is composed of. We developed the required functionality and submitted it to the python-dracclient project.

    Thanks go to the python-dracclient community for their help in implementing the feature.

    by Mark Goddard at January 09, 2017 03:20 PM

    Dougal Matthews

    Mistral Flow Control

    When writing a Mistral workflow it is common that you want to add a condition to a specific task. For example, you only want to run a task if the user provided a specific input to the workflow. I found this counter-intuitive at first and have since explained it to a few people, so I wanted to document it for the future.

    As a contrived example, we want a workflow that returns "x is true" if we pass the x input or "x is not true" if it isn't passed (or false is passed).

    As a Pythonista, I wanted to solve the problem with something equivalent to this short Python code. It's just a simple if/else after all.

    def workflow(x=False):
        if x:
            return "x is true"
        else:
            return "x is not true"
    

    In Mistral's Workflow DSL you need to think about it a little differently.

    my_workflow:
      input:
        - x: false
    
      tasks:
        task_switch:
          on-success:
            - task_a: <% $.x = true %>
            - task_b: <% $.x != true %>
        task_a:
          action: std.echo output="x is true"
        task_b:
          action: std.echo output="x is not true"
    

    In this workflow we have two tasks, task_a and task_b, which perform the different outcomes we want. The logic that switches between them becomes a task of its own. The task task_switch doesn't specify an action or workflow to run, which means it performs a std.noop. On the success of this, which should always happen, we then use a mapping of tasks and conditions (in the on-success attribute) to specify what happens next. task_a and task_b are the tasks, and on the right of these we have the YAQL expression which is evaluated to determine if they run. If the result is truthy, that task will be scheduled for execution.

    In our case, if I run the workflow with x=true, then it will evaluate to the following and only task_a will be executed:

    on-success:
      - task_a: true
      - task_b: false
    

    This then becomes something like a switch statement; however, unlike a switch statement in most languages, multiple tasks and workflow paths can be followed rather than only the first that evaluates to true.

    Here is a slightly more complicated workflow that shows multiple tasks being called depending on the input.

    my_workflow:
      input:
        - letter
    
      tasks:
        task_switch:
          on-success:
            - letter_a: <% $.letter = 'a' %>
            - letter_a_or_b: <% $.letter in ['a', 'b']  %>
            - letter_other: <% not $.letter in ['a', 'b'] %>
        letter_a:
          action: std.echo output="letter is a"
        letter_a_or_b:
          action: std.echo output="letter a or b"
        letter_other:
          action: std.echo output="letter is not a or b"
    

    Input 'a' means tasks letter_a and letter_a_or_b are executed. Input 'b' means that only letter_a_or_b is executed, and any other input would execute the letter_other task.

    As mentioned before, these examples are contrived, but they can be quite useful when used in larger workflows to control the flow or provide some simple validation.

    by Dougal Matthews at January 09, 2017 02:20 PM

    OpenStack Superuser

    Why do we open source again?

    The process of writing up a patch, testing it, pushing it to the community and getting it merged is not a simple one.

    There are many pitfalls that stand between git clone, the "Welcome new contributor" message and the "Your patch has been successfully merged to the repository" message, pitfalls that often stop contributors in their tracks and leave repositories strewn with dead and abandoned patches. While this path is well known and more easily navigated by those who have been contributing code for some time, it is less publicized to those who deploy OpenStack.
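
    For readers who have not yet been through it, the mechanical part of that path is short even when the review part is not; a typical pass looks like this (the project, branch name and test environments are only examples):

    $ git clone https://git.openstack.org/openstack/cinder
    $ cd cinder
    $ git checkout -b fix-my-bug
    # edit code, add tests...
    $ tox -e pep8,py27
    $ git commit -a
    $ git review        # push the change to Gerrit for review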

    Managing expectations is key to contribution

    The inaugural OpenStack Day France debuted the presentation “Managing Expectations: The Real Workflow of OpenStack” in which attendees less versed in the ins and outs of what it means to implement a feature in OpenStack learned about the process from more experienced contributors. They gained insight into the real process of developing a feature from inception to deployment and why that process, while at times frustrating, yields better quality implementations.

    What companies need to know is that there needs to be some management of expectations when asking the community for new features. This can seem like a drawback, but the diversity of the community ensures a higher quality and better tested feature than if your team had done it in-house. Managing expectations is important to the success of any company that wants to be a part of an open source community like OpenStack. It’s not possible to rush the process and it’s crucial to remember that the community adds a lot of variables along with many benefits.

    Moving forward

    Understanding how to actually contribute is only part of the puzzle, and it isn't always clear to those new to the community. It's vital to make this barrier to entry as low as possible so they can focus on the other challenges that come with working on OpenStack. For this reason, education like the Upstream Training sessions and documentation become especially important. Once people know about the tools and resources they have to push code or documentation, they can focus on being a member of the community, which will make it easier to get the features they want merged.

    <figure class="wp-caption alignnone" id="attachment_4694" style="width: 650px">image00<figcaption class="wp-caption-text"> Kendall Nelson and Patrick East, software engineer at Pure Storage, speaking at OpenStack Days France. // CC BY</figcaption> </figure>

    Attendees received advice on how to make the contribution process go more smoothly once they are familiar with the process and the tools to contribute. They learned about what it means to be an active member of the community, the baby steps they can take to establish themselves, that everyone has an opinion and it’s important to not take things personally and that setting expectations with management is just as important as being able to work with others.

    I hope to give this presentation again and continue to make OpenStack a better place for new and veteran contributors alike. If you want to hear more about this presentation or want to know if it is being given again, I'm available on IRC (nick: diablo_rojo) or by email (knelson@openstack.org).

     

    The post Why do we open source again? appeared first on OpenStack Superuser.

    by Kendall Nelson at January 09, 2017 02:10 PM

    Amrith Kumar

    Effective OpenStack contribution: Seven things to avoid at all cost

    There are numerous blogs and resources for the new and aspiring OpenStack contributor, providing tips, listing what to do. Here are seven things to avoid if you want to be an effective OpenStack contributor. I wrote one of these. There have been presentations at summits that share other useful newbie tips as well, here is … Continue reading "Effective OpenStack contribution: Seven things to avoid at all cost"

    by amrith at January 09, 2017 01:47 PM

    NFVPE @ Red Hat

    Running Stackanetes on Openshift

    Stackanetes is an open-source project that aims to run OpenStack on top of Kubernetes. Today we're going to use a project that I created that uses Ansible plays to set up Stackanetes on OpenShift: openshift-stackanetes. We'll use an all-in-one server approach to setting up OpenShift in this article to simplify that aspect, and will later provide playbooks to launch Stackanetes on a cluster with a focus on HA requirements.

    by Doug Smith at January 09, 2017 01:20 PM

    Galera Cluster by Codership

    In 2016 Galera Cluster surged past one million downloads – setting up for a great year in 2017

    Helsinki, Finland & London, UK  – January 9th 2017 – Codership, whose Galera Cluster technology brings high availability (HA) and scalability to open source databases worldwide, has seen its technology break the one million download barrier. Codership’s customer roster has grown over 100 percent thanks to the adoption of OpenStack by enterprises and the recent strategic partnership with MariaDB.

     

    Galera has been recognised as the most widely used OpenStack High Availability technology for the second year in a row. According to the OpenStack Survey 2016, one-third of users rely on Galera Clusters running on MariaDB and MySQL.

     

    To cope with an increase of customer demand coming from various industries such as financial services, retail and telecoms, Codership has grown its engineering hires by 50 percent. The new team members will ensure Galera continues to deliver on its mission to protect enterprise applications from unplanned downtime.

     

    Codership looks set for further growth. Industry analyst Gartner predicts that by 2018, more than 70 percent of new in-house applications will be developed on an Open Source Database Management System (OSDBMS), and 50 percent of existing commercial Relational Database Management System (RDBMS) instances will have been converted.

     

    Another addition for this year is Galera joining Mirantis’ unlocked partner programme. Mirantis and Codership are collaborating to provide high availability for the OpenStack infrastructure datastore used by companies such as online marketplace MercadoLibre, the eighth-biggest e-commerce platform in the world with over 100 million consumers. Galera Cluster is now the default for OpenStack high availability in Mirantis OpenStack distributions and reference architecture.


    Sakari Keskitalo, Codership’s Chief Operating Officer said: “Our mission is to provide world-class, web-scale technology as widely as possible. OpenStack provides an excellent platform for Galera to accomplish our mission. Galera Cluster’s success and popularity has made it ready for deployment ahead of the new data deluge, created by connected machines, coming into the work environment. Codership being the most popular high availability solution for the databases gives us confidence enterprises take clustering very seriously. The more they do, the more we will drive to stay ‘Top of the Stack’.”

    by Sakari Keskitalo at January 09, 2017 01:01 PM

    Hugh Blemings

    Lwood-20170108

    Introduction

    Welcome to Last week on OpenStack Dev (“Lwood”) for the week just past. For more background on Lwood, please refer here.

    Basic Stats for the week 2 to 8 January for openstack-dev:

    • ~284 Messages (down about 50% relative to the long term average)
    • ~115 Unique threads (down about 35% relative to the long term average)

    Traffic was quiet but is heading back up after the break. Note that I’ve changed the reporting slightly to be against the long term average (calculated since 22 June 2015). As noted previously, it’s been suggested to me that a graph might be a nice thing to do – scheming towards same has begun…

    Welcome back and Happy New Year – this is the first Lwood for 2017, I hope yours was a pleasant break if you had one!  Mine was dominated by boxes and interstate travel, but done in excellent company making it well and truly agreeable enough :)

    Notable Discussions – openstack-dev

    While we were out…

    A few quickies from the break that seemed worth flagging, even if only briefly:

    • OSSN-0074 “Nova metadata service should not be used for sensitive information”, courtesy of Luke Hinds
    • An update penned by Kendall Nelson from the Storyboard team that includes an overview on the decision to move to Storyboard from Launchpad.
    • A reminder about the PTL election season coming up in January, also from Kendall Nelson
    • A piece on a prototype HW VNC console for certain Dell servers by Pavlo Shchelokovskyy
    • A report on a live migration performance test across 100 compute nodes, courtesy of Pawel Koniszewski

    OpenStack Release calendar in ICS form

    Doug Hellmann writes that there is now an ICS version of the Ocata release schedule available online here.
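
    If you want to pull those milestones into your own tooling, the sketch below shows one way to read the ICS file with the third-party icalendar package for Python; the local filename is just a placeholder for wherever you save the downloaded schedule.

        from icalendar import Calendar  # pip install icalendar

        # Placeholder filename: download the Ocata schedule ICS and save it here.
        with open("ocata-schedule.ics", "rb") as handle:
            calendar = Calendar.from_ical(handle.read())

        # Print each milestone with its start date.
        for event in calendar.walk("VEVENT"):
            summary = str(event.get("SUMMARY"))
            start = event.decoded("DTSTART")
            print("{0}: {1}".format(start, summary))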

    Lwood Feedback Survey

    Flagging this one more time and again noting my thanks to those readers who have already provided feedback through the survey mentioned previously. If you haven’t already done so and would like to, I’d welcome your thoughts :)

    The feedback has been very positive – thank you. It seems this modest effort does indeed fill a useful niche for folk among the other sources out there, so I will keep at it, with a few tweaks to come.

    New Projects

    • Picasso – Functions as a Service (FaaS) – Details here in the Wiki – From Derek Schultz

    End of Week Wrap-ups

    Just two wrap-ups from the last weeks of December, from Ruby Loo and Richard Jones for Ironic and Horizon respectively.

    Notable Discussions – other OpenStack lists

    The Forum in More Detail

    From Tom Fifield, an email over on the OpenStack mailing list pointing to an article he wrote that goes into some detail about the upcoming “Forum”. The Forums, along with the PTG events, are the nominal replacement for the Design Summits of old.

    People and Projects

    Core nominations & changes

    • [All] Proposing Steve Martinelli for project-team-guide core – Thierry Carrez
    • [Docs] Stepping down from Core – Matt Kassawara
    • [Ironic] Stepping down as PTL after this cycle – Jim Rollenhagen
    • [Kolla] Removal of Dave Wang from the kolla-kubernetes-core team – Steven Dake

    Miscellanea

    Further reading

    Don’t forget these excellent sources of OpenStack news – most recent ones linked in each case

    Credits

    Due in part to the absence of a fully operational home office (and in particular speakers) post move, no tunes for this week’s Lwood :)

    Last but by no means least, thanks, as always, to Rackspace :)

    by hugh at January 09, 2017 08:07 AM

    Aptira

    How to vote in the OpenStack Election 2017


    Voting is now open for the OpenStack Election 2017. I thank you all for your continued support over the last few years in helping us grow into one of the largest OpenStack communities and helping me secure the post of Director for two consecutive terms.

    Please find the steps below for voting:

    1. You will have received an email with the subject “OpenStack Foundation – 2017 Individual Director Election” from OpenStack Foundation <secretary@openstack.org>
    2. That email has an individual link for you to cast your vote. The link is the last link in the email (www.bigpulse.com)
    3. Click on that link and cast your vote
    4. Don’t forget to hit the “confirm” button after you vote. Your vote will not be counted as valid until then!

    Please stay tuned this week for more reasons to vote for me in the OpenStack Election 2017.

    The post How to vote in the OpenStack Election 2017 appeared first on Aptira Cloud Solutions.

    by Kavit Munshi at January 09, 2017 06:30 AM

    Opensource.com

    Landing a job, becoming the de facto private cloud, and more OpenStack news

    Explore what's happening this week in OpenStack, the open source cloud computing project.

    by Jason Baker at January 09, 2017 06:00 AM

    About

    Planet OpenStack is a collection of thoughts from the developers and other key players of the OpenStack projects. If you are working on OpenStack technology, you should add your OpenStack blog.

    Last updated:
    January 19, 2017 04:54 PM
    All times are UTC.
