February 10, 2016

Sébastien Han

OpenStack Summit Austin: Time To Vote

The summit is almost here and it is time to vote for the presentations you want to see :). Here are the presentations my colleagues and I submitted:


Protecting the Galaxy - Multi-Region Disaster Recovery with OpenStack and Ceph

Speakers: Sébastien Han, Sean Cohen, Federico Lucifredi


Persistent Containers for Transactional Workloads

Speakers: Sébastien Han, Kyle Bader


How to seamlessly migrate Ceph with PBs of data from one OS to another with no impact

Speakers: Sébastien Han, Shyam Bollu, Michael DeSimone


I hope to see you there ;)

February 10, 2016 10:06 AM

Adam Young

A Holla out to the Kolla devs

Devstack uses Pip to install packages, which conflict with the RPM versions on my Fedora system. Since I still need to get work done, and want to run tests on Keystone running against a live database, I’ve long wondered if I should go with a container-based approach. Last week, I took the plunge and started messing around with Docker. I got the MySQL Fedora container to run, then found Lars’ Keystone container using SQLite, and was stumped. I poked around for a way to get the two containers talking to each other, and realized that we had a project dedicated to exactly that in OpenStack: Kolla. While it did not work for me right out of a git clone, several of the Kolla devs worked with me to get it up and running. Here are my notes, distilled.

I started by reading the quickstart guide, which got me oriented (I suggest you start there, too), but I found a couple of things I needed to learn. First, I needed a patch that has not quite landed, in order to make calls as a local user instead of as root. I still ended up creating /etc/kolla and chowning it to ayoung. That proved necessary, as the work done in that patch is “necessary but not sufficient.”

I am not super happy about this, but I needed to make docker run without a deliberate sudo. So I added the docker group, added myself to it, and restarted the docker service via systemd. I might end up doing all this as a separate developer user, not as ayoung, so at least I would need to su - developer before the docker stuff. I may be paranoid, but that does not mean they are not out to get me.
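
In shell terms, the steps were roughly these (a sketch; the group may already exist on your system, and you have to log out and back in for the membership change to take effect):

sudo groupadd docker              # create the docker group
sudo usermod -aG docker ayoung    # add my user to it
sudo systemctl restart docker     # restart the docker service via systemd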

I created a dir named ~/kolla/ and put this in there:

~/kolla/globals.yml

kolla_base_distro: "centos"
kolla_install_type: "source"

# This is the interface with an ip address you want to bind mariadb and keystone to
network_interface: "enp0s25"
# Set this to an ip address that currently exists on interface "network_interface"
kolla_internal_address: "10.0.0.13"

# Easy way to change debug to True, though not required
openstack_logging_debug: "True"

# For your information, but these default to "yes" and can technically be removed
enable_keystone: "yes"
enable_mariadb: "yes"

# Builtins that are normally yes, but we set to no
enable_glance: "no"
enable_haproxy: "no"
enable_heat: "no"
enable_memcached: "no"
enable_neutron: "no"
enable_nova: "no"
enable_rabbitmq: "no"
enable_horizon: "no"

I also copied the file ./etc/kolla/passwords.yml from the repo into that directory, as it was needed during the deploy.
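
Roughly, from inside the kolla git checkout:

cp ./etc/kolla/passwords.yml ~/kolla/passwords.yml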

To build the images, I wanted to work inside the kolla venv (I didn’t want to install pip packages on my system), so I ran:

tox -epy27

This, along with running the unit tests, created a venv. I activated that venv for the build command:

. .tox/py27/bin/activate
./tools/build.py --type source keystone mariadb rsyslog kolla-toolbox

Note that I had first built the binary versions using:

./tools/build.py keystone mariadb rsyslog kolla-toolbox

But then I tried to deploy the source version. The source versions are downloaded from tarballs on http://tarballs.openstack.org/ whereas the binary versions are the Delorean RPMs, and they trail the source versions by a little bit (not a lot).

I’ve been told “if you tox gen the config you will get a kolla-build.conf config. You can change that to git instead of url and point it to a repo.” But I have not tried that yet.
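
I have not tried it either, but based on that description the workflow would presumably look something like this (a sketch; the tox environment, section name, and option values below are illustrative assumptions, not verified):

tox -e genconfig    # should generate etc/kolla/kolla-build.conf

# then, in kolla-build.conf, point a component at a git repo, e.g.:
# [keystone-base]
# type = git
# location = https://git.openstack.org/openstack/keystone
# reference = master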

I had to downgrade to the pre-2.0 version of Ansible, as I had been playing around with 2.0’s support for the Keystone V3 API. Kolla needs 1.9:

dnf downgrade ansible

There is an SELinux issue. I worked around it for now by setting SELinux into permissive mode, but we’ll revisit that shortly. It was only needed for the deploy; once the containers were running, I was able to switch back to enforcing mode.
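
For the record, toggling modes is one command each way with the standard SELinux tooling:

sudo setenforce 0    # Permissive, for the deploy only
# ... run the deploy ...
sudo setenforce 1    # back to Enforcing once the containers are up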

./tools/kolla-ansible --configdir /home/ayoung/kolla deploy

Once that ran, I wanted to test Keystone. I needed a keystone RC file. To get it:

./tools/kolla-ansible post-deploy

It put it in /etc/kolla/.

. /etc/kolla/admin-openrc.sh 
[ayoung@ayoung541 kolla]$ openstack token issue
+------------+----------------------------------+
| Field      | Value                            |
+------------+----------------------------------+
| expires    | 2016-02-08T05:51:39.447112Z      |
| id         | 4a4610849e7d45fdbd710613ff0b3138 |
| project_id | fdd0b0dcf45e46398b3f9b22d2ec1ab7 |
| user_id    | 47ba89e103564db399ffe83d8351d5b8 |
+------------+----------------------------------+

Success

I have to admit that I removed the warning:

/usr/lib/python2.7/site-packages/keyring/backends/Gnome.py:6: PyGIWarning: GnomeKeyring was imported without specifying a version first. Use gi.require_version('GnomeKeyring', '1.0') before import to ensure that the right version gets loaded.
  from gi.repository import GnomeKeyring

Huge thanks to SamYaple and inc0 (Michal Jastrzebski) for their help in getting me over the learning hump.

I think Kolla is fantastic. It will be central to my development for Keystone moving forward.

by Adam Young at February 10, 2016 12:00 AM

February 09, 2016

SUSE Conversations

Fujitsu’s Commitment to OpenStack Open Source

At the SUSECon 2015 conference, Dr. Wolfgang Ries from Fujitsu talked in his break-out session about why and how Fujitsu is contributing to open source software projects. First of all, he emphasized the fact that Fujitsu now is not only involved in the open source world, but is seriously committed to being part of it …

+read more

The post Fujitsu’s Commitment to OpenStack Open Source appeared first on SUSE Blog.

by EST at February 09, 2016 09:34 PM

Carl Baldwin

Neutron Address Scopes

An exciting new feature was just merged to OpenStack Neutron in the Mitaka release; it’s called address scopes. Address scopes build on subnet pools, which were added in Kilo. While subnet pools give us a mechanism for controlling the allocation of addresses to subnets, address scopes give Neutron a way to know where addresses are viable. They are also the boundary within which addresses are not allowed to overlap. If you’re unfamiliar with them, you might want to review subnet pools before you read on. Read more...
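
To give a flavor of the workflow, here is a rough sketch against the Mitaka-era neutron CLI (command names and flags are from memory and may differ; see the full post for the authoritative version):

# Create an address scope, then a subnet pool that belongs to it
neutron address-scope-create --shared scope-v4 4
neutron subnetpool-create --address-scope scope-v4 --pool-prefix 203.0.113.0/24 pool-v4
# Subnets allocated from pools in the same scope are guaranteed not to overlap
neutron subnet-create --subnetpool pool-v4 my-network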

February 09, 2016 04:00 PM

OpenStack Superuser

Keep your OpenStack projects on track with StoryBoard

Collaboration is at the heart of OpenStack. As the number of projects grows and people around the world contribute, the challenge is keeping teams on task.

Superuser talked to Zara Zaimeche, software developer at Codethink, about StoryBoard, a task tracking system for cross-team projects.

If you’d like to learn more in person, the StoryBoarders are meeting up in Manchester on February 17 for a Mitaka MidCycle.

The agenda includes updates on what the team is doing and Zaimeche, who is organizing the event, has promised some very tasty cloud cake.

Who will StoryBoard be most useful for?

It's most useful for someone who wants to track work that spans multiple projects (or a very big project with lots of moving parts). This could be code, but doesn't have to be.

For background: a story is a goal, or set of requirements. A task is a concrete step someone takes toward meeting those requirements. Sometimes work is necessary in multiple projects in order to fulfill the requirements, so different tasks in the same story can link to different projects.

It's easiest to just show an example.

[Screenshot: an example story, with tasks linking to different projects]

Wasn't this project dead? Why resurrect it?

It was! I think the dust's clearing a little now...

Codethink (where I work) were interested because we wanted to extend StoryBoard to use an instance internally, for project management. OpenStack's infra, especially the CI system, meant it was more appealing to develop upstream than to fork.

We work in the open a lot and we're familiar with Gerrit, so it was easier for us to do it this way. It also made more sense for us to use a tool based around the needs of software developers, and make a project management layer on top of that, than to use generic management software that we would then struggle to extend with a Gerrit plugin, etc.

In the process, we ended up fixing some old bugs and talking to people who'd worked on StoryBoard in the past, and now more people are interested than when we started. So that's quite encouraging.

How long before it's in production?

There are instances in production at the moment, though some features are more mature than others.

It splits into roughly three parts: 1) a bug-tracker, 2) a task-tracker for developers, 3) a task-tracker for project managers. The last is the newest and creakiest, but also our development focus at the moment.

OpenStack Infra have a production instance, though they've planned to move away (the decision was made before we resurrected the project). So their instance https://storyboard.openstack.org/ is the one we use for tracking development on StoryBoard itself, as it's the most comprehensive task list.

Outside of OpenStack, the Baserock project also uses it for bug and task tracking, here: https://storyboard.baserock.org/

We expect more users as we develop it further, especially the project-management side. We've kept relatively quiet while we've been fixing bugs, but we're chipping away at them!

What contributions do you need most from the community?

We're a small team, so the absolute best thing people can do is play around with StoryBoard, find some bugs, and try to fix them. (that may be true for a team of any size...:))

Known tasks and bugs:

 Python API: https://storyboard.openstack.org/#!/project/456

 AngularJS Webclient: https://storyboard.openstack.org/#!/project/457

The best place to get further context (or just chat, really) is #storyboard on freenode.

Other ways people can help: making stories and joining #storyboard to tell us more about their needs. And, as always, review! AngularJS knowledge is top of our wishlist, since in the world of OpenStack it's rarer than Python.

You made some serious promises about cake when announcing the meetup. Tell me more!

Well, the original plan was to host a meetup, order lots of cake, then somehow forget to invite anybody to the meetup, and eat all the cake ourselves. Then I forgot the master plan and foolishly advertised the meetup.

So now, planned cakes include doughnuts and rainbow cupcakes. This kind of thing: http://imgur.com/qnTBO2M (the clouds are made from candyfloss). Unless I eat them all first.

[Cover Photo](http://morguefile.com/archive/#/?q=toy%20train) // CC BY NC

by Nicole Martinelli at February 09, 2016 03:04 PM

How OpenStack is helping SK Telecom roll out the next 5G LTE network

SK Telecom, South Korea's largest wireless carrier, has big plans to develop the 5th generation mobile network technology.

Superuser spoke to Jaesuk Ahn, project manager at SKT R&D, about how software is supplanting the hardware-centric approach as network functions virtualization takes one step beyond.

Tell us about your role with SK Telecom.

I work at the Network IT Convergence Lab at SKT, where I focus mostly on software-defined data center (SDDC) technologies like OpenStack. The Network IT Convergence Lab (NIC Lab) was established last year to create an All-IT Network. The All-IT Network is part of SKT’s vision that all network infrastructure and services will be virtualized and converged with IT technologies. With the All-IT Network, SKT will transform into an Intelligent Platform Company. SDDC is a core part of the All-IT Network, which the NIC Lab will help make a reality for SKT.

As you may already know, Korea is one of the most connected countries in the world with extremely good mobile network connectivity. SK Telecom is the number one mobile service provider in Korea with a 50 percent market share. SKT has been at the forefront of developing and commercializing advanced wireless technology. For instance, we were the first to commercialize CDMA technology in 1990.

More recently, we were the first to deploy LTE and the LTE advanced network with maximum 375 Mbps, and we are working to develop the 5th generation mobile network technology. SKT has a strong vision to become an Intelligent Platform Company by 2020, focusing on four major platforms: MNO, Lifestyle, Media and Internet of things. In short, we have innovation in our genes.

What challenges do you face in creating the all-IT network?

Because it’s almost impossible to reuse existing hardware and software, SKT faces an enormous capital expenditure any time it deploys a new wireless network.

To ensure network nodes keep working during all periods, including peak traffic, telco network infrastructure tends to be over-provisioned in both hardware and software. This is a very inefficient way to operate our wireless network, and it requires the telco to invest heavily to maintain the quality of its wireless network. Since each generation (2G, 3G, 4G LTE) is built and operated separately, the complexity of operating them continues to rise.

What has been your experience dealing with vendor dependency?

Up until now, network equipment has been very specific. In other words, it has been a closed world. Even worse, software running on the network equipment has been highly proprietary. Unfortunately, this forces us to rely heavily on specific vendors.

The IT industry was the same in the past. However, with virtualization technology, cloud computing, and open source software technologies, IT was able to transform from a proprietary world to an open world. OpenStack has been the core of this transformation.

Telco network infrastructure has unique requirements regarding performance, stability, and reliability. It has struggled to adopt a best-of-breed or a de facto standard solution. However, with cloud technology maturing, with the introduction of more “carrier-grade” solutions, it is time to “actively” consider open technology based “software defined infrastructure” for Telco Network Infrastructure.

What are your business requirements?

Traditionally, mobile networks have been used for voice and data. Currently, they are used for video streaming, social networks and online payment. In the future, we envision the mobile networks will be used for rich immersive media such as augmented reality and virtual reality, and the Internet of everything. The telco network infrastructure must be more flexible and open to address and satisfy these rapid changes. That means the telco network must adopt software-defined technologies to be more flexible, adaptable, and open.

As you plan to lead again with 5G, how is software supplanting the old hardware-centric approach?

Regardless of 5G, SK Telecom has been providing cloud services for public and internal IT services for years and has already begun to bring in software defined infrastructure solutions to commercial services like vIMS, EPC for IoT, and T-OVEN.

We plan to apply these proven technologies to network equipment. And, in the long term, we intend to achieve software-defined data center (SDDC) based all-IT infrastructure, which provisions telecommunication service, internal IT service, and public cloud service on a unified ICT infrastructure.

To provide a variety of services in the 5G era, we believe that an open architecture based on software defined technologies is essential. Network slicing, which provisions multiple logical network topologies on top of one physical network, is a good example.

As a first step, we are considering converting to a software-defined core network. That is, legacy network appliances composed of closed hardware and vendor specific software will be virtualized on commercial off-the-shelf (COTS) by provisioning all network functions as a software module, or virtualized network function (VNF).

Provisioning a new service will be fast and easy with VNFs. This is an extension of service-oriented architecture, which was introduced in the IT area a decade ago. SK Telecom will phase in software-defined technologies to the access network and the transport network as well as the core network. We are already working with international organizations for standardization like 3rd Generation Partnership Project (3GPP) for Cloud Radio Access Network (C-RAN), and also leading a global operator alliance to open RU-DU architecture.

What role is OpenStack playing in the new software-defined approach?

Telco network virtualization (e.g., network slicing and 5G Core Network/RAN virtualization) is a key feature in the upcoming 5G networks. NFV and SDN are key technologies to enable telco network virtualization.

NFV uses software-defined infrastructure technologies to achieve carrier grade network virtualization infrastructure. ETSI NFV ISG has already developed a reference architecture, and there is OPNFV to realize this NFV concept with various open source technologies. OpenStack is the top most key technology for that. From 5G’s viewpoint, network functions to construct various network services are the most important software. OpenStack is the “baseline infrastructure technology” as well as the “integration engine” to make and run these network functions. OpenStack was a clear choice for us because of its openness, growing community and the rich features it provides.

Telco network virtualization requires a stricter requirement standard for reliability, performance, and availability than IT infrastructure virtualization. SKT is conducting research on software-defined technologies and services, mainly based on OpenStack. SKT is also deploying cloud services on top of OpenStack technologies. Along with all these research activities, SKT will work on preparing software defined technologies to meet with Telco’s strict requirements and service scenarios so SKT can use these technologies for network infrastructure virtualization to run various network functions for 5G networks.

Which OpenStack projects will be included in your 5G architecture?

As described above, OpenStack is not directly used for a 5G deployment right now. However, SKT is actively working on software-defined technologies based on OpenStack and other open source software projects.

The NIC Lab is the main entity to lead these R&D activities in SKT. 5G will leverage this software-defined technology developed by the NIC lab. Most core projects (Nova, Cinder, Neutron, Keystone, Glance, Heat) are included in SKT’s software-defined technology research.

Other OpenStack projects are also being reviewed based on SKT’s requirements and use cases. SKT wants to leverage all the relevant projects as much as possible. The most important work the NIC Lab is doing is on Neutron and the ONOS project, to make the ONOS-based SDN controller. SKT hopes to contribute to both the ONOS and OpenStack communities to make Neutron more reliable and production-ready, especially regarding telco’s specific needs.

Just how fast is 5G, and when do you expect it to be available?

We expect 5G to be 10~100x faster than 4G LTE, to have 10x lower latency, and to offer 100~1000x more capacity than 4G LTE. We expect to have a 5G phase one standard by late 2018.

However, SKT is planning to have a 5G-grade pilot service in early 2018 as a leading operator in the 5G domain. Official commercialization of the 5G network will come after 2020. We also expect the first commercialization to happen in a large city where there are lots of customer needs, with mission-critical IoT applications.

What is SKT’s vision toward the 5G era?

For SKT, 5G is not just another, faster network infrastructure. 5G is SKT’s chance to transform from a “network” company into a “platform” company. SKT is not only doing technical research on 5G network technologies, but also focusing on the real value SKT can give to its users, from a person’s lifestyle to innovation that improves enterprise productivity. To achieve that, SKT will present the various possibilities available on top of the 5G network, such as entertainment like virtual reality-based games and video, and massive and mission-critical IoT.

SKT cannot do this alone. SKT will open the 5G Service Platform to third parties and customers to achieve its dream of becoming a platform company. SKT and other telcos made the mistake of sitting in a walled garden. However, SKT will not make the same mistake twice. With the new possibilities brought by the 5G network, SKT will be more open and more innovative. More importantly, this will not be possible without open source technologies and an ecosystem, such as the OpenStack community.

What can you tell us about your current projects with OpenStack?

The first application of OpenStack-based SDDC has been at our Bundang Network Operation Center. We have developed a multi-cloud datacenter operation system that we call TROS to manage development and NOC servers in our NOC. We have also employed OpenStack in our private cloud 2.0, which will be used for mobile service development. This is a big change from our previous private cloud, which was built on top of a vendor-specific solution. In our private cloud, we will also have Cloud Foundry as a PaaS platform to support mobile application developers. We are also building our public cloud service based on OpenStack. It will be our chance to prove that an open architecture with open source technologies can fully compete with a vendor-specific solution.

SDN is an important technology for a network provider like us. We are engaged with the ON Lab at Stanford, which develops ONOS, an open source project to develop a carrier-grade SDN controller. We are happy to announce our simplified open networking architecture (SONA) project will be the core component to connect ONOS to OpenStack via the Neutron API.

Managing software-defined networks is a challenging task due to complex overlay structures. To help network operators, we are developing a 3D network operation system. It can visualize complex virtual networking structure in a more intuitive way than a traditional network management dashboard can. But what’s more, it can complete the feedback loop by allowing network operators to take actions as they discover problems through the SDN controller.

These are just a few examples of how OpenStack and software-defined technologies will be leveraged in the next generation of mobile networks, as well as in our cloud services. We have been working with many partners to develop these technologies, including HP, Intel, and Mellanox.

What are you learning from others in the community?

SKT is learning how others have been through cultural changes to embrace open source technologies as well as cloud computing services. Working with open source software and the community [requires] change to our own genes as a traditional network operator. Personally, whenever I come to an OpenStack Summit, I learn that OpenStack is not just software; it represents an open culture, a strong passion and harmony...

This post is part of our focus on OpenStack innovators. If you'd like to be featured, write to editor@openstack.org

Cover Photo // CC BY NC

by Bill Robbins at February 09, 2016 02:15 PM

Mirantis

Hybrid cloud takes center stage, pushed by Microsoft and … Walmart??


In the public cloud, acknowledged champion Amazon Web Services is followed by Microsoft Azure. Now Redmond is entering the private cloud market with Azure Stack, which purports to be identical to Azure, but for the private cloud. The company is pushing this solution as a hybrid cloud play, intended to enable developers to use a single API for both public and private environments.

With the service running on (fairly hefty) Windows Server 2016 machines, Microsoft is clearly attempting to hold onto the enterprise computing market, which has been slowly but surely moving to virtualized, and now cloud-based, solutions.

Pricing has not yet been determined.

Interestingly, the news comes at just about the same time as the open-sourcing of Walmart’s OneOps tool, which enables enterprises to seamlessly run workloads on both public and private clouds — including OpenStack, AWS, and, yes, Azure. Walmart acquired OneOps in 2013, and has built both walmart.com and samsclub.com with it. OneOps is now available on GitHub under an Apache 2.0 license.

OneOps is an operations tool that enables 3000+ users to make more than 30,000 changes per month, including basic IaaS services and more advanced autoscaling and auto-repair operations.


The post Hybrid cloud takes center stage, pushed by Microsoft and … Walmart?? appeared first on Mirantis | The #1 Pure Play OpenStack Company.

by Nick Chase at February 09, 2016 10:07 AM

February 08, 2016

Andreas Jaeger

Creating new test jobs in OpenStack CI

Reviewing patches for the OpenStack CI infrastructure, there's one piece that often confuses contributors: the question of how the Zuul and Jenkins configurations work together.

While we have the Infra Manual with a whole page on how to create a project - and I advise everyone to read it - let me try to tackle the specific topic of adding new jobs from a different angle.

What we're discussing here are jobs, or tests, that are run. Jenkins actually runs these jobs. Zuul watches for changes in Gerrit (the URL for OpenStack is review.openstack.org) and triggers the appropriate jobs so that Jenkins runs them.


To understand the relationship between these two systems, let's use programming languages as an analogy: As a developer, you create a library of functions that perform a variety of actions. You also write a script that uses this library to execute them. Jenkins can be considered the library of test functions. But just defining these is not enough; you have to call them. Zuul takes care of calling them, so in the analogy it is your script.

So, to actually get a job running for a repository, you first need to define it in the Jenkins "library", and then you trigger its invocation in Zuul. You can also add certain conditions to limit when the job runs or whether it is voting.

If you dig deeper into Jenkins and Zuul, keep in mind that these are two different programming languages, even if both use YAML as their format. Jenkins runs jobs, and these are defined as text files using Jenkins Job Builder. To define them, you can write a job, or use a job-template and instantiate it, or group several job-templates in a job-group and instantiate that job-group to create many jobs with a few lines. Zuul uses these jobs, and offers templates as syntactic sugar to reuse jobs and the queues they run in.

Let's look at a simple example, adding a new docs job to your repository called amazing-repo:

  1. Check out the project-config repository and make it ready for patch submission, for example by creating a branch to work on.
  2. Since a template already exists for the docs job, you can reuse it. It is called 'gate-{name}-docs', so add it to your repository in the file jenkins/jobs/projects.yaml:
    - project:
        name: amazing-repo
        node: bare-trusty
        jobs:
          - gate-{name}-docs
  3. Now define how to trigger the job. Edit file zuul/layout.yaml and update your repository entry to add the job:

    - name: openstack/amazing-repo
      template:
        - name: merge-check
      check:
        - gate-amazing-repo-docs
      gate:
        - gate-amazing-repo-docs

    This adds the job to both the check and gate queues. So it will not only be run when a patch is initially submitted for review, in the check queue, but also after a patch gets approved, in the gate queue. Since your tree might be different when you submitted a change and when it merges, we run jobs in both situations so that the tree is tested exactly as it merges.
  4. Let's go one step back: your repository is not yet ready to have the docs job voting, so you only want to run it as non-voting.
    In that case add a condition in the jobs section of zuul/layout.yaml:

    - name: gate-amazing-repo-docs
        voting: false


    And in your repository, only add it to the check queue. Non-voting jobs should not be in the gate; they get ignored completely and just waste resources:

      - name: openstack/amazing-repo
        template:
          - name: merge-check
        check:
          - gate-amazing-repo-docs
        gate:
          - ...

So, these are simple jobs. Stay tuned for a followup article that will cover how to use templates in Zuul - and how to modify your repository in the context of templates.

Thanks to Doug Fish for reviewing the text and giving suggestions on how to improve it - and urging me to write a follow-up.

P.S. Follow-up article is called "Templates in OpenStack's Zuul".

by Andreas Jaeger (noreply@blogger.com) at February 08, 2016 06:43 PM

Templates in OpenStack's Zuul

This is a followup to my post  "Creating new test jobs in OpenStack CI". Last time I covered the basic setup of jobs by Jenkins and Zuul. Since many OpenStack projects run the same jobs, the Zuul developers have introduced templates to easily group and reuse these jobs.

Let's look at one common example: the python-jobs template. Like all examples in this article, it is defined in the file zuul/layout.yaml in the openstack-infra/project-config repository, and to use it, you need to edit the same file.

Using a template

So, if your project amazing-repo wants to reuse the python-jobs template as is, just add it as a template:

  - name: openstack/amazing-repo
    template:
      - name: merge-check
      - name: python-jobs

You can also limit on which branches those jobs are triggered. For example, to run the docs job only on stable/liberty and newer branches, you can add a condition:

  - name: gate-amazing-repo-docs
    branch: ^(?!stable/kilo).*$



So, instead of saying run on liberty and newer, we block it on older supported branches; in this case kilo is the only older supported branch.

If you're introducing jobs, best practice is to add them first to the experimental queue, then add them as non-voting, and only finally as voting. In this case, the templates do not help you at all for the first two steps; you have to look at their definition and add the jobs manually.

First step, using the jobs in the experimental queue:

  - name: openstack/amazing-repo
    template:
      - name: merge-check
      - name: noop-jobs
    experimental:
      - gate-amazing-repo-pep8
      - gate-amazing-repo-docs
      - gate-amazing-repo-python27


Note that we use noop-jobs as a template, so that both the check and gate queues have at least one job. The noop jobs do nothing, but they are important since Zuul requires at least one job to run with success; otherwise you will not be able to merge anything.

With this definition, you can now submit a change, add "check experimental" as a review comment, and the jobs will run and report their results.

Later, when the manually triggered jobs run fine, it's time to run them on each change, but keep them non-voting so as not to block any merges:

  - name: gate-amazing-repo-docs
    voting: false

  - name: gate-amazing-repo-pep8
    voting: false

  - name: gate-amazing-repo-python27
    voting: false
....

  - name: openstack/amazing-repo
    template:
      - name: merge-check
    check:
      - gate-amazing-repo-pep8
      - gate-amazing-repo-docs
      - gate-amazing-repo-python27
    gate:
      - noop

Here we added the noop job to the gate since otherwise no job would run in the gate and Zuul requires at least one job to run.

Once the jobs all run fine, you can add them to the gate as well - and for that case, let's finally use the template:

  - name: openstack/amazing-repo
    template:
      - name: merge-check
      - name: python-jobs



Defining your own templates

Since you now know how to use a template, let's explain what they really look like. A template consists of a name and definitions of which jobs should run in which queues, and it allows substituting the repository name into the job, so {name} gets replaced by your repository, in the example by amazing-repo.
Let's look at the python-jobs template:

 
  - name: python-jobs
    check:
      - 'gate-{name}-pep8'
      - 'gate-{name}-docs'
      - 'gate-{name}-python27'
    gate:
      - 'gate-{name}-docs'
      - 'gate-{name}-pep8'
      - 'gate-{name}-python27'
    post:
      - '{name}-branch-tarball'


The template has the name python-jobs, adds three jobs to the check queue, and adds the same jobs to the gate queue. An additional job is added to the post queue. Jobs in the check queue get triggered when a change gets submitted, jobs in the gate queue get triggered when a change gets approved by a core reviewer, and jobs in the post queue get triggered after a change has merged.
If you are adding the same class of jobs to several repositories, create a template for them. A template can consist of a single job that is associated with one queue, or contain several jobs in several queues like the example above.

References

For more information about templates, you can look at the file zuul/layout.yaml for definitions and usage. Zuul has been written for OpenStack CI and has its own documentation. For information about Zuul's OpenStack instance, read the Project Config Infrastructure page about Zuul. The best starting place to learn about using the OpenStack CI infrastructure is the Infra Manual.

Followup?

If you liked this post and would like to learn more about OpenStack CI, please leave a comment with details.

by Andreas Jaeger (noreply@blogger.com) at February 08, 2016 06:42 PM

Kenneth Hui

Between A Rock And A Hard Place: Will OpenStack Become Niche?


One of the chief concerns I’ve discussed with friends in the OpenStack community, and have expressed at past Summits, is the possibility that OpenStack is turning into a niche technology. As readers of this blog know, I am a big OpenStack supporter and have spent the past several years working to help the project succeed in the open source space and also commercially. The truth, however, is that my belief that OpenStack can be a ubiquitous platform for running modern applications is waning, and I am seeing trends that lead me to believe it is becoming a niche technology that will be used by only a small slice of the market. I want to share why I think this way and where OpenStack may be headed. Again, I do so as a member of the OpenStack community who wants the project and technology to succeed.

Fundamentally, I believe that certain trends have put the project between a rock and a hard place. In particular, I see OpenStack being pressured by three trends.

  1. The continuing advancement of public clouds – Public clouds, like AWS and Azure, continue to gain adoption not only among startups but also enterprise customers. As that trend continues, OpenStack finds itself in a tough position. The more workloads that move to the public cloud, the smaller the addressable market is for OpenStack private clouds. And the past few years seem to support the thesis that the market can only support so many OpenStack vendors. Initially, the project had hoped to compete with AWS by offering an open source cloud alternative. But that has not worked out, because OpenStack has not been able to deliver on either the richness of services to help developers or the economies of scale of the leading public clouds. Much of that failure has to do with the project’s decision to integrate OpenStack with an ever growing ecosystem of legacy vendors. As a result, the project has been more focused on integrating vendor products than on building useful services for developers. As noted cloud architect Adrian Cockcroft observed, “OpenStack was focused on solving the operations problems, and didn’t do enough to solve problems for developers early on.”
  2. The enterprise’s desire for an open source VMware alternative – OpenStack was initially created to provide an open source alternative to AWS for running modern cloud-native applications. However, with the increasing influence of traditional enterprises and legacy vendors, the focus is turning towards making OpenStack a vSphere alternative for running legacy applications (I am talking specifically about using OpenStack with KVM). Legacy vendors, in particular, see this turn as a way to stay relevant in a world where the public cloud and open source technologies grow in dominance. Those vendors who recognize that OpenStack was not initially architected to run these applications are now working to move the community towards adding infrastructure resiliency into the project. This has created a situation where we are actually trying to mesh two opposing architectural patterns. As a result, the project is becoming overly complex as OpenStack attempts to be a platform suitable for every workload, one that integrates with every infrastructure technology.
  3. The rise of containers for running modern applications – One trend that was not expected by the OpenStack community is the momentum behind containers for running modern applications. While the OpenStack community has been busy deciding what type of cloud it wants to be, the container ecosystem has emerged, focused on building a platform for modern applications. It appears likely now that users who want to run these applications on-premises will bypass OpenStack in favor of platforms that have been designed to use containers, such as Kubernetes, Apache Mesos, or Cloud Foundry. It is true that OpenStack has started focusing on running containers through projects such as Magnum. This makes sense if you are a current OpenStack user and want to leverage the work you’ve put into creating your cloud for running containers. However, if you are not a current OpenStack user, what advantage do you gain by taking on the burden and added complexity of trying to stand up an OpenStack cloud underneath Docker and Kubernetes or Mesos? The incentive for running OpenStack in order to run containers, or in addition to them, doesn’t seem to exist given the relatively low rate of current OpenStack adoption.

As I’ve said, I am not writing this as an OpenStack detractor but as a community member. I want OpenStack to succeed. But for OpenStack to progress beyond where it is today, the project should focus on a narrower set of use cases, like building an elastic infrastructure that enables developers to build modern applications. OpenStack should focus on making it easier for developers to consume its services and less on how to integrate with every vendor technology in the world. Otherwise, we face a future where VMware and Microsoft dominate for running legacy applications, container-based platforms dominate for running modern applications, and OpenStack is squeezed into a narrow niche where it has only a small slice of the larger pie.



by kenhui at February 08, 2016 05:25 PM

Hugh Blemings

Lwood-20160207

Introduction

Welcome to Last week on OpenStack Dev (“Lwood”) for the week ending 7 February 2016. For more background on Lwood, please refer here.

Basic Stats for week 1 to 7 February 2016:

  • ~585 Messages (down about 6% relative to last week)
  • ~185 Unique threads (up about 4% relative to last week)

Steady as it goes this week pretty much.

Notable Discussions

Review requested for Rolling Upgrades and Updates user story

Kenny Johnston writes on behalf of the Product Working Group seeking feedback/additional reviews on a user story that discusses Rolling Updates and Upgrades. If you’re working on or interested in improving upgrades, please comment, whether you’re a user, a developer or both. Given the importance of upgrades/updates for the OpenStack community (users and developers alike), this is a conversation worth joining :)

What makes it “Open” enough for OpenStack ?

Thierry Carrez kicked off a long but interesting thread where he opines “…If you need proprietary software or a commercial entity to fully use the functionality of a project or getting serious about it, then it should not be accepted in OpenStack as an official project…” Thierry’s post and the discourse that follows are an intriguing and (generally!) well thought out discussion around this knotty topic.

Doug Hellmann notes early in the thread that there is a more concrete case being considered at present in the form of the Poppy project – a project that provides an open source front end/API to otherwise commercial services.

Well worth a read, particularly if you work for commercial entity that provides OpenStack related services and contribute resources to same.

The Trouble With Names

Sean Dague posted a thought-provoking piece on the difficulty with names in OpenStack, in the context of project names (e.g. “nova”), common/generic names (e.g. “compute”), as well as namespaces in code repositories and elsewhere.

A lengthy discussion ensued, with general agreement that there was a need to do better, and slightly less consensus on how that might be achieved.

My impression, and certainly my own experience, is that many newcomers to OpenStack struggle with the “in joke” nature of some names in their early days, and so it becomes part of the barrier to entry during that crucial period. Please take a read of the thread and throw your views into the mix if you’re so inclined :)

Time to separate Design Summits from OpenStack conferences ?

In what looks like it will become a busy thread this week, Jay Pipes posits that perhaps the time has come to separate the OpenStack Design Summits from the more commercially oriented aspects of the Conference.

An interesting dialog follows, with folk speaking for and against, but invariably constructively, about the idea. James Bottomley is one of a number of folk who contributed to the discussion – he gives an interesting perspective drawn from the experiences of the Linux Foundation and the broader Linux community.

New API Guidelines for cross project review

From Michael McCune, one new API guideline is up for cross project review – “Must not return server-side tracebacks”.

OpenStack Mentoring

Mike Perez followed up last week’s post with more details about the OpenStack Mentoring program and its importance to growing the OpenStack community. If you’re interested in helping out, or finding a mentor yourself, Mike’s post has the details.

Further DocImpact Changes

Lana Brindley gave an update on changes to the behaviour of the DocImpact script.  The result of this most recent set of changes is to revert the earlier tweaks and in so doing shift the responsibility for triage of incoming DocImpact changes into the project teams themselves.  More details in Lana’s post or here.

Upcoming OpenStack Events

Some of the OpenStack related events that cropped up on the mailing list this past week. Don’t forget the OpenStack Foundation’s Events Page for a comprehensive list!

Midcycles & Sprints

People and Projects

Further Reading & Miscellanea

Don’t forget these excellent sources of OpenStack news

This edition of Lwood brought to you by Joe Walsh (Funk 49-50) and Booker T Jones (Live, The Road from Memphis, Potato Hole) amongst other tunes.

by hugh at February 08, 2016 12:59 PM

Opensource.com

Dispatches from FOSDEM, new survey data, and more OpenStack news

Interested in keeping track of what is happening in the open source cloud? Opensource.com is your source for news in OpenStack, the open source cloud infrastructure project.

by Jason Baker at February 08, 2016 07:59 AM

February 06, 2016

Adam Young

Keystone Implied roles with CURL

Keystone now has Implied Roles. What does this mean? Let’s say we define the Admin role to imply the Member role. Now, if you assign someone Admin on a project, they are automatically assigned the Member role on that project as well.

Let’s test it out:

Since we don’t yet have client or CLI support, we’ll have to make do with curl and jq for now.

This uses the same approach as Keystone V3 Examples.

#!/bin/sh 
. ~/adminrc

export TOKEN=`curl -si -d @token-request.json -H "Content-type: application/json" $OS_AUTH_URL/auth/tokens | awk '/X-Subject-Token/ {print $2}'`

export ADMIN_ID=`curl -H"X-Auth-Token:$TOKEN" -H "Content-type: application/json" $OS_AUTH_URL/roles?name=admin | jq --raw-output '.roles[] | {id}[]'`

export MEMBER_ID=`curl -H"X-Auth-Token:$TOKEN" -H "Content-type: application/json" $OS_AUTH_URL/roles?name=_member_ | jq --raw-output '.roles[] | {id}[]'`

curl -X PUT -H"X-Auth-Token:$TOKEN" -H "Content-type: application/json" $OS_AUTH_URL/roles/$ADMIN_ID/implies/$MEMBER_ID

curl  -H"X-Auth-Token:$TOKEN" -H "Content-type: application/json" $OS_AUTH_URL/role_inferences 

Now, create a new user and assign them only the admin role.

openstack user create Phred
openstack user show Phred
+-----------+----------------------------------+
| Field     | Value                            |
+-----------+----------------------------------+
| domain_id | default                          |
| enabled   | True                             |
| id        | 117c6f0055a446b19f869313e4cbfb5f |
| name      | Phred                            |
+-----------+----------------------------------+
$ openstack  user set --password-prompt Phred
User Password:
Repeat User Password:
$ openstack project list
+----------------------------------+-------+
| ID                               | Name  |
+----------------------------------+-------+
| fdd0b0dcf45e46398b3f9b22d2ec1ab7 | admin |
+----------------------------------+-------+
openstack role add --user 117c6f0055a446b19f869313e4cbfb5f --project fdd0b0dcf45e46398b3f9b22d2ec1ab7 e3b08f3ac45a49b4af77dcabcd640a66

Copy token-request.json and modify the values for the new user.
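
For illustration, token-request-phred.json would look something like this standard Keystone V3 password-auth request (the project ID comes from the listing above; the password value is a placeholder):

{
    "auth": {
        "identity": {
            "methods": ["password"],
            "password": {
                "user": {
                    "name": "Phred",
                    "domain": { "id": "default" },
                    "password": "replace-with-phreds-password"
                }
            }
        },
        "scope": {
            "project": {
                "id": "fdd0b0dcf45e46398b3f9b22d2ec1ab7"
            }
        }
    }
}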

 curl  -d @token-request-phred.json -H "Content-type: application/json" $OS_AUTH_URL/auth/tokens | jq '.token | {roles}'
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1643  100  1098  100   545  14742   7317 --:--:-- --:--:-- --:--:-- 14837
{
  "roles": [
    {
      "id": "9fe2ff9ee4384b1894a90878d3e92bab",
      "name": "_member_"
    },
    {
      "id": "e3b08f3ac45a49b4af77dcabcd640a66",
      "name": "admin"
    }
  ]
}

by Adam Young at February 06, 2016 03:03 AM

OpenStack Blog

OpenStack Developer Mailing List Digest Jan 23 – Feb 9

SuccessBot Says

  • odyssey4me: OpenStack Ansible Liberty 12.0.5 released.
  • stevemar: Devstack now defaults to v3 for Keystone.
  • boris-42: osprofiler functional job passed [1].
  • odyssey4me: OpenStack Ansible Kilo 11.2.9 released [2].
  • odyssey4me: OpenStack Ansible Liberty 12.0.6 released [3].
  • All: https://wiki.openstack.org/wiki/Successes

Cross-Project Specs

  • A common policy scenario across all projects [4].
  • Query config from web UI [5]

API Guidelines

  • Must not return server-side tracebacks [6].

Service Type vs. Project Name For Use In Headers

  • There’s a question of whether we should be using service type or project names in headers. Some reviews involving this [7][8][9][10].
  • We should be selecting things that better serve the API consumers and according to Dean Troyer the API working group is going in the right direction.
  • The service type as the primary identifier for endpoints and API services is well established, and is how the service catalog has always worked and will always work. Service types therefore should be the way to go.
  • Full thread: http://lists.openstack.org/pipermail/openstack-dev/2016-January/085145.html

OpenStack Ansible Without Containers

  • Gyorgy announces a new installer for OpenStack under GPLv3 using Ansible, but without containers.
    • Reasons for another installer since we already have the OpenStack Ansible project and Kolla:
      • Containers adding unnecessary complexity.
      • Packages: avoid mixing pip and distributor packages. Distributor packages include things like init scripts, proper system users, upgrade possibilities, etc.
        • Kevin Carter mentions that these benefits are actually also included with the OpenStack Ansible project.
  • Without containers, upgrading a single controller can be tricky and disruptive, since you have to upgrade every service at the same time. With containers, rollbacks are also easier.
  • The OpenStack Ansible project can already do deployments without containers today, using the is_metal=true variable.
  • Full thread: http://lists.openstack.org/pipermail/openstack-dev/2016-January/084963.html

Release Countdown for Week R-8, Feb 8-12

  • Focus:
    • 2 more weeks before final releases for non-client libraries for this cycle.
    • 3 more weeks before the final releases for client libraries.
    • Projects should focus on wrapping up feature work in all libraries.
  • Release Actions:
    • The release team will be strictly enforcing library release freeze before M3 in 3 weeks.
  • Important Dates:
    • Final release for  non-client libraries: Feb 24
    • Final release for client libraries: Mar 2
    • Mitaka 3: Feb 29-Mar 4 (includes feature freeze and soft string freeze)
  • Full thread: http://lists.openstack.org/pipermail/openstack-dev/2016-February/085705.html

“No Open Core” in 2016

  • Before OpenStack had a name, the “four opens” principles were created to define how we operate as a community.
  • In 2010 when OpenStack started, it was different from other open source cloud platforms (e.g. Eucalyptus) which followed an open core strategy of producing a crippled community edition and an “enterprise version”.
  • Today we have a non-profit independent foundation, which cannot do an “enterprise edition”.
    • Today member companies build “enterprise products” on top of the apache licensed upstream project. Some are drivers that expose functionality in proprietary components.
  • What does it mean to “not do open core” in 2016? What is acceptable and what’s not?
  • Thierry Carrez believes it’s time to refresh this definition of what is an acceptable official project in OpenStack.
    • It should have a fully-functional production grade open source implementation
    • If you need proprietary software or a commercial entity to fully use the project, then it should not be accepted in OpenStack as an official project.
      • These projects can still be non-official projects and still be hosted by OpenStack infrastructure.
  • Doug Hellmann brings up Poppy [11] which is applying to be an official OpenStack project.
    • A wrapper to content delivery networks, but there is no open source solution.
    • Is this something that can be an official project, or is open core?
  • Full thread: http://lists.openstack.org/pipermail/openstack-dev/2016-February/085855.html

The Trouble with Names

  • A few issues have crept up recently with the service catalog, API headers, API endpoints, and even similarly named resources in different projects (e.g. backup), all circling around a key problem: distributed teams and naming collisions.
  • Each project has a unique name that is ensured by their git repository in the OpenStack namespace.
  • There’s a desire to replace project names with generic names like nova/compute in:
    • service catalog
    • api headers
  • Options we have are:
    • Use the code names we already have: nova, glance, swift, etc.
      • Upside: collision problem solved.
      • Downside: You need a secret decoder ring to know what a project does.
    • Have a registry of common names.
      • Upside: we can safely use common names everywhere and not fear collision down the road.
      • Downside: yet another contention point.
  • Approvals by the various people in the community to have a registry of the common names. Maybe in the governance projects.yaml file [12]?
    • This list includes only the official projects recognized by the technical committee; therefore, only those projects can reserve common names.
  • OpenStack Client has already encoded some of these common names to projects [13].
  • Full thread: http://lists.openstack.org/pipermail/openstack-dev/2016-February/085748.html

Announcing Ekko – Scalable Block-Based Backup for OpenStack

  • The goal of Ekko is to provide incremental block-level backup and restore of Nova instances.
  • Two places with overlapping goals:
    • Cinder volume without having the incremental backups be dependent.
    • Nova instances
      • OpenStack Freezer today leverages Nova’s snapshot feature.
      • Ekko would leverage live incremental block-level backup of a nova instance.
  • Jay Pipes begins the discussion on the two projects (Freezer and Ekko) working together to make sure their REST API endpoints are not overlapping. Having two APIs for performing backups that are virtually identical is not good.
  • The creator of Ekko sees the reason for another backup project as being that the “actual implementation and end results are wildly different,” even if they are the same API call.
  • Jay doesn’t like that today all the following endpoints exist:
    • Freezer’s /backups
    • Cinder’s /{tenant_id}/backups
  • Having these endpoints makes for a bad user experience and is just confusing.
  • The current governance model does not prevent competition between projects. So if two projects overlap in API endpoints, there should be an attempt at collaboration.
  • Full thread: http://lists.openstack.org/pipermail/openstack-dev/2016-January/084739.html

by Mike Perez at February 06, 2016 12:11 AM

February 05, 2016

OpenStack Superuser

Meet the OpenStack Ambassadors: Marton Kiss

In this series of interviews, OpenStack takes you around the world to meet our Ambassadors. These tireless volunteers act as liaisons between multiple user groups, the Foundation and the general community in their regions. Launched in 2013, the OpenStack Ambassador program aims to create a framework of community leaders to sustainably expand the reach of OpenStack around the world. More on the program and how to apply here.

Here we introduce you to Marton Kiss, our Budapest-based Ambassador and organizer of OpenStack Day Budapest in June 2016.

He talks to Superuser about what drives (and slows) OpenStack adoption in the region, the current flow of talent and what still surprises him about deployment.

What's the most important OpenStack debate in your region right now?

It is very hard to highlight a single challenge related to OpenStack, but software-defined networking (SDN) and network functions virtualization (NFV), container technologies and deployment/orchestration are leading the day-to-day discussions in the community. It was a huge surprise for me that we are still talking about deployment questions, while simple and straightforward solutions are available if someone wants to deploy a relatively simple OpenStack cluster.

The advantage of OpenStack is that users can build customized deployments, but sometimes the operation and upgrade of those custom systems can be really challenging.

What’s the key to closing the talent gap in the OpenStack community?

The talent gap seems to be a global IT industry problem, not related to the European region only, but there are definitely some region-specific characteristics. For example, there's a flow of talent from Eastern to Western Europe where people can enjoy the benefits of a more advanced economic environment and this effect creates a much larger talent vacuum in the Eastern European countries.

The other side of the story is that learning OpenStack, devops and agile technologies can provide a better career path for students. For example, large vendors have very good ties to universities and they support a lot of community events like meetups and conferences to bring visibility to students.

To help ecosystem players find properly skilled people and to boost the number of contributors in upstream projects, we plan to launch the Upstream Training in the region. The training already works well at the Summits; it provides a great overview of OpenStack and helps people take the first steps to becoming project contributors.

Doing frequent Upstream Trainings is just one side of the story. I truly believe the OpenStack community must provide user-friendly tools and processes for developers and deliver a better integrated solution instead of the existing fragmented toolset. (Not forgetting the huge effort that our infra team has made in the last few years to operate and keep alive what we have now.) It's also important to build more guides and training materials related to OpenStack software development.

What trends have you seen across your region’s user groups? How do these compare to the trends from the global OpenStack Summit?

The trends are very similar to U.S. ones; the only difference I experience is that Europe is in "follow mode," with a one- to two-year gap in technology transformation. Of course, we have exceptions -- cities like Berlin, Budapest and Amsterdam already act like technology hubs and definitely have a direct connection with the Bay Area, so technology and knowledge transfer between them is faster.

What drives cloud adoption in your region?

Related technologies and methodologies like agile development, project management and devops culture are the major forces behind cloud adoption. Of course, startup companies and research institutes are the leaders in the area, but internet service providers (ISPs) and enterprises have started to catch up.

What’s the biggest obstacle to adoption?

The technology exists and is actually available to everyone through different channels -- from real open-source self-made clouds to boxed vendor-provided ones. The obstacle exists in people's minds -- it's a lot like before the previous industrial revolution. Sometimes the internal IT business units are the greatest opponents of cloud adoption; they often try to protect their own kingdoms and very slowly follow the changes. Also, European companies are very conservative, so they need more time to catch up with new technologies.

What types of clouds/organizations are most active in the community and at local events, including meetups and OpenStack Days?

Large vendors, startups and the research and education sector are very active in community events and support the growth of the European community. This very diverse community produced growth trends very similar to the global pattern, rising more than 20 percent in just the last 6 months. In our local community, the telecommunication sector is also very active. This activity rests on two factors: the major winners and early adopters of SDN/NFV innovation are the telcos, and Budapest hosts research and development centers for major players.

Which users have come forward in your local community to share their stories, and how can people get involved?

Check the schedule for previous OpenStack Days to find out who represents the local community.

We are proud to host a CERN datacenter here, and MTA-SZTAKI (the Hungarian Academy of Sciences Institute for Computer Science and Control) is planning to switch from its previous cloud platform to OpenStack. Meanwhile, there's been strong growth in software-defined storage technologies like Ceph. Docler just open-sourced its Ansible OpenStack deployment scripts, a huge contribution to the community. Local companies have several in-progress projects, and we are waiting for their public release, so I’m sure our ecosystem will be much richer six months from now.

Cover Photo // CC BY NC

by Allison Price at February 05, 2016 03:33 PM

Women of OpenStack: Meet Comcast's Holly Bazemore

This post is part of the Women of OpenStack open mic series to spotlight women in various roles within our community who have helped make OpenStack successful. With each post, we learn more about each woman’s involvement in the community and how they see the future of OpenStack taking shape. If you’re interested in being featured, please email editor@openstack.org.

This time we're talking to Holly Bazemore, Comcast's director of elastic cloud strategy and deployments. She tells Superuser about why she's thrilled to join the community and how she's challenging her team to get more involved.

Five words that describe your character:

  • Happy
  • Accountable
  • Loyal
  • Passionate
  • Tenacious

What’s your role in the OpenStack community?

While new to the OpenStack community myself, I am a user, an operator and an advocate. My goal is to contribute to the community as much as I can and get more people at Comcast involved along with me. I am thrilled that community is one of my responsibilities in my new role on Comcast’s OpenStack team.

Why is it important for women to get involved with OpenStack?

For the same reasons it is important to have women involved with technology; studies show that diversity promotes a more innovative and creative workforce which leads to better products. Secondly, OpenStack is used globally. It requires a diverse community to build a product that brings value to the global marketplace.

What obstacles do women face when getting involved in the OpenStack community?

I have not found that the obstacles in the OpenStack community are any different than in other technical communities. The Women of OpenStack put in a lot of effort to get women involved and welcome them with open arms, which can only serve to increase female contributions and longevity.

You’ve been active in many women in tech groups over the years - what are the most significant changes you’ve seen?

When I first started in the technology field in the 90s, I was described as “one of the guys who just happened to be a girl.” If any of my coworkers valued the diversity I brought to the team, they were not consciously aware of it. In my current workplace, many people are more inclusive and acknowledge the distinct advantages diversity brings with it. I am celebrated for being a woman as well as for being capable.

You tweeted about women in devops teams; what are you doing to get more women aboard?


I advocate for things I feel passionate about and challenge women to get involved at any level. When I attend a conference or event, I try to take another woman with me and I go out of my way to make women attendees feel included and valued.

OpenStack has been called a lifelong learning project for those involved - how do you stay on top of things and/or learn more?

I have found that it is harder for me to stay on top of things if I am not actively involved in them. For 2016, I have challenged my entire team to be active participants in the community, and am requiring that anyone who wishes to participate in a Summit must be giving a talk, be an active contributor to the community or be on a panel.

This keeps me involved as well as my entire team, which keeps us all focusing on and talking about the projects we’ve elected to participate in.

What's the best piece of advice you have received from a mentor?

Change what you can change, influence what you can influence, and let go of the rest.

Cover Photo // CC BY NC

by Superuser at February 05, 2016 11:05 AM

RDO

Why does Red Hat contribute to RDO?

Red Hat's philosophy is 'Upstream First'. When we participate in an open source project, our contributions go into the upstream project first, as a prerequisite to delivering them in the downstream offering. Our continued focus, over the past years and into the future, is to reduce to a bare minimum the differences between upstream, RDO and RHEL OpenStack Platform at General Availability time, as we believe this is the only way we can maximise our velocity in delivering new features. In doing so we, as any successful enterprise would, need to focus our efforts on what matters with respect to our "downstream" strategy, which means we prioritize accordingly when contributing particular features and fixes.

Thus, it's useful to consider why Red Hat participates in RDO in the specific ways that we do.

Red Hat is focusing on delivering a distribution of OpenStack targeting enterprise private clouds and telco NFVi needs. In order to do so, we invest in the upstream features that are most commonly requested by our user base and identified by our investigation of the needs of these markets. Common themes in these markets are:

  • easy and automatizable deployments
  • life cycle and upgrade management, limiting down time
  • optional integrated operational tools
  • high availability of the control plane
  • optional high availability of VMs
  • disaster recovery scenarios
  • scalability & composability of deployments
  • multi-site deployments
  • performance improvements in networking, storage and compute

We are also keen to enable integration of OpenStack with third-party products below (storage, networking, compute, etc.) and above (management, orchestration, reporting, etc.) through well-defined and stable interfaces defined upstream, using upstream-developed integrations, so that those integrations are as easy as possible for our customers to consume, allowing them as much choice as possible at every level of the stack.

by Nick Barcet at February 05, 2016 10:29 AM

February 04, 2016

OpenStack Superuser

OpenStack keeps the buzz going at FOSDEM '16

BRUSSELS — Even persistent drizzle couldn’t take the fizz out of the Free Open Source Developers’ European Meeting, FOSDEM’16.

The freewheeling weekend — there’s no registration and you are welcome to imbibe from the lengthy local beer list during sessions — offered up 618 events to over 5,000 people who ducked in and out of the rain on the Université libre de Bruxelles campus January 30-31.


Christophe Sauthier at the OpenStack booth

The OpenStack booth was packed all weekend as attendees grabbed t-shirts (in black or light blue), picked up white papers and talked cloud. They were met by a team of tireless volunteers including Ubuntu developer and Debian contributor Adrien Cunin; European Union OpenStack Ambassadors Erwan Gallen and Marton Kiss; Christophe Sauthier of Objectif Libre; and Alessandro Vozza, organizer of DevOps Meetup Amsterdam.

“There was a massive amount of people, it was almost hard to move,” says Kiss, who came from Budapest for the second year to represent the Foundation. “It was great to be here, it’s one of the largest open-source events in Europe.”

What were some common questions? “This time I met a lot of new people who had heard something about OpenStack and wanted to know more about the use cases and how they can deploy it,” he says, adding that while the event attracts many Europeans, he spoke to a lot of Americans and even a developer who came from India specifically for FOSDEM. “It’s nice to think about the visibility of OpenStack when all those t-shirts go back home with people!”


Kiss says he hopes to see many of the people he spoke to at upcoming OpenStack Days in Europe, especially the one he’s organizing in Budapest in what may be the coolest venue of them all.


OpenStack was also the subject of a number of talks in the virtualization and infrastructure-as-a-service track, including one from Nova project team lead (PTL) John Garbutt on getting your ideas into OpenStack upstream and one from DreamHost’s Rosario di Somma on evaluating Magnum. We’ll have write-ups on these soon, stay tuned!

Cover photo and booth photo: Nicole Martinelli.

by Nicole Martinelli at February 04, 2016 06:20 PM

RDO

RDO Community Day at FOSDEM

On Friday, in Brussels, 45 RDO enthusiasts gathered at the IBM Client Center in Brussels (Thanks, IBM!) for a full day of RDO content and discussion.

Most of the event was recorded. The camera battery ran out near the end, but we got audio for the rest of the event. I am in the process of uploading all of the video. (80GB of video!) It will be available on the event page as soon as possible. Some of it is there already.

The most exciting part of the event, for me anyways, was discovering that RDO is no longer completely a Red Hat project. More than half of those in attendance were not from Red Hat. For comparison, last year we had about 30 people in attendance, and all but about 5 were from Red Hat.

Below you can see some photos from the event.

RDO Community Day, FOSDEM 2016

The day started with a marvelous keynote from Thomas Oulevey about CERN's OpenStack deployment, which uses RDO. From there, we had presentations about RDO-Manager, and other talks about deploying, configuring, and managing OpenStack.

I'd like to thank the folks that worked so hard to make this event happen, including especially the speakers, and Eliska Malakova who handled many of the logistics.

by Rich Bowen at February 04, 2016 04:42 PM

Mirantis

OpenStack Community App Catalog Review: Issue #2

The post OpenStack Community App Catalog Review: Issue #2 appeared first on Mirantis | The #1 Pure Play OpenStack Company.

Welcome to the second edition of the OpenStack Community App Catalog Digest, where we’ll fill you in on what’s going on and how you can contribute.

In the news

Since our last issue, the Community App Catalog group approved two new types of assets. The first, from the January 14 meeting, is Mistral Workflows, which enable you to define very specifically what should happen when, and in response to what (a small workflow sketch follows the agenda below). The other, from the January 21 meeting, is TOSCA Service Templates, which are much like Heat templates.  (In fact, the Heat Translator project, which converts them to Heat templates, simply calls them “Templates”.)

Also under discussion has been the issue of whether the cloud-side back-end project, Glare (Glance artifacts), would be a separate project under the Big Tent or a part of the Community App Catalog itself.  At issue is the idea that if Glare is part of Glance, it may be in a position of simply proxying or duplicating Glance functionality, and in a limited way, as Glance only works with the command line and web interfaces (as opposed to having a REST interface).  The community expects to make some sort of decision at the next meeting, Thursday, February 4th, 2016 at 17:00 UTC (9am Pacific).

Also on the agenda for Thursday’s meeting:
  • Status updates
  • Resource commitments for Glare/API work (check our first issue to learn more about Glare)
  • Integration with Horizon: Using form parameters may not work in near future
  • Integration with Horizon: How about a stab at integration tests?
  • Open discussion
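
To make the Mistral Workflows item above concrete, here is a trivial workflow in Mistral's v2 DSL (a minimal sketch, not taken from the meeting):

~~~ yaml
version: '2.0'

say_hello:
  type: direct                # direct workflow: tasks chain via on-success
  tasks:
    hello:
      action: std.echo output="Hello from a Mistral workflow"
      on-success:
        - goodbye
    goodbye:
      action: std.echo output="All done"
~~~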
You can see all of the available agendas at Meetings/app-catalog wiki page. Don’t forget, these meetings are open to anyone in the community; if you want to be involved in some of these decisions, put them in your calendar: the OpenStack Community App Catalog team meets on Thursdays at 1700 UTC in the Freenode IRC channel #openstack-meeting-3.

Further Reading

Dear all, in our first issue we talked about two editorial articles. The first, a comparison of using a Murano App versus using a Glance image to add an application to the Community App Catalog by Ilya Stechkin, Kirill Zaitsev, Dmytro Dovbii, Alexey Deryugin, and Pavel Karpov, has been published and we’d love to hear your comments. We’d also like to know: what’s on your mind? What do you want to know more about?

Next up is an article about the importance of the Community App Catalog in the Mirantis Unlocked validation process. We’ll explain how to validate your application and why you’d want to add the application to the Community App Catalog. Using the Kubernetes Murano App as an example, we’ll show you how we are moving this app through all steps of the validation process.

Helpful hint: Adding a new app to the Community App Catalog

Interested in adding your app to the Community App Catalog right now? The process is well described on the wiki page, where you can also find information on how to update content that has already been published. For example, you might want to update the hash for a binary asset, or update a link if the URL has changed.

One more thing…

Is your app already published in the Community App Catalog? Feel free to send Ilya the news about this glorious event, and we’ll be happy to celebrate you in our next monthly Community App Catalog Review. Have a great month. We’ll see you in March!

The post OpenStack Community App Catalog Review: Issue #2 appeared first on Mirantis | The #1 Pure Play OpenStack Company.

by Ilya Stechkin at February 04, 2016 10:33 AM

February 03, 2016

Cloudwatt

New available OS distribution : CentOS 6.7

Cloudwatt provides its clients with the latest version of the CentOS 6.7 distribution.

The image includes all system and security updates available as of February 5th, 2016.

You will find detailed information about this release on the CentOS website.

Do not hesitate to use the n2 type of instances, our n1 instance family being nearly sold out.

by Sinh Chung NGUYEN at February 03, 2016 11:00 PM

5 Minutes Stacks, épisode 22: Cassandra

Episode 22: Cassandra


Apache Cassandra is a distributed NoSQL database management system. Initially developed by Facebook, it was released as an open source project in 2008 and is now a top-level Apache project. Cassandra can handle large amounts of structured, semi-structured, and unstructured data across multiple data centers and in the cloud. Based on BigTable principles, Cassandra offers flexible data models and ensures fast response times. It also delivers many other features: continuous availability, fault tolerance, decentralization and scalability.

Descriptions

The “Cassandra” stack bootstraps a 3-node Cassandra cluster with one seed.

Preparations

The versions

  • Cassandra 3.1.1
  • Docker 1.8.3
  • CoreOS 835.9.0

The prerequisites to deploy this stack

Size of the instance

By default, the script proposes a deployment on an instance of type “Standard 2” (n1.cw.standard-2). Instances are charged by the minute and capped at their monthly price (you can find more details on the Tarifs page of the Cloudwatt website). Obviously, you can adjust the stack parameters, particularly the default size.

What will you find in the repository

Once you have cloned the GitHub repository, you will find in the bundle-coreos-cassandra/ directory:

  • bundle-coreos-cassandra.heat.yml: HEAT orchestration template. It will be used to deploy the necessary infrastructure.
  • stack-start.sh: Stack launching script. This is a small script that will save you some copy-paste.

Start-up

Initialize the environment

Have your Cloudwatt credentials in hand and click HERE. If you are not logged in yet, you will go through the authentication screen, and then the script download will start. This script will allow you to set up shell access to the Cloudwatt APIs.

Source the downloaded file in your shell. Your password will be requested.

~~~ bash
$ source COMPUTE-[…]-openrc.sh
Please enter your OpenStack Password:
~~~

Once this is done, the OpenStack command line tools can interact with your Cloudwatt user account.

Adjust the parameters

At the top of the bundle-coreos-cassandra.heat.yml file, you will find a section named parameters. The sole mandatory parameter to adjust is the one called keypair_name. Its default value must contain a valid keypair with regards to your Cloudwatt user account. Within this same file you can also adjust the instance size by playing with the flavor_name parameter.

~~~ yaml
heat_template_version: 2013-05-23

description: Cassandra 3 nodes cluster with Docker on CoreOS

parameters:
  keypair_name:
    type: string
    description: Keypair to inject in instance
    label: SSH Keypair
    default: my-keypair-name                <-- Indicate here your keypair

  flavor_name:
    type: string
    description: Flavor to use for the server
    default: n1.cw.standard-2
    label: Instance Type (Flavor)
    constraints:
      - allowed_values: […]

resources:
  network:
    type: OS::Neutron::Net

  subnet:
    type: OS::Neutron::Subnet
    properties:
      network_id: { get_resource: network }
      ip_version: 4
      cidr: 10.0.1.0/24
      dns_nameservers: [8.8.8.8, 8.8.4.4]
      allocation_pools:
        - { start: 10.0.1.100, end: 10.0.1.254 }

[…]
~~~

Start the stack

In a shell, run the script `stack-start.sh`:

~~~ bash
$ ./stack-start.sh Cassandra
+--------------------------------------+------------+--------------------+----------------------+
| id                                   | stack_name | stack_status       | creation_time        |
+--------------------------------------+------------+--------------------+----------------------+
| xixixx-xixxi-ixixi-xiixxxi-ixxxixixi | Cassandra  | CREATE_IN_PROGRESS | 2025-10-23T07:27:69Z |
+--------------------------------------+------------+--------------------+----------------------+
~~~

Within 5 minutes the stack will be fully operational. (Use watch to see the status in real time.)

~~~ bash
$ watch -n 1 heat stack-list
+--------------------------------------+------------+-----------------+----------------------+
| id                                   | stack_name | stack_status    | creation_time        |
+--------------------------------------+------------+-----------------+----------------------+
| xixixx-xixxi-ixixi-xiixxxi-ixxxixixi | Cassandra  | CREATE_COMPLETE | 2025-10-23T07:27:69Z |
+--------------------------------------+------------+-----------------+----------------------+
~~~

Enjoy

Once all this is done, you have a ready-to-use 3-node Cassandra cluster. Instance IPs can be obtained with the following command (and will be listed in the "output" section):

~~~ bash
$ heat stack-show Cassandra
+-----------------------+---------------------------------------------------+
| Property              | Value                                             |
+-----------------------+---------------------------------------------------+
|                     [...]                                                 |
| outputs               | [                                                 |
|                       |   {                                               |
|                       |     "output_value": "10.0.1.100",                 |
|                       |     "description": "server3 private IP address",  |
|                       |     "output_key": "server3_private_ip"            |
|                       |   },                                              |
|                       |   {                                               |
|                       |     "output_value": "10.0.1.102",                 |
|                       |     "description": "server1 private IP address",  |
|                       |     "output_key": "server1_private_ip"            |
|                       |   },                                              |
|                       |   {                                               |
|                       |     "output_value": "XX.XX.XX.XX",                |
|                       |     "description": "server3 public IP address",   |
|                       |     "output_key": "server3_public_ip"             |
|                       |   },                                              |
|                       |   {                                               |
|                       |     "output_value": "YY.YY.YY.YY",                |
|                       |     "description": "server1 public IP address",   |
|                       |     "output_key": "server1_public_ip"             |
|                       |   },                                              |
|                       |   {                                               |
|                       |     "output_value": "10.0.1.103",                 |
|                       |     "description": "server2 private IP address",  |
|                       |     "output_key": "server2_private_ip"            |
|                       |   },                                              |
|                       |   {                                               |
|                       |     "output_value": "ZZ.ZZ.ZZ.ZZ",                |
|                       |     "description": "server2 public IP address",   |
|                       |     "output_key": "server2_public_ip"             |
|                       |   }                                               |
|                       | ]                                                 |
|                     [...]                                                 |
+-----------------------+---------------------------------------------------+
~~~

Launch cqlsh command

ssh -i <keypair> core@<node-ip@>
docker exec -it cassandra cqlsh
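
As a quick smoke test, you can also run a single statement non-interactively (an illustrative command; cassandra is the container name used by this stack):

~~~ bash
# A fresh cluster should list only the system keyspaces
$ docker exec -it cassandra cqlsh -e "DESCRIBE KEYSPACES"
~~~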

Manage Cassandra cluster

The nodetool utility is a command line interface for managing a cluster.

ssh -i <keypair> core@<node-ip@>
docker exec cassandra nodetool <nodetool_command>
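
For example, to verify that all three nodes have joined the ring (an illustrative check):

~~~ bash
$ docker exec cassandra nodetool status
# Prints one line per node; UN in the first column means Up/Normal
~~~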

Access to Cassandra logs

Cassandra logs can be viewed via the docker logs command:

ssh -i <keypair> core@<node-ip@>
docker logs -f cassandra

Cassandra also saves its logs inside the container. By default, logging output is placed in the /var/log/cassandra/system.log file.

ssh -i <keypair> core@<node-ip@>
docker exec cassandra cat /var/log/cassandra/system.log

Other resources you could be interested in:

by Sinh Chung NGUYEN at February 03, 2016 11:00 PM

5 Minutes Stacks, Episode 21: Docker and CoreOS

Episode 21: Docker


CoreOS is an open-source lightweight operating system based on the Linux kernel and designed for providing infrastructure to clustered deployments, while focusing on automation, ease of applications deployment, security, reliability and scalability. As an operating system, CoreOS provides only the minimal functionality required for deploying applications inside software containers, together with built-in mechanisms for service discovery and configuration sharing.

This tutorial will help you create a CoreOS three-node cluster. By default each instance is only accessible through SSH on port 22. You will have to create additional rules in your security groups to manage the services you aim to deploy.

Preparations

The versions

  • CoreOS 835.9.0
  • Docker 1.8.3

The prerequisites to deploy this stack

These should be routine by now:

Size of the instance

By default, the stack deploys on an instance of type “Standard 2” (n2.cw.standard-2). A variety of other instance flavors exist to suit your various needs, allowing you to pay only for the services you need. Instances are charged by the minute and capped at their monthly price (you can find more details on the Tarifs page on the Cloudwatt website).

Stack parameters, of course, are yours to tweak at your fancy.

What will you find in the repository

Once you have cloned the GitHub repository, you will find in the bundle-coreos-docker/ directory:

  • bundle-coreos-docker.heat.yml: HEAT orchestration template. It will be used to deploy the necessary infrastructure.
  • stack-start.sh: Stack launching script. This is a small script that will save you some copy-paste.

Start-up

Initialize the environment

Have your Cloudwatt credentials in hand and click HERE. If you are not logged in yet, you will go through the authentication screen, and then the script download will start. This script will allow you to set up shell access to the Cloudwatt APIs.

Source the downloaded file in your shell. Your password will be requested.

 $ source COMPUTE-[...]-openrc.sh
 Please enter your OpenStack Password:

Once this is done, the OpenStack command line tools can interact with your Cloudwatt user account.

Adjust the parameters

In the bundle-coreos-docker.heat.yml file (heat template), you will find a section named parameters near the top. The only mandatory parameter is keypair_name. Its default value should contain a valid keypair with regards to your Cloudwatt user account, if you wish to have it preselected in the console.

Within these heat templates, you can also adjust (and set the defaults for) the instance type by playing with the flavor_name parameter accordingly.

By default, the stack network and subnet are generated for the stack. This behavior can be changed within the bundle-coreos-docker.heat.yml file as well, if need be, although doing so may be cause for security concerns.

heat_template_version: 2013-05-23

description: CoreOS 3 nodes cluster for docker

parameter_groups:
- label: CoreOS
  parameters:
    - keypair_name
    - flavor_name

parameters:
  keypair_name:
    type: string
    description: Name of keypair to assign to CoreOS instances
    label: SSH Keypair
    default: my-keypair-name                <-- Indicate here your keypair

  flavor_name:
    type: string
    description: Flavor to use for the server
    default: n2.cw.standard-2
    label: Instance Type (Flavor)
    constraints:
      - allowed_values:
        - n2.cw.standard-2
        - n2.cw.standard-4
        - n2.cw.standard-8
        - n2.cw.standard-12
        - n2.cw.standard-16
        - n2.cw.highmem-2
        - n2.cw.highmem-4
        - n2.cw.highmem-8
        - n2.cw.highmem-12

resources:
  network:
    type: OS::Neutron::Net

  subnet:
    type: OS::Neutron::Subnet
    properties:
      network_id: { get_resource: network }
      ip_version: 4
      cidr: 10.0.1.0/24
      dns_nameservers: [8.8.8.8, 8.8.4.4]
      allocation_pools:
        - { start: 10.0.1.100, end: 10.0.1.254 }

[...]

Start the stack

In a shell, run the script stack-start.sh:

~~~ bash
$ ./stack-start.sh DOCKER
+--------------------------------------+------------+--------------------+----------------------+
| id                                   | stack_name | stack_status       | creation_time        |
+--------------------------------------+------------+--------------------+----------------------+
| xixixx-xixxi-ixixi-xiixxxi-ixxxixixi | DOCKER     | CREATE_IN_PROGRESS | 2025-10-23T07:27:69Z |
+--------------------------------------+------------+--------------------+----------------------+
~~~

Within 5 minutes the stack will be fully operational. (Use watch to see the status in real-time)

~~~ bash
$ watch -n 1 heat stack-list
+--------------------------------------+------------+-----------------+----------------------+
| id                                   | stack_name | stack_status    | creation_time        |
+--------------------------------------+------------+-----------------+----------------------+
| xixixx-xixxi-ixixi-xiixxxi-ixxxixixi | DOCKER     | CREATE_COMPLETE | 2025-10-23T07:27:69Z |
+--------------------------------------+------------+-----------------+----------------------+
~~~

Enjoy

Once all of this is done, instance IPs can be obtained with the following command (and will be listed in the “output” section):

$ heat stack-show DOCKER
+-----------------------+---------------------------------------------------+
| Property              | Value                                             |
+-----------------------+---------------------------------------------------+
|                     [...]                                                 |
| outputs               | [                                                 |
|                       |   {                                               |
|                       |     "output_value": "10.0.1.100",                 |
|                       |     "description": "server3 private IP address",  |
|                       |     "output_key": "server3_private_ip"            |
|                       |   },                                              |
|                       |   {                                               |
|                       |     "output_value": "10.0.1.102",                 |
|                       |     "description": "server1 private IP address",  |
|                       |     "output_key": "server1_private_ip"            |
|                       |   },                                              |
|                       |   {                                               |
|                       |     "output_value": "XX.XX.XX.XX",                |
|                       |     "description": "server3 public IP address",   |
|                       |     "output_key": "server3_public_ip"             |
|                       |   },                                              |
|                       |   {                                               |
|                       |     "output_value": "YY.YY.YY.YY",                |
|                       |     "description": "server1 public IP address",   |
|                       |     "output_key": "server1_public_ip"             |
|                       |   },                                              |
|                       |   {                                               |
|                       |     "output_value": "10.0.1.103",                 |
|                       |     "description": "server2 private IP address",  |
|                       |     "output_key": "server2_private_ip"            |
|                       |   },                                              |
|                       |   {                                               |
|                       |     "output_value": "ZZ.ZZ.ZZ.ZZ",                |
|                       |     "description": "server2 public IP address",   |
|                       |     "output_key": "server2_public_ip"             |
|                       |   }                                               |
|                       | ]                                                 |
|                     [...]                                                 |
+-----------------------+---------------------------------------------------+

How to use CoreOS

To access the CoreOS instances through SSH, the default user is called core. The following command can be used to connect to instances:

ssh -i <keypair> core@<node-ip@>
etcd - distributed reliable key-value store

Show cluster health:

ssh -i <keypair> core@<node-ip@>
etcdctl cluster-health

List all keys in etcd:

ssh -i <keypair> core@<node-ip@>
etcdctl ls --recursive

Get the value of a key:

ssh -i <keypair> core@<node-ip@>
etcdctl get <name/of/key>

Create/Update a key:

ssh -i <keypair> core@<node-ip@>
etcdctl set <name/of/key> <value>
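
Putting these commands together, a simple service-discovery pattern is to register a service address under a well-known key on one node and read it back from any other (key and value here are illustrative):

~~~ bash
$ etcdctl set /services/web/host1 "10.0.1.101:80"
10.0.1.101:80
$ etcdctl get /services/web/host1
10.0.1.101:80
~~~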
systemd - init system for Linux

How to create a “Hello World” service under docker with systemd:

First, you will have to add the following file to /etc/systemd/system. It is used to declare your service. Let’s name it hello.service.

[Unit]
Description=MyApp
After=docker.service
Requires=docker.service

[Service]
TimeoutStartSec=0
ExecStartPre=-/usr/bin/docker kill busybox1
ExecStartPre=-/usr/bin/docker rm busybox1
ExecStartPre=/usr/bin/docker pull busybox
ExecStart=/usr/bin/docker run --name busybox1 busybox /bin/sh -c "while true; do echo Hello World; sleep 1; done"

[Install]
WantedBy=multi-user.target

Once your file is created, the service needs to be enabled and started:

sudo systemctl enable /etc/systemd/system/hello.service
sudo systemctl start hello.service

Logs can be seen with the following command:

journalctl -f -u hello.service

To stop the service, just type:

sudo systemctl stop hello.service

And this is how to deactivate it:

sudo systemctl disable hello.service
flannel - shared virtual network for containers

Flannel is a virtual network that gives a subnet to each host for use with container runtimes. It is started with your stack and allows you to connect to any container from any other container, regardless of the hosts those containers are running on.
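
To inspect the overlay network flannel is using, you can read its configuration out of etcd (assuming the stack keeps flannel's conventional /coreos.com/network prefix):

~~~ bash
# The overlay address range shared by all hosts
$ etcdctl get /coreos.com/network/config
# The per-host subnet leases carved out of that range
$ etcdctl ls /coreos.com/network/subnets
~~~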

Other resources you could be interested in:

Have fun. Hack in peace.

by Xavier Maillard at February 03, 2016 11:00 PM

OpenStack Superuser

Superuser Awards nominations now open

Nominations for the Austin Summit Superuser Awards are open and will be accepted through March 11. The winner will be chosen by the community at large and announced onstage at the Summit in April.

The Superuser awards recognize teams using OpenStack to meaningfully improve business and differentiate in a competitive industry, while also contributing back to the community. Teams of all sizes are encouraged to apply. If you fit the bill, or know a team that does, we encourage you to submit a nomination here.

Learn more about the Superuser Awards.

Following the successful launch of OpenStack’s inaugural Superuser Awards at the Paris Summit in November 2014, the community has continued to honor users at every Summit who show how OpenStack is making a difference and providing strategic value in their organizations. At the Paris Summit, CERN was chosen as the winner from an impressive roster of finalists, Comcast took the award home in Vancouver and NTT Group joined the roster of winners at the Tokyo Summit in October 2015.

Submissions are open until March 11 and once the finalists are selected by the Superuser Editorial Advisory Board, the OpenStack community will vote among the finalists to determine the overall winner. Polling will begin in late March.

When evaluating winners for the Superuser Award, judges and community members take into account the unique nature of use case(s), as well as integrations and applications of OpenStack performed by a particular team.

Additional selection criteria include how the workload has transformed the company's business, including quantitative and qualitative results of performance, as well as community impact in terms of code contributions, feedback, knowledge sharing, etc.

Winners will take the stage at the OpenStack Summit in Austin. Submissions are open now until March 11, 2016. You're invited to nominate your team or nominate a Superuser here.

For more information about the Superuser Awards, please visit http://superuser.openstack.org/awards.

by Superuser at February 03, 2016 04:49 PM

Aptira

GUTS

GUTS: a workload migration engine designed to automatically move existing workloads and virtual machines from various previous-generation virtualisation platforms on to OpenStack.

When organisations move from their existing virtualised infrastructures to OpenStack, one of the biggest problems they face is the migration of VMs running on VMware, Hyper-V, etc., to OpenStack. Most of the time, this process can involve time-consuming, repetitive and complicated steps like moving machines with multiple virtual disks, removing and installing customised hypervisor-specific tools and manually copying the data across.

GUTS solves this problem by providing an automated, efficient and robust way to migrate VMs from existing clouds to OpenStack.

GUTS is an Open Source project that aims to make the move to an OpenStack cloud easier. It addresses the various difficulties operators and administrators face when migrating workloads from existing clouds on to OpenStack.

Stay tuned for details on how you can download and install GUTS yourself.

The post GUTS appeared first on Aptira OpenStack Services in Australia Asia Europe.

by Kavit Munshi at February 03, 2016 05:46 AM

February 02, 2016

Cloudwatt

Update of the CentOS 6.5 image

Cloudwatt provides an update of the CentOS 6.5 distribution image with all the system and security updates available as of January 27th. We encourage you to use this latest version from now.

To all users:

  • if you have developed scripts or applications referencing the old identifier (id), you should update them with the id of this new version to ensure continuity of your service (see the lookup example after this list). Indeed, the id of the image is new and the old version of the image has been removed from our public catalog.

  • If you have a snapshot of your instance made with the previous image, know that restoration will work: even though the old image was removed from the public catalog and is no longer visible, Cloudwatt stores and keeps a history of all published images.
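
A quick way to look up the new id from the command line (a sketch, assuming your OpenStack client environment is configured; the image name may differ slightly in the catalog):

~~~ bash
$ glance image-list | grep -i "CentOS 6.5"
# The first column of the matching row is the new image id to use in your scripts
~~~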

by Florence Arnal at February 02, 2016 11:00 PM

Solinea

Five Questions with Hiroshi Koiwai, Deputy General Manager of ITOCHU Techno-Solutions

Tokyo-based ITOCHU Techno-Solutions is a US$4 billion reseller of IT products and solutions to enterprises and service providers throughout Japan. The company is an investor in the Solinea Series A. We asked Deputy General Manager Hiroshi Koiwai a few questions about the opportunity the two companies are pursuing together.

by Francesco Paola (fpaola@solinea.com) at February 02, 2016 01:27 PM

Aptira

StackBuffet: Not an OpenStack distro

Over the last six months or so we’ve learned some interesting things from talking to operators:

  • Quite a few people are carrying local patches.
  • Not everyone loves distros. Actually some people really hate them.
  • Some people find running a CI system taxing, or worse.
  • Sometimes you need to patch something ASAP, but your vendor gets in the way.

Furthermore, both vendor supplied distros and roll-your-own packages appear to be major drivers of dissatisfaction in the latest user survey.

So here at Aptira, we decided to do something about it. Russell Sim, our most sophisticated and well dressed engineer, had a brain wave in the lounge in Tokyo. 

For your OpenStacking pleasure, we’re proud to introduce StackBuffet.
StackBuffet can be seen as an anti-distro, a personalised distro or an OpenStack-CI-as-a-service. Here’s how it works:

  1. You select a starting point from the official OpenStack repo, eg 2015.1
  2. You apply the patches you want. These could be:
    • Your own local patches.
    • Patches from upstream, eg a recent bug fix.
    • Common patches as recommended via Aptira and other StackBuffet customers.
    • Patches you got from a swap meet, or operator meetup
  3. You designate any additional tests you need done.
  4. On demand, StackBuffet builds packages in the format you want and runs them through Tempest and Rally testing.

The end result is a set of personalised OpenStack packages that are ready for deployment via your deployment mechanism. Aptira will even support those packages for you, if that’s what you want.

If you’re running your own CI system, then this will help you stop thinking about that and focus more on your core business. If you’re using a vendor distro, then StackBuffet will give you agility far beyond what a distro can provide.

Either way your peace of mind is undisturbed, knowing that Aptira’s highly adjectived engineers are supporting you. We’ll help you solve your problems, identify fixes and even back-port patches if that’s needed.

Merging two things that people find troublesome might seem to be a counter-intuitive approach to some, but we think it works. When it boils down to it you can’t treat OpenStack like Linux, and many people can’t afford to commit all the resources needed to properly solve these issues.

We’re looking for beta customers. If you think this is you, then sign up here:

STACKBUFFET: BETA SIGN UP

We’re looking for people to trial our StackBuffet service when it launches in the near future. If you’re interested, could you tell us a little bit about how you currently use OpenStack?



The post StackBuffet: Not an OpenStack distro appeared first on Aptira OpenStack Services in Australia Asia Europe.

by Roland Chan at February 02, 2016 07:06 AM

February 01, 2016

OpenStack Superuser

Contemplating OpenStack in 2016

The first 2016 episode of Superuser TV takes the long view.

Host Shamail Tahir, an offering manager for OpenStack initiatives at IBM, sounds out Randy Bias, vice president of technology at EMC, and Boris Renski, the co-founder and CMO of Mirantis.

In this 40-minute segment, they discuss the most important changes to OpenStack in 2015, what they’re hearing from clients these days, what they’re excited about and why now is the time to push on through. Bias and Renski also engage in a bit of light acronym sparring and gentlemanly, if spirited, disagreements over vendor lock-in and interoperability. This episode closes with predictions for OpenStack's 2016.

Watch the episode on YouTube: https://www.youtube.com/embed/9upsmam0O3U

Superuser TV posts monthly, highlighting community-driven activities, technical content, updates and other ecosystem news. We're also out in full force at the next Summit.

Interested in participating? Contact superusertv@openstack.org.

by Superuser at February 01, 2016 09:08 PM

StackMasters

Cloud 9: a sysadmin meets OpenStack

For the past 7 years I have been working as a head system administrator with a tech startup incubator.

Mine is the (often thankless) job of ensuring that the data center runs smoothly and securely and that business software, application servers and development environments are working and available to our users.

As our sysadmin team is often overloaded with tasks such as server provisioning and re-purposing, and network troubleshooting, the time left for engineering and improving upon the offered IT services is limited.

While this might be true in most IT environments, it is doubly so in an incubator, where you need to manage all kinds of different development and production stacks that different startups opt for. Being able to respond quickly to the demands of all these agile teams — all growing and experimenting at the same time — can be really hard.


…and then things get really complicated

Working with many startups at the incubation stage also means that a few of them will suddenly grow too large, too fast — and their IT requirements become substantially higher as well.

Legacy system administration wisdom and Perl mastery, while adequate for last decade’s data centers, won’t really help us here. The only way to address this kind of demand is by adopting advanced high availability solutions, coupled with a full disaster-recovery plan.

And while many traditional sysadmins fear the cloud, in my opinion, the move from in-house hosting to cloud-based solutions has been a great enabler in the delivery and operation of more advanced systems.

Server virtualization, monitoring tools and linux clusters have been part of our everyday job for a long time now, giving us the opportunity to respond to highly customized needs.

Efficiency and time to delivery have been two of the most critical aspects we had focused to improve.



Enter OpenStack

The last few days I had the chance to experiment with and learn about the internals of OpenStack, which is a new approach to building your IT infrastructure, and comes with solutions to a lot of the aforementioned problems baked in.

OpenStack, at its core, is a set of software tools for building and managing private and public clouds.

With OpenStack you deploy virtual machines (complete with storage and network access) over your physical servers, getting your own turn-key Infrastructure-as-a-Service solution.

It’s like your own private Amazon AWS or Google Compute Engine, offering a “drop-in” replacement for public clouds, at least the way I had experienced them so far.

As an old-school system administrator, what impressed me about OpenStack is that it extends resource management over to storage and network — that is, going beyond the CPU and memory management options that you get with the typical virtual machine offerings.

Having a unified view of your computing resources utilization, and having the ability to manage it from a single place is a very powerful feature. And it’s especially mind blowing, even to an old hat like me, raised up on the CLI, that you can access all that power from an easy to use web-based UI.

The ease of creating images and customized flavors of your virtual machines, allows you to deploy a new server in minutes without having to repeat trivial configurations all over again.

Heck, you can literally create an HTTP Load Balancer AND the back-end service farm for it in just a few minutes.

Even though I didn’t yet have time to explore and experiment in depth with the APIs exposed by OpenStack, I feel this would be the most powerful tool in the hands of an operator. There is already a great variety of external tools and applications based on OpenStack services that aim to streamline the provisioning of complicated solutions.


Looking for more

Even though it has only been a week of trying out OpenStack and I still feel like I’ve only scratched the surface of what it can do, I’m sure that as I go deeper I will discover even more things that will enhance my productivity as an IT administrator and improve my ability to manage complex solutions in the infrastructure.

To paraphrase Jon Landau, “I’ve seen the future of service provisioning, and its name is OpenStack”.


by Michalis Giannos at February 01, 2016 02:50 PM

Opensource.com

Mid-cycle meetups, high performance computing advances, and more OpenStack news

Interested in keeping track of what is happening in the open source cloud? Opensource.com is your source for news in OpenStack, the open source cloud infrastructure project.

by Jason Baker at February 01, 2016 07:59 AM

Hugh Blemings

Lwood-20160131

Introduction

Welcome to Last week on OpenStack Dev (“Lwood”) for the week ending 31 January 2016. For more background on Lwood, please refer here.

Basic Stats for week 25 to 31 January 2016:

  • ~623 Messages (up about 24% relative to last week)
  • ~178 Unique threads (down about 3% relative to last week)

Messages up a fair bit, threads pretty flat this week…

Notable Discussions

New OpenStack Security Notices (OSSN 0060)

Glance configuration option can lead to privilege escalation (OSSN 0060)

From the summary “Glance exposes a configuration option called `use_user_token` in the configuration file `glance-api.conf`.  It should be noted that the default setting (`True`) is secure. If, however, the setting is changed to `False` and valid admin credentials are supplied in the following section (`admin_user` and `admin_password`), Glance API commands will be executed with admin privileges regardless of the intended privilege level of the calling user.”
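
Spelled out in glance-api.conf terms, the two configurations described above look like this (a sketch using the option names from the notice; section placement and values are illustrative):

~~~ ini
[DEFAULT]
# Secure default: Glance API commands run with the calling user's token
use_user_token = True

# The insecure combination from OSSN 0060: with these settings, every
# API command runs with admin privileges regardless of the calling user
# use_user_token = False
# admin_user = <admin account>
# admin_password = <admin password>
~~~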

More information and discussion in the original post or the OSSN itself.

Upstream University

Mike Perez announced a call for mentors and mentees to be involved in the upcoming Austin summit’s Upstream University. A feature of summits since Paris, this well attended and well regarded event provides an opportunity for developers new to OpenStack to “learn the ropes” in a friendly and supportive environment. If you’re interested in assisting or attending, please sign up here, indicating which you wish to do (mentor or mentee!)

Help improve the User Portal

Pieter Kruithof Jr noted that the UX group are seeking people who are “developing, testing and deploying apps to the cloud” for interviews.  The intent is to improve the end user information available through the User Portal to the benefit of all developers.

Upcoming OpenStack Events

A summary of OpenStack related events that cropped up on the mailing list this past week.  Don’t forget the OpenStack Foundation’s excellent Events Page for a comprehensive list!

Midcycles & Sprints

People and Projects

Further Reading & Miscellanea

Don’t forget these excellent sources of OpenStack news

This edition of Lwood was prepared while sitting in on a few different sessions at linux.conf.au – so no tunes, but some great presentations and, admittedly, a shorter Lwood :)

by hugh at February 01, 2016 06:03 AM

January 31, 2016

Cloudwatt

Update of the CentOS 7.0 image

Cloudwatt provides an update of the CentOS 7.0 distribution image with all the system and security updates available as of January 27th. We encourage you to use this latest version from now.

To all users:

  • if you have developed scripts or applications referencing the old identifier (id), you should update them with the id of this new version to ensure continuity of your service. Indeed, the id of the image is new and the old version of the image has been removed from our public catalog.
  • If you have a snapshot of your instance made with the previous image, know that restoration will work: even though the old image was removed from the public catalog and is no longer visible, Cloudwatt stores and keeps a history of all published images.

by Florence Arnal at January 31, 2016 11:00 PM

January 29, 2016

The Official Rackspace Blog

OpenStack Mid-Cycle Session Leads to Collaborative Production

Co-locating two OpenStack mid-cycle sessions — Security and Barbican — at Rackspace last month ended up with the two groups collaborating so productively that by the end, whiteboards overflowed with data flow diagrams, threat models and documentation.

The Barbicaneers and the OpenStack Security group vowed to continue their joint planning and delivery on the Barbican Federation and Bring Your Own Key workflows that started in Tokyo.

The goal of an OpenStack mid-cycle was achieved in San Antonio: to foster intense collaborative efforts on active projects in the community. It’s an opportunity to review project status, bridge the gaps with face-to-face collaboration, strategize for the next conference and to focus on deadlines and major tasks.

The sessions are intensive and task driven, and they’re imperative to keeping the momentum moving forward on efforts proposed at OpenStack conferences.

More than 43 developers attended this mid-cycle, which kicked off Jan. 12, from Rackspace, Hewlett-Packard, IBM, Cisco, Red Hat, VMware, Mirantis, Symantec, Johns Hopkins University and many individual contributors. Rackspace’s sponsorship was possible thanks to the support and dedication of Paul Voccio, vice president of Rackspace OpenStack Development, and Gigi Geofferion, vice president of Software Development and Quality Engineering.

The OpenStack Security group is responsible for leading and implementing initiatives that improve and ensure the overall security of OpenStack. During this session, they focused on several security development projects:

  • Bandit: a Python AST-based code security analyzer designed to identify and report security issues by sifting through large volumes of code efficiently, rapidly identifying potential flaws; for example, unsafe function calls or the use of outdated/unsafe libraries (see the short example after this list).
  • Anchor: a lightweight open source Public Key Infrastructure tool that uses automated provisioning of short-term certificates to enable cryptographic trust in OpenStack services.
  • Syntribos: an open source API-fuzzing tool created to detect new input sanitization, denial of service and other interesting attack vectors. It will automate a majority of our current manual testing protocols for OpenStack APIs.
  • OpenStack-Ansible Security: an Ansible role that provides a simple, configurable method for applying STIG hardening standards to OpenStack deployments, enabling users to build environments that meet the requirements of various compliance programs, such as the Payment Card Industry Data Security Standard.
  • Threat Analysis Project: designed to proactively identify threats and weaknesses in OpenStack cloud and contribute to building a secure and robust platform. Threat modeling takes a comprehensive look at the system at hand — components, protocols and code — against the existence and capability of an adversary looking for known vulnerabilities.
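
As a taste of the first of those tools, Bandit can be run against any Python source tree from the command line (an illustrative invocation; the target path is a placeholder):

~~~ bash
$ pip install bandit
# Recursively scan a source tree; findings are reported with severity and confidence
$ bandit -r ./my-openstack-project
~~~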

The Security session also included a Syntribos demonstration by Michael Dong of Rackspace against an OpenStack API. Racker Major Hayden demonstrated how to run the OpenStack-Ansible Security tool to check systems for security misconfigurations and how to fix the issues automatically. Tim Kelsey from HPE gave the team a deep dive into creating plugins for Bandit and hacking Bandit for bug fixing, feature enhancement and plugin creation. Finally, the OpenStack Security group developed an improved threat analysis process and a security reference list.

More than 20 Barbicaneers worked tirelessly to improve Barbican, a REST API designed for the secure storage, provisioning and management of passwords, encryption keys and X.509 certificates. The goal is to make it useful for all environments, including large ephemeral clouds. The Barbicaneers focused on simplifying deployment, improving performance, planning for the future of certificate provisioning, improving management tooling for the Barbican database and improving gate checks.

The team also added auditing capabilities for requests made to Barbican and stricter validation requirements for API requests.
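
For a feel of the API in question, here is a hedged sketch of storing a secret; the host and token are placeholders (9311 is Barbican's default port):

# store a plaintext secret in Barbican (illustrative endpoint; $TOKEN is a valid Keystone token)
curl -X POST http://barbican.example.com:9311/v1/secrets \
  -H "X-Auth-Token: $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"name": "db-password", "payload": "s3cret", "payload_content_type": "text/plain"}'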

Sheena Gregson shows off the giant cookie the Barbican team devoured after a job well done.

Co-locating the two mid-cycle sessions provided a lot of insight to the teams on how to improve their collaborative efforts. The groups joined forces to discuss improvements on certificate management, complete a threat analysis on Barbican, and write related threat analysis documentation. Look for more from these two groups during the OpenStack Summit, to be held in Austin April 25-29.

What’s next?

In the upcoming weeks contributors from both OpenStack Security and Barbican will work on a cross-project initiative to document and blueprint the work needed to bring push-model BYOK to OpenStack clouds.

This effort will require participation from the greater OpenStack community, since it will involve changes across a handful of projects that are currently providing or will soon provide encryption services. The goal is to nail down the requirements ahead of the Austin summit, so they can present their findings to the larger community during the cross-project sessions at the Design Summit.

by Michael Xin at January 29, 2016 07:39 PM

January 28, 2016

Opensource.com

7 new OpenStack guides and tips

Learning how to deploy and maintain OpenStack can be difficult, even for seasoned IT professionals. The number of things you must keep up with seems to grow every day.

Fortunately, there are tons of resources out there to help you along the way, whether you are a beginner or a cloud guru. Between the official documentation, IRC channels, books, and a number of training options available to you, as well as the number of community-created OpenStack tutorials, help is never too far away.

by Jason Baker at January 28, 2016 08:03 AM

January 27, 2016

OpenStack Superuser

OpenStack in production: CERN's cloud in Kilo

Following on from previous upgrades, CERN migrated its OpenStack cloud to Kilo during September to November 2015. Along with the bug fixes, we are planning to exploit a significant number of [new features][1], especially those related to performance tuning. The overall cloud architecture was covered in a video at the Tokyo OpenStack summit.

As the Large Hadron Collider continues to run 24x7, these upgrades were done while the cloud was running and virtual machines were untouched. The staged approach was used again. While most of the steps went smoothly, a few problems were encountered.

  • Cinder - we encountered the bug https://bugs.launchpad.net/cinder/+bug/1455726, which led to a foreign key error. The cause appears to be related to UTF8. The patch (https://review.openstack.org/#/c/183814/) was not completed, so it did not get included in the release. More details in the thread at http://lists.openstack.org/pipermail/openstack/2015-August/013601.html .
  • Keystone - one of the configuration parameters for caches had changed syntax and this was not reflected in the configuration generated by Puppet. The symptoms were high load on the Keystone servers since caching was not enabled.
  • Glance - given the rolling upgrade on Glance, we took advantage of having virtualised the majority of the Glance server pool. This allows new resources to be brought online with a Juno configuration and the old ones deleted.
  • Nova - we upgraded the control plane services along with the QA compute nodes. With the versioned objects, we could stage the migration of the thousands of compute nodes so that we did not need to do all the updates at once. Puppet looked after the appropriate deployments of the RPMs.
    • Following the upgrade, we had an outage of the metadata service for the OpenStack-specific metadata. The EC2 metadata service worked fine. This is a cells-related issue and we'll create a bug/blueprint for the fix.
    • The VM resize functions are giving errors during the execution. We're tracking this with the upstream developers. https://bugs.launchpad.net/nova/+bug/1459758
      https://bugs.launchpad.net/nova/+bug/1446082
    • We wanted to use the latest Nova NUMA features. We encountered a problem with cells and this feature, although it worked well in a non-cells cloud. This is being tracked in https://bugs.launchpad.net/nova/+bug/1517006 . We will use the new features for performance optimisation once these problems are resolved.
    • The [dynamic][2] migration of flavors was only partially successful. With the cells database having the flavors data in two places, the migration needed to be done simultaneously. We resolved this by forcing the migration of the flavors to the new endpoint.
    • The handling of ephemeral drives in Kilo seems to be different from Juno. The option default_ephemeral_format now defaults to vfat, rather than ext3. The aim seems to have been to give vfat to Windows and ext4 to Linux, but our environment does not follow this. This was reported by [Nectar][3] but we could not find any migration advice in the Kilo release notes. We have set the default back to ext3 (see the sketch after this list) while we work out the migration implications.
    • We're also working through a scaling problem for our most dynamic cells at https://bugs.launchpad.net/nova/+bug/1524114 . Here all VMs are being queried by the scheduler, not just the active ones. Since we create/delete hundreds of VMs an hour, there are large volumes of deleted VMs which made one query take longer than expected.
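
For the ephemeral format issue above, pinning the pre-Kilo behaviour is a one-line nova.conf setting; a minimal sketch of what "set the default to ext3" means:

# /etc/nova/nova.conf -- restore the Juno-era ephemeral filesystem default
[DEFAULT]
default_ephemeral_format = ext3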

Catching these cases with cells early is part of the work in the scope of the Cells V2 project at https://wiki.openstack.org/wiki/Nova-Cells-v2 , to which we are contributing along with the BARC centre in Mumbai, so that the cells configuration becomes the default (with only a single cell) and the upstream test cases are enhanced to validate the multi-cell configuration.

As some of the hypervisors are still running Scientific Linux 6, we used the approach from GoDaddy to package the components using software collections. Details are available at https://github.com/krislindgren/openstack-venv-cent6 . We used this for nova and ceilometer which are the agents installed on the hypervisors. The controllers were upgraded to CentOS 7 as part of the upgrade to Kilo.

Overall, getting to Kilo enables new features and includes bug fixes that reduce administration effort. Keeping up with new releases requires careful planning and sharing upstream activities such as the Puppet modules, but it has proven to be the best approach. With many of the CERN OpenStack team at the summit in Tokyo, we did not complete the upgrade before Liberty was released, but it was completed soon afterwards.

With the Kilo base in production, we are now ready to start work on the nova-network to Neutron migration, deployment of the new [EC2 API][4] project and enabling [Magnum][5] for container-native applications.

Further reference: https://wiki.openstack.org/wiki/ReleaseNotes/Kilo
http://www.danplanet.com/blog/2015/10/06/upgrades-in-nova-objects/
https://support.rc.nectar.org.au/news/18-09-2015/vfat-filesystem-secondary-ephemeral-disk-mnt-devvdb
https://github.com/openstack/ec2-api
https://github.com/openstack/magnum

This post first appeared on CERN's OpenStack blog.

If you're using OpenStack at your research institution or university, check out the Austin Summit track dedicated to high-performance computing.

Superuser is always interested in how-tos and other contributions, please get in touch: editor@superuser.org

Cover Photo // CC BY NC

by Tim Bell at January 27, 2016 10:18 PM

Visa Inc. turns to OpenStack to boost developer productivity

To boost Visa Inc.’s efforts to reach “Everywhere you want to be,” the global payments technology giant is leveraging the cloud.

“What we’re trying to do with OpenStack is enable developer productivity,” says Stan Chan, chief architect of systems in Visa’s infrastructure, architecture and engineering group.

There are two main goals for OpenStack at Visa, he adds. The first is giving developers tools to build products and services that don’t require them to worry about underlying management of those applications once they go into a specific environment. The second is to provide a platform that is invisible to developers so they can focus on building value for the business.

“We needed to partner with someone engaged with the community and who has the expertise to build OpenStack at scale,” says Chan, whose current role focuses on infrastructure-as-a-service and platform-as-a-service. Visa chose HPE Helion and is using their distro.

“Ultimately, it's about finding the right balance,” says Chan. “It would be great if we actually built the in-house talent, but that requires time and effort to build that knowledge within our organization. We are taking those steps as we speak.”

Another important aspect, he says, is that “the DefCore OpenStack and the simple implementation of OpenStack doesn’t provide too much value in terms of giving developers the ability to manage their applications or basically any infrastructure. It’s all the added features, like the PaaS layer, solutions like Cloud Foundry and Docker and those different layers that you put on top of it, that provide the value for developers.”

When asked about lessons learned from adopting OpenStack, Chan highlighted the importance of clear focus - on customers and on creating a minimum viable product.

“Don’t boil the ocean! Get the simple aspects out of the way first and work with your customers to build what they need,” he says. “Treat it like any software development project: implement important concepts like DevOps, infrastructure-as-code and continuous integration and deployment (CI/CD) best practices. Then you can iterate over and over again to build a product that fits the needs of the customer.”

Chan shared his experience on a panel of OpenStack operations all-stars including Anant Kumar of PayPal, Edgar Magana of Workday and Joseph Sandoval of Lithium Technologies, Inc., moderated by Sumeet Singh, CEO of AppFormix.

You can catch insights from the 45-minute session, titled “Challenges in Planning, Building, and Operating Large Scale Infrastructure,” on the Foundation YouTube channel.


Cover Photo // CC BY NC

by Nicole Martinelli at January 27, 2016 10:18 PM

Takeaways from the OpenStack Mitaka Mid-cycle Security Meetup

A castle is the perfect place to talk about OpenStack security.

Some 30 people gathered in the Rackspace Castle in San Antonio — ok, it was previously a mall — for the recent Mitaka Mid-cycle Security Meetup. Experts from companies including Hewlett Packard Enterprise, Rackspace, VMware, IBM, Mirantis and Symantec participated.

Superuser talked to Rob Clark, lead security architect for Hewlett-Packard Cloud Service and project team lead (PTL) of the OpenStack Security Project, and Travis McPeak, security architect at Hewlett-Packard Helion, about Ansible, cleaning up Bandit’s pesky config file and Anchor, the “most stealthy” project they’re working on.

In all, there were 24 attendees from the OpenStack Security Project and six attendees from the Barbican Project, which deliberately overlapped with the Security project Mid-Cycle. Although there was no moat or drawbridge, host Rackspace offered the group a large space and kept the Texas barbecue and breakfast tacos coming over the course of the Meetup held January 12-15.

What makes the event unique, say Clark and McPeak, is this kind of cross-project collaboration. A large number of the people working on OpenStack Security who are cores or contributors on other key OpenStack projects link brains to solve common issues. “The fact that we all come together to share our cross-domain expertise makes this group particularly special,” they added.

Mitaka Mid-Cycle security participants gather in the Rackspace Castle.

Clark kicked off the unconference by inviting people to write topics of interest on Post-it notes and stick them up on the whiteboard. Participants then voted on which topics were most interesting and votes were tallied. Topics were allotted time by how much interest they earned and to ensure everyone had something they wanted to work on.  “The unconference format allows us to follow interesting threads more easily than a traditional time-boxed conference style,” McPeak adds.

Participants produced several action items on an Etherpad and have written a couple of blog posts so far, with more to come: https://etherpad.openstack.org/p/security-mitaka-midcycle They also walked away with cool shield stickers (in honor of the Castle?).

Here are some of the other Mid-Cycle takeaways from Clark and McPeak:

  • Bandit development sessions. We’re in the process of tweaking Bandit to remove a common source of user frustration: the config file.  We’re also cleaning up documentation, writing unit tests and ensuring stability ahead of our upcoming Bandit 1.0 release.

  • Security project outreach. The Security Project has grown steadily over the last few years.  We’ve got a great group of people participating now, but we’re always looking for new colleagues.  This session was aimed at refining our “security evangelism” presentation, creating a new blog location for security project posts and figuring out how to utilize resources like Superuser.

  • Threat Analysis. Threat Analysis is a critical part of the secure development lifecycle yet isn’t being done upstream.  This session focused on taking some of the best elements of the threat analysis that both HPE and Rackspace are already doing and standardizing them into a format that is easy to use for project teams.  For more information, please see this blog post.
  • OpenStack Security Ansible. The OpenStack-Ansible Security project is an Ansible role that automatically applies security enhancements to deployed systems.  This session covered the work that has already been done, came up with a few enhancements and generally educated the attendees about the project (a sketch of applying the role follows this list).

  • Anchor. This is probably the most stealthy of our technology projects. It’s a public key infrastructure (PKI) system that uses short-life certificates to achieve “passive revocation” - side stepping the pitfalls of certificate revocation list (CRL) distribution and online certificate status protocol (OCSP) availability that hinders most cloud scale PKI deployments. It’s the technology that we hope will enable OpenStack to deploy “TLS Everywhere” for secure internal communication. The mid-cycle focussed on the core technology and as a result other projects (namely openstack-ansible-security) are interested in leveraging it. There’s a draft blog post on Anchor here: https://openstack-security.github.io/tooling/2016/01/20/ephemeral-pki.html
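
As a rough idea of how such a role is applied, here is a minimal sketch; the playbook name, inventory and host group are assumptions, only the role name comes from the project:

# site.yml -- minimal playbook applying the hardening role (illustrative)
- hosts: all
  become: yes
  roles:
    - openstack-ansible-security

A dry run with ansible-playbook -i inventory site.yml --check previews the changes without applying them.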


How to get involved

The best way to get started? Drop in on one of the security team’s weekly IRC meetings, Thursdays at 17:00 UTC in #openstack-meeting-alt on Freenode. “Otherwise come introduce yourself anytime on #openstack-security,” McPeak says. “We’re always happy to meet new people!”

Cover Photo // CC BY NC

by Nicole Martinelli at January 27, 2016 04:58 PM

Beth Elwell

OpenStack, Devstack and Horizon

This post is designed to help you set up a basic OpenStack environment. Whether you are new to OpenStack, re-installing your environment from scratch or just wanting to play around with Devstack, this is a simple and fast way to get everything you need to run Horizon on Devstack from outside of your virtual machine and install a custom plugin on it.

It is recommended to install Devstack on a virtual machine: if it crashes you can throw the machine away without losing much, and when you reinstall you can be sure everything related to the old environment has been removed, so there are no unexpected clashes or errors.

With my particular setup, as a front-end developer, I prefer working on my Mac rather than directly on my virtual machine. Therefore in this walkthrough I will cover how to set up your virtual machine, install Devstack there, then access that Devstack from outside of your VM and set up Horizon locally on your Mac, or whatever system you choose to work on.

Setting up your virtual machine

For this setup I am using VMware Fusion, however you can also setup your DevStack virtual environment using Parallels or by using the free software VirtualBox.

1. Download the relevant version of Ubuntu for your operating system from http://www.ubuntu.com/download/desktop

2. Start up your virtual machine software and create a new machine.

3. Select Linux from the list of supported operating systems, followed by ‘install from image’, and select the Ubuntu image you downloaded in step 1.

4. Provide the name you want to give your VM and the location for your VM files, e.g. C:\VM\.

5. Under the settings for processors and memory, give the VM a minimum of 2 processor cores and set the memory to 4096 MB.

6. If you are using a new version of VMware you can simply set the VM to inherit the same network settings as the host computer; however, if you are using an older version or VirtualBox, set the network type to bridged networking.

7. Leave the other settings as default, press finish and let the Ubuntu installation run its course.

With the above settings you should have an Ubuntu VM ready for you to install Devstack.

Installing Devstack

1. Inside your working Ubuntu virtual machine, install git:

sudo apt-get install git

2. Clone the Devstack repository and cd into it:

git clone http://github.com/openstack-dev/devstack
cd devstack/

3. Execute the following command to create the sudo-enabled, passwordless user account called “stack” that is required to run Devstack:

sudo ./tools/create-stack-user.sh

4. Once this is done, change your identity to the newly created stack account:

sudo su - stack

5. Make a copy of the Devstack development code:

git clone http://github.com/openstack-dev/devstack
cd devstack/

6. Run the stack.sh file to start the build process:

./stack.sh

The script will take some time to run. It will ask you to enter passwords for the default services enabled in the script.
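
Optionally, you can pre-seed those passwords so the script runs unattended by creating a local.conf file at the root of the devstack repository before running stack.sh; a minimal sketch (the password value is a placeholder):

[[local|localrc]]
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
SERVICE_TOKEN=$ADMIN_PASSWORD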

7. After it is done it will return a URL for Horizon. Open that link in a browser inside your virtual machine and you should be able to access the OpenStack Dashboard. The IP in the URL is also the IP of the virtual machine, which you will use in the next section to set the OpenStack host IP. To confirm it, run ifconfig in the terminal, which will return your IP.

Installing Horizon outside of the virtual machine

1. Clone the Horizon git repository from OpenStack to your main working environment (i.e. outside of your virtual machine):

git clone https://git.openstack.org/openstack/horizon

2. Under horizon/openstack_dashboard/local, copy the file local_settings.py.example to local_settings.py:

cp openstack_dashboard/local/local_settings.py.example openstack_dashboard/local/local_settings.py

3. Open the local_settings.py file in your preferred text editor and change OPENSTACK_HOST to match the hostname of your OpenStack server, i.e. the IP of your Devstack virtual machine. This should be at about line 149 of the file.
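
For example, assuming the VM's IP from the previous section was 192.168.20.10 (a placeholder), the relevant lines end up looking like this; the OPENSTACK_KEYSTONE_URL line shown is the file's existing default and normally needs no change:

OPENSTACK_HOST = "192.168.20.10"
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v2.0" % OPENSTACK_HOST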

4. Run the Horizon startup tests. This will also create a virtual environment for you in your local Horizon repository under .venv:

./run_tests.sh

5. Boot up Horizon by running the following from the root of the Horizon repository:

./run_tests.sh --runserver

6. Log in to the Horizon dashboard using the username admin and the password you set during the stack.sh build of Devstack.

Installing the Ironic UI plugin

Intro to the Ironic UI: The Ironic UI is a Horizon plugin that will allow users to view and manage bare metal nodes, ports and drivers.

1. Clone the Ironic UI repository:

git clone https://git.openstack.org/openstack/ironic-ui

2. Change into your local Horizon repository and activate the venv. NOTE: this was created when Horizon was set up with ./run_tests.sh – do not reinstall it:

source .venv/bin/activate

3. Copy the enabled file from ironic-ui/enabled to horizon/openstack_dashboard/enabled
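
Something along these lines, run from the root of the Horizon repository and assuming both repositories sit side by side; the paths and wildcard are illustrative, and the enabled file's exact name comes from the ironic-ui repository:

# copy ironic-ui's enabled file into Horizon's enabled directory (adjust paths to your checkout)
cp ../ironic-ui/ironic_ui/enabled/_*_ironic*.py openstack_dashboard/enabled/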

4. Change into the ironic-ui repository and package the plugin:

pip install -e .

This will create a Python egg in the dist folder and install the plugin into Horizon’s Python virtual environment.

5. Change back into the horizon repository and bring up your environment:

./run_tests.sh --runserver

The Ironic Bare Metal Provisioning plugin should now be visible in the Horizon navigation.

To uninstall, use pip uninstall (find the name of the package by running pip list from inside the Horizon .venv). You will also need to remove the enabled file from the openstack_dashboard/enabled folder.

by Beth Elwell at January 27, 2016 03:59 PM

OpenStack Blog

Technical Committee Highlights January 22, 2016

Upstream development track – please submit

We’ll have an “Upstream development” track at the Austin Summit. It will happen on the Monday, before the Design Summit starts. This is a classic Summit conference track with recorded videos, so we want polished proposals for this track. We expect these to include general communication about development process changes, new features in a central project that need adoption in other projects, corner use cases that may need support from development, developer-oriented infra talks and upstream development best practices. To propose a talk for this track, use the Summit proposal system, select Upstream development, and meet the February 1st deadline.

Our mission

The OpenStack overall mission has stood proudly for years now, and is due for an update to increase focus on cloud consumers rather than solely on cloud builders. So we have proposed an amendment to the original mission. You can read the current and proposed new mission on governance.openstack.org.

And now, even more doc

In a clarification effort, we pushed the definition of the 4 opens, as well as clarified OpenStack licensing requirements as reference documents under the governance repository. Previously those were maintained in oral tradition, the wiki, or left as an exercise to the reader of the Foundation bylaws. You can now find them published (like all Technical Committee resolutions and reference information) on the governance website.

The names are here! The names are here!

The N and O releases directly after Mitaka will be Newton and Ocata. For the Austin Summit, the tie-in is to the “Newton House”, located at 1013 E. Ninth Street in Austin, Texas. It’s listed on the National Register of Historic Places. For the Barcelona Summit, know that Ocata is a beach about 20 minutes north of Barcelona by train.

Newton House (Austin, TX)

Clarifying licensing requirements

A new governance page clarifies guidelines for licensing for projects in and around OpenStack.  We want to raise awareness and highlight that page for future reference. In the subset of OpenStack projects that may be included in a DefCore trademark program, the project must be licensed under the Apache Software License v2 (ASLv2). Libraries and software built in the OpenStack infrastructure system should use OSI-approved licenses that do not restrict distribution of the consuming project. Read more on the governance website.

 

by Anne Gentle at January 27, 2016 05:38 AM

January 26, 2016

Rob Hirschfeld

Post-OpenStack DefCore, I’m Chasing “open infrastructure” via cross-platform Interop

Like my previous DefCore interop windmill tilting, this is not something that can be done alone. Open infrastructure is a collaborative effort and I’m looking for your help and support. I believe solving this problem benefits us as an industry and individually as IT professionals.

So, what is open infrastructure?   It’s not about running on open source software. It’s about creating platform choice and control. In my experience, that’s what defines open for users (and developers are not users).

I’ve spent several years helping lead OpenStack interoperability (aka DefCore) efforts to ensure that OpenStack cloud APIs are consistent between vendors. I strongly believe that effort is essential to build an ecosystem around the project; however, in talking to enterprise users, I’ve learned that their real interoperability gap spans the many platforms they use every day: AWS, Google, VMware, OpenStack and metal.

Instead of focusing inward to one platform, I believe the bigger enterprise need is to address automation across platforms. It is something I’m starting to call hybrid DevOps because it allows users to mix platforms, service APIs and tools.

Open infrastructure in that context is being able to work across platforms without being tied into one platform choice even when that platform is based on open source software. API duplication is not sufficient: the operational characteristics of each platform are different enough that we need a different abstraction approach.

We have to be able to compose automation in a way that tolerates substitution based on infrastructure characteristics. This is required for metal because of variation between hardware vendors and data center networking and services. It is equally essential for cloud because of variation between IaaS capabilities and service delivery models. Basically, those minor differences between clouds create significant challenges in interoperability at the operational level.

Rationalizing APIs does little to address these more structural differences.

The problem is compounded because the differences are not nicely segmented behind abstraction layers. If you work to build and sustain a fully integrated application, you must account for site-specific needs throughout your application stack, including networking, storage, access and security. I’ve described this as: all deployments have 80% of the work in common, but the remaining 20% is mixed in with the 80% instead of being nicely layered. So, ops is cookie dough, not vinaigrette.

Getting past this problem for initial provisioning on a single platform is a false victory. The real need is portable and upgrade-ready automation that can be reused and shared. Critically, we also need to build upon the existing foundations instead of requiring a blank slate. There is openness value in heterogeneous infrastructure so we need to embrace variation and design accordingly.

This is the vision the RackN team has been working towards with the open source Digital Rebar project. We are now able to showcase workload deployments (Docker, Kubernetes, Ceph, etc.) on multiple cloud platforms that also translate to full bare metal deployments. Unlike previous generations of this tooling (some will remember Crowbar), we’ve been careful to avoid injecting external dependencies into the DevOps scripts.

While we’re able to demonstrate a high degree of portability (or fidelity) across multiple platforms, this is just the beginning. We are looking for users and collaborators who want to build open infrastructure from an operational perspective.

You are invited to join us in making open cross-platform operations a reality.


by Rob H at January 26, 2016 04:54 PM

Aptira

OpenStack Australia Day – Sponsorship & Speaking Opportunities

We are currently in the process of organising OpenStack Australia Day 2016 – the first OpenStack conference down under! This event will be held on the 5th of May at The Menzies in Sydney.

Focusing on open source cloud technology, the conference aims to gather industry users, vendors and solution providers to showcase the latest technologies and share real-world experiences of the next wave of IT virtualisation. OpenStack Australia Day will showcase a range of sessions on the broader cloud and Software Defined Infrastructure ecosystem, including OpenStack, containers, PaaS and automation. The conference also features keynote presentations from industry-leading figures, workshops and a networking event for a less formal opportunity to engage with the community.

We have a range of sponsorship packages and speaking slots available. To become a speaker or for a copy of the sponsorship prospectus, check out the OpenStack Australia Day 2016 website: http://australiaday.openstack.org.au/. We look forward to having you involved in the first OpenStack conference down under!

The post OpenStack Australia Day – Sponsorship & Speaking Opportunities appeared first on Aptira OpenStack Services in Australia Asia Europe.

by Jessica Field at January 26, 2016 11:26 AM

OpenStack in Production

EPT, Huge Pages and Benchmarking


Having reported that EPT has a negative influence on the High Energy Physics standard benchmark HepSpec06, we have started deploying those settings across the CERN OpenStack cloud:
  • Setting the flag in /etc/modprobe.d/kvm_intel.conf to off (see the sketch after this list)
  • Waiting for the work on each guest to finish after stopping new VMs on the hypervisor
  • Changing the flag and reloading the module
  • Enabling new work for the hypervisor
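
Concretely, the flag change and module reload look something like this (a sketch for an Intel hypervisor; drain the guests first, as described above):

# /etc/modprobe.d/kvm_intel.conf
options kvm_intel ept=0

# reload the module and confirm the new setting
modprobe -r kvm_intel && modprobe kvm_intel
cat /sys/module/kvm_intel/parameters/ept    # should now report N
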
According to the HS06 tests, this should lead to a reasonable performance improvement. However, certain users reported significantly worse performance than before. In particular, some workloads showed significant differences in their before and after characteristics, as follows.

Before the change, the workload was primarily CPU bound, spending most of its time in user space. CERN applications have to process significant amounts of data, so it is not always possible to ensure 100% utilisation, but the aim is to provide the workload with user-space CPU.


When EPT was turned off, some hypervisors showed a very different performance profile: a major increase in non-user load and a reduction in throughput for the experiment workloads. However, this effect was not observed on the servers with AMD processors.


With tools such as perf, we were able to trace the time down to handling the TLB misses. Perf gives

78.75% [kernel] [k] _raw_spin_lock
6.76% [kernel] [k] set_spte
1.97% [kernel] [k] memcmp
0.58% [kernel] [k] vmx_vcpu_run
0.46% [kernel] [k] ksm_do_scan
0.44% [kernel] [k] vcpu_enter_guest

The process behind the _raw_spin_lock is qemu-kvm.

Using systemtap kernel backtraces, we see mostly page faults and spte_* commands (shadow page table updates).
Neither of these should be necessary if you have hardware support for address translation, aka EPT.

There may be specific application workloads for which the EPT setting was non-optimal. In the worst case, the performance was several times slower. EPT/NPT increases the cost of doing page table walks when the page is not cached in the TLB. This document shows how processors can speed up page walks: http://www.cs.rochester.edu/~sandhya/csc256/seminars/vm_yuxin_yanwei.pdf , and AMD includes a page walk cache in its processors, which speeds up the walking of pages as described in this paper: http://vglab.cse.iitd.ac.in/~sbansal/csl862-virt/readings/p26-bhargava.pdf

In other words, EPT slows down HS06 results when small pages are involved because the HS06 benchmarks miss the TLB a lot. NPT doesn't slow it down because AMD has a page walk cache to help speed up finding the pages when they are not in the TLB. EPT performs well again with large pages because they rarely result in a TLB miss. So, HS06 is probably representative of most of the job types, but there is a small share of jobs which are different and triggered the above-mentioned problem.

However, with EPT on we have a 6% overhead on the benchmark compared to previous runs, as mentioned in the previous blog post. To mitigate the EPT overhead, following the comments on that post, we looked into using dedicated huge pages. Our hypervisors run CentOS 7 and thus support both transparent huge pages and huge pages. Transparent huge pages perform a useful job under normal circumstances but are opportunistic in nature. They are also limited to 2MB and cannot use the 1GB maximum size.

We tried setting the default huge page to 1G using the Grub cmdline configuration.

$ cat /sys/kernel/mm/transparent_hugepage/enabled
[always] madvise never
$ cat /boot/grub2/grub.cfg | grep hugepage
linux16 /vmlinuz-3.10.0-229.11.1.el7.x86_64 root=UUID=7d5e2f2e-463a-4842-8e11-d6fac3568cf4 ro rd.md.uuid=3ff29900:0eab9bfa:ea2a674d:f8b33550 rd.md.uuid=5789f86e:02137e41:05147621:b634ff66 console=tty0 nodmraid crashkernel=auto crashkernel=auto rd.md.uuid=f6b88a6b:263fd352:c0c2d7e6:2fe442ac vconsole.font=latarcyrheb-sun16 vconsole.keymap=us LANG=en_US.UTF-8 default_hugepagesz=1G hugepagesz=1G hugepages=55 transparent_hugepage=never
$ cat /sys/module/kvm_intel/parameters/ept
Y

It may also be advisable to disable tuned for the moment until the bug #1189868 is resolved.

We also configured the XML manually to include the necessary huge pages. This will be available as a flavor or image option when we upgrade to Kilo in a few weeks.

  <memoryBacking>
        <hugepages>
          <page size="1" unit="G" nodeset="0-1"/>
        </hugepages>
  </memoryBacking>

The hypervisor was configured with huge pages enabled. However, we saw a problem with the distribution of huge pages across the NUMA nodes.

$ cat /sys/devices/system/node/node*/meminfo | fgrep Huge
Node 0 AnonHugePages: 311296 kB
Node 0 HugePages_Total: 29
Node 0 HugePages_Free: 0
Node 0 HugePages_Surp: 0
Node 1 AnonHugePages: 4096 kB
Node 1 HugePages_Total: 31
Node 1 HugePages_Free: 2
Node 1 HugePages_Surp: 0

This shows that the pages were not evenly distributed across the NUMA nodes, which would lead to subsequent performance issues. The suspicion is that the Linux boot-up sequence used some pages, making it difficult to find contiguous blocks of 1GB for the huge pages. This led us to deploy 2MB pages rather than 1GB for the moment; while this may not be the optimum setting, it allows better optimisation than the 4K setting and still gives KSM some potential benefit. These changes had a positive effect, as the reduction in system time in the monitoring below shows.




At the OpenStack summit in Tokyo, we'll be having a session on Hypervisor Tuning so people are welcome to bring their experiences along and share the various options. Details of the session will appear at https://etherpad.openstack.org/p/TYO-ops-meetup.

Contributions from Ulrich Schwickerath and Arne Wiebalck (CERN) and Sean Crosby (University of Melbourne) have been included in this article along with the help of the LHC experiments to validate the configuration.



by Tim Bell (noreply@blogger.com) at January 26, 2016 07:31 AM

Mirantis

OpenStack:Unlocked Podcast Ep 15: OPNFV’s Heather Kirksey


Hosts Nick Chase and John Jainschigg talk to Open Platform for Network Functions Virtualization (OPNFV) Director Heather Kirksey about what NFV is, why it matters to telcos, and why telcos aren’t the only organizations that can benefit from it. Like news like this? Sign up for the OpenStack:Unlocked newsletter.

The post OpenStack:Unlocked Podcast Ep 15: OPNFV’s Heather Kirksey appeared first on Mirantis | The #1 Pure Play OpenStack Company.

by Nick Chase at January 26, 2016 05:33 AM

OpenStack and NFV, as explained by the OpenStack Foundation


This week the OpenStack Foundation released a whitepaper explaining the platform’s relationship with Network Functions Virtualization, or NFV. NFV, a use case for Software Defined Networking, enables telcos and other organizations to replace specialized hardware with software-based Virtualized Network Functions (VNFs), creating a much more flexible system that enables more agility and control while reducing costs.

The paper explains what NFV is and why it’s important to the telecommunications industry, as well as explaining “Why OpenStack is ‘synonymous’ with NFV”. It also explains how OpenStack is the infrastructure behind the Open Platform for NFV (OPNFV), a Linux Foundation project meant to accelerate the development of NFV. OPNFV uses OpenStack as the foundation for NFV, contributing needed features back into the upstream project rather than creating a downstream project of its own. In other words, rather than creating an actual software artifact of its own, OPNFV concentrates on defining how existing projects such as OpenStack, OpenDaylight, and so on fit together, and filling any gaps that may prevent NFV from working properly.

NFV has been taking off in recent months, and is widely seen as the “killer app” for Software Defined Networking. Major telcos and service providers such as AT&T, Bloomberg LP, China Mobile, Deutsche Telekom, NTT Group, SK Telecom and Verizon have already put NFV into place using OpenStack, according to the report.

The OpenStack community has several projects that figure into NFV. Some, such as Tacker, a project that aims to create a VNF orchestrator, are specifically targeted at NFV environments. Others, such as Neutron and Astara, are general networking services. Still others, such as Congress (Policy as a Service), Mistral (Taskflow as a Service), and Senlin (a clustering service for homogeneous objects), are general OpenStack services. Even the OpenStack Compute service, Nova, has had multiple improvements aimed at NFV functionality, such as NUMA pinning.

Some pundits are interpreting the paper to mean that OpenStack is abandoning its cloud focus to work instead on being a virtualization platform for telcos, but that’s not the case. In fact, while the focus has been on telcos' use of NFV, applications for the enterprise are also starting to emerge even in this area. For example, Palo Alto Networks recently released the Palo Alto VM-Series virtual firewall, a VNF that provides extremely granular control over traffic between VMs within an OpenStack cluster.

Like news like this? Sign up for the OpenStack:Unlocked newsletter.


The post OpenStack and NFV, as explained by the OpenStack Foundation appeared first on Mirantis | The #1 Pure Play OpenStack Company.

by Nick Chase at January 26, 2016 04:37 AM

OpenStack community tackles skills shortage; Mirantis trains 5000 students


Ninety-six percent of companies agree that private clouds can bring the benefits of reduced costs and increased agility and innovation, but more than half have had trouble implementing OpenStack, held back by, among other things, a lack of trained professionals, according to a survey by Suse. Fortunately, that seems to be changing; Mirantis reports having trained more than 5000 students in 2015, and those numbers are growing throughout the industry. According to Suse, “as enterprises increasingly look to OpenStack for their private cloud investments, they are also wary of challenges and complications, including:
  • High degree of difficulty: Half of all enterprises that tried to implement an OpenStack cloud have failed, and 65 percent of companies report they have found the implementation experience difficult. In addition, nearly half (44 percent) plan to download and install OpenStack software themselves, potentially adding to the degree of difficulty.
  • Vendor lock-in constraints: 92 percent of respondents have concerns about vendor lock-in when it comes to choosing a private cloud infrastructure solution.
  • Skills shortage: 86 percent of respondents said the lack of skills in the market is making their companies reluctant to pursue private cloud. In addition, 78 percent of companies that have yet to adopt private cloud are deterred by the skills shortage.”
All of this does appear to be changing, however; the OpenStack community has made significant strides in the last few months in terms of compatibility between releases, and “pure play” distributions such as Mirantis OpenStack (disclosure: Mirantis sponsors OpenStack:Unlocked) do exist.

Moreover, the industry has been making headway on the skills shortage issue. Mirantis, one of the leading players in the OpenStack training space, reported that it trained 5000 students in 2015, doubling the number of students trained since 2012. The company also added two new courses and 15 new locations last year. In addition to training courses, the company offers two different OpenStack certifications.

According to the OpenStack Foundation, since the launch of the OpenStack marketplace in September 2013, training offerings have grown from 17 unique courses in eight cities to 119 courses in 99 cities. The OpenStack Foundation has launched its own certification exam, as has the Linux Foundation.

“Training is often a leading indicator of a technology’s impact. In 2015 OpenStack advanced beyond early adopters, and we saw an uptick in individuals and businesses scrambling to develop OpenStack skills,” said Mirantis Head of OpenStack Training Services Lee Xie. “Students choose Mirantis Training because our courses cover vanilla OpenStack, equipping them with true technical understanding of what it’s like to deploy and operate OpenStack in the real world.”

Like news like this? Sign up for the OpenStack:Unlocked newsletter.


The post OpenStack community tackles skills shortage; Mirantis trains 5000 students appeared first on Mirantis | The #1 Pure Play OpenStack Company.

by Nick Chase at January 26, 2016 04:10 AM

January 25, 2016

RDO

RDO blog roundup, 25 Jan 2016

Here's what RDO enthusiasts have been writing about in the last week:

Deploying an OpenStack undercloud/overcloud on a single server from my laptop with Ansible, by Harry Rybacki

During the summer of 2014 I worked on the OpenStack Keystone component while interning at Red Hat. Fast forward to the end of October 2015 and I once again find myself working on OpenStack for Red Hat — this time on the RDO Continuous Integration (CI) team. Since re-joining Red Hat I’ve developed a whole new level of respect not only for the wide breadth of knowledge required to work on this team but for deploying OpenStack in general.

… read more at http://tm3.org/4q

Ceilometer Polling Performance Improvement, by Julien Danjou

During the OpenStack summit of May 2015 in Vancouver, the OpenStack Telemetry community team ran a session for operators to provide feedback. One of the main issues operators relayed was the polling that Ceilometer was running on Nova to gather instance information. It had a highly negative impact on the Nova API CPU usage, as it retrieves all the information about instances on regular intervals.

… read more at http://tm3.org/4m

AIO RDO Liberty && several external networks VLAN provider setup by Boris Derzhavets

The post below addresses the question of when an AIO RDO Liberty node has to have external networks of VLAN type with predefined VLAN tags. A straightforward packstack --allinone install doesn't let you achieve the desired network configuration; an external network provider of VLAN type appears to be required. In this particular case, the office networks 10.10.10.0/24 (VLAN tag 157), 10.10.57.0/24 (VLAN tag 172) and 10.10.32.0/24 (VLAN tag 200) already exist when the RDO install is running. If demo_provision was "y", then delete router1 and the created external network of VXLAN type

… read more at http://tm3.org/4l

Caching in Horizon with Redis by Matthias Runge

Redis is an in-memory data structure store which can be used as a cache and session backend. I thought I'd give it a try for Horizon. Installation is quite simple: either pip install django-redis or dnf --enablerepo=rawhide install python-django-redis.

… read more at http://tm3.org/4n

Red Hat Cloud Infrastructure Cited as a Leader Among Private Cloud Software Suites by Independent Research Firm by Gordon Tillmore

The Forrester report states that Red Hat “leads the evaluation with its powerful portal, top governance capabilities, and a strategy built around integration, open source, and interoperability. Rather than trying to build a custom approach for completing functions around operations, governance, or automation, Red Hat provides a very composable package by leveraging a mix of market standards and open source in addition to its own development.”

… read more at http://tm3.org/4o

Disable "Resource Usage"-dashboard in Horizon by Matthias Runge

When using Horizon as Admin user, you probably saw the metering dashboard, also known as "Resource Usage". It internally uses Ceilometer; Ceilometer continuously collects data from configured data sources. In a cloud environment, this can quickly grow enormously. When someone visits the metering dashboard in Horizon, Ceilometer then will accumulate requested data on the fly.

… read more at http://tm3.org/4p

by Rich Bowen at January 25, 2016 08:05 PM

OpenStack Superuser

Why large data centers and telcos are fast-tracking network function virtualization with OpenStack

As global telecom providers ramp up their adoption of network functions virtualization (NFV) to increase network agility and contain costs, OpenStack leads as the NFV infrastructure platform of choice.

And this is just the starting line. The global market for NFV equipment, software and services will hit $11.6 billion in 2019, up from $2.3 billion in 2015, according to research firm IHS Infonetics.

A 21-page report from the OpenStack Foundation titled "Accelerating NFV Delivery with OpenStack," describes NFV, how OpenStack supports NFV and how global telecoms are hopping aboard this open source future of networking.

Along with insight from experts at Red Hat, Mirantis and the Linux Foundation, the report offers an inside look at how industry leaders including AT&T, Bloomberg, China Mobile, Deutsche Telekom and NTT Communications are thinking about NFV now or planning to implement it in 2016.

The report shows how big a shift NFV will be when it comes to how telecoms and large enterprise network operators create and manage networks in the near future. It’s a quantum leap from dedicated appliances and proprietary software to a more flexible, open and cost-effective way of network systems management. In NFV, virtual network functions (VNFs) run as software on virtual machines, containers or bare metal to assume the tasks of specific network devices.

The European Telecommunications Standards Institute (ETSI) and the Linux Foundation Open Platform for NFV (OPNFV) have defined specifications and released reference platforms for NFV that select OpenStack as the virtualization integration manager. Emerging OpenStack projects provide options for additional management and orchestration components.

The pace for NFV adoption has sped up as cloud computing and near ubiquitous network connectivity accelerate the use of mobile apps. Service providers and telecoms are striving to deliver the data and services demands of customers in the most agile, cost-effective ways possible. OpenStack for NFV continues its path to production this year, in line with development blueprints from telecom providers and the OPNFV community, as well as ETSI NFV requirements. An added plus: these stringent requirements and features of NFV help all OpenStack users -- especially in the areas of performance, resiliency and scaling geographically.

What's behind the big push with NFV? In a word: agility. The infrastructure and VNFs run on general purpose servers and switches and take advantage of open APIs. There are many operational and technical benefits that network operators expect from NFV implementation, including:

  • Network flexibility via programmatic provisioning
  • Taking advantage of the open source pace of innovation—ever-emerging improvements in both the telecom and the traditional IT space
  • Full choice of modular drivers and plug-ins
  • Accessibility via API, enabling faster time to market for new capabilities
  • Lower costs and better price/performance by replacing proprietary appliances with commercial off-the-shelf (COTS) hardware
  • Reduced power consumption and space utilization
  • Operational efficiency across datacenters via orchestration: managing thousands of devices from one console
  • Visibility: automated monitoring, troubleshooting and actions across physical and virtual networks and devices
  • Better performance through optimized network device utilization
  • Service-level agreement (SLA)-driven resource allocation (initial and ongoing)
  • Quality of service: performance, scalability, footprint, resilience, integration, manageability
  • Policy-driven redundancy
  • Application-level infrastructure support

The report also takes a look under the hood at companies making tracks with NFV. Verizon, for example, is turning to NFV as a way to build lower-cost network agility and flexibility without requiring the support staff demanded by proprietary network functions. “They are building a company-wide common OpenStack platform for running VNFs (Virtual Network Functions), as well as other internal applications. Production is around the corner,” reports co-author Kathy Cacciatore, the OpenStack Foundation's consulting marketing manager.

Here’s why Verizon chose OpenStack for its NFV effort:

  • It offers de facto implementation of a VIM (virtual infrastructure manager)
  • A critical mass of vendors are porting and developing applications (VNFs) targeted at OpenStack
  • Integrators have developed the necessary deployment expertise using OpenStack
  • OpenStack is a common environment that reduces vendor dependencies
  • OpenStack components are being tuned to the needs of carriers, essential to Verizon’s ongoing efforts
  • The ability to push fixes upstream so patches do not have to be retrofitted again and again, so they can focus on innovating.

The report also outlines how you can stay informed about NFV and get involved in contributing to NFV at OpenStack. You can download it here.

Cover Photo // CC BY NC

by Nicole Martinelli at January 25, 2016 05:12 PM

Mirantis

Mirantis OpenStack 7.0: NFVI Deployment Guide — Huge pages

The post Mirantis OpenStack 7.0: NFVI Deployment Guide — Huge pages appeared first on Mirantis | The #1 Pure Play OpenStack Company.

Memory addressing on contemporary computers is done in terms of blocks of contiguous virtual memory addresses known as pages. Historically, memory pages on x86 systems have had a fixed size of 4 kilobytes, but today this parameter is configurable to some degree: the x86_32 architecture, for example, supports 4Kb and 4Mb pages, while the x86_64 architecture supports pages 4Kb, 2Mb, and more recently, 1Gb, in size. Pages larger than the default size are referred to as “huge pages” or “large pages” (the terms are frequently capitalized). We’ll call them “huge pages” in this document.

Processes work with virtual memory addresses. Each time a process accesses memory, the kernel translates the desired virtual memory address to a physical one by looking at a special memory area called the page table, where virtual-to-physical mappings are stored. A hardware cache on the CPU, called the translation lookaside buffer (TLB), is used to speed up lookups. The TLB typically can store only a small fraction of physical-to-virtual page mappings. By increasing the memory page size we reduce the total number of pages that need to be addressed, thus increasing the TLB hit rate. This can lead to significant performance gains when a process does many memory operations. Also, the page table may require a significant amount of memory when it needs to store many references to small memory pages; in extreme cases, memory savings from using huge pages may amount to several gigabytes. (For example, see http://kevinclosson.net/2009/07/28/quantifying-hugepages-memory-savings-with-oracle-database-11g.)

On the other hand, when the page size is large but a process doesn’t use all the page memory, the unused memory is effectively lost, as it cannot be used by other processes. So there is usually a tradeoff between performance and more efficient memory utilization.

In the case of virtualization, a second level of page translation (between the hypervisor and host OS) causes additional overhead. Using huge pages on the host OS lets us greatly reduce this overhead. It’s preferable to give a virtual machine with NFV workloads exclusive access to a predetermined amount of memory. No other process can use that memory anyway, so there is no tradeoff in using huge pages. Huge pages are thus the natural option for NFV workloads.

For more information on page tables and the translation process, see https://en.wikipedia.org/wiki/Page_table
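
A quick way to inspect the huge page state on a Linux host (standard kernel interfaces, nothing MOS-specific):

grep Huge /proc/meminfo            # totals for explicit huge pages and THP usage
cat /proc/sys/vm/nr_hugepages      # number of default-size huge pages reserved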

General recommendations on using huge pages on OpenStack

There are two ways to use huge pages on Linux in general:
  • Explicit – an application is  enabled to use huge pages by changing its source code
  • Implicit – via automatic aggregation of default-sized pages to huge pages by the transparent huge pages (THP) mechanism in the kernel
THP is turned on by default in MOS 7.0, but explicit huge pages potentially provide more performance gains if an application supports them. Although we tend to think of the hypervisor as KVM, KVM is really just the kernel module; the actual hypervisor is QEMU. That means that QEMU performance is crucial for NFV. Fortunately, QEMU supports explicit usage of huge pages via the hugetlbfs library, so we don’t really need THP here. Moreover, THP can lead to side effects with unpredictable results, sometimes lowering performance instead of raising it. Also be aware that when the kernel needs to swap out a THP, the aggregate huge page is first split into standard 4k pages. Explicit huge pages are never swapped to disk, which is perfectly fine for typical NFV workloads.

In general, huge pages can be reserved at boot or at runtime (though 1GB huge pages can only be allocated at boot). Memory gets fragmented on a running system, and the kernel may not be able to reserve as many contiguous memory blocks at runtime as it can at boot.

For general NFV workloads we recommend using dedicated compute nodes with the major part of their memory reserved as explicit huge pages at boot time. NFV workload instances should be configured to use huge pages. We also recommend disabling THP on these compute nodes (see the sketch below).

As for preferred huge page sizes, the choice depends on the needs of specific workloads. Generally, 1Gb pages can be slightly faster, but 2Mb huge pages provide more granularity.

Huge pages and physical topology

All contemporary multiprocessor x86_64 systems have a non-uniform memory access (NUMA) architecture. NUMA-related settings will be described in the following sections of this guide, but there are some subtle characteristics of NUMA that affect huge page allocation on multi-CPU hosts, and you should be aware of them when configuring OpenStack.

As a rule, some amount of memory is reserved in the lower range of the memory address space. This memory is used for memory-mapped I/O, and it is usually reserved on the first NUMA cell — corresponding to the first CPU — before huge pages are allocated. When allocating huge pages, the kernel tries to spread them evenly across all NUMA cells; if there’s not enough contiguous memory in one of the cells, the kernel compensates by allocating more memory on the remaining cells. When the amount of memory used by huge pages is close to the total amount of free memory, you end up with an uneven huge page distribution across NUMA cells. This is more likely to happen when using 1Gb pages. Here is an example from a host with 64 gigabytes of memory and two CPUs:
      # grep "Memory.*reserved" /var/log/dmesg
      [    0.000000] Memory: 65843012K/67001792K available (7396K kernel code, 1146K rwdata, 3416K rodata, 1336K init, 1448K bss, 1158780K reserved)
We can see that the kernel reserves more than 1Gb of memory. Now, if we try to reserve 60 1Gb pages, the result will be:
     # grep . /sys/devices/system/node/node*/hugepages/hugepages*kB/nr_hugepages
     /sys/devices/system/node/node0/hugepages/hugepages-1048576kB/nr_hugepages:29
     /sys/devices/system/node/node1/hugepages/hugepages-1048576kB/nr_hugepages:31
This can have negative consequences. For example, suppose we use a VM flavor that requires 30Gb of memory in one NUMA cell (or 60Gb in two). One might think that the number of huge pages on this host is enough to run two instances with 30Gb of memory each, or one two-cell instance with 60Gb, but in reality only one 30Gb instance will start: the other one will be one 1Gb page short. A 60Gb two-cell instance will fail to start altogether with this distribution of huge pages, because Nova will look for a physical host with two NUMA cells of 30Gb each and fail to find one, since one of the cells has insufficient memory. You may want to use a BIOS option such as ‘Socket Interleave Below 4GB’ or similar, if your BIOS supports it, to avoid this situation: it maps the lower address space evenly between the NUMA cells, in effect splitting reserved memory between NUMA nodes. In conclusion, you should always verify the real allocation of huge pages and plan accordingly, based on the results.
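
To spot such uneven allocations before planning flavors, the per-node counters are the place to look; for example (the first command assumes the numactl package is installed):

      # Per-NUMA-node memory totals, including free memory
      numactl --hardware
      # Per-node huge page accounting maintained by the kernel
      grep -i huge /sys/devices/system/node/node*/meminfo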

Enabling huge pages on MOS 7.0

To enable huge pages, you need to configure every compute node where you plan to run instances that will use them. You also need to configure nova aggregates and flavors before launching huge pages-backed instances.

Compute hosts configuration

Below we provide an example of how to configure huge pages on one of the compute nodes. All the commands in this section should be run on the compute nodes that will handle huge pages workloads. We will only describe steps required for boot time configuration. For information on runtime huge pages allocation, please refer to kernel documentation (https://www.kernel.org/doc/Documentation/vm/hugetlbpage.txt).
  1. Check that your compute node supports huge pages:
    # grep -m1 "pse\|pdpe1gb" /proc/cpuinfo
    flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm ida arat epb xsaveopt pln pts dtherm tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid
    The pse and pdpe1gb flags in the output indicate that the hardware supports ‘standard’ (2 or 4 Megabytes, depending on hardware architecture) or 1Gb huge pages.
  2. Upgrade QEMU to 2.4 to use huge pages (see the Appendix A1 “Installing qemu 2.4”).
  3. Add huge pages allocation parameters to the list of kernel arguments in /etc/default/grub. Note that we are also disabling Transparent Huge Pages in the examples below, because we are using explicit huge pages and want to prevent swapping. Add the following to the end of /etc/default/grub:
    GRUB_CMDLINE_LINUX="$GRUB_CMDLINE_LINUX hugepagesz=<size of hugepages> hugepages=<number of hugepages> transparent_hugepage=never"
    Note that <size of hugepages> is either 2M or 1G. You can also use both sizes simultaneously:
    GRUB_CMDLINE_LINUX="$GRUB_CMDLINE_LINUX hugepagesz=2M hugepages=<number of 2M hugepages> hugepagesz=1G hugepages=<number of 1G hugepages> transparent_hugepage=never"
    In the following example we preallocate 30000 2Mb pages:
    GRUB_CMDLINE_LINUX="$GRUB_CMDLINE_LINUX hugepagesz=2M hugepages=30000 transparent_hugepage=never"
    Caution: be careful when deciding on the number of huge pages to reserve. You should leave enough memory for host OS processes (including memory for Ceph processes if your compute node shares the Ceph OSD role) or risk unpredictable results.
    Note: you can’t allocate different amounts of memory to each NUMA cell via kernel parameters. If you need to do so, you have to use the command line or startup scripts. Here is an example in which we allocate 10 1Gb pages on the first NUMA cell and 30 on the second one:
    echo 10 > /sys/devices/system/node/node0/hugepages/hugepages-1048576kB/nr_hugepages
    echo 30 > /sys/devices/system/node/node1/hugepages/hugepages-1048576kB/nr_hugepages
  4. Change the value of KVM_HUGEPAGES in /etc/default/qemu-kvm from 0 to 1 to make QEMU aware of huge pages:
    KVM_HUGEPAGES=1
  5. Update the bootloader and reboot for these parameters to take effect:
    # update-grub
    # reboot
  6. After rebooting, don’t forget to verify that the pages are reserved according to the settings specified:
    # grep Huge /proc/meminfo
    AnonHugePages:         0 kB
    HugePages_Total:   30000
    HugePages_Free:    30000
    HugePages_Rsvd:        0
    HugePages_Surp:        0
    Hugepagesize:       2048 kB
    # grep . /sys/devices/system/node/node*/hugepages/hugepages*kB/nr_hugepages
    /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages:15000
    /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages:15000
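
    Since we also passed transparent_hugepage=never, it is worth confirming at the same time that THP stayed off after the reboot (the bracketed value is the active mode; expect [never]):
    # cat /sys/kernel/mm/transparent_hugepage/enabled
    always madvise [never]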

Nova configuration

To use huge pages, you need to launch instances whose flavor has the extra specification hw:mem_page_size. By default, nothing prevents normal instances, with flavors that lack this extra spec, from starting on compute nodes with reserved huge pages. To avoid this situation, you’ll need to create nova aggregates for compute nodes with and without huge pages, create a new flavor for huge pages-enabled instances, update all the other flavors with a matching extra spec, and reconfigure the nova scheduler service to check extra specs when scheduling instances. Follow the steps below:
  1. From the command line, create aggregates for compute nodes with and without huge pages:
    # nova aggregate-create hpgs-aggr
    # nova aggregate-set-metadata hpgs-aggr hpgs=true
    # nova aggregate-create normal-aggr
    # nova aggregate-set-metadata normal-aggr hpgs=false
  2. Add one or more hosts to them:
    # nova aggregate-add-host hpgs-aggr node-9.domain.tld
    # nova aggregate-add-host normal-aggr node-10.domain.tld
  3. Create a new flavor for instances with huge pages:
    # nova flavor-create m1.small.hpgs auto 2000 20 2
    # nova flavor-key m1.small.hpgs set hw:mem_page_size=2048
    # nova flavor-key m1.small.hpgs set aggregate_instance_extra_specs:hpgs=true
  4. Update all other flavors so they will start only on hosts without huge pages support:
    # openstack flavor list -f csv|grep -v hpgs|cut -f1 -d,| tail -n +2| \
    xargs -I% -n 1 nova flavor-key % \
    set aggregate_instance_extra_specs:hpgs=false
  5. On every controller, add the value AggregateInstanceExtraSpecsFilter to the scheduler_default_filters parameter in /etc/nova/nova.conf:
    scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,DiskFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter,AggregateInstanceExtraSpecsFilter
  6. Restart nova scheduler service on all controllers:
    # restart nova-scheduler
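
Before booting instances, it can help to double-check the aggregate metadata and flavor extra specs; a quick sanity check with the same nova CLI used above:

    # nova aggregate-details hpgs-aggr
    # nova flavor-show m1.small.hpgs

The aggregate should list its hosts and the hpgs=true metadata, and the flavor’s extra_specs field should contain both hw:mem_page_size and aggregate_instance_extra_specs:hpgs.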

Using huge pages on MOS 7.0

Now that OpenStack is configured for huge pages, you’re ready to use it as follows:
  1. Create an instance with the huge pages flavor:
    # nova boot --image TestVM --nic net-id=`openstack network show net04 -f value | head -n1` --flavor m1.small.hpgs hpgs-test
  2. Verify that the instance has been successfully created:
    # nova list --name hpgs-test
    +--------------------------------------+-----------+--------+------------+-------------+----------------------+
    | ID                                   | Name      | Status | Task State | Power State | Networks             |
    +--------------------------------------+-----------+--------+------------+-------------+----------------------+
    | 593d461e-3ef2-46cc-a88d-5f147eb2a14e | hpgs-test | ACTIVE | -          | Running     | net04=192.168.111.15 |
    +--------------------------------------+-----------+--------+------------+-------------+----------------------+
    If the status is ‘ERROR’, check the log files for lines containing this instance ID. The easiest way to do that is to run the following command on the Fuel Master node:
    # grep -Ri <Instance ID> /var/log/docker-logs/remote/node-*
    If you encounter the error:
    libvirtError: internal error: process exited while connecting to monitor: os_mem_prealloc: failed to preallocate pages
    … it means there is not enough free memory available inside one NUMA cell to satisfy instance requirements. Check that the VM’s NUMA topology fits inside the host’s. This error:
    libvirtError: unsupported configuration: Per-node memory binding is not supported with this QEMU
    … means that you are using QEMU 2.0 packages. You need to upgrade QEMU to 2.4, see Appendix A1 for instructions on how to upgrade QEMU packages.
  3. Verify that the instance uses huge pages (all commands below should be run from a controller). Locate the part of the instance configuration that is relevant to huge pages:
    # hypervisor=`nova show hpgs-test | grep OS-EXT-SRV-ATTR:host | cut -d\| -f3`
    # instance=`nova show hpgs-test | grep OS-EXT-SRV-ATTR:instance_name | cut -d\| -f3`
    # ssh $hypervisor virsh dumpxml $instance |awk '/memoryBacking/ {p=1}; p; /\/numatune/ {p=0}'
    <memoryBacking>
      <hugepages>
        <page size='2048' unit='KiB' nodeset='0'/>
      </hugepages>
    </memoryBacking>
    <vcpu placement='static'>2</vcpu>
    <cputune>
      <shares>2048</shares>
      <vcpupin vcpu='0' cpuset='0-5,12-17'/>
      <vcpupin vcpu='1' cpuset='0-5,12-17'/>
      <emulatorpin cpuset='0-5,12-17'/>
    </cputune>
    <numatune>
      <memory mode='strict' nodeset='0'/>
      <memnode cellid='0' mode='strict' nodeset='0'/>
    </numatune>
    The ‘memoryBacking’ section should show that this instance’s memory is backed by huge pages. You may also see that the ‘cputune’ section reveals so-called ‘pinning’ of this instance’s vCPUs. This means the instance will only run on physical CPU cores that have direct access to this instance’s memory and comes as a bonus from hypervisor awareness of the host physical topology. We will discuss instance CPU pinning in the next section. You may also look at the QEMU process arguments and make sure they contain relevant options, such as:
    # ssh $hypervisor pgrep -af $instance | grep -Po "memory[^\s]+"
    memory-backend-file,prealloc=yes,mem-path=/run/hugepages/kvm/libvirt/qemu,size=2000M,id=ram-node0,host-nodes=0,policy=bind
    … or directly examine the kernel huge pages stats:
    # ssh $hypervisor "grep huge /proc/\`pgrep -of $instance\`/numa_maps”
    2aaaaac00000 bind:0 file=/run/hugepages/kvm/libvirt/qemu/qemu_back_mem._objects_ram-node0.VveFxP\040(deleted) huge anon=1000 dirty=1000 N0=1000
    We can see that the instance uses 1000 huge pages (since this flavor’s memory is 2000Mb and we are using 2048Kb huge pages). Note: it is possible to use more than one NUMA host cell for a single instance with the flavor key hw:numa_nodes, but be aware that multi-cell instances may show worse performance than single-cell instances when the processes inside them are not aware of their NUMA topology. See more on this subject in the section about NUMA CPU pinning.
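
For illustration only, a hypothetical two-cell flavor could be defined along these lines (the name and sizes are made up; hw:numa_nodes is the flavor key mentioned above):

    # nova flavor-create m1.large.hpgs auto 4096 40 4
    # nova flavor-key m1.large.hpgs set hw:mem_page_size=2048
    # nova flavor-key m1.large.hpgs set hw:numa_nodes=2
    # nova flavor-key m1.large.hpgs set aggregate_instance_extra_specs:hpgs=true
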
Some useful commands
Here are some commands for obtaining huge pages-related diagnostics.
  • To obtain information about the hardware Translation Lookaside Buffer (run ‘apt-get install cpuid’ beforehand):
      # cpuid -1 | awk '/^   \w/ { p=0 } /TLB information/ { p=1; } p;'
           cache and TLB information (2):
           0x63: data TLB: 1G pages, 4-way, 4 entries
           0x03: data TLB: 4K pages, 4-way, 64 entries
           0x76: instruction TLB: 2M/4M pages, fully, 8 entries
           0xff: cache data is in CPUID 4
           0xb5: instruction TLB: 4K, 8-way, 64 entries
           0xf0: 64 byte prefetching
           0xc1: L2 TLB: 4K/2M pages, 8-way, 1024 entries
  • To show how much memory is used for Page Tables:
     # grep PageTables /proc/meminfo

     PageTables:      1244880 kB
  • To show current huge pages statistics:
     # grep Huge /proc/meminfo
             AnonHugePages:    606208 kB
             HugePages_Total:   15000
             HugePages_Free:    15000
             HugePages_Rsvd:        0
             HugePages_Surp:        0
             Hugepagesize:       2048 kB
  • To show huge pages distribution between NUMA nodes:
     # grep . /sys/devices/system/node/node*/hugepages/hugepages*kB/nr_hugepages
     /sys/devices/system/node/node0/hugepages/hugepages-1048576kB/nr_hugepages:29
     /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages:15845
     /sys/devices/system/node/node1/hugepages/hugepages-1048576kB/nr_hugepages:31
     /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages:15899

Learn more about deploying NFV—download our Mirantis OpenStack 7.0 NFVI Deployment Guide.

The post Mirantis OpenStack 7.0: NFVI Deployment Guide — Huge pages appeared first on Mirantis | The #1 Pure Play OpenStack Company.

by Guest Post at January 25, 2016 02:12 PM

Opensource.com

New Outreachy interns, NFV deployment growth, and more OpenStack news

Interested in keeping track of what is happening in the open source cloud? Opensource.com is your source for news in OpenStack, the open source cloud infrastructure project.

by Jason Baker at January 25, 2016 07:59 AM

Hugh Blemings

Lwood-20160124

Introduction

Welcome to Last week on OpenStack Dev (“Lwood”) for the week ending 24 January 2016. For more background on Lwood, please refer here.

Basic Stats for week 18 to 24 January 2016:

  • ~502 Messages (down about 16% relative to last week)
  • ~183 Unique threads (down a bit under 7% relative to last week)

List traffic bouncing around the average level again now :)

Notable Discussions

All hail Newton and Ocata!

Monty Taylor announced the names of the next two release cycles.  The N release – Newton – is named after “Newton House”, a historic site in Austin, Texas.  The O release – Ocata – is the name of a beach (and, apparently, a bar) a 20-minute train ride from Barcelona, Spain.  If you read this, are going to be in Barcelona, and are the first to email me, I’ll buy you a drink in Ocata :)

Stabilisation cycles – a call to move a good idea forward

Flavio Percoco kicked off a long but worthwhile thread on the idea of having stabilisation cycles where projects can focus on stabilising the code base and consolidating things versus so much emphasis on new features.

It’s a cogent and positive proposal as is the discussion that follows – it speaks usefully into the matter of paying down technical debt as well as recognising that a cycle without new features is not without its difficulties for some.

New Ceilometer Plugin

collectd-ceilometer-plugin, announced here by Emma Foley, is a plugin that (as the name suggests) makes system statistics gathered by collectd available to Ceilometer.  Definitely useful stuff :)

Progress on Tap as a Service (TaaS)

As a follow up to the demonstration of TaaS in Vancouver, Reedip Banerjee sent a brief update on the work being done by the team and invited feedback on the latest cut of the specification.

Release Countdown for week R-10

Doug Hellmann posted a reminder that teams should be focussing on wrapping up new feature work and stabilising recent additions to the codebase now the second milestone is behind us.  Please take a moment to read his post for the full details.

Recording release information for Independent/Un-Managed Projects

Also from Doug Hellmann – a note that independent and unmanaged projects should be providing information about releases so that the releases repo can make it available for reference.

He notes in a later post that there are plans to automate this in the future based on tagging, but for now it’s a manual process.

Demo of Vitrage – a Root Cause Analysis engine for OpenStack

Vitrage was announced at the Mitaka Summit, and a later post to the list provided some more information on the project (mentioned also in Lwood-20151122).

Last week Alexey Weyl posted an update to the list announcing the project’s first demo – an encouraging looking few minutes of YouTube footage of Vitrage showing topology information based on information from Nova.

Update on Cross-Project Specs and your project

Following on from his post of last week, Mike Perez notes with thanks to all involved that the Project Team Guide has now been approved.  As he goes on to say “…please coordinate with your team to sign up…”

Tip: parsing json in logs and elsewhere

Matt Riedemann’s post kicked off a brief but helpful discourse on how to make otherwise fairly impenetrable chunks of linenoise json more human readable.  It’s a brief thread, but if you’ve ever had to decipher logs with chunks of json in them, it might just be your friend – and an opportunity to learn a neat tip from others…

Upcoming OpenStack Events

A summary of OpenStack related events that cropped up on the mailing list this past week.  Don’t forget the OpenStack Foundation’s excellent Events Page for a comprehensive list!

Midcycles

People and Projects

Further Reading & Miscellanea

Don’t forget these excellent sources of OpenStack news

This edition of Lwood brought to you by Steve Winwood (Chronicles), Sting (Ten Summoner’s Tales), Thin Lizzy (Lizzy Killers, Thunder and Lightning) amongst other tunes.

by hugh at January 25, 2016 06:49 AM

January 24, 2016

OpenStack Superuser

High-performance computing lands leading role on the OpenStack stage

A new track at the upcoming Austin Summit will explore where the cutting edge of the cloud meets the cutting edge of science.

The high-performance computing (HPC) research track will focus on how universities, labs, research institutions and the like are increasingly using OpenStack for everything from supercomputing (HPC) and high-throughput computing (HTC) to research applications (like Matlab, RStudio, iPython) and much more.  OpenStack users from around the world will discuss reference architectures, best practices and case studies for HPC.

The pioneering track has three co-chairs: Guido Aben, director of eResearch at AARNET; Bernard Meade, HPC/Cloud compute manager at the University of Melbourne; and Stig Telfer of Cambridge University. Telfer hopes that the track will be “crammed with interesting talks.” Gentle reminder: you have until February 1 to get your proposals for talks ready for Austin.

Superuser talked to Telfer about why HPC no longer plays a "supporting actor role" in the OpenStack community and how you can get involved.

HPC has had ties to OpenStack more or less since the beginning; what's the impetus for a Summit track on it now?

There has indeed been a long-standing and successful use case of cloud compute for scientific applications and HPC is a demanding subset of that.  HPC facilities have tended to use OpenStack private clouds in a supporting actor role to a tightly-coupled supercomputer, as a high-throughput task farm performing post-processing analytics feeding on the data generated by the super.

What has changed for me is the tantalizing promise of the software-defined supercomputer: a system with the performance levels of HPC but the easy flexibility of cloud.  With every release, OpenStack is getting closer to delivering this.

Conversely, what has changed very little has been the pace of development in HPC system management.  Administrators in the HPC market space have long looked upon the innovations in flexible infrastructure management coming from the cloud space and demanded equivalent capabilities of HPC vendors.  Admins are also frustrated by the lack of consistent interfaces for management of HPC systems from different vendors - sometimes even different products from the same vendor.  At the same time, HPC users do not want to sacrifice performance for these new capabilities.

That’s a demanding set of requirements but now we are at a point where OpenStack appears able to meet those requirements.  How exciting is that?

What’s the biggest challenge operators in this area facing now?

I think it has to be the skill set.  Research institutions have deep expertise in HPC cluster management, but the OpenStack skill set is so different those institutions can find it a challenge to manage their new private cloud infrastructure.  We would love to help with the development of those skills through creation of a self-reinforcing OpenStack/HPC community.

The second challenge I can see is how to get effective performance from OpenStack.  Some of the HPC use cases are extremely demanding in areas where virtualization is typically weak: latency-driven patterns of communication and IO in particular appear to be the hypervisor’s Achilles heel.  Bare metal and containers are emerging capabilities for OpenStack which address this overhead - how close to the cutting edge can we get while still maintaining production service-level agreements?

Who should get involved?

Anyone who provides infrastructure for HPC and has an interest in doing it through OpenStack! Why shouldn’t we benefit from sharing?  Unlike telcos for example, HPC institutions are not typically commercial rivals.  We all have the same motivation of providing effective infrastructure but we are not competing against one another by doing so.

How can they get involved?

It’s early days as yet, but there is an [hpc] topic tag already assigned for the openstack-operators mailing list and my assumption is that when we have things to discuss, that will be the forum to use.

HPC doesn’t cover all the use cases of academia, and I expect there will be other ways people will find to interact.  I would expect the whole thing to fit under the openstack-operators umbrella though.

Beyond that, I think we would benefit from regular social gatherings at the Summits and I’m hoping the Foundation’s new Scientific Working Group will be a focus for organizing that. Discussions for the Working Group are held on the user-committee@lists.openstack.org mailing list with the subject-line tag [scientific-wg].

What contributions or involvement do you need most right now from the community?

We know there are problems to solve and we know there’s interest in solving them. What I would love to see now would be for OpenStack-using scientists and researchers from across the world to get together and compare notes in a way that enables useful information sharing.  I think what we need most right now is outreach and then we need to prove the benefits of doing so.

Cover Photo // CC BY NC

by Nicole Martinelli at January 24, 2016 09:33 PM

January 23, 2016

OpenStack Superuser

Setting up OpenStack Trove to use secured sockets layer: Enabling and troubleshooting

Trove, the OpenStack database-as-a-service, provides database provisioning and life cycle management capabilities for a number of databases. It exposes a single RESTful API that can be used to access all the capabilities provided by the service; a command line interface and the Horizon web interface interact with Trove via this REST API. Trove allows interactions on this REST API to be secured using the secured sockets layer (SSL). The previous two parts describe SSL: what it is, how it works, how one interacts with the Trove REST API, and how the OpenStack Keystone service catalog is used in discovering service endpoints. This final (third) part builds on the earlier two and describes how to enable SSL in Trove, along with some useful troubleshooting tips.

Enabling SSL with Trove

To enable SSL with Trove, we need to do several things. These are described in detail below.

Get a keypair and a certificate for the Trove Controller

The Trove Controller machine needs a keypair and a certificate. You can either get these from a trusted vendor (a Certificate Authority, or CA) or you can create your own private CA and issue them yourself. While a private CA is sufficient for development and testing, if you wish to operate the service in production with clients on the internet, you will likely want a certificate and keypair issued by a trusted public CA.

Below we describe the steps involved in creating your own keypair and certificate using a private CA, following the instructions found at https://help.ubuntu.com/lts/serverguide/certificates-and-security.html. Using the steps described there, we generate the server key (server.key), the server certificate (server.crt) and the CA certificate (cacert.crt) and copy them into /etc/trove along with the Trove configuration files. We can then add the following lines to trove.conf:

[ssl]
cert_file=/etc/trove/server.crt
key_file=/etc/trove/server.key
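
For a quick test environment, the files referenced above can be produced with openssl along these lines (a minimal self-signed sketch that skips the private CA from the Ubuntu guide; with a self-signed server certificate, server.crt itself plays the role of the cacert.crt handed to clients):

# Generate a 2048-bit private key and a certificate signing request
openssl genrsa -out server.key 2048
openssl req -new -key server.key -out server.csr
# Self-sign the certificate, valid for one year
openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt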

The client needs access to the CA certificate, as it uses this to validate the public key and certificate that the server will send to it.

Update the service catalog to indicate that Trove offers SSL

First, recreate the database service:

amrith@trove-controller:~$ openstack service list
+----------------------------------+-------------+----------------+
| ID                               | Name        | Type           |
+----------------------------------+-------------+----------------+
| 06a410d51a26492baa0b40a73b4feb1f | glance      | image          |
| 2ae1579a4aae41a6b294cd7fffaa16cc | cinder      | volume         |
| 3f96de164d07400ba16af8eaea76d470 | cinderv2    | volumev2       |
| 4c4b209eab1140e7a968fdc25aab6dee | swift       | object-store   |
| 5543cb00401d45c69fca7265a862c83e | nova_legacy | compute_legacy |
| 63fb77352fea42329ba47063e8a91f34 | ec2         | ec2            |
| 6417b68b2dc64c179088a7ae806cef88 | trove       | database       |
| 6db4050bd9d0487f9ae453d7bafb7011 | keystone    | identity       |
| 8ecd1cd5bd924afa85e6964852cd6ed9 | heat-cfn    | cloudformation |
| 9338774d87534473a3ae8b883e180696 | nova        | compute        |
| a05c11b57b614e8bb89f66632fd52e32 | heat        | orchestration  |
+----------------------------------+-------------+----------------+
amrith@trove-controller:~$ openstack service delete 6417b68b2dc64c179088a7ae806cef88

amrith@trove-controller:~$ openstack service create --name trove --description 'Database as a Service with SSL' database
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | Database as a Service with SSL   |
| enabled     | True                             |
| id          | 70ed0eb9558e4a73b6c884e5660be26a |
| name        | trove                            |
| type        | database                         |
+-------------+----------------------------------+

Register the endpoint for the service

amrith@trove-controller:~$ openstack endpoint create \
> --publicurl 'https://192.168.115.20:8779/v1.0/$(tenant_id)s' \
> --adminurl 'https://192.168.115.20:8779/v1.0/$(tenant_id)s' \
> --internalurl 'https://192.168.115.20:8779/v1.0/$(tenant_id)s' \
> --region RegionOne database
+--------------+------------------------------------------------+
| Field        | Value                                          |
+--------------+------------------------------------------------+
| adminurl     | https://192.168.115.20:8779/v1.0/$(tenant_id)s |
| id           | d87532f2d036451eae2be193a945fef5               |
| internalurl  | https://192.168.115.20:8779/v1.0/$(tenant_id)s |
| publicurl    | https://192.168.115.20:8779/v1.0/$(tenant_id)s |
| region       | RegionOne                                      |
| service_id   | 70ed0eb9558e4a73b6c884e5660be26a               |
| service_name | trove                                          |
| service_type | database                                       |
+--------------+------------------------------------------------+

Finally, restart the Trove API service

How exactly you do this depends on how you installed OpenStack on your trove-controller node. If it is a development machine that was installed using devstack, reconnect to the devstack screen session and restart the Trove API process. If you use the Tesora distribution, restart the trove-api service (on Ubuntu) or the openstack-trove-api service (on CentOS or RHEL).
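
For example, on an Ubuntu system this would be something like (a sketch; the exact service name depends on your packaging):

# Restart the Trove API service so it picks up the new [ssl] settings
sudo service trove-api restart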

Retry your trove CLI command

You can now retry your trove CLI command by providing the CA Certificate to the CLI as shown below.

amrith@trove-client:~$ trove --os-auth-url http://192.168.115.20:5000/v2.0 \
> --os-tenant-name admin \
> --os-username admin \
> --os-password 3de4922d8b6ac5a1aad9 \
> --os-cacert ./cacert.crt list
+----+------+-----------+-------------------+--------+-----------+------+
| ID | Name | Datastore | Datastore Version | Status | Flavor ID | Size |
+----+------+-----------+-------------------+--------+-----------+------+
+----+------+-----------+-------------------+--------+-----------+------+

Why do you need to provide the CA Certificate to the CLI?

Recall that SSL uses certificates to establish identity. In the example above, the CLI will attempt to establish a secure connection with the server at https://192.168.115.20:8779. Note that this is the endpoint provided in the earlier ‘endpoint create’ command. We are illustrating how to configure Trove to use SSL with a self-signed certificate. Therefore, when the server provides the client with its certificate to prove that it is, in fact, the true server “192.168.115.20”, this certificate would have been issued by the CA that we set up. Providing the CA certificate to the client enables the client to validate the server’s certificate. If we had instead purchased a certificate from a well-known third-party CA, and the client machine already had that CA’s certificate installed, it would be able to validate the server certificate against it.
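
When in doubt, the handshake can also be checked outside the trove CLI with openssl’s built-in client (a quick diagnostic sketch; look for “Verify return code: 0 (ok)” in the output):

# Verify the server certificate at the Trove endpoint against our CA certificate
openssl s_client -connect 192.168.115.20:8779 -CAfile ./cacert.crt < /dev/null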

Some errors that you may encounter

We now describe some errors that you may encounter when configuring SSL.

Providing the wrong command line option on the CLI

We have shown above how to configure server-side SSL. For this, client certificates are not validated. Therefore, if you were to accidentally pass the CA certificate to the trove CLI with --os-cert (instead of the correct --os-cacert option) you would get an error like this:

amrith@trove-client:~$ trove --os-auth-url http://192.168.115.20:5000/v2.0 --os-tenant-name admin --os-username admin --os-password 3de4922d8b6ac5a1aad9 --os-cert ./cacert.crt list
/usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/util/ssl_.py:100: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.
  InsecurePlatformWarning
ERROR: SSL exception connecting to https://192.168.115.20:8779/v1.0/b0aea5682aea4074b6f215931eaebf56/instances: [Errno 1] _ssl.c:510: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed

Incorrectly specifying client certificate validation

The Trove configuration file has three SSL settings; we have illustrated how to configure server-side SSL, which requires only two of them:

[ssl]
cert_file=/etc/trove/server.crt
key_file=/etc/trove/server.key

If you were to also provide the ca_file setting in trove.conf, you would be instructing the Trove API to validate client certificates, and you would get a failure like this:

SSLError: SSL exception connecting to https://192.168.115.20:8779/v1.0/b0aea5682aea4074b6f215931eaebf56/instances: [Errno 1] _ssl.c:510: error:14094410:SSL routines:SSL3_READ_BYTES:sslv3 alert handshake failure
ERROR: SSL exception connecting to https://192.168.115.20:8779/v1.0/b0aea5682aea4074b6f215931eaebf56/instances: [Errno 1] _ssl.c:510: error:14094410:SSL routines:SSL3_READ_BYTES:sslv3 alert handshake failure
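
In other words, ca_file only belongs in a configuration that is meant to validate client certificates, as described in the next section. For reference, such a configuration would look like this sketch (the cacert.crt path is an assumption):

[ssl]
cert_file=/etc/trove/server.crt
key_file=/etc/trove/server.key
ca_file=/etc/trove/cacert.crt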

Enabling client side certificates

To enable client certificates (which allow the server to validate the client), you must install a certificate on the client machine and provide it to the trove command with the --os-cert command line option. If the certificate is from a trusted CA, then you need not provide anything further. However, if you have a private CA, you must provide the CA certificate to the Trove API service via the ca_file option in the [ssl] section, as sketched above.

Examining the CLI to server interactions with SSL enabled

Finally, we conclude this write-up by looking at the interactions between the CLI and the controller when SSL is enabled for Trove. We illustrate this with the trove flavor-list command.

amrith@trove-client:~$ trove --debug --os-auth-url http://192.168.115.20:5000/v2.0 --os-tenant-name admin --os-username admin --os-password 3de4922d8b6ac5a1aad9 --os-cacert ./cacert.crt flavor-list
[…]
INFO (connectionpool:207) Starting new HTTP connection (1): 192.168.115.20
[…]

DEBUG (v2:86) Making authentication request to http://192.168.115.20:5000/v2.0/tokens
[…]

INFO (connectionpool:756) Starting new HTTPS connection (1): 192.168.115.20
[…]

+-----+----------------+-------+
|  ID | Name           |   RAM |
+-----+----------------+-------+
|   1 | m1.tiny        |   512 |
|  10 | eph.rd-smaller |   768 |
|   2 | m1.small       |  2048 |
|   3 | m1.medium      |  4096 |
|   4 | m1.large       |  8192 |
|  42 | m1.nano        |    64 |
| 451 | m1.heat        |   512 |
|   5 | m1.xlarge      | 16384 |
|   6 | tinier         |   506 |
|   7 | m1.rd-tiny     |   512 |
|   8 | m1.rd-smaller  |   768 |
|  84 | m1.micro       |   128 |
|   9 | eph.rd-tiny    |   512 |
+-----+----------------+-------+

References

Ubuntu Guide for Self-Signed Certificates, Certificate Signing Requests, and private CAs: https://help.ubuntu.com/lts/serverguide/certificates-and-security.html
What is SSL?: https://www.digicert.com/ssl.htm
Wikipedia entry about SSL: https://en.wikipedia.org/wiki/SSL

Amrith Kumar is an active technical contributor to and core reviewer of the OpenStack Trove project, as well as founder and CTO of Tesora Inc. He's also the co-author of the “OpenStack Trove” book, available online.

Superuser is always interested in how-tos and other contributions, get in touch at editor@openstack.org

Cover Photo // CC BY NC

by Amrith Kumar at January 23, 2016 11:02 PM

OpenStack Blog

OpenStack Developer Mailing List Digest January 16-22

Success Bot Says

  • mriedem: nova liberty 12.0.1 released [1].
  • OpenStack Ansible Kilo 11.2.7 has been released.
  • OpenStack-Ansible Liberty 12.0.4 has been released.
  • Tell us yours via IRC with a message “#success [insert success]”.
  • All: https://wiki.openstack.org/wiki/Successes

Governance

  • License requirement clarification for big tent projects [2].
  • Make constraints opt in at the test level [3].
  • OSprofiler is now an official OpenStack project [4].

Cross-Project

Release Count Down For Week R-10, Jan 25-29

  • Focus: with the second milestone behind us, project teams should be focusing on wrapping up new feature work and stabilizing recent additions.
  • Release actions:
    • Strictly enforcing library release freeze before M3 (5 weeks).
    • Review client/integration libraries and whatever other libraries managed by your team.
    • Ensure global requirements and constraints lists are up to date with accurate minimum versions and exclusions.
    • Quite a few projects with unreleased changes on stable/liberty branch. Check for your project [7].
  • Important dates:
    • Final release for non client libraries: February 24th
    • Final release for client libraries: March 2nd
    • Mitaka-3: Feb 29 through March 4 (includes feature freeze and soft string freeze).
    • Mitaka release schedule [8].
  • Full thread: http://lists.openstack.org/pipermail/openstack-dev/2016-January/084678.html

Stabilization Cycles: Elaborating on the Idea To Move It Forward

  • At the Tokyo summit, in the OpenStack Development Theme session (where people discuss the overall focus of shared efforts), the idea of having cycles to stabilize projects was brought up.
  • A project could decide to spend some percentage of time of the cycle on focusing on bug fixing, review backlog, refactoring, instead of entirely on new features.
  • Projects are already empowered to do this, however, maybe the TC could work on formalizing this process so that teams have a reference when they want to.
  • Some contributors from the summit feel they need the Technical Committee to take leadership on this, so that they can sell it back to their companies.
  • Another side of discussion, healthy projects should naturally come up with bursts of feature additions and burst of repaying technical debt continuously.
    • Imposing specific periods of stabilization prevents reaching that ideal state.
  • Full thread: http://lists.openstack.org/pipermail/openstack-dev/2016-January/084564.html

by Mike Perez at January 23, 2016 01:27 AM

January 22, 2016

Tesora Corp

Short Stack: Accelerating NFV Delivery with OpenStack, Outreachy Welcomes Seven New Interns, OpenStack Demand Hampered by Skills Shortage

Welcome to the Short Stack, our regular feature where we search for the most intriguing OpenStack news. These links may come from traditional publications or company blogs, but if it’s about OpenStack, we’ll find the best ones.

Here are our latest links:

Accelerating NFV Delivery with OpenStack | OpenStack Foundation

On Wednesday, the OpenStack Foundation published its newest whitepaper on global telecommunication companies’ use of OpenStack to accelerate network function virtualization (NFV) delivery. Although NFV is in its infancy, the Foundation published the report with the intention of highlighting the fact that many global telecoms are working to make OpenStack the best infrastructure platform for NFV. The paper describes NFV, its business value, and how OpenStack supports NFV. It details specific projects, use cases, and the experience of major carriers and enterprises including AT&T, Verizon, NTT Group, SK Telecom, Deutsche Telekom, and Bloomberg.

Mirantis and Others Steer Big Growth in OpenStack Cloud Training | Ostatic Blogs

Mirantis significantly expanded its OpenStack training efforts in 2015, and is featured prominently among the top OpenStack training and certification programs. Mirantis has added new courses, expanded to 15 new locations and trained more than 5,000 students.  That is double the total number of students the company has trained since 2012.

Outreachy program welcomes seven new OpenStack interns | OpenStack Superuser

Last month, the OpenStack Foundation welcomed seven new interns into the Outreachy program. The interns will be working with the Foundation for the next three months on various OpenStack projects, and receive mentorship from several members of the community. Intern Sonali Goyal will be working with Tesora CTO Amrith Kumar on the project “Users and databases CRUD operations for CouchDB.” This internship brings diverse groups of students real world experience on open source projects.

6 non-code contributions you can make to open source | Opensource.com

This week, Safia Abdalla provided great insight as to how members of the open source community can make contributions without writing a single line of code. Abdalla says there are six main options: to evangelize, report bugs, mentor, write, host a meetup, or improve security. Abdalla placed a heavy emphasis on community building, and maintained that some of the most important contributions to software don’t happen in front of a computer.

SUSE: OpenStack Cloud Demand High, But Hampered by Skills Shortage | The VAR Guy

In a survey released by SUSE this week, the results highlighted some important facts about OpenStack adoption trends. The survey revealed that companies want to build OpenStack private clouds, but are concerned about lack of skills and vendor neutrality. The results also showed that more than eighty percent of enterprises plan to adopt OpenStack as a cloud computing solution or already have — yet half of organizations that have tried to implement OpenStack have failed, hampered by a lack of open source cloud computing skills. Essentially, it is not lack of interest but shortage of skills that is affecting adoption.

The post Short Stack: Accelerating NFV Delivery with OpenStack, Outreachy Welcomes Seven New Interns, OpenStack Demand Hampered by Skills Shortage appeared first on Tesora.

by Alex Campanelli at January 22, 2016 04:32 PM

Rackspace Developer Blog

OpenStack Innovation Center (OSIC) Updates - January 2016

Since the initial launch of the OpenStack Innovation Center back in July of 2015, much work has been done. I wanted to take a moment to share its current status and some details about its next phases. If you are unfamiliar with OSIC, let me start off with some very quick background information.

The OpenStack Innovation Center, abbreviated to ‘OSIC’, is a joint partnership between Rackspace and Intel implemented to further accelerate the development of enterprise capabilities into upstream OpenStack code. It is intended to bring together Rackspace and Intel engineers in order to:

“advance the scalability, manageability and reliability of OpenStack by adding new features, functionality and eliminating bugs through upstream code contributions”

as mentioned in the Rackspace press release.

It is also intended to offer an OpenStack cloud (at an enterprise scale) to the greater OpenStack community in order to develop and test new OpenStack functionality. The OSIC hub can be found within Rackspace’s corporate headquarters in San Antonio, TX, nicknamed “Castle”. Personally, I thought the OSIC launch was a very exciting one, mainly because it was aimed at solving some of OpenStack’s greatest challenges thus far: enterprise-focused features, and the basic idea that OpenStack is not enterprise ready. Those two 1,000-node OpenStack clusters dispel that second, wrong idea in so many ways. OpenStack runs in my blood, and I am very proud to be part of the community.

OK, enough of my banter - let's get down to business with some updates!

Innovation Center

Yes, you can actually see, touch, and sit in it now. The Innovation Center, located at Rackspace’s Castle, launched on September 10th of 2015. Imagine reserving a whole wing of what used to be an old shopping mall just to seat a bunch of OpenStack developers, engineers, support, and architecture staff. This location allows Rackspace and Intel engineers to collaborate on all things OpenStack. We openly invite any and all who are interested to come on down and see it with their own eyes.

Developer Training/Joint OpenStack Engineering

As of October 2015, Intel has placed ~12 developers into the Innovation Center to take part in the detailed training program created in order to on-board OpenStack focused developers. From that point forward, roughly 10-15 additional developers will be brought in to work in the Innovation Center on a monthly basis. The details around the OSIC engineering road map can be found in the following slide from the Tokyo OpenStack Summit:

OSIC Roadmap

Developer Cloud

Come on...you know you only read this far to get the scoop in this section. Being honest, you're thinking the Developer Cloud is pretty darned cool, and I cannot blame you. Just saying “two 1000-node OpenStack clusters” out loud sounds impressive. As an architect, my first thought was “show me the details!” Well, you will be the first to get a look at the details that make up the first of the two 1000-node clusters, hosted at the Rackspace Dallas Fort Worth datacenter. I could not think of a better combination: funded by Intel and supported by Rackspace.

Keep in mind that the following details encompass only the first 1000-node cluster, which is ready for use and hosted at Rackspace. Unfortunately, I was not able to get any exciting details on the second cluster, located at Intel in Oakland, California, yet. Stay tuned for more updates at a later date.

OSIC Cluster @ Rackspace

  • Made up of ~1012 bare metal servers that are broken up in particular ways to serve the various requests that come in from the OpenStack community.
  • Over 60% of the cluster is running RPC-O (Rackspace Private Cloud - OpenStack)
  • The remainder of the cluster is configured for bare metal server environments as requested by the community.
  • Each cabinet has two top-of-rack Cisco Nexus 3172-PQ network switches.
  • Each cabinet also has one top-of-rack network switch for out-of-band management.

The hardware is broken out into two cloud regions and the details follow:


Cloud Region 1

Nodes running RPC-O version 11.0:

  • 17 Server cabinets
  • 1x Deployment node
  • 3x Controller nodes
  • 7x Logging nodes
  • 132x Compute nodes with spindle drives
  • 44x Compute nodes with Intel SSD S3500
  • 66x Compute nodes with Intel SSD S3700
  • 44x Cinder nodes
  • 4x Swift Proxy nodes
  • 40x Swift Object Storage nodes

Bare metal nodes:

  • 10x Compute nodes with spindle drives

Unused nodes:

  • 12x Compute nodes with spindle drives
  • 4x Controller nodes
  • 5x Network nodes
  • 1x Util node
  • 1x Jump node

Cloud Region 2

Nodes running RPC-O version 11.0:

  • 29 Server cabinets
  • 1x Deployment node
  • 3x Controller nodes
  • 7x Logging nodes
  • 154x Compute nodes with spindle drives
  • 44x Compute nodes with Intel SSD S3500
  • 66x Compute nodes with Intel SSD S3700
  • 44x Cinder nodes
  • 4x Swift Proxy nodes
  • 40x Swift Object Storage nodes

Bare metal nodes:

  • 242x Compute nodes with spindle drives

Unused nodes:

  • 22x Compute nodes with spindle drives
  • 4x Controller nodes
  • 5x Network nodes
  • 1x Util node
  • 1x Jump node

Who’s using the Developer Cloud?

Currently, the Intel Bare Metal CI Team and Mirantis are the only OSIC customers using the cluster. Both teams use the bare metal server option mentioned above. The next consumer slated to come on board is the OpenStack Foundation Infrastructure team. We're also moving the OSIC sign-up form from this link to http://www.osic.org, which will sit on the Developer Cloud.

We close out this update with two pretty pictures. I'm rather confident you wish these were screenshots of Horizon from your home lab, just as I do. We can all dream, right? Please stay tuned for more updates, and keep Stacking!

[Images: Rackspace OSIC Cloud Region 1; Rackspace OSIC Cloud Region 2]

Check out the last 8 minutes of the recorded OSIC session from the Tokyo OpenStack summit back in October 2015 to see the video of the OSIC launch at Castle: https://www.openstack.org/summit/tokyo-2015/videos/presentation/intel-cloud-for-all-openstack-innovation-center.

January 22, 2016 12:00 PM

Matthias Runge

Disable "Resource Usage"-dashboard in Horizon

When using Horizon as an admin user, you have probably seen the metering dashboard, also known as "Resource Usage".

It internally uses Ceilometer; Ceilometer continuously collects data from configured data sources. In a cloud environment, this data can quickly grow enormously. When someone visits the metering dashboard in Horizon, Ceilometer will then accumulate the requested data on the fly.

In the medium term, Horizon should switch to using Gnocchi; in the meantime, if you're tired of waiting, just disable the metering dashboard, e.g. by placing a file like _99_disable_metering_dashboard.py in /usr/share/openstack-dashboard/openstack_dashboard/local/enabled:

# The slug of the panel to be added to HORIZON_CONFIG. Required.
PANEL = 'metering'
# The slug of the dashboard the PANEL associated with. Required.
PANEL_DASHBOARD = 'admin'
# The slug of the panel group the PANEL is associated with.
PANEL_GROUP = 'admin'
REMOVE_PANEL = True

Finally, restart httpd:

systemctl restart httpd

by mrunge at January 22, 2016 08:05 AM

January 21, 2016

OpenStack Superuser

Meet OpenStack's community wrangler, David Flanders

The OpenStack Foundation is pleased to announce the addition of a community wrangler: David Flanders. You can call him Flanders, just like his friends have since "The Simpsons" debuted in 1989. Flanders brings 15 years of developer experience building open communities and witnessing how the term “open” has evolved. He talks with Superuser about his new position and what he calls his own, personal API methods.


Why are you are excited to be joining the OpenStack Foundation staff?

The amazing intelligence of the developer community, the buzz and can-do attitude of the biannual summits, the international opportunities, the futures the foundation is creating… and so much more! Though, most exciting for me (the reason why I applied for this job) is that the OpenStack community is changing the “open” movement for the whole world.

How so? What does the OpenStack community mean to you?

I’ve spent my career working to promote open technologies via various global communities, foundations, charities and startup capital; for me, the OpenStack community is a next-generation open community. In the past, old-school open source communities have been exclusive with regard to who may be “knighted” to participate in the social good of their community. OpenStack is actively broadening the scope of who can participate in open, which I hope to help push forward as well.

So how is OpenStack different from other open source communities, given your personal history?

OpenStack actively promotes a full ecosystem of partners: for-profit and non-profit, social good as well as all the other new business models starting to emerge in the digital economy... In my opinion (and I’m intentionally being contentious here), OpenStack has broken the mold for what an open community can do and achieve. The bigger question all us open-do-gooders must continue to ask ourselves is: can we continue breaking ‘open’ (until it really works) so that open source becomes a dominant economic generator for our sector, and maybe even one day for all sectors? Yes, I’m that ambitious for the power of open!

Can you give us 5 words that describe you and why?

Explorer: I live in Melbourne, Australia; I’m American by birth, British-university educated (London hardened), Portuguese in spirit, and always eyeing the next passport I should collect.

Of all my degrees, it is the school of cultural education that is most prized to me (other than the teachings of my wife). Open source is about how the local culture understands it; I want to understand your point of view so OpenStack will work for your local community.

Builder (of open communities): I’ve previously helped build open communities for the UK Gov’t in ‘Open Access’ (Research) and Developer Happiness (aka #DEV8D); the Open Knowledge Foundation for ‘Open Government Data’; ‘Open Hardware’ (for 3D printers) via the #RepRap community; and most recently the ‘Research Bazaar’ (#ResBaz) for ‘Open Data Science Skills’ in universities.

Entrepreneurial-minded: While working in the UK, I oversaw a multi-million dollar investment portfolio of application developer teams and data-centric startups in universities for several years. I believe entrepreneurship methodologies (failing fast, agile, pivots, etc.) are the new de facto business models coming out of our generation.

Polymath: Technology has always been a means to an end for me: I have had the privilege of working with professors and PhDs in every discipline to figure out how to better use technology to improve their R&D. I’d describe my chief joy in life as being an ‘eternal knowledge tourist.’ I hope that experience is going to come into play as we build the cloud native application ecosystem for all sectors: industry, university, education, government, charities, etc.

Analytically-empathetic: For me, community management is fundamentally about understanding the psychology of the community: being with the community and understanding how they view the shared problem. What are the technical decisions (IQ), and what are the community’s motivations for making those decisions (EQ)? It’s as much about hacking-the-brain as it is hacking-computers!

What do you feel is the number one expertise you can bring to the incredibly talented OpenStack community?

Over the next six months, I am really looking forward to sitting down with developers and hearing how they experience working with the community. Oddly enough, I really enjoy getting developers to walk me through their code, so I can understand where their code fits within the ecosystem. A developer talking through their code has all kinds of layered meaning: are developers thinking of the code base in similar ways? - are they designing via shared patterns? - are they interested and incentivized to collaborate via the code base? By being a person who really earnestly listens to developers, I can usually help shed light on how we are all working together as a community collective.


So how do you hope to help the community achieve in the next couple of months as we build to the Austin Summit?

Well, OpenStack moves fast, so my ideas will naturally pivot as I get to work with the community more. Immediately, I’d love to talk with the community about the following:

  • What does good “cloud app” training look like so we can grow the app-dev community?
  • What exemplar “cloud native apps” do you envision? What API+SDK support do we need?
  • How can we work with the University sector to recruit the next generation of cloud app developers (NB remember how Apple in the 90’s infiltrated via their education programme)?!
  • Where should the conversation be taking place about how we grow the application developer community (at the a summit session, via a dedicated mailing list, slack, etc.)?
  • What are the events which the OpenStack community should be attending to discuss best practice in application development (containers, pods, etc.)?
  • How can we help our amazing devops community work with appdev to understand OpenStack as a platform for end-user app development?

I’m going to be taking most of these questions forward via the OpenStack User mailing list, please join me there to continue the conversation: http://lists.openstack.org/cgi-bin/mailman/listinfo/user-committee

I’m sure people will want to get in touch and say “hi” after reading this, how might they do so?

Quickest way to say ‘hi-diddly-yo’ would be via Twitter @DFFlanders (same IRC nick: dfflanders). Otherwise, all my up-to-date contact details are on the about page of my personal blog and LinkedIn profile.

Cover Photo // CC BY NC

by Allison Price at January 21, 2016 10:45 PM

Red Hat Stack

Red Hat Cloud Infrastructure Cited as a Leader Among Private Cloud Software Suites by Independent Research Firm

Earlier this week, Red Hat Cloud Infrastructure (RHCI) was named a leader in The Forrester Wave™: Private Cloud Software Suites, Q1 2016 report.

The Forrester report states that Red Hat “leads the evaluation with its powerful portal, top governance capabilities, and a strategy built around integration, open source, and interoperability. Rather than trying to build a custom approach for completing functions around operations, governance, or automation, Red Hat provides a very composable package by leveraging a mix of market standards and open source in addition to its own development.”

Moreover: “Red Hat received top marks for workflow life-cycle automation, administrative portal usability and experience, permissions, compliance tracking, capacity monitoring, platform APIs, ITSM and developer tools, and configuration management tool integration.”

Speaking specifically about Red Hat Enterprise Linux OpenStack Platform, the report states: “Red Hat also commits to contributing all changes upstream to OpenStack rather than maintaining proprietary enhancements. Its API exposure and ability to swap out core functionalities for a long list of pre-integrated market tooling sets it apart from others in the evaluation.”

RHCI is a single-subscription product that provides the essential infrastructure and management components needed to build and manage a private or hybrid Infrastructure-as-a-Service (IaaS) cloud. The solution includes Red Hat Enterprise Virtualization, our high-performance traditional scale-up virtual infrastructure, and Red Hat Enterprise Linux OpenStack Platform, our scalable, production-ready OpenStack, designed for scale-out, cloud-enabled workloads.

On the management side, RHCI also includes our award-winning Cloud Management Platform, Red Hat CloudForms, as well as Red Hat Satellite, our Cloud Systems Management tool, for lifecycle management from the physical infrastructure to the tenant workload. Together these integrated components, working alongside your existing infrastructure investments, allow for a flexible path to a cloud architected for traditional scale-up workloads, newer cloud-native workloads, or both. Red Hat CloudForms, I’m also pleased to share, was named a leader in The Forrester Wave™: Hybrid Cloud Management Solutions, Q1 2016 report.

Along with many RHCI customers, we believe that we’re delivering award-worthy breadth, flexibility, and enterprise-level functionality. It’s exciting to receive this ranking from such a highly regarded analyst firm. Further to this point, you might also be interested in reading a whitepaper written by IDC, and sponsored by Red Hat, entitled “Preparing for Private Cloud and Hybrid IT with Red Hat Cloud Infrastructure.”

A full copy of The Forrester Wave: Private Cloud Software Suites, Q1 2016 is now available on our website. For additional information about Red Hat Cloud Infrastructure, please visit our RHCI product page.

by Gordon Tillmore, Sr. Principal Product Marketing Manager at January 21, 2016 08:51 PM

IBM OpenTech Team

OpenStack development tips: Setting up a ZNC bouncer

Though not strictly necessary, setting up a ZNC bouncer can positively impact your OpenStack development by making it easier for other developers to leave you messages while you’re away from IRC. You can also easily read important scrollback and logs to stay caught up on what’s happening in your OpenStack project.

Installing ZNC on your VM

This tutorial assumes you have access to a virtual machine with a public IP address. For my own purposes, I created an Ubuntu 14.04 VM on Bluemix using the m1.small flavor, but a small VM from just about any public cloud provider would do just fine. I also opted to install from a repo instead of building from source, for no reason other than it is fewer steps. On the VM, perform the following:

1) Add the repo instead of installing from source

ibmcloud@bouncer:~$ sudo add-apt-repository ppa:teward/znc

2) Update package lists

ibmcloud@bouncer:~$ sudo apt-get update

3) Install ZNC packages

ibmcloud@bouncer:~$ sudo apt-get install znc znc-dbg znc-dev znc-perl znc-python znc-tcl
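
Once the packages are installed, a quick sanity check is to ask ZNC for its version. The exact version string will depend on when you install, but it should look something like this:

ibmcloud@bouncer:~$ znc --version
ZNC 1.6.1+deb1~ubuntu14.04.0 - http://znc.in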

Configuring your ZNC server

1) Run znc --makeconf

Don’t just run znc! Before running it for the first time, use the --makeconf argument. This walks you through a first-time configuration; most defaults are fine, and the only required option is the port to run the service on. These are the options I specified:

port = 55901
username = stevemar
password = supersecretpassword
Set up a network? Yes (freenode is the first option)

Here’s the entire output:

ibmcloud@bouncer:~$ znc --makeconf
[ .. ] Checking for list of available modules...
[ >> ] ok
[ ** ] 
[ ** ] -- Global settings --
[ ** ] 
[ ?? ] Listen on port (1025 to 65534): 55901
[ ?? ] Listen using SSL (yes/no) [no]: 
[ ?? ] Listen using both IPv4 and IPv6 (yes/no) [yes]: 
[ .. ] Verifying the listener...
[ >> ] ok
[ ** ] Unable to locate pem file: [/home/ibmcloud/.znc/znc.pem], creating it
[ .. ] Writing Pem file [/home/ibmcloud/.znc/znc.pem]...
[ >> ] ok
[ ** ] Enabled global modules [webadmin]
[ ** ] 
[ ** ] -- Admin user settings --
[ ** ] 
[ ?? ] Username (alphanumeric): stevemar
[ ?? ] Enter password: 
[ ?? ] Confirm password: 
[ ?? ] Nick [stevemar]: 
[ ?? ] Alternate nick [stevemar_]: 
[ ?? ] Ident [stevemar]: 
[ ?? ] Real name [Got ZNC?]: 
[ ?? ] Bind host (optional): 
[ ** ] Enabled user modules [chansaver, controlpanel]
[ ** ] 
[ ?? ] Set up a network? (yes/no) [yes]: 
[ ** ] 
[ ** ] -- Network settings --
[ ** ] 
[ ?? ] Name [freenode]: 
[ ?? ] Server host [chat.freenode.net]: 
[ ?? ] Server uses SSL? (yes/no) [yes]: 
[ ?? ] Server port (1 to 65535) [6697]: 
[ ?? ] Server password (probably empty): 
[ ?? ] Initial channels: 
[ ** ] Enabled network modules [simple_away]
[ ** ] 
[ .. ] Writing config [/home/ibmcloud/.znc/configs/znc.conf]...
[ >> ] ok
[ ** ] 
[ ** ] To connect to this ZNC you need to connect to it as your IRC server
[ ** ] using the port that you supplied.  You have to supply your login info
[ ** ] as the IRC server password like this: user/network:pass.
[ ** ] 
[ ** ] Try something like this in your IRC client...
[ ** ] /server <znc_server_ip> 55901 stevemar:<pass>
[ ** ] 
[ ** ] To manage settings, users and networks, point your web browser to
[ ** ] http://<znc_server_ip>:55901/
[ ** ] 
[ ?? ] Launch ZNC now? (yes/no) [yes]: 
[ .. ] Opening config [/home/ibmcloud/.znc/configs/znc.conf]...
[ >> ] ok
[ .. ] Loading global module [webadmin]...
[ >> ] [/usr/lib/znc/webadmin.so]
[ .. ] Binding to port [55901]...
[ >> ] ok
[ ** ] Loading user [stevemar]
[ ** ] Loading network [freenode]
[ .. ] Loading network module [simple_away]...
[ >> ] [/usr/lib/znc/simple_away.so]
[ .. ] Adding server [chat.freenode.net +6697 ]...
[ >> ] ok
[ .. ] Loading user module [chansaver]...
[ >> ] ok
[ .. ] Loading user module [controlpanel]...
[ >> ] ok
[ .. ] Forking into the background...
[ >> ] [pid: 13439]
[ ** ] ZNC 1.6.1+deb1~ubuntu14.04.0 - http://znc.in

2) Allow traffic through firewall

By default, VMs on Bluemix have the firewall enabled, so we’ll need to allow traffic through port 55901, since that’s the port we specified.

ibmcloud@bouncer:~$ sudo ufw allow 55901
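
To double-check that the rule took effect, you can ask ufw to list its active rules. Assuming a stock Ubuntu 14.04 firewall configuration, the output should look roughly like this:

ibmcloud@bouncer:~$ sudo ufw status
Status: active

To                         Action      From
--                         ------      ----
55901                      ALLOW       Anywhere
55901 (v6)                 ALLOW       Anywhere (v6)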

Using ZNC’s Web Interface

One of the best features of ZNC is the web admin interface. Just go to http://ip_address:55901 from any machine and log in with the username and password that you created in the configuration step. From here you can modify a bunch of settings; the only one I chose to modify was the buffer size, which I increased to 500.

[Screenshot: the ZNC web admin interface]
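
If you would rather not leave your IRC client, ZNC also accepts administrative commands through its virtual *status user once you are connected. These are standard ZNC commands, but run the first one on your own install to see exactly what your version supports:

/msg *status Help       (list all available commands)
/msg *status Version    (show the running ZNC version)
/msg *status ListMods   (list the modules loaded for your user)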

Connecting to your ZNC instance

On my laptop I use LimeChat as my IRC client. Connecting to my ZNC server is pretty easy: I just create a new IRC server instance and specify the IP address, username, and password (sketched below). If your nickname is registered with NickServ, then you may also need to identify upon logging in.
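
For reference, the LimeChat settings boil down to the following. The IP address is a placeholder for your own VM, and the password uses the user/network:pass scheme from the --makeconf output above:

Server:   <znc_server_ip>
Port:     55901
Password: stevemar/freenode:supersecretpassword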

[Screenshots: LimeChat server and configuration settings]

References

I used the following sites when installing and setting up ZNC: https://dague.net/2014/09/13/my-irc-proxy-setup/ and http://wiki.znc.in/Installation#Install_via_PPA


by Steve Martinelli at January 21, 2016 06:04 AM
