July 29, 2016

Opensource.com

4 new guides for OpenStack developers and administrators

Want to learn more about managing OpenStack? The open source community has created many helpful guides and tutorials.

by Jason Baker at July 29, 2016 07:01 AM

Aptira

Blue Sky Thinking With A Touch Of Cloud


Tristan Goode: Aptira Founder, CEO and Board Director

Aptira’s Founder, CEO and Board Director, Tristan Goode, shares insights into the future of OpenStack in Australia and the upcoming Summit in Sydney, together with Lyn Lewis-Smith, CEO of Business Events Sydney.

The people who will lead the innovation and disruption that fuels the revolution toward Australia becoming a leading information economy are, as I write, sitting in a classroom.

The $65 million allocated in the Australian Government’s National Innovation and Science Agenda to increasing participation of young Australians in STEM subjects and improving their digital literacy is an excellent start.  In our line of work, it’s going to be those students who seek their own opportunities to be part of creating new technologies that will, at least in the next five years, have the greatest impact.

The reality, though, is that while these initiatives are very welcome, it’s also the responsibility of those of us in industry and government to do everything we can to help these future leaders help themselves.

Our business is part of the global community of developers of OpenStack, the open source cloud software that didn’t exist a little over five years ago, but today has become the cloud computing platform of choice for more than half of the Fortune 100.  It’s the next big thing most people have never heard of, and it’s becoming the first common software platform in the history of computing for organisations of any scale to manage their data.

Right now, the OpenStack community globally has zero unemployment, and its rate of growth means it’s likely to offer far more jobs than we can fill for a long time to come.  OpenStack is one example of an emerging, disruptive innovation that will very soon become a global standard, and is crying out for talent.  These are the jobs of the future.

Outside of government funding, there is much afoot to make sure this happens.  For example, we pitched in on the now successful bid led by Business Events Sydney to secure hosting rights for OpenStack Summit 2017. The NSW Government also helped to woo decision makers, so our community can reap the benefits of thousands of coders and clients descending on Sydney next November.

These people, whether they’re part of the international community that’s creating the open source software or are among the corporates and governments adopting OpenStack, are the vanguard for game-changing disruption of the global IT sector – which Gartner says will be worth US$3.49 trillion dollars in 2016.  What they talk about while they’re in Sydney will have a fundamental impact on not only how every industry on the face of the planet does business, but what that business will look like in as little as five years’ time.

Business Event Sydney CEO Lyn Lewis-Smith put it best when talking about why this is important.  “OpenStack’s decision to come to Sydney means much more than just the immediate economic impact from hosting the event.  This will be an unprecedented − and for many, a true once-in-a-lifetime − chance for anyone who wants to be part of truly disruptive innovation, to connect and collaborate with the people who are already there, and are already doing it.  They can start to become part of that international community and put themselves in a position to earn one of those jobs of the future everyone talks about, by simply getting on the bus and coming down to the Summit.”

It’s opportunities like the hosting of the 2017 OpenStack Summit that not only add significant fuel to the rhetorical fire around supporting innovation in this country, but grant much readier access to opportunities for anyone who’s keen to take them.  It’ll allow our businesses to learn from some of the world’s most advanced, take steps toward joining their ranks, and contribute to our business hubs being taken more seriously on the international stage.

And for teenagers sitting in a classroom considering what electives to take for their senior years and where those are likely to lead them, it shows a future their parents and teachers may not even be able to fathom.  When you consider that anyone in high school or university today who gains experience working in the OpenStack community will be setting themselves up for great jobs with innovative companies around the world, that bus fare starts looking like a pretty small investment to make.

The post Blue Sky Thinking With A Touch Of Cloud appeared first on Aptira OpenStack Services in Australia Asia Europe.

by Tristan at July 29, 2016 06:55 AM

Enriquez Laura Sofia

OpenStack 6th Birthday

Today is OpenStack’s birthday!


We met at Red Hat Argentina, shared experiences and had a fun chat.


by enriquetaso at July 29, 2016 01:51 AM

Let’s play with Cinder and RBD (part 2)

The idea is to help you get familiar with what Cinder can do, how RBD makes it happen, and what it looks like on the backend.

  1. Create a volume
  2. Create a snapshot
  3. Create a volume from a snapshot
  4. Create a volume from another volume

Create volume

Let’s create a logical volume:

$ cinder create <size> --name volume1 --description cinder-volume

+--------------------------------+------------------------------------------------+
| Property | Value |
+--------------------------------+------------------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| consistencygroup_id | None |
| created_at | 2016-07-29T00:21:49.000000 |
| description | None |
| encrypted | False |
| id | 1b681f9f-81f6-4965-ad89-28ffb10c1ede |
| metadata | {} |
| migration_status | None |
| multiattach | False |
| name | volume1 |
| os-vol-host-attr:host | vagrant-ubuntu-trusty-64.localdomain@ceph#ceph |
| os-vol-mig-status-attr:migstat | None |
| os-vol-mig-status-attr:name_id | None |
| os-vol-tenant-attr:tenant_id | 80463e7d9d8847169acd70b156ac3b61 |
| replication_status | disabled |
| size | 1 |
| snapshot_id | None |
| source_volid | None |
| status | available |
| updated_at | 2016-07-29T00:21:52.000000 |
| user_id | 4180c9a6469b480cbbf0c5e79dc478fb |
| volume_type | ceph |
+--------------------------------+------------------------------------------------+

Positional arguments:

  • <size> : Size of volume, in GiBs. (Required unless snapshot-id/source-volid is specified).
  • Check cinder help create for more info
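As a concrete example, the output above came from a 1 GiB volume, i.e. an invocation along these lines (the name and description are arbitrary):

$ cinder create 1 --name volume1 --description cinder-volume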

Let’s verify with sudo rbd ls volumes to check what RBD has:

vagrant@vagrant-ubuntu-trusty-64:~/devstack$ sudo rbd ls volumes
>bar
>volume-1b681f9f-81f6-4965-ad89-28ffb10c1ede

bar is an image created directly with rbd; you can see that all the Cinder volumes start with ‘volume-<uuid>‘.

Create snapshot

With RBD, everything is thinly provisioned, and snapshots and clones use copy-on-write.

When you have a copy-on-write (COW) snapshot, it has a dependency on the parent it was created from. The only unique blocks in the snapshot or clone are the ones that have been modified (and thus copied). That makes snapshot creation very fast (you only need to update metadata), and the data doesn’t actually move or get copied anywhere up front. Instead, it’s copied on demand, or on write!

This dependency means that you cannot, for example, delete a volume that has snapshots, because that would make those snapshots unusable, like pulling the rug out from under them.

$ cinder snapshot-create <volume ID or name> --name snap1
$ cinder snapshot-create volume1 --name snap1
$ sudo rbd ls -l volumes
>NAME SIZE PARENT FMT PROT LOCK 
>bar 1024M 1 
>volume-1b681f9f-81f6-4965-ad89-28ffb10c1ede 1024M 2 
>volume-1b681f9f-81f6-4965-ad89-28ffb10c1ede@snapshot-6e9..d 1024M 2 yes 

You can see that the snapshot keeps the same volume name, suffixed with @snapshot-<snapshot ID>.
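If you want to dig further on the Ceph side, rbd can list an image’s snapshots and show which clones depend on a given snapshot. A quick sketch, assuming the pool and names used above (the children list stays empty until we clone from the snapshot in the next step):

$ sudo rbd snap ls volumes/volume-1b681f9f-81f6-4965-ad89-28ffb10c1ede
$ sudo rbd children volumes/volume-1b681f9f-81f6-4965-ad89-28ffb10c1ede@snapshot-6e93e928-2558-4f12-a9ab-12d25cd72dbd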

Create volume from snapshot

Get the snapshot ID from cinder snapshot-list:

$ cinder create --snapshot-id 6e93e928-2558-4f12-a9ab-12d25cd72dbd --name v-from-s

+--------------------------------+------------------------------------------------+
| Property | Value |
+--------------------------------+------------------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| consistencygroup_id | None |
| created_at | 2016-07-29T00:40:00.000000 |
| description | None |
| encrypted | False |
| id | 18966249-f68b-4ed3-901e-7447a25dad03 |
| metadata | {} |
| migration_status | None |
| multiattach | False |
| name | v-from-s |
| os-vol-host-attr:host | vagrant-ubuntu-trusty-64.localdomain@ceph#ceph |
| os-vol-mig-status-attr:migstat | None |
| os-vol-mig-status-attr:name_id | None |
| os-vol-tenant-attr:tenant_id | 80463e7d9d8847169acd70b156ac3b61 |
| replication_status | disabled |
| size | 1 |
| snapshot_id | 6e93e928-2558-4f12-a9ab-12d25cd72dbd |
| source_volid | None |
| status | creating |
| updated_at | 2016-07-29T00:40:00.000000 |
| user_id | 4180c9a6469b480cbbf0c5e79dc478fb |
| volume_type | ceph |
+--------------------------------+------------------------------------------------+

Create a volume from another volume

Since we are cloning from a volume and not a snapshot, a snapshot of the source volume has to be created first behind the scenes; that’s the .clone_snap entry you’ll see in the rbd listing below.

$ cinder create --source-volid 1b681f9f-81f6-4965-ad89-28ffb10c1ede --name v-from-v

+--------------------------------+------------------------------------------------+
| Property | Value |
+--------------------------------+------------------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| consistencygroup_id | None |
| created_at | 2016-07-29T00:44:47.000000 |
| description | None |
| encrypted | False |
| id | 9f79be73-4df6-4ab1-ab70-02c91df96439 |
| metadata | {} |
| migration_status | None |
| multiattach | False |
| name | v-from-v |
| os-vol-host-attr:host | vagrant-ubuntu-trusty-64.localdomain@ceph#ceph |
| os-vol-mig-status-attr:migstat | None |
| os-vol-mig-status-attr:name_id | None |
| os-vol-tenant-attr:tenant_id | 80463e7d9d8847169acd70b156ac3b61 |
| replication_status | disabled |
| size | 1 |
| snapshot_id | None |
| source_volid | 1b681f9f-81f6-4965-ad89-28ffb10c1ede |
| status | creating |
| updated_at | 2016-07-29T00:44:47.000000 |
| user_id | 4180c9a6469b480cbbf0c5e79dc478fb |
| volume_type | ceph |
+--------------------------------+------------------------------------------------+

$ sudo rbd ls -l volumes
NAME SIZE PARENT FMT PROT LOCK 
bar 1024M 1 
volume-18966249-f68b-4ed3-901e-7447a25dad03 1024M volumes/volume-1b681f9f-81f6-4965-ad89-28ffb10c1ede@snapshot-6e93e928-2558-4f12-a9ab-12d25cd72dbd 2 
volume-1b681f9f-81f6-4965-ad89-28ffb10c1ede 1024M 2 
volume-1b681f9f-81f6-4965-ad89-28ffb10c1ede@snapshot-6e93e928-2558-4f12-a9ab-12d25cd72dbd 1024M 2 yes 
volume-1b681f9f-81f6-4965-ad89-28ffb10c1ede@volume-9f79be73-4df6-4ab1-ab70-02c91df96439.clone_snap 1024M 2 yes 
volume-9f79be73-4df6-4ab1-ab70-02c91df96439 1024M volumes/volume-1b681f9f-81f6-4965-ad89-28ffb10c1ede@volume-9f79be73-4df6-4ab1-ab70-02c91df96439.clone_snap 2 

In this case the source volume gains a snapshot named volume-<source ID>@volume-<clone ID>.clone_snap, which the new volume is cloned from.
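Because the clone depends on that hidden snapshot, deleting the source volume while the clone exists would pull the rug out from under it, just like the snapshot case above. If you ever need to break such a dependency at the Ceph level, rbd offers a flatten operation that copies all of the parent’s blocks into the clone (a sketch using the clone name from above; normally Cinder’s RBD driver takes care of this housekeeping itself):

$ sudo rbd flatten volumes/volume-9f79be73-4df6-4ab1-ab70-02c91df96439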

by enriquetaso at July 29, 2016 12:59 AM

July 28, 2016

OSIC - The OpenStack Innovation Center

OpenStack Innovation Center Celebrates Its First Year


OSIC Team

By David L. Brown, Intel and Kenny Johnston, Rackspace

This month marks six years since the founding of the OpenStack open source project by Rackspace and NASA, and within this time, OpenStack has emerged as the world’s leading open source cloud operating system. With more than 54,000 individual contributors from 600 member companies across 179 countries, the OpenStack community has been one of the fastest growing. And many of the world’s most prominent cloud service providers—and a growing list of global enterprises—have now turned to OpenStack.

Yet, barriers to OpenStack’s broader adoption still exist. No one company can address these challenges on its own—we must work together as a community to address these barriers to ensure that the platform is ready for the workloads of tomorrow.

It was in this spirit that the OpenStack Innovation Center (OSIC) sprang to life—a joint investment between Rackspace and Intel to bring together teams of engineers to accelerate the evolution of the OpenStack platform. The objective is to make OpenStack easy to deploy, operate and use, with all of the features of an enterprise-class computing platform. This month, we’re proud to celebrate OSIC’s 1-year anniversary and reflect on our collective accomplishments.

Check out this fun infographic for an at-a-glance view of these accomplishments!

The team has focused on assembling the world’s largest joint engineering team dedicated to upstream contributions to OpenStack, introducing a comprehensive training program to add to the ever-growing number of developers, and publishing a joint engineering roadmap to solicit community-wide feedback and participation. Key engineering focuses include manageability, reliability and resilience, high availability, scalability, security and compliance, and simplicity, or usability.

To date, the team has imparted over 10,000 hours of OpenStack knowledge to nearly 170 new contributors. Collectively, we have contributed to a whopping 25 OpenStack projects, completing nearly 90 blueprints and 36,514 code reviews, and submitting 19,438 patch sets.

Specifically, we’ve set our sights on simplifying OpenStack deployments through improved and well-documented configuration options. We’ve also created the Craton fleet management project to bring best-in-breed operating tooling to enterprises everywhere. In addition, the team is creating a third-party performance CI that can be utilized by the community across projects to limit performance regressions. Our efforts have also included operational tests to improve deployment, scaling and upgrading across all projects.

In parallel, the OSIC team hosts the world’s largest OpenStack developer cloud—ultimately comprised of 2,000 nodes—to empower the community to test features and functionality at scale. Fully available to the OpenStack community since its opening, the first 1,000-node cluster has serviced 180 users among prominent organizations including Cambridge University, Midokura, Mirantis, PLUMGrid, Red Hat, StackHPC, and VMware.

Since its inception, OSIC is delivering to its mission—increasing the number of contributors, and contributions, to OpenStack, and advancing the platform’s enterprise capabilities and ease of deployment. All of these investments help ensure OpenStack’s long-term vitality among enterprises around the world, and support the Intel® Cloud for All vision of unleashing tens of thousands of new clouds.

If you have an OpenStack test case that could benefit from the resources of a world-class developer cloud, or want to provide the team with your input on the joint engineering roadmap, visit OSIC.org.

by OSIC Team at July 28, 2016 06:42 PM

Adam Young

ControllerExtraConfig and Tripleo Quickstart

Once I have the undercloud deployed, I want to be able to quickly deploy and redeploy overclouds.  However, my last attempt to effect change on the overcloud did not modify the Keystone config file the way I intended.  Once again, Steve Hardy helped me to understand what I was doing wrong.

Summary

/tmp/deploy_env.yml already defined ControllerExtraConfig, and my redefinition was ignored.

The Details

I’ve been using Quickstart to develop.  To deploy the overcloud, I run the script /home/stack/overcloud-deploy.sh which, in turn, runs the command:

openstack overcloud deploy --templates --libvirt-type qemu --control-flavor oooq_control --compute-flavor oooq_compute --ceph-storage-flavor oooq_ceph --timeout 90 -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml -e $HOME/network-environment.yaml --control-scale 3 --neutron-network-type vxlan --neutron-tunnel-types vxlan -e /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml --ntp-server pool.ntp.org \
${DEPLOY_ENV_YAML:+-e $DEPLOY_ENV_YAML}  "$@"|| deploy_status=1

I want to set two parameters in the Keystone config file, so I created a file named keystone_extra_config.yml

parameter_defaults:
   ControllerExtraConfig:
     keystone::using_domain_config: true
     keystone::domain_config_directory: /path/to/config

And edited /home/stack/overcloud-deploy.sh to add in -e /home/stack/keystone_extra_config.yml like this:

openstack overcloud deploy --templates --libvirt-type qemu --control-flavor oooq_control --compute-flavor oooq_compute --ceph-storage-flavor oooq_ceph --timeout 90 -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml -e $HOME/network-environment.yaml --control-scale 3 --neutron-network-type vxlan --neutron-tunnel-types vxlan -e /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml --ntp-server pool.ntp.org \
    ${DEPLOY_ENV_YAML:+-e $DEPLOY_ENV_YAML}    -e /home/stack/keystone_extra_config.yml   "$@"|| deploy_status=1

I have run this both on an already deployed overcloud and from an undercloud with no stacks deployed, but in neither case have I seen the values in the config file.

Steve Hardy walked me through this from the CLI:

openstack stack resource list -n5 overcloud | grep "OS::TripleO::Controller "

| 1 | b4a558a2-297d-46c6-b658-46f9dc0fcd51 | OS::TripleO::Controller | CREATE_COMPLETE | 2016-07-28T01:49:02 | overcloud-Controller-y2lmuipmynnt |
| 0 | 5b93eee2-97f6-4b8e-b9a0-b5edde6b4795 | OS::TripleO::Controller | CREATE_COMPLETE | 2016-07-28T01:49:02 | overcloud-Controller-y2lmuipmynnt |
| 2 | 1fdfdfa9-759b-483c-a943-94f4c7b04d3b | OS::TripleO::Controller | CREATE_COMPLETE | 2016-07-28T01:49:02 | overcloud-Controller-y2lmuipmynnt

Looking into each of these stacks for the string "ontrollerExtraConfig" (to match either capitalization) showed that the field was defined, but did not include my values.  Thus, my customization was not even making it as far as the Heat database.

I went back to the quickstart command and grepped through the files included with the -e flags, and found that the deploy_env.yml file had already defined this field.  Once I merged my changes into /tmp/deploy_env.yml, I saw the values specified in the Hiera data.
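In other words, Heat does not deep-merge parameter values across environment files: when two files passed with -e both define ControllerExtraConfig, one map wins wholesale and the other’s keys silently vanish. A sketch of the collision (file contents abbreviated):

# /tmp/deploy_env.yml
parameter_defaults:
  ControllerExtraConfig:
    heat::api_cloudwatch::enabled: false

# /home/stack/keystone_extra_config.yml
parameter_defaults:
  ControllerExtraConfig:
    keystone::using_domain_config: true

# Only one of these maps reaches the Heat stack; the keys are not merged.
# The fix is to carry all the keys in a single environment file.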

Of course, due to a different mistake I made, the deploy failed.  When specifying domain-specific backends in a config directory, Puppet validates the path; you can’t pass in garbage like I was doing just for debugging.

Once I got things clean, tore down the old overcloud and redeployed, everything worked.  Here was the final /home/stack/deploy_env.yaml environment file I used:

parameter_defaults:
  controllerExtraConfig:
    keystone::using_domain_config: true
    keystone::config::keystone_config:
      identity/domain_configurations_from_database:
        value: true

    # In releases before Mitaka, HeatWorkers doesn't modify
    # num_engine_workers, so handle via heat::config 
    heat::config::heat_config:
      DEFAULT/num_engine_workers:
        value: 1
    heat::api_cloudwatch::enabled: false
    heat::api_cfn::enabled: false
  HeatWorkers: 1
  CeilometerWorkers: 1
  CinderWorkers: 1
  GlanceWorkers: 1
  KeystoneWorkers: 1
  NeutronWorkers: 1
  NovaWorkers: 1
  SwiftWorkers: 1

And the modified version of overcloud-deploy now executes this command:

# Deploy the overcloud!
openstack overcloud deploy --debug --templates --libvirt-type qemu --control-flavor oooq_control --compute-flavor oooq_compute --ceph-storage-flavor oooq_ceph --timeout 90 -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml -e $HOME/network-environment.yaml --control-scale 3 --neutron-network-type vxlan --neutron-tunnel-types vxlan -e /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml --ntp-server pool.ntp.org -e /home/stack/deploy_env.yaml   "$@"|| deploy_status=1

Looking in the controller nodes’ /etc/keystone/keystone.conf file, I see:

#domain_specific_drivers_enabled = false
domain_specific_drivers_enabled = True

# Extract the domain specific configuration options from the resource backend
# where they have been stored with the domain data. This feature is disabled by
# default (in which case the domain specific options will be loaded from files
# in the domain configuration directory); set to true to enable. (boolean
# value)
#domain_configurations_from_database = false
domain_configurations_from_database = True

# Path for Keystone to locate the domain specific identity configuration files
# if domain_specific_drivers_enabled is set to true. (string value)
#domain_config_dir = /etc/keystone/domains
domain_config_dir = /etc/keystone/domains

by Adam Young at July 28, 2016 05:20 PM

AppFormix

Barcelona es Teva! Vote for Our OpenStack Summit Sessions


Wow! 1500+ session proposals! Voting is open for the OpenStack Summit Barcelona, and “Barcelona es Teva” (Barcelona belongs to you!)! Take this opportunity to speak your mind: VOTE HERE

by Jennifer Allen (jenallen@appformix.com) at July 28, 2016 03:30 PM

IBM OpenTech Team

OpenStack Keystone Newton Mid-cycle Recap

The OpenStack Keystone Newton mid-cycle took place at the Cisco offices in San Jose. Special thanks goes out to Chet Burgess and Morgan Fainberg for helping me organize and secure a great location. It was a fast-paced three-day event that brought operators and contributors together to focus on the keystone project. We had two days of intense discussion, with a final day dedicated to coding in smaller groups. Also, for the first time, we held a retrospective on the mid-cycle itself; we discussed non-technical aspects of the previous mid-cycles to determine how effective we are being. This is summed up by Dolph Mathews on his blog; my recap will pertain to the technical aspects of the mid-cycle.

Quick Summary

Over 20 keystone community members attended, including 11 core reviewers. Seven different companies were represented. We collaboratively revised and merged over 20 patches across the identity program’s five main repositories (keystone, keystoneauth, python-keystoneclient, keystone-specs and keystonemiddleware).
For a full breakdown of what we discussed see this etherpad.

Performance and Scalability

As OpenStack matures, the keystone team has seen a trend in the types of questions we have been asked. Lately, we’ve been fielding a lot of questions about how to tune or deploy Keystone for optimal performance and scalability. Rather than repeatedly give the answer “use fernet tokens and memcache”, we decided to start formal documentation on the subject, which can be viewed here.

Upgrades

Currently, to upgrade Keystone, a deployer must take all nodes offline, get the latest code, migrate their databases and restart. Ideally, we want to get to a place where rolling upgrades are possible for many Keystone nodes. The previous implementation is unable to meet the requirements of a rolling upgrade and would require downtime. As a result, we are proposing a new upgrade path for Newton (and all future releases). An operator would run a few keystone-manage commands (rolling-upgrade-start, expand, force-upgrade-complete, force-migrate-complete, and contract). This will be very similar to how the Neutron team manages their upgrades. I suspect many other OpenStack teams will begin adopting this pattern. The specification can be viewed here.
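A sketch of what that flow might look like from the operator’s chair, using the command names from the spec (still under review, so names and semantics may change before they land):

$ keystone-manage rolling-upgrade-start
$ keystone-manage expand                  # grow the schema additively; old nodes keep running
# ... upgrade the code on each Keystone node, one at a time ...
$ keystone-manage force-upgrade-complete
$ keystone-manage force-migrate-complete
$ keystone-manage contract                # drop whatever only the old release needed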

Multi-Factor Auth

A contentious subject to say the least. Identities in Keystone come from multiple places: SQL, LDAP and Federation. For LDAP and Federation, these identities are stored and managed outside of Keystone, so it makes sense that for features such as PCI and MFA that Keystone *not* be involved. However, Keystone does support storing users in SQL, and we should support using MFA should a deployer desire. There’s currently a spec up for review, and a mailing list post. The idea is that a user can authenticate by supplying their password and a TOTP passcode; with the TOTP passcode changing every few minutes. Hopefully we can come to a consensus soon and get this landed early in the Ocata release.
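For a rough idea of the shape this could take, a v3 token request listing both methods might look something like the following (a sketch based on the spec under review, not a final API):

POST /v3/auth/tokens

{
    "auth": {
        "identity": {
            "methods": ["password", "totp"],
            "password": {
                "user": {"id": "<user-id>", "password": "<password>"}
            },
            "totp": {
                "user": {"id": "<user-id>", "passcode": "123456"}
            }
        }
    }
}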

Long running operations

Long running operations have been a thorn in Keystone’s side for a while. We advocate for a short token lifespan (one hour), since they are bearer tokens after all. But we find that deployers more often than not set the token lifespan to one day or even one week. The reason for this is that some operations (snapshotting an image, backing up a large object to swift, etc.) can take a *very* long time. We may finally have a solution to this problem; expect a write-up from Jamie Lennox in the near future.

Multi-site deployments

Another question we’re seeing an uptick in is “What’s the best way to manage a multi-region keystone deployment?”. We’ve given the same answer for a number of years: leverage Galera to sync data between sites. However, this manifests two problems. The first is that we have feedback indicating that using Galera for this purpose does not scale beyond ten sites. The second is that the Keystone team must be cognizant that 90% of requests to Keystone are for issuing a token, so we should avoid writing to the database for these requests if possible. With Fernet tokens, we are much closer than ever before. We still don’t have a perfect answer, or a true indicator of how many deployers are hitting this issue, but that’s what the summits are for!

Thanks for reading, see everyone in Barcelona!

The post OpenStack Keystone Newton Mid-cycle Recap appeared first on IBM OpenTech.

by Steve Martinelli at July 28, 2016 01:33 PM

July 27, 2016

Kenneth Hui

OpenStack Load Balancing as a Service (LBaaS) Overview


Rackspace recently released version 12.2 of Rackspace Private Cloud (RPC) based on the Liberty release. In RPC v12.2, Rackspace supports Load Balancing as a Service (LBaaS). Since LBaaS is still a relatively new offering in the OpenStack community,  I want to provide more details to help users understand what LBaaS does and how it can be used in an OpenStack cloud.

Today’s post will provide a technical overview of LBaaS by answering four questions:

  1. What is load balancing and Load Balancing as a Service?
  2. How does LBaaS work in OpenStack?
  3. What does Rackspace recommend for load balancing in OpenStack?
  4. What may be the future of LBaaS in RPC?

To read more about Load Balancing as a Service in OpenStack, please click here to go to my article on the Rackspace blog site.


Filed under: Cloud, Cloud Computing, Networking, Open Source, OpenStack Tagged: Cloud, Cloud computing, LBaaS, Load Balancing, OpenStack, Rackspace

by kenhui at July 27, 2016 07:53 PM

OpenStack Superuser

OpenStack Newton release: what’s next for Heat, Keystone and Kuryr

Each release cycle, OpenStack project team leads (PTLs) introduce themselves, talk about upcoming features for the OpenStack projects they manage, plus how you can get involved and influence the roadmap.

Superuser will feature weekly summaries of the videos; you can also catch them on the OpenStack Foundation YouTube channel. This post covers Heat, Keystone and Kuryr.

Heat

Video: https://www.youtube.com/embed/15unwL4m2w8

What Heat is an OpenStack project whose mission is to orchestrate composite cloud applications using a declarative template format through an OpenStack-native REST API.

Who Thomas Herve, PTL. Day job: principal software engineer, Red Hat.

Burning issues


“Convergence is a different way of working,” Herve says. “Right now we have a built-in approach and convergence is a bit more iterative. It allows us to do interesting things, for example you can recover from failures…We’ve been working on it for almost two years and now it’s almost ready.”

What’s next

What matters in Newton

Herve also cited performance and improved template efficiency as goals for the coming cycle. “Performance is always something we can work on; there are always things we need to improve at the template level so people can do more and be more efficient as they deploy.”

Get involved!
“The main thing for us is to have feedback, it’s always interesting for us to hear from users,” he says. “Sometimes we’ll have an answer for them right away…and sometimes we won’t have an answer and it’s a way to improve the project.”

Use Ask OpenStack for general questions
For roadmap or development issues, subscribe to the OpenStack development mailing list, and use the tag [heat]
Meetings are held on IRC in #openstack-meeting on Freenode. See the Heat agenda page for times and details.

Keystone

Video: https://www.youtube.com/embed/9vmVpp0Dq4E

What Keystone is an OpenStack project that provides identity, token, catalog and policy services for use specifically by projects in the OpenStack family. It implements OpenStack’s Identity API.

Who Steve Martinelli, senior software developer at IBM and project team lead (PTL).

Burning issues


“Our new V3 APIs have all the goodness in there, there’s a whole lot of awesome work being done and a lot of operators want to take advantage of it,” Martinelli says. “Trouble is, not all projects take advantage of V3; they sometimes make janky assumptions about V2…so we’re creating a cross-project initiative to make sure they are V3-ready.”

What’s next


What matters in Newton


Responding to a question from interviewer Carol Barrett about key priorities, Martinelli says: “Given the fact that every project needs Keystone, it’s got to be fast to respond, it’s got to scale really well, and [it's got to be stable] because there are some old deployments running Keystone. We can’t have any surprises for users.”

Get involved!

Use Ask OpenStack for general questions
For roadmap or development issues, subscribe to the OpenStack development mailing list, and use the tag [keystone]
Participate in the weekly meetings: held in #openstack-meeting, Tuesdays at 18:00 UTC.

Kuryr

Video: https://www.youtube.com/embed/PGT__VmvShY

What Kuryr is an OpenStack project whose mission is to bridge container framework networking and storage models to OpenStack networking and storage abstractions.


Who Antoni Segura Puimedon, PTL. Day job: Midokura.

Burning issues


“At the Austin summit, the biggest issue was satisfying use cases that operators have when they want to operate containers — because no one wants to run just Docker,” he says. “The focus was to provide the networking for Swarm, to design and stabilize the prototype that we have for Kubernetes and start to plan for Mesos integration and how to do all of that when containers run within virtual machines.”

What’s next


“If you use these APIs for all of your workloads, your operators can leverage the knowledge they have across the board,” he says.

What matters in Newton


“One of the first integrations will be Neutron, Magnum and Kuryr,” he says. “It may be a bit difficult but if it works, it’s going to provide a lot of value to OpenStack operators.”

Get involved!
Use Ask OpenStack for general questions
For roadmap or development issues, subscribe to the OpenStack development mailing list, and use the tag [kuryr]
Participate in the meetings:
Every two weeks (on even weeks) on Tuesday at 0300 UTC in #openstack-meeting-4 (IRC webclient)
Every two weeks (on odd weeks) on Monday at 1400 UTC in #openstack-meeting-4 (IRC webclient)

Cover Photo // CC BY NC

by Superuser at July 27, 2016 07:06 PM

RDO

Recent RDO blogs, July 25, 2016

Here's what RDO enthusiasts have been writing about over the past week:

TripleO deep dive session #3 (Overcloud deployment debugging) by Carlos Camacho

This is the third video from a series of “Deep Dive” sessions related to TripleO deployments.

… read (and watch) more at http://tm3.org/81

How connection tracking in Open vSwitch helps OpenStack performance by Jiri Benc

By introducing a connection tracking feature in Open vSwitch, thanks to the latest Linux kernel, we greatly simplified the maze of virtual network interfaces on OpenStack compute nodes and improved its networking performance. This feature will appear soon in Red Hat OpenStack Platform.

… read more at http://tm3.org/82

Introduction to Red Hat OpenStack Platform Director by Marcos Garcia

Those familiar with OpenStack already know that deployment has historically been a bit challenging. That’s mainly because deployment includes a lot more than just getting the software installed – it’s about architecting your platform to use existing infrastructure as well as planning for future scalability and flexibility. OpenStack is designed to be a massively scalable platform, with distributed components on a shared message bus and database backend. For most deployments, this distributed architecture consists of Controller nodes for cluster management, resource orchestration, and networking services, Compute nodes where the virtual machines (the workloads) are executed, and Storage nodes where persistent storage is managed.

… read more at http://tm3.org/83

Cinder Active-Active HA – Newton mid-cycle by Gorka Eguileor

The OpenStack Cinder mid-cycle sprint took place last week in Fort Collins, and on the first day we discussed the Active-Active HA effort that’s been going on for a while now and the plans for the future. This is a summary of that session.

… read more at http://tm3.org/84

by Rich Bowen at July 27, 2016 03:20 PM

Red Hat Stack

TripleO (Director) Components in Detail

In our previous post we introduced Red Hat OpenStack Platform Director. We showed how at the heart of Director is TripleO, short for “OpenStack on OpenStack”. TripleO is an OpenStack project that aims to utilise OpenStack itself as the foundation for deploying OpenStack. To clarify, TripleO advocates the use of native OpenStack components, and their respective APIs, to configure, deploy, and manage OpenStack environments.

The major benefit of utilising these existing APIs with Director is that they’re well documented, they go through extensive integration testing upstream, and they are the most mature components in OpenStack. For those who are already familiar with the way that OpenStack works, it’s a lot easier to understand how TripleO (and therefore Director) works. Feature enhancements, security patches, and bug fixes are automatically inherited into Director, without us having to play catch-up with the community.

With TripleO, we refer to two clouds. The first to consider is the undercloud: the command-and-control cloud in which a smaller OpenStack environment exists whose sole purpose is to bootstrap a larger production cloud. That production cloud is known as the overcloud, where tenants and their respective workloads reside. Director is sometimes treated as synonymous with the undercloud; Director bootstraps the undercloud OpenStack deployment and provides the necessary tooling to deploy an overcloud.

undercloud vs overcloud

Ironic+Nova+Glance: baremetal management of overcloud nodes

For proper baremetal management during a deployment, Nova and Ironic need to be in perfect coordination. Nova is responsible for the orchestration, deployment, and lifecycle management of compute resources, for example, virtual machines. Nova relies on a set of plugins and drivers to establish compute resources requested by a tenant, such as the utilisation of the KVM hypervisor.

Ironic started life as an alternative Nova “baremetal driver”. Now Ironic is its own OpenStack project, and complements Nova with its own API and command-line utilities. Once the overcloud is deployed, Ironic can also be offered there to tenants that need bare metal nodes on dedicated hardware outside of Nova’s compute pools. Here, in Director’s context, Ironic is a key core component of the undercloud, controlling and deploying the physical nodes that are required for the overcloud deployment.

But first Director has to register the nodes with Ironic. One has to catalog the IPMI (out-of-band management) details: its IP address, username and password. There are also vendor-specific drivers, for example HP iLO, Cisco UCS, Dell DRAC. Ironic will manage the power state of bare metal nodes used for the overcloud deployment, as well as the deployment of the operating system (via a PXE-bootable installer image).
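In practice, registration is typically driven by a JSON description of each node that Director imports on your behalf. A minimal sketch for a single IPMI-managed node (all values are placeholders):

{
    "nodes": [
        {
            "pm_type": "pxe_ipmitool",
            "pm_addr": "192.0.2.50",
            "pm_user": "admin",
            "pm_password": "secret",
            "mac": ["52:54:00:12:34:56"]
        }
    ]
}

Something like openstack baremetal import --json instackenv.json then hands this catalog to Ironic.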

The disk image used during hardware bootstrap is taken from the undercloud Glance image service. Red Hat provides the required images to be deployed in the overcloud nodes. These disk images typically contain Red Hat Enterprise Linux and all OpenStack components, which minimises any post-deployment software installation. They can, of course, be customised further prior to upload into Glance. For example, customers often want to integrate additional software or configurations as per their requirements.

Neutron: network management of the overcloud

As you may already know, Neutron provides network access to tenants via a self-service interface to define networks, ports, and IP addresses that can be attached to instances. It also provides supporting services for booting instances such as DHCP, DNS, and routing. Within Director, we firstly use Neutron as an API for defining all overcloud networks, any required VLAN isolation, and associated IP addresses for the nodes (IP address management).

Secondly, we use Neutron in the undercloud as a mechanism for managing the network provisioning of the overcloud nodes during deployment. Neutron will detect booting nodes and instruct them to PXE boot via a special DHCP offer, and then Ironic takes over responsibility for image deployment. Once the image is deployed, the Ironic deployment image reboots the machine to boot from the hard drive, so it’s the first time the node boots by itself. The node then executes os-net-config (from the TripleO project) to statically configure the operating system with its IP address. Despite that IP being managed by the undercloud’s Neutron DHCP server, it is actually set as a static IP in the overcloud node’s interface configuration. This allows for configuration of VLAN tagging, LACP or failover bonding, MTU settings and other advanced parameters from the Director network configuration. Visit this tutorial for more information on os-net-config.
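To give a flavour of what os-net-config consumes, here is a minimal sketch of its YAML format for a single statically addressed interface (interface name and address are placeholders; in a real deployment Director renders this from the network environment templates):

network_config:
  - type: interface
    name: nic1
    use_dhcp: false
    addresses:
      - ip_netmask: 192.0.2.10/24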

Heat: orchestrating the overcloud deployment steps

The most important component in Director is Heat, which is OpenStack’s generic orchestration engine. Users define stack templates using plain YAML text documents, listing the required resources (for example, instances, networks, storage volumes) along with a set of parameters for configuration. Heat deploys the resources based on a given dependency chain, sorting out which resources need to be built before the others. Heat can then monitor such resources for availability, and scale them out where necessary. These templates enable application stacks to become portable and to achieve repeatability and predictability.
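As a tiny illustration of the format (a generic sketch, not one of Director’s own templates), a stack template declaring a single server resource looks like this:

heat_template_version: 2015-04-30

parameters:
  flavor:
    type: string
    default: m1.small

resources:
  my_server:
    type: OS::Nova::Server
    properties:
      image: rhel-guest-image   # placeholder image name
      flavor: { get_param: flavor }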

Heat is used extensively within Director as the core orchestration engine for overcloud deployment.  Heat takes care of the provisioning and management of any required resources, including the physical servers and networks, and the deployment and configuration of the dozens of OpenStack software components. Director’s Heat stack templates describe the overcloud environment in intimate detail, including quantities and any necessary configuration parameters. This also makes the templates versionable and programmatically understood: a truly Software Defined Infrastructure.

Deployment templates: customizable reference architectures

Whilst not an OpenStack service, one of the most important components to look at is the actual set of templates that we use for deployment with Heat. The templates come from the upstream TripleO community in a sub-project known as tripleo-heat-templates (read an introduction here). The tripleo-heat-templates repository comprises a directory of Heat templates plus the Puppet manifests and scripts required to perform certain advanced tasks.

Red Hat relies on these templates with Director and works heavily to enhance them to provide additional features that customers request. This includes working with certified partners to confirm that their value-add technology can be automatically enabled via Director, thus minimising any post-deployment effort (for more information, visit our Partners’ instructions to integrate with Director). The default templates will stand up a vanilla Red Hat OpenStack Platform environment, with all default parameters and backends (KVM, OVS, LVM or Ceph if enabled, etc.).

Director offers customers the ability to easily set their own configuration by simply overriding the defaults in their own templates, and also provides hooks in the default templates to call additional code that organisations may want to run. This could include installing and configuring additional software, making non-standard configuration changes that the templates aren’t aware of, or enabling a plugin not supported by Director.
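The pattern is simply to pass extra environment files at deploy time. A small sketch (the parameters shown illustrate the mechanism rather than a recommended configuration):

# my-overrides.yaml
parameter_defaults:
  ControllerCount: 3
  NtpServer: pool.ntp.org

$ openstack overcloud deploy --templates -e my-overrides.yaml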

 

In our next blog post we’ll explain the Reference Architecture that Director provides out of the box, and how to plan for a successful deployment.

by Marcos Garcia - Principal Technical Marketing Manager at July 27, 2016 11:00 AM

OpenStack Superuser

Northern Virginia User Group scales to embrace community interest

OpenStack celebrates its 6th birthday this month. Find a party near you from OpenStack's worldwide list and raise a glass.

We’re also celebrating with the OpenStack community with short interviews from around the world -- from Durban to New York -- to offer a glimpse of OpenStack's impact on a local level.

The third interview shines a spotlight on the Northern Virginia User Group (NOVA), one of the fastest-growing groups and a crossroads of tech and government. You also might say that locals, whose tourist board beckons visitors to "Live Passionately," have a growing ardor for OpenStack. The group just recently made it official and is blowing out the candles to honor OpenStack July 28 with a networking social featuring trivia, games, prizes, cupcakes and music.

In this three-minute Superuser TV segment, Shilla Saebi, a community development lead for OpenStack at Comcast and OpenStack NOVA coordinator, shares the love of locals for OpenStack.

“Most of the companies represented in our user group are private clouds, public clouds; we also have telecom providers,” she said. “We have people who are just experiencing or trying to get to know OpenStack or doing testbeds for OpenStack for their company.”

During her tenure as coordinator of NOVA, Saebi has seen the User Group grow to include almost 500 members.

“We are definitely growing rapidly,” she said, noting that around 10 new people sign up every week and typical meetups count 40 to 80 people. The RSVP to attendance rate has also ballooned to 70-80 percent, leading to a few growing pains. "We grew so fast that we're looking for new space to hold our meetups, our last one was at a different location."

From experts to novices and organizations to individuals, Saebi says she expects the supersizing of the NOVA community to continue.

Video: https://www.youtube.com/embed/TZmRqOuMVXQ

Don't see a local user group or birthday celebration? Get involved and learn how to start one in your local community.

Cover Photo CC/NC

by Superuser at July 27, 2016 12:10 AM

July 26, 2016

OpenStack Superuser

Why infrastructure is the key to OpenStack's Big Tent

Elizabeth K. Joseph has always wanted to move things with her mind, but for now, she is moving to empower women in OpenStack. Since joining Hewlett Packard Enterprise in 2013, she has been a contributing member of the OpenStack Infrastructure team and continues to be a voice for gender equality in the technology field with the Women of OpenStack.

She talks to us about her role in the OpenStack community, why infra matters and bringing open source to new audiences.

If you could be a superhero, what would your superpower(s) be?

My nickname on IRC is "pleia2," as in Princess Leia from Star Wars, so probably telekinesis. I want to move things with my mind, The Force style! Plus, it would be really helpful for moving and dealing with luggage.

What's your role (roles) in the OpenStack community?

I work for Hewlett Packard Enterprise (HPE) as a core member of the OpenStack Infrastructure team, which I joined at the beginning of 2013. A Linux systems administrator by trade, I work on the team to run the fleet of Linux-based servers that the OpenStack developers interact with every day. This means the servers running the continuous integration services (Gerrit, Zuul, Nodepool), plus all the other platforms OpenStack developers interact with, from the Etherpad to the IRC bots and tooling for translations. We also adhere to an open source model with the infrastructure, which I’m really proud of. Every change we make is stored in public git repositories. Plus, just like every other project in OpenStack we require review and automated testing before a change is merged. We also follow the specification process like the rest of the project, which helps our team make decisions about where we should be spending our time. The services we run are documented here and our specs are available here.

There are always a lot of debates in the OpenStack community - which one is the most important, now?

As a member of the infrastructure team, I’m most interested in discussions centered around how an increasing number of projects in the Big Tent are using the infrastructure. The infrastructure team is growing (I’m thrilled to say that Ricardo Carrillo Cruz of HPE and Ian Wienand of Red Hat have just joined us as core members this month), but not fast enough to keep up with the demand on our services.

We need more companies and organizations making a commitment to supporting the OpenStack Infrastructure if we’re going to continue to provide services to all these projects or we’ll need to make hard decisions about what we can realistically support given our current workload and limited staff.

What obstacles do you think women face when getting involved in the OpenStack community?

The vast majority of obstacles that women face are the same as for anyone joining the OpenStack community. The tooling takes some time and patience to learn, and submitting your first patch can be intimidating, particularly as you suddenly have strangers reviewing your work in public! Even as someone who has done a lot of work in open source for many years, it took time for me to get used to it.

Drilling down into specific challenges women face, I’d say a big one is not seeing many women in leadership positions in the project. It can be difficult to see yourself succeed when you don’t see other people of your gender blazing the trail ahead. Things are slowly shifting, and with women-focused events and panels at events like the OpenStack Summit we are highlighting the work of more women, and it feels like more are participating in the project than when I began.

Women in technology also frequently find that it can be difficult to find mentors in their workplace and throughout their career, which trickles down to involvement with OpenStack. Over the past couple years I’ve mentored several women in the community through formal mentoring programs and also more casually, but that work hardly puts a dent in the number of women who want to participate and need guidance. Formal programs like Outreachy and the Google Summer of Code need men and women to help as mentors. My employer, HPE, has also supported a few rounds of a scholarship program for women in college who wish to work on OpenStack. I’ve been honored to serve as a mentor for this program and just recently saw the final presentation from one of the students, who had been working on the Horizon dashboard.

You tweeted about presenting at the Twenty Eighth South Asia Network Operators Group (SANOG 28) in early August. Can you give a brief preview of what you will speak about?

As someone who usually speaks at open-source conferences, I’m very excited to be reaching a different audience with my upcoming tutorial at SANOG 28. Rather than folks with an open-source background, I’ll be speaking to network professionals in south Asia who are building the networks for some of the largest and fastest growing metropolitan areas in the world. As I introduce OpenStack to these folks, I’ve designed the tutorial to be broken up into three phases:

  • An introduction to various ways OpenStack can be used, from high-functioning fleets of compute nodes to block or object storage arrays to managing a datacenter with bare metal tooling. This introduction will also present the audience with examples of organizations that are using it for all these things in production. I admit that I’ve depended upon OpenStack Summit keynote presentations to find these examples.
  • A series of demonstrations of each service I’m covering using the Mitaka branch of DevStack as a base, for simplicity's sake. I’ll walk them through how to spin up an instance, create and mount a block storage device on an instance, how to upload and retrieve files from object storage and more.
  • The tutorial will conclude by talking about some of the official tooling now available to move from tests in DevStack to starting a deployment in production. Starting with configuration management, OpenStack now has Puppet, Chef, Ansible and Juju projects governed by the OpenStack Technical Committee. Beyond specific tooling, I’ll also talk about some of the team member expertise and decisions that will be needed when an organization approaches the deployment and maintenance of OpenStack.

The audience members will have my slides and DevStack instructions for various services to go home with so they can learn more at work and home.

There are many efforts to get women involved in tech, what are some initiatives that have worked and why?

I mentioned mentoring already, but I really believe in those programs. Having supportive mentors who can help you through new challenges and promote your work is incredibly valuable. Sometimes you just need someone who you know you can ask for help from without feeling self-conscious about it. Or someone who will encourage you to submit a talk to a conference (like the OpenStack Summit!) and help you find the resources you need to write your presentation and give the talk. The confidence that a good mentor can provide can make a world of difference.

Why do you think it's important for women to get involved with OpenStack?

Studies have shown that diverse groups are better at solving problems. People of different genders, cultures and races bring different perspectives and ideas to the table. Difference invites us to think beyond the needs of people in our own countries and life situations. We want OpenStack to be the best product out there for organizations looking to build clouds, so we should set ourselves up for success by fostering this valuable team diversity. As for individual women, there is an incredible amount of personal growth to be gained by becoming a contributor to the OpenStack project, whether you’re an operator, developer, translator, technical writer, project manager or any of the other various roles we have in our community. Cloud is a fast-paced ecosystem with a vast marketplace of companies hiring for a variety of positions, and now is a great time to have OpenStack experience on your resume.

This post is part of the Women of OpenStack series spotlighting the women in various roles in our community who have helped make OpenStack successful. With each post, we learn more about each woman’s involvement in the community and how they see the future of OpenStack taking shape. If you’re interested in being featured, please email editor@openstack.org.

Cover Photo // CC BY NC

by Isaac Gonzalez at July 26, 2016 05:09 PM

Mirantis

What is Driving OpenStack Adoption in 2016?

The post What is Driving OpenStack Adoption in 2016? appeared first on Mirantis | The Pure Play OpenStack Company.

What is motivating OpenStack users in 2016?  Talligent and the OpenStack Foundation independently conducted user surveys to understand current thinking about the OpenStack cloud platform, including motivations and concerns around adoption, expected use cases and cloud services, and where OpenStack adoption fits in relation to public cloud platforms.

In both surveys, users consistently list cost savings as the key business driver for adopting OpenStack cloud.  This result isn’t a huge surprise, of course, as “doing more with less” is the motivation for a lot of IT projects.  The key questions are whether adopters of OpenStack should expect to achieve cost savings over legacy and public cloud alternatives, and how to demonstrate cost savings to the project owners (to encourage use of the private cloud) and the Executives (to convince them to keep writing checks).

2016 User Survey Says the Top Business Driver is “Save Money”

Just like in the Fall 2015 user survey, the top business driver for choosing OpenStack is to “Save money over alternative infrastructure choices.”  This could be savings over public cloud alternatives, or savings over legacy IT infrastructure technologies.  Either way, to demonstrate that the OpenStack cloud is actually saving the organization money, the operator has to be able to assess and track the cost of cloud services delivered from that OpenStack cloud environment.

There has been a big rush to public clouds over the last few years, partially due to the ease of provisioning resources and the ability to quickly spin up or down according to project requirements.  As public cloud consumption has grown, the expense is now a real and material cost to the organization.  For larger organizations, public cloud expenses are no longer distributed at the individual developer level where they were more easily expensed, but have reached the point where third-party tools are required to provide visibility and budget tracking across the entire organization.

At the same time that public cloud adoption has taken off, the OpenStack platform has matured and deployment and management burdens have eased.  A similarly responsive IT service delivery experience can be provided behind the firewall at a fraction of the cost, and the growth in the number of organizations using OpenStack in production proves it.

This motivation for cost savings mirrors what Talligent uncovered in their recent survey in January 2016.  The top response to the question “what is motivating you” was that “Public cloud cost is too high,” with 50% of respondents also mentioning that “Legacy IT costs are too high.”

OpenStack private clouds bring the promise of lower costs and improved responsiveness (another key driver for OpenStack adoption), but executing efficiently and achieving the goal of lowering costs is tricky.  OpenStack can be complicated, and visibility in the native services is limited.

Can I save with OpenStack?

If it were not possible to save with OpenStack, adoption would be much more limited, likely focused strictly on use cases where a public cloud is not an option due to security or compliance reasons (healthcare, financial services, or government).  Without the cost savings, other industries and use cases would not be able to justify the expense, and would stick with legacy solutions or migrate most workloads to public clouds.

Red Hat has shown that considerable cost savings can be achieved over non-OpenStack private cloud options.  Similarly, Talligent has first-hand experience with customers using OpenStack to provide a comparable experience to on-demand public cloud at a fraction of the cost.  For a well-run OpenStack cloud, their customers have found that instance charges around half those of a public cloud alternative will cover their expenses.  Virtual instances on public clouds can initially appear inexpensive, but networking and storage can easily double the cost.  The capital cost of compute and storage for a private cloud can be paid back in a matter of months after a switch from public cloud services.  Additional cost savings come from more flexibility in the way infrastructure is provided, so that resources can be closely aligned with requirements.
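To make the payback arithmetic concrete, here is a purely illustrative back-of-envelope example (all numbers invented): suppose a team spends $40,000 per month on public cloud compute, networking and storage, and a private cloud can deliver comparable capacity at roughly half that running cost.

payback (months) = capital cost / monthly savings
                 = $120,000 / ($40,000 - $20,000)
                 = 6 months

That six-month horizon is consistent with the “matter of months” experience described above.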

Of course, your mileage may vary.  As you might expect, savings are highly dependent on utilization of the hardware and on spreading the capital and operational costs across as many workloads as possible.  Maintaining high utilization requires the just-in-time addition of new resources to support growth in the workloads running on the cloud, and just-in-time capacity planning requires clear visibility of available headroom and of cloud consumption growth trends.


How do you know if you are saving money?

As investments in OpenStack grow and become material, understanding and tracking cloud costs becomes a critical requirement for efficient cloud management.  From Talligent’s recent survey, however, nearly 70% of respondents who had adopted or were evaluating OpenStack listed cost management as a top challenge.

This is partly because OpenStack doesn’t natively track the costs of your cloud infrastructure or services running on top of it. The OpenStack services do not provide a history of cloud resource consumption or growth trends across tenants, making it difficult to plan consumption requirements.  An OpenStack billing solution is required to collect appropriate metrics from the service APIs, provide a flexible ratings engine for cost accounting, and provide actionable capacity management insights.
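As a rough illustration of the raw data such a solution starts from, the Compute API’s simple tenant usage extension can be queried with the standard nova CLI (this is not Openbook itself, and a rating engine still has to turn these numbers into money):

  nova usage-list --start 2016-06-01 --end 2016-07-01
  # -> one row per tenant: Servers, RAM MB-Hours, CPU Hours, Disk GB-Hours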

The Public Cloud Experience

From the OpenStack User Survey, we know that app users are generally not limited to the OpenStack environment and will interact with public cloud options.

Like improved responsiveness and self-service, cost visibility is expected by cloud tenants, whether the cloud is private or public.  Public cloud providers readily provide detailed cost and consumption information to your tenants.  The providers are transparent with their charges, making it easy for app users to view and compare what they are spending across public clouds.  It is important to provide your tenants similar information for their OpenStack private cloud usage.  App users need the information to make informed decisions about where to place workloads and what the impact of their decisions will be on the organization.  Executives need similar visibility so that they can budget for IT infrastructure growth in the platforms where it is needed.

How can private cloud demonstrate the same level of cost detail as public cloud?

In order to get detailed cost information, you’ll need to add a separate service or application. One example of such an application is Talligent’s Openbook, which enables operators to track costs and utilization of their OpenStack clouds, and to present those costs alongside the relevant costs from your AWS and VMware consumption. Other options are available from various vendors.

Whichever you choose, however, adding additional services to track costs isn’t just a means for keeping users in line; in fact, discouraging cloud use is the opposite of what you want. It’s a means for providing a more efficient, more smoothly running cloud that provides better return on investment.

John Meadows is VP of Business Development for Talligent and has spent most of the last decade in sales and marketing roles for virtualization and related management technologies.  Openbook from Talligent is billing, chargeback and capacity planning software for cloud services delivered on OpenStack.  To learn more about cost and capacity management for OpenStack and hybrid cloud environments, please visit our website at Talligent.com, contact us at openbook@talligent.com, or come see us at an upcoming OpenStack event.

The post What is Driving OpenStack Adoption in 2016? appeared first on Mirantis | The Pure Play OpenStack Company.

by Guest Post at July 26, 2016 03:34 AM

OpenStack Murano Review: Issue #1


Hello, Stackers! To recognize Murano as a separate community project with its own agenda (one that occasionally intersects with that of the Community App Catalog), we’ve decided to launch a new monthly digest dedicated to the Murano project. Please enjoy our first issue.

What’s up in the Community?

Let’s start by looking at the community itself.

Murano staff news

Because OpenStack is about people, let’s start by announcing changes to Murano’s core team.


First of all, let us welcome a new core reviewer on board: Zhu Rong from 99Cloud. Did you know that Zhurong, according to ancient Chinese mythology, is the name of the God of Fire? If the god is one of us, is this a good moment to say: “In Code We Trust”?

Also, Alexander Tivelkov from Mirantis has now joined the core team in recognition of his work on the scalable framework architecture, which is one of the most notable features scheduled for the Newton release.

Finally, Steve McLellan has been removed from the Murano core team. Steve has been part of Murano from the very early stages; however, his focus has since shifted. We all appreciate Steve’s contributions and hope to see him back on the project in the future.

Murano technical news

Now let’s move on to the tech news, with two main points to note. The first is that all of Murano is now Python 3.4-compliant. The second, and most important, news item concerns Igor Marnat’s initiative, which we described in App Catalog Review #5. He described a number of improvements to the process of developing OpenStack apps and distributing them via the Community App Catalog. (You can find more details on this in the Etherpad.)

The Murano team has made a huge step forward since then. Sergey Kraynev wrote a letter describing it in detail. The key change is that the Murano core team has agreed to give up control of the repositories containing the source code for Murano applications, as the first step in separating Murano applications from the Murano core project itself. We’ll keep you updated and will include the full story in the second issue of the Murano Review.

Taking action

If you want to join Murano meetings, please remember that we have weekly meetups in #openstack-meeting-alt on Tuesdays at 17:00 UTC. The Wiki provides more details.

Flashback: Everything you need to know about Murano discussions in Austin

We’re currently awaiting voting on Barcelona summit proposals, but meanwhile let’s catch up on what happened at the last one, in Austin.

A quick OpenStack Summit summary from Kirill Zaytsev

“During the Murano Fishbowl, the team presented an overview of features implemented in Mitaka, went through priorities for Newton, and had an opportunity to gather some user feedback from Murano users and contributors.

During the Newton cycle, the team is going to manage Murano’s community involvement with the help of Etherpad, making priorities more transparent and enabling easier collaboration from external contributors. The team also received valuable feedback on certain aspects of the Murano user experience.

During Murano’s first work session, the team discussed the future of the murano-apps ecosystem and agreed to push the CI/CD Murano app toolchain into a separate repo (as mentioned in the previous section) and then use those CI/CD tools as the starting point for the centralized CI/CD pipeline in which Murano apps can be developed, tested, and published to apps.openstack.org.  

During the second work session, we discussed the current state of the Scalable App Framework.

We also had an interesting discussion with the interoperability/DefCore team. They’re currently considering vertical certification programs. If we ever want a certification program for OpenStack Application Catalog-compatible clouds, it could be implemented as one such interop program. The other way is to win inclusion in the regular OpenStack certification programs by increasing adoption.

We also attended several sessions of the Stable Branch Maintenance team, which gave an overall positive fresh take on how we could view backports. The Neutron team, for example, treats any Closes-Bug commit as a potential backport and has certain toolsets for backporting/validating such bugs. One practice we should definitely pick up from other projects is leaving 5-10 empty migrations for backports after a stable branch is cut. A proposal suggested EOLing branches at 24 months instead of 18 months. The main problem is the manpower needed to support those, so at least for Kilo, the next release is going to be final. So, yes, OpenStack is about people.”

Murano-related talks in Austin

If you missed the summit — or if you just didn’t have time to get to them — you can also view videos of the Murano-related talks:

Deep dive from Alexander: three Murano design summit sessions

Murano had three design summit sessions — separate from the main conference — two of them with some good attendance and useful outcomes.

The “What’s new” fishbowl drew wide audience participation, and Kirill started it off by gathering feedback from contributors and adopters. The feedback was quite constructive, including a set of specific technical issues. Most of them we already knew of and had captured in Murano’s backlog, but the feedback helped us understand the proper prioritization of these issues.

The second session was dedicated to two topics related to the murano-apps repo and the applications that currently live there, as Sergey Kraynev briefly mentioned above. We discussed:

1) What we should do with our murano-apps project/repository and

2) What should the governance (repo and infra location, project group, etc.) be for the upcoming murano-CI/CD application stack being developed by Sergey and his team.

For the first topic, the group agreed that the apps supported by the core Murano team (wherever they are located) should be just the demo applications showing various features and capabilities of Murano and demonstrating the best coding guidelines and design principles. Because of that, the current set of applications should be reduced since most of the apps in the murano-apps project duplicate each other in terms of Murano features being used (e.g. there is no sense in having both Apache and Tomcat applications which differ only in the set of VM-side config scripts).

We’ve also agreed to consider moving these demo apps into the main Murano repository and using them for testing Murano in its check and gate jobs. Meanwhile, the “production grade” Murano applications (of which the most well-known in the murano-apps repo is the Kubernetes app) should be moved out into their own repos so they may have their own dedicated development/reviewing teams, bug trackers, release cycles, tags, branches, and so on. Eventually, we plan to deprecate the murano-apps repository.

Regarding the second topic, the group agreed to move this stack of applications into a standalone project/repo under the OpenStack umbrella (say, openstack/murano-ci-cd-pipeline). At the beginning, that project will contain all four of the related Murano apps (Jenkins, Gerrit, OpenLDAP, DNS app), but when they are mature enough, we may consider splitting them into their own independent projects.

The third Murano session didn’t work out: due to low attendance, the demo of the scalable framework prototype which was planned for this session didn’t happen.

The last day was reserved for contributors’ meetups: informal gatherings of the upstream contributors where they could discuss various kinds of open questions and have some generic conversations and other social activities. The Murano team didn’t request a meetup room at this summit because we had completed all our discussions with contributors during the first four days of the event, so we had no specific agenda for the last day. However, there were some other important sessions besides the meetup, and that’s where we played a significant role.

The most interesting one was the Enterprise Working Group work session, a gathering of people who help enterprises adopt cloud technologies. Attendance was quite large, and we discussed how to properly understand and address the needs of large businesses in the cloud, how to educate them, and how to help move their apps to the cloud.

Of particular interest to us was an initiative to create reference architectures for the workloads to be run on clouds. Right now, the working group is finalizing the reference architectures for a 3-tier app, an eCommerce solution, and a Big Data processing solution. Soon these descriptions will be published on openstack.org/marketplace or elsewhere.

The Murano team has volunteered to take part in this process: as soon as the design for these reference architecture patterns is ready, we will implement them as a set of Murano applications and publish them on apps.openstack.org. We’ll keep you updated on this process.

P.S.:

Please feel free to provide us with feedback. For example, let us know what Murano-related topics you are interested in. Also, if you have some user experience with Murano, you are more than welcome to share it with us. Frankly speaking, our dream is to base part of this digest on real use cases, and we’ll do our best to make that dream a reality.

The post OpenStack Murano Review: Issue #1 appeared first on Mirantis | The Pure Play OpenStack Company.

by Ilya Stechkin at July 26, 2016 03:11 AM

Carlos Camacho

Testing instack-undercloud submissions locally

This post is to describe how to run/test gerrit submissions related to instack-undercloud locally.

For this example I’m going to use this submission: https://review.openstack.org/#/c/347389/

The following steps allow you to test instack-undercloud submissions in a working environment.

  ./tripleo-ci/scripts/tripleo.sh --delorean-setup
  ./tripleo-ci/scripts/tripleo.sh --delorean-build openstack/instack-undercloud
  cd tripleo/instack-undercloud/
  #The submission to be tested
  git review -d 347389
  cd
  ./tripleo-ci/scripts/tripleo.sh --delorean-build openstack/instack-undercloud
  rpm -qa | grep instack-undercloud
  sudo rpm -e --nodeps <old_installed_instack-undercloud>
  find tripleo/ -name "*rpm"
  sudo rpm -iv --replacepkgs --force <located package>
  #Here we need to check that the changes were actually applied.
  #What I usually do is search for the updated files using locate
  #and manually check that the changes are OK.
  sudo rm -rf /root/.cache/image-create/source-repositories/*
  sudo rm -rf /opt/stack/puppet-modules

Now, if a puppet-tripleo change is also needed, you can export the following environment variables before re-installing the undercloud.

  export DIB_INSTALLTYPE_puppet_tripleo=source
  export DIB_REPOLOCATION_puppet_tripleo=https://review.openstack.org/openstack/puppet-tripleo
  export DIB_REPOREF_puppet_tripleo=refs/changes/XX/XXXXX/X

Now, we just need to run the installer.

  ./tripleo-ci/scripts/tripleo.sh --undercloud

Once this process completes, the output should be something similar to:


#################
tripleo.sh -- Undercloud install - DONE.
#################

by Carlos Camacho at July 26, 2016 12:00 AM

July 25, 2016

Graham Hayes

Equal Opportunities for all OpenStack Projects

So, two weeks ago, I dropped a TC motion and a mailing list post and waited for the other shoe to drop.

I was pleasantly surprised - no one started shouting at me - but by trying to not point fingers at individual teams I made the text too convoluted.

So, in an effort to clarify things, here is an overview of what has been said so far, both in the mailing list and the gerrit review itself.

Feedback

... does this also include plugins within projects, like storage backends in cinder and hypervisor drivers in nova?

No - this was not clear enough. This change is aimed at projects that are points of significant cross-project interaction. While there may come a point in the future where Nova compute drivers are developed out of tree (though I doubt it), that is not happening today. As a result, there are no projects in the list that would need to integrate with Nova.

Could you please clarify: do you advocate for a generic plugin interface for every project, or that each project should expose a plugin interface that allows plugin to behave as in-tree components? Because the latter is what happens with Tempest, and I see the former a bit complicated.

For every project that has cross project interaction - tempest is a good example.

These projects should either allow all projects in tree (like Nova, Neutron, Cinder, etc. are in Tempest today), or they should have a plugin interface (like they currently do) - but then all projects must use it, and not use parts of Tempest that are not exposed in that interface.

This would mean that tempest would move the nova, neutron, etc tests to use the plugin interface.

Now, that plugin could be kept in the tempest repo, and still maintained by the QA team, but should use the same interface as the other plugins that are not in that repository.

Of course, it is not just tempest - an incomplete list looks like:

  • Tempest
  • Devstack
  • Grenade
  • Horizon
  • OpenStack Client
  • OpenStack SDK
  • Searchlight
  • Heat
  • Mistral
  • Ceilometer
  • Rally
  • Documentation

And I am sure I have missed some obvious ones. (if you see a project missing let me know on the motion)

I think I disagree here. The root cause is being addressed: external tests can use the Tempest plugin interface, and use the API, which is being stabilized. The fact that the Tempest API is partially unstable is a temporary thing, due to the origin of the project and the way the scope was redefined, but again it's temporary.

This seems to be the core of a lot of the disagreement - this is only temporary, it will all be fixed in the future, and it should stay this way.

Unfortunately, the discrepancy between projects is not temporary. The specific problems I have highlighted in the thread for one of the projects are temporary, but I believe the only long-term solution is to remove the difference between projects.

Before we start making lots of specific rules about how teams coordinate, I would like to understand the problem those rules are meant to solve, so thank you for providing that example. ... It's not clear yet whether there needs to be a new policy to change the existing intent, or if a discussion just hasn't happened, or if someone simply needs to edit some code.

Unfortunately, there is big push-back from some of the projects on editing code to help plugins. Again, the differing access between projects will continue to exacerbate the problem.

"Change the name of the resolution"

—(Paraphrase from a few people)

That was done in the last patchset. I think the Level Playing Field title bounced around my head from the other resolution that was titled Level Playing Field. It may have been confusing alright.

Other Areas

I feel like I have been picking on tempest a little too much, it just captures the current issues perfectly, and a large number of the community have some knowledge of it, and how it works.

There are other areas across OpenStack that need attention as well:

Horizon

Privileged projects have access to many more Horizon panels than plugins do (service status, quotas, overviews, etc.). Plugins also have to rely on tarballs of Horizon.

OpenStack Client

Privileged projects have access to more OpenStack CLI commands, as plugins cannot hook into them (e.g. quotas).

Grenade

Plugins may or may not have their Tempest tests run (I think that patch merged); to get the tests to run at that point, they have to use parts of Tempest that, I was told, plugins explicitly should not use.

Docs

We can now add install guides and hook into the API Reference and API guides. This is great - and I am really happy about it. We still have issues trying to integrate with other areas in docs, and most projects without docs privileges end up with massive amounts of user docs in docs.openstack.org/developer/<project>, which is not ideal.

by Graham Hayes at July 25, 2016 09:25 PM

OpenStack Blog

Technical Committee Highlights July 17, 2016

This update is dedicated to our recent in-person training for the Technical Committee and additional community leaders.

Reflections on our Leadership Training Workshop

In mid-July 2016, 20 members of the OpenStack community, including Technical Committee members current and past, PTLs current and past, other community members, and additional supporting facilitators, met for a two-day training class at ZingTrain in Ann Arbor, Michigan. The ZingTrain team, joined by founder Ari Weinzweig, inspirational IT manager Tim Root, and various Zingerman’s servant leaders, shared a unique approach to business that matches OpenStack’s quite well.

I’m not usually one to gush about a training workshop. But wow, please allow me to gush. To tell the truth, many of us went to this workshop despite thinking we might have to learn how to make sandwiches. Or talk about feelings instead of solving problems. Or even decide too much at once without enough input. And so on. But we got over our fears and had a great time and excellent food and service from Zingerman’s ZingTrain team. Many thanks to Ann Lofgren and Timo Anderson for facilitating the workshop, and much gratitude and admiration to Colette Alexander for recognizing a match and meeting a need.

What did we learn?

We learned about stewardship and put it in the context of OpenStack. I had to look up a definition for this term, even though I’m a word nerd. Stewardship is an “ethic that embodies the responsible planning and management of resources” and it matches so well with what we need to do as leaders in the OpenStack community. The group decided we needed to continue this sort of work after the training and so the stewardship working group has formed. We met for the first time this week and listed many items on a to-do list that we’ll cull and prioritize as we go.


We learned what servant leadership looks like, when instead of a top-down triangle to represent an organization’s hierarchy, you invert it and have all of your leaders serve their teams. The basic idea is that it is the role of a leader to serve the organization, and you aren’t promoted in order to have others serve you. In the article, A recipe for servant leadership, Ari Weinzweig explains servant leadership with: “To paraphrase John Kennedy’s magnificent 1961 inaugural speech; ‘ask not what your organization can do for you, ask what you can do for your organization.'” In an open source community like ours, this looks like PTLs who review code in order to teach and onboard more contributors instead of writing all the code themselves, or who generally support the needs of their team so it can achieve its best.

We learned about consensus, where you first agree to how decisions are made. Consensus can also mean that 18 people who are business partners might not all completely agree on the solution, but are enough convinced by their peers to fully support the decision and make it happen. We are certainly provoked by this concept, considering that we use majority voting today, and TC members will abstain if they can’t agree with a resolution. We want to work on this aspect going forward.

Each of us took away important aspects of consensus-made decisions. At ZingTrain we learned that the original agreement between their partners was to “live with” and not simply “live by” group decisions, even when the decision wasn’t their personal first choice. In this way, the goal with consensus is not to reach a point where a majority opinion ensures people do what the decision indicates, but to ensure that everyone is as comfortable as possible with the decision itself. This aspect felt especially important in the OpenStack context, where people may feel they are putting up with a decision that was made, but haven’t inherently agreed to live with it. We also learned that another component of this decision-making process was that any disagreement must include a counter-proposal.


We learned about visioning. Now, realize that vision training is an entire subset and workshop in itself. We needed to learn first and get introduced to the idea. What is an effective vision? How can we put what we’ve learned into an OpenStack context?

In a vision, you describe the future state you want as if it happened. A vision is:

  • Inspiring to all that are involved in implementing it.
  • Strategically sound, as in, we have a decent shot at making the vision reality.
  • Documented. You must write it down to make it real and make it work.
  • Communicated. You can’t just write it down and expect people to read it; you have to tell your community about your vision.

What if a vision for OpenStack was as detailed as:

It’s a sunny day in fall 2025 in Vancouver, Canada, yet a lot of technologists are lined up to get the RFID wristbands that open doors to conference sessions at the OpenStack Summit. The autonomous vehicles donated by a large OpenStack deployer are emptying outside, bringing these technologists together to plan the next release of OpenStack. The past release was a huge success, and the 100,000 deployments upgraded smoothly with no downtime to end users.

As Ari Weinzweig, one of the founders of Zingerman’s, notes in this Inc. magazine article, “To be clear, a vision is not a strategic plan. The vision articulates where we are going; the plan tells us how we’re actually going to get there.” A vision for OpenStack is not a strategic plan, yet without it, planning is difficult. Also, decision-making is difficult and consensus can’t be reached, due to debates not only on “Where are we going?” but multiplied by questions on “How do we get there?” We definitely had wheels turning after thinking about this together, as a group, distraction-free, and face-to-face.


It was a great week. What we want to do next is become a more effective technical committee and leadership group. We want to apply the various aspects of servant leadership by becoming better examples of servant leaders ourselves. We’ve identified gaps in our current leadership implementation, and are working to close them so that we take action based on what we’ve learned. We’ve started the Stewardship Working Group. We are drafting documentation to write down our guiding principles, to write down release goals, and generally keep documenting and teaching. We were all challenged to teach two hours a week and learn at least one hour a week, and I think we all are ready to take on the challenge to lead by example by starting with ourselves. Please let us know how we are doing, give us feedback, and shape our community vision with us.

by Anne Gentle at July 25, 2016 07:07 PM

OpenStack Developer Mailing List Digest July 2-22

SuccessBot Says

  • Notmyname: the 1.5 year long effort to get at-rest encryption in openstack swift has been finished. at-rest crypto has landed in master
  • stevemar: API reference documentation now shows keystone’s in-tree APIs!
  • Samueldemq: Keystone now supports Python 3.5
  • All

Troubleshooting and ask.openstack.org

  • The Keystone team wants to create troubleshooting documents.
  • Ask.openstack.org might be the right forum for this, but help is needed:
    • Keystone core should be able to moderate.
    • A top-level interface rather than just tags. The page should have a series of questions and links to the discussions for each question.
  • There could also be a keystone-docs repo that would have:
    • FAQ troubleshooting
    • Install guides
    • Unofficial blog posts
    • How-to guides
  • We don’t want a static troubleshooting guide. We want people to be able to ask questions and link them to answers.
  • Full thread

Leadership Training Recap and Steps Forward

  • Colette Alexander has successfully organized leadership training in Ann Arbor, Michigan.
  • 17 people from the community attended. 8 of them from the TC.
  • Subjects:
    • servant leadership
    • Visioning
    • Stages of learning
    • Good practices for leading organizational change.
  • Reviews and reflections from the training have been overwhelmingly positive and some blogs started to pop up [1].
  • A smaller group of the 17 people after training met to discuss how some ideas presented might help the OpenStack community.
    • To more clearly define and accomplish that work, a stewardship working group has been proposed [2].
  • Because of the success, and because 5 TC members weren’t able to attend, Colette is working to arrange a repeat offering.
  • Thanks to all who attended and the OpenStack Foundation who sponsored the training for everyone.
  • Full thread

Release Countdown For Week R-12, July 11-15

  • Focus:
    • Major feature work should be well under way as we approach the second milestone.
  • General notes:
    • We freeze release libraries between the third milestone and final release.
      • Only emergency bug fix updates are allowed during that period.
      • Prioritize any feature work that includes work in libraries.
  • Release actions:
    • Official projects following any of the cycle-based release models should propose beta 2 tags for their deliverables by July 14.
    • Review stable/liberty and stable/mitaka branches for needed releases.
  • Important dates:
    • Newton 2 milestone: July 14
    • Library release freeze date starting R-6, Aug 25
    • Newton 3 milestone: September 1
  • Full thread

The Future of OpenStack Documentation

  • Current central documentation
    • Consistent structure
    • For operators and users
    • Some less technical audiences use this to evaluate OpenStack against various other cloud infrastructure offerings.
  • Project documentation trends today:
    • Few are contributing to central documentation
    • More are becoming independent with their own repository documentation.
    • An alarming number just don’t do any.
  • A potential solution: Move operator and user documentation into individual project repositories:
    • Project developers can contribute code and documentation in the same patch.
    • Project developers can work directly, or with a liaison from the documentation team, to improve documentation during development.
    • The documentation team primarily focuses on organization/presentation of documentation and assisting projects.
  • Full thread

 

[1] – http://www.tesora.com/openstack-tc-will-not-opening-deli/

[2] – https://review.openstack.org/#/c/337895/

 

by Mike Perez at July 25, 2016 04:26 PM

Arie Bregman

OpenDaylight & OpenStack: How to run CSIT

What is CSIT? Continuous System Integration Tests: basically, suites of integration tests for testing OpenDaylight components (alone or with other projects, such as OpenStack). If you are familiar with OpenStack testing, then you can say it’s very similar to Tempest scenario tests. The tests are executed automatically by OpenDaylight’s Jenkins on the lab provided by the […]

by bregman at July 25, 2016 03:22 PM

Gorka Eguileor

Cinder Active-Active HA – Newton mid-cycle

Last week the OpenStack Cinder mid-cycle sprint took place in Fort Collins, and on the first day we discussed the Active-Active HA effort that’s been going on for a while now and the plans for the future. This is a summary of that session. Just like in previous mid-cycles the Cinder community did its best […]

by geguileo at July 25, 2016 12:18 PM

Red Hat Stack

Introduction to Red Hat OpenStack Platform Director

Those familiar with OpenStack already know that deployment has historically been a bit challenging. That’s mainly because deployment includes a lot more than just getting the software installed – it’s about architecting your platform to use existing infrastructure as well as planning for future scalability and flexibility. OpenStack is designed to be a massively scalable platform, with distributed components on a shared message bus and database backend. For most deployments, this distributed architecture consists of Controller nodes for cluster management, resource orchestration, and networking services, Compute nodes where the virtual machines (the workloads) are executed, and Storage nodes where persistent storage is managed.

The Red Hat recommended architecture for fully operational OpenStack clouds includes predefined and configurable roles that are robust, resilient, ready to scale, and capable of integrating with a wide variety of existing 3rd party technologies. We do this by leveraging the logic embedded in Red Hat OpenStack Platform Director (based on the upstream TripleO project).

With Director, you’ll use OpenStack language to create a truly Software Defined Data Center. You’ll use Ironic drivers for your initial bootstrapping of servers, and Neutron networking to define management IPs and provisioning networks. You will use Heat to document the setup of your server room, and Nova to monitor the status of your control nodes. Because Director comes with pre-defined scenarios optimized from our 20 years of Linux know-how and best practices, you will also learn how OpenStack is configured out of the box for scalability, performance, and resilience.

Why do kids in primary school learn multiplication tables when we all have calculators? Why should you learn how to use OpenStack in order to install OpenStack? Mastering these pieces is a good thing for your IT department and your own career, because they provide a solid foundation for your organization’s path to a Software Defined Data Center. Eventually, you’ll have all your Data Center configuration in text files stored on a Git repository or on a USB drive that you can easily replicate within another data center.
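Here is a hedged sketch of what that end state looks like; the repository and environment file names are illustrative, not from a real deployment:

  # Data center configuration lives as text in version control...
  git clone http://git.example.com/dc-config.git
  cd dc-config
  # ...and is fed to Director as Heat environment files at deploy time:
  openstack overcloud deploy --templates \
    -e network-environment.yaml \
    -e storage-environment.yaml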

In a series of coming blog posts, we’ll explain how Director has been built to accommodate the business requirements and the challenges of deploying OpenStack and its long-term management. If you are really impatient, remember that we publish all of our documentation in the Red Hat OpenStack Platform documentation portal (link to version 8).

Lifecycle of your OpenStack cloud

Director is defined as a lifecycle management platform for OpenStack. It has been designed from the ground up to bridge the gap between the planning and design (day-0), the installation tasks themselves (day-1), and the ongoing operation, administration and management of the environment (day-2).

  1. Firstly, the pre-deployment planning stage (day-0). Director provides configuration files to define the target architecture, including networking and storage topologies, OpenStack service parameters, integrations with third party plugins, and so on: all the items required to suit the needs of an organisation. It also verifies that target hardware nodes are ready to be deployed and that their performance is equivalent (we call that “black-sheep detection”).
  2. Secondly, the deployment stage (day-1). This is where the bulk of the Director functionality is executed. One of the most important steps is verifying that the proposed configuration is sane: there’s no point in trying to deploy a configuration that pre-flight validation checking tells us is sure to fail. Assuming the configuration is valid, Director takes care of the end-to-end orchestration of the deployment, including hardware preparation and software deployment, and, once everything is up and running, configures the OpenStack environment to perform as expected.
  3. Lastly, the operations stage in the long-run (day-2). Red Hat has listened to our OpenStack customers and their Operations teams, and designed Director accordingly. It can check the health of an environment, and perform changes, such as adding or replacing OpenStack nodes, updating minor releases (security updates) and also automatically upgrading between major versions, for example from Kilo to Liberty.

Despite being a relatively new offering from Red Hat, Director has strong technology foundations: a convergence of many years of upstream engineering work, established technology for Linux and cloud administration, and newer DevOps automation tools. This has allowed us to create a powerful, best-of-breed deployment tool that’s in line with the overall direction of the OpenStack project (with TripleO), as well as the OPNFV installation projects (with Arno).

Feature Overview

Upon initial creation of the Red Hat OpenStack Platform Director, we improved all the major TripleO components and extended them to perform tasks that go beyond just the deployment. Currently, Director is able to perform the following tasks:

  • Deploy a management node (called the undercloud) as the bootstrap OpenStack cloud. From there, we define the organisation’s production-use overcloud, combining our reference configurations and user-provided customisations. Director provides command line utilities (and a graphical web interface) as a shortcut to access the undercloud OpenStack RESTful APIs.
  • The undercloud interacts with bare metal hardware via Ironic (to do PXE boot and power management), which relies on an extensive array of supported drivers. Red Hat collaborates with vendors so that their hardware will be compatible with Ironic, giving customers flexibility in the hardware platforms they choose to consume.
  • During overcloud deployment, Director can inspect the hardware and automatically assign roles to specific nodes, so nodes are chosen based on their system specification and performance profile. This vastly simplifies the administrative overhead, especially with large scale deployments.
  • Director ships with a number of validation tools to verify that any user-provided templates are correct (like the networking files), which can also be useful when performing updates or upgrades. For that, we leverage Ansible in the upgrade sanity check scripts. Once deployed, you can automatically test a deployed overcloud using Director’s Tempest toolset. Tempest verifies with hundreds of end-to-end tests that the overcloud is working as expected and that it conforms to the upstream API specification. Red Hat is committed to shipping the standard API specification and not breaking update and upgrade paths for customers, and therefore providing an automated mechanism for compatibility is of paramount importance.
  • In terms of the deployment architecture itself, Red Hat has built a highly available reference architecture containing our recommended practices for availability, resiliency, and scalability. The default Heat templates as shipped within Director have been engineered with this reference architecture in-mind, and therefore a customer deploying OpenStack with Director can leverage our extensive work with customers and partners to provide maximum stability, reliability, and security features for their platform. For instance, Director can deploy SSL/TLS based OpenStack endpoints for better security via encrypted communications.
  • The majority of our production customers are using Ceph with OpenStack. That’s why Ceph is the default storage backend within Director, and automatically deploys Ceph monitors on controller nodes, and Ceph OSDs on dedicated storage nodes. Alternatively, it can connect the OpenStack installation to an existing Ceph cluster. Director supports a wide variety of Ceph configurations, all based on our recommended best practices.
  • Last, but not least, the overcloud networks defined within Director can now be configured as either IPv4 or IPv6. Feel free to check our OpenStack IPv6 networking guide. Some exceptions, only doable in IPv4, are the provisioning network (PXE) and the VXLAN/GRE tunnel endpoints, which can only be IPv4 at this stage. Dual stack IPv4 and IPv6 networking is available only for non-infrastructure networks, for example, tenant, provider, and external networks.
  • For 3rd-party plugin support, our partners are working with the upstream OpenStack TripleO community to add their components, like other SDN or SDS solutions. The Red Hat Partner Program of certified extensions allows our customer to enable and automatically install those plugins via Director (for more information, visit our documentation on Partner integrations with Director)

In our next post, we’ll explain the components of Director (TripleO) in further detail, how they help you deploy and manage the Red Hat OpenStack Platform, and how they work together. This will help you understand what is, in our opinion, the most important feature of all: automated OpenStack updates and upgrades. Stay tuned!

 

 

by Marcos Garcia - Principal Technical Marketing Manager at July 25, 2016 12:00 PM

Alessandro Pilotti

Hyper-V Shielded VMs – Part 2

A Shielded VM is a Hyper-V generation 2 VM that has a virtual TPM, is encrypted using BitLocker and can only run on healthy and approved hosts in the fabric. It is protected from inspection, tampering and theft from malicious fabric admins and host malware, guaranteeing the security of the virtual machines running in an OpenStack environment.

 

Shielded VMs in OpenStack

In order to create a Shielded VM, a signed template and a PDK file containing VM configuration information are required.
Check here to learn how to create a signed template and generate a PDK file.

 

1) Provide a reference to a Barbican container containing the PDK file.

PDKUtil stores a PDK file to a Barbican container. Install PDKUtil:

pip install pdkutil

PDKUtil uses Keystone for identity management. Credentials and endpoints must be provided via environment variables or command line parameters, in the same way as with most OpenStack command line interface (CLI) tools, e.g.:

export OS_AUTH_URL=http://example.com:5000/v2.0
export OS_USERNAME=admin
export OS_PASSWORD=password
export OS_TENANT_NAME=admin

Store the specified PDK file into a Barbican container:

pdkutil store /home/ubuntu/shielded_pdk.pdk shielded_pdk_container
+---------------------+--------------------------------------------------------------------+
| Field               |  Value                                                             |
+---------------------+--------------------------------------------------------------------+
| PDK_file            |  /home/ubuntu/shielded_pdk.pdk                                     |
| container_name      |  shielded_pdk_container                                            |
| container_reference |  http://IP:9311/v1/containers/34b0694c-a66c-4383-80aa-dd87448fd746 |
+---------------------+--------------------------------------------------------------------+

The PDK container’s reference must be passed as an image property or as boot metadata when booting an OpenStack VM.

 

2) Create a glance image from the signed template.

The signed template will be uploaded to Glance. The shielding data (PDK) file ensures that the VM will be created the way the tenant intended. For example, a different VHDX cannot be used when creating the shielded OpenStack VM, because the shielding data file contains the signatures of the trusted disks that shielded VMs can be created from. Moreover, if the shielded option is requested when creating the image, the PDK reference must point to a shielding data file whose security policy is set to shielded.

 

In order to create a shielded OpenStack VM, the image must have the following properties:

  • os_shielded_vms set to required. To add a vTPM with the encryption option enabled, the image must have the os_vtpm_vms property set to required. A shielded VM implies encryption.
  • hw_machine_type set to hyperv-gen2 as vTPM can only be added for Generation 2 VMs.
  • os_secure_boot image property or os:secure_boot flavor extra spec set to required, as secure boot must be enforced.
  • img_pdk_reference containing a reference to a PDK Barbican container. (img_pdk_reference can also be passed via the nova boot metadata option, overriding the image property.)

glance image-create --disk-format vhd --container-format bare --name shielded_template  \ 
--property hw_machine_type=hyperv-gen2 --property hypervisor_version_requires='>=10.0'  \ 
--property os_type=windows --property os_secure_boot=required \
--property os_shielded_vm=required --property \
img_pdk_reference=" http://IP:9311/v1/containers/b31320ad-ea02-43d1-8a79-bcb509f59e63"  \
--file img/unused_template.vhdx

 

3) Boot a Shielded VM

An unattend file is used to specialize the shielded instance during the provisioning process. As unattend files are added when creating the shielding data files, they will be used for every VM created using that PDK file. In order not to hard-code any VM-specific information into the unattend files, substitution strings can be used in the unattend file to handle specialization values that may change from VM to VM.

When using substitution strings, it is important to ensure that the strings will be populated during the VM provisioning process. The substitution strings’ corresponding values can be added as metadata boot options.

nova boot --image shielded_template --flavor m1.medium --meta \
img_pdk_reference='http://IP:9311/v1/containers/b31320ad-ea02-43d1-8a79-bcb509f59e63' \
--meta fsk:ComputerName='shieldedvm' --availability-zone nova:guarded \
--nic net-id="adde07f4-6e54-4f8d-b5c9-6955e40d51e0" shieldedvm

That’s all: your shielded VM is being deployed!
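As an optional sanity check (standard OpenStack CLI, nothing shielded-VM-specific), you can confirm that the instance reached ACTIVE on the guarded host:

openstack server show shieldedvm -c status -c OS-EXT-SRV-ATTR:host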

If you prefer to use the Horizon web interface instead of the command line, here’s also a video showing how to perform the same steps.

Video: https://www.youtube.com/embed/XC2fhCLseEY

The post Hyper-V Shielded VMs – Part 2 appeared first on Cloudbase Solutions.

by Iulia Toader at July 25, 2016 11:20 AM

Opensource.com

Finding security issues, regional events, and more OpenStack news

Opensource.com is your source for news in OpenStack, the open source cloud infrastructure project. Here's what's happening this week, July 25 - 31, 2016.

by Jason Baker at July 25, 2016 05:00 AM

Hugh Blemings

Lwood-20160724

Introduction

Welcome to Last week on OpenStack Dev (“Lwood”) for the week just past. For more background on Lwood, please refer here.

Basic Stats for week 18 to 24 July 2016 for openstack-dev:

  • ~515 Messages (up about 32% relative to last week)
  • ~175 Unique threads (up a percent – basically the same as last week)

A busier week in terms of overall traffic, but the number of unique threads about the same and relatively less to note in Lwood this time around.

A reminder that for the next four weeks or so Lwood may arrive a little later than usual – I’m in the US and so it may not always be practical to get things out the door Sunday afternoon/evening… :)

Notable Discussions – openstack-dev

New OpenStack Security Notice

Repeated token revocation requests can lead to service degradation or disruption (OSSN 0068)

From the summary “There is currently no limit to the frequency of keystone token revocations that can be made by a single user, in any given time frame. If a user repeatedly makes token requests, and then immediately revokes the token, a performance degradation can occur and possible DoS (Denial of Service) attacks could be directed towards keystone.”

More information and discussion in the original post or the OSSN itself.

Midcycle Summaries & Minutes

A few posts this week with minutes and/or summaries of midcycles held over the last few weeks for Cinder (Kendall Nelson), Horizon (Rob Cresswell) and Monasca (Fabio Giannetti).

More on project mascots

A few more projects kicked off the process of deciding on Mascots/Logos; as mentioned last week, this all stemmed from a post by Heidi Joy Tretheway from the OpenStack Foundation.

The new threads included Charms, Cinder, Manila, Murano, Requirements, Tacker, Telemetry and Tricircle.

If you’re curious, last week saw these per-project threads for Ansible, App-Catalog, Congress, Designate, Freezer, Glance, Horizon, Kolla, Mistral, Neutron, Puppet, Sahara, Vitrage and Zaqar.

Notable Discussions – other OpenStack lists

Nothing on the other lists struck me as good Lwood material (this is not, of course, to say there were no useful conversations!! :)

Upcoming OpenStack Events

Midcycle

Don’t forget the OpenStack Foundation’s Events Page for a list of general events that is frequently updated.

People and Projects

Core nominations & changes

Further Reading & Miscellanea

Don’t forget these excellent sources of OpenStack news – most recent ones linked in each case

A little plug – as I mentioned last week, I’ve submitted a talk proposal for the Barcelona OpenStack Summit titled “Finding your way around the OpenStack-Dev mailing list”. If approved, in the session I will provide a bit of a guide for newcomers (and old hands) to navigating the various OpenStack related mailing lists, openstack-dev in particular, as well as some other useful stuff, all based on my work on Lwood. When voting goes live for the summit, I’d welcome your support if you think the proposed talk sounds worthwhile – link to follow :)

This edition of Lwood brought to you by the sounds of silence (well, ambient noise at my friend’s place aside… ;)

by hugh at July 25, 2016 12:05 AM

July 23, 2016

Solinea

Avoiding Obstacles to Cloud Adoption – Part 1



Part 1 of a series: Technology and Security

Problems with adopting cloud computing models have received plenty of coverage in recent years; however, there is also a large and growing number of success stories, and it is definitely possible to do it right. In this series of posts we will look in detail at some of the failure paths and how to recognize and avoid them, with an emphasis on private cloud implementations. This first post will focus on the technology implementation approach and will be followed by a second post on security. Future topics will cover more detail on building a successful architecture for your cloud.

Many of the “Top Reasons Cloud Implementations Fail” stories correctly observe that cloud computing is based on technology that IT people are already familiar with: x86 servers, virtualization, storage and networking. This leads some to mistakenly conclude that cloud implementation projects can be treated exactly the same as any technology project, and this is the problem that we will begin this post with.

For a recap on the prerequisites for a successful cloud implementation take a look at the two-part series on Making the Case for OpenStack: You Would be Surprised What Enterprises Overlook and Critical Success Factors, and the follow-up Five Cloud (or Open Infrastructure) Critical Success Factors – Redux.

Technology Obstacles

Your objectives are clear, executive sponsorship secured, KPIs are defined and agreed and the development team is waiting for their cloud APIs; so what comes next? If you’re planning to run the implementation the same way you would any other technology project, then this is a good time to pause and revisit the project goals. If those goals include reaping the benefits of cloud models for development and operations, or if your KPIs include deployment speed and agility then you might be about to make a wrong decision.

Here are some common mistakes when Technology teams select cloud platforms:

  • Use cases are overlooked. It is often assumed that all use cases can be supported simply by “maxing out” the technical specifications.
  • Over-engineered technical requirements. Teams come up with an exhaustive list of technical must-haves: extreme IOPS, maximum network performance, huge-memory VMs, sub-second live migration, HA everywhere, and an assortment of the latest published features. The result is either impossible to implement or extremely complex and expensive.
  • Product selection by market research reports. Unsure of the technology, the implementation team studies all the available research reports, picks the ‘leaders’ in all the cloud categories, and performs a weighted selection based on local preferences.
  • Designed around a legacy operational model. Non-functional requirements are gathered from how current infrastructure is operated. Automation is viewed only as removing repetitive tasks.
  • Proof of concept only involves infrastructure. The implementation team runs a PoC but does not involve Governance, Development, Security, and Operations teams.

The outcome of any of these decisions could be a cloud platform that does not meet the overall goals and KPIs. This may seem counterintuitive if you believe that OpenStack will smooth over any technology differences.

It won’t; here are three reasons why:

  • Vendor Selection: Not all OpenStack vendors’ offerings are alike. This is not to say some are better than others but that they have different opinions about how things should be implemented and which features and vendor integrations they are prepared to support.
  • Cloud Architecture: Not all cloud architectures are alike. Different use cases, security models, and non-functional requirements can call for very different architectures in both the control plane and the infrastructure components.
  • Technology Components: Your technology selection will indirectly force an overall architecture. If you did not plan a cloud architecture with your development, security, and operations teams then you will have one imposed at implementation time by the technology choices. 

Here is where cracks begin to appear. The security team doesn’t like the CMP because it won’t federate with their IdP. Governance can’t get chargeback data or enforce usage controls. The network team says there’s no way they’ll ever support user-initiated overlay networks. The developer automation tools are unable to use the platform API because of insufficient privilege separation. Security Groups are not considered sufficient, and external manual firewalls must be used. Automation is impossible. All of these things turned out to be essential for making your cloud use cases work.

The reason these issues surface late is that they are not typical infrastructure or system software problems. At this stage of the project it can be very difficult to make rectifying architectural changes because many of the technology decisions constrain the solution. Additionally, other project stakeholders may be reluctant to revisit decisions that have already been made.

Solution: don’t treat cloud just as a technology implementation

The corollary of cloud being based upon standard technology is that the most important aspect of cloud is not about the technology; it’s about how technology is consumed and the processes for supporting and managing that.

For a team accustomed to running traditional virtual infrastructure, this is an easy mistake to make. At a single point in time, an application running in a cloud looks exactly like an application running on your legacy hypervisor. The overlooked point is how that application got there.

The solution is to make the technology selection decisions last, after the new cloud operational model is understood and a high level architecture is established, and then confirmed only after a proof of concept implementation.

Our preferred methodology for cloud platform deployment follows this order:

  • From the objectives and strategy, define the use cases that must be supported. These use cases should cover the process for developing/testing/deploying applications, as well as governance, security, and operational use cases related to the overall objectives.
  • Architect a solution and technical requirements to support the use cases, reviewing with the development, governance, security and operations teams. 
  • Make a provisional technology selection and plan a PoC or Pilot implementation, selecting a target application to demonstrate use cases supporting project objectives.
  • Based upon PoC conclusion, confirm the technology selection and proceed with implementation strategy. 

This methodology greatly reduces the risk of technology failure and ensures that the cloud platform will deliver on the program objectives and KPIs.

The OpenStack documentation site contains a detailed Architecture Design Guide that covers design considerations for many examples. In the next post in this series we will summarize some of the more common architecture patterns for enterprise cloud deployments.

Check back for our follow-up post, where we will summarize some fundamental security considerations.

Solinea specializes in 3 areas: 

  • Cloud architecture and infrastructure design and implementation, with a specific focus on OpenStack – We have been working with OpenStack since its inception, wrote the first book on how to deploy OpenStack, and have built numerous private and public cloud platforms based on the technology.
  • DevOps and CI/CD Automation – Once we build the infrastructure, the challenge is to gain agility from the environment, which is the primary reason people adopt cloud. We work at the process level and the toolchain level, meaning that we have engineers who specialize in technologies like Jenkins, Git, Artifactory, and Cliqr, and we build these toolchains and underlying processes so organizations can build and move apps to the cloud more effectively.
  • Containers and Microservices – Now that enterprises are looking for ways to drive even more efficiency, we help organizations with Docker and Kubernetes implementations – containerizing applications and orchestrating those containers in production. 


The post Avoiding Obstacles to Cloud Adoption – Part 1 appeared first on Solinea.

by Solinea at July 23, 2016 04:25 AM

July 22, 2016

Red Hat Stack

How connection tracking in Open vSwitch helps OpenStack performance

Written by Jiri Benc,  Senior Software Engineer, Networking Services, Linux kernel, and Open vSwitch

By introducing a connection tracking feature in Open vSwitch, thanks to the latest Linux kernel, we greatly simplified the maze of virtual network interfaces on OpenStack compute nodes and improved their networking performance. This feature will appear soon in Red Hat OpenStack Platform.

Introduction

It goes without question that in the modern world, we need firewalling to protect machines from hostile environments. Any non-trivial firewalling requires that you keep track of the connections to and from the machine. This is called “stateful firewalling”. Indeed, even such a basic rule as “don’t allow machines from the Internet to connect to the machine, while allowing the machine itself to connect to servers on the Internet” requires a stateful firewall. This also applies to virtual machines. And obviously, any serious cloud platform needs such protection.

Stateful Firewall in OpenStack

It’s no surprise that OpenStack implements stateful firewalling for guest VMs. It’s the core of its Security Groups feature, which allows the hypervisor to protect virtual machines from unwanted traffic.

As mentioned, for stateful firewalling the host (the OpenStack node) needs to keep track of individual connections and be able to match packets to those connections. This is called connection tracking, or “conntrack”. Note that connections are a different concept from flows: connections are bidirectional and need to be established, while flows are unidirectional and stateless.

Let’s add Open vSwitch to the picture. Open vSwitch is an advanced programmable software switch. Neutron uses it for OpenStack networking – to connect virtual machines together and to create the overlay networks connecting the nodes. (For completeness, there are backends other than Open vSwitch available; however, Open vSwitch offers the most features and performance thanks to its flexibility, and it’s considered the “main” backend by many.)

However, packet switching in the Open vSwitch datapath is based solely on flows and has traditionally been stateless. That’s not a good situation when we need a stateful firewall.

Bending iptables to Our Will

There’s a way out of this. The Linux kernel contains a connection tracking module, and it can be used to implement a stateful firewall. However, these features had been available only to the Linux kernel firewall at the IP protocol layer (called “iptables”). And that’s a problem: Open vSwitch does not operate at the IP protocol layer (also called L3); it operates one layer below (called L2). In other words, not all packets processed by the kernel are subject to iptables processing. In order for a packet to be processed by iptables, it needs to be either destined to an IP address local to the host or routed by the host. Packets which are switched (either by a Linux bridge or Open vSwitch) are not processed by iptables.

OpenStack needs the VMs to be on the same L2 segment, i.e. packets between them are switched. In order to still make use of iptables to implement a stateful firewall, OpenStack used a trick.

The Linux bridge (the traditional software switch included in the Linux kernel) contains its own filtering mechanism called ebtables. While connection tracking cannot be used from within ebtables, by setting the appropriate system config parameters it’s possible to call iptables chains from ebtables. Using this technique, it’s possible to make use of connection tracking even when doing L2 packet switching.
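
Concretely, the knob behind this trick is a pair of kernel parameters (shown here as a sketch of the host-side configuration; on OpenStack nodes the Neutron agent takes care of setting them):

# Ask the kernel to pass bridged IPv4/IPv6 traffic through iptables chains
# (provided by the bridge netfilter / br_netfilter code).
sysctl -w net.bridge.bridge-nf-call-iptables=1
sysctl -w net.bridge.bridge-nf-call-ip6tables=1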

Now, the obvious question is where to put this on the OpenStack packet traversal path.

The heart of every OpenStack node is the so-called “integration bridge”, br-int. In a typical deployment, br-int is implemented using Open vSwitch. It’s responsible for directing packets between VMs, tunneling them between nodes, and some other tasks. Thus, every VM is connected to an integration bridge.

The stateful firewall needs to be inserted between the VM and the integration bridge. We want to make use of iptables, which means inserting a Linux bridge between the VM and the integration bridge. That bridge needs to have the correct settings applied to call iptables, and iptables rules need to be populated to utilize conntrack and perform the necessary firewalling.

How It Looks

Looking at the picture below, let’s examine how a packet from VM to VM traverses the network stack.

old ovs in openstack

The first VM is connected to the host through the tap1 interface. A packet coming out of the VM is directed to the Linux bridge qbr1. On that bridge, ebtables calls into iptables, where the incoming packet is matched against the configured rules. If the packet is approved, it passes the bridge and is sent out through the second interface connected to the bridge. That’s qvb1, which is one side of a veth pair.

A veth pair is a pair of interfaces that are internally connected to each other: whatever is sent to one of the interfaces is received by the other one, and vice versa. Why is a veth pair needed here? Because we need something that can interconnect the Linux bridge and the Open vSwitch integration bridge.

Now the packet has reached br-int and is directed to the second VM. It goes out of br-int to qvo2, then through qvb2 it reaches the bridge qbr2. The packet goes through ebtables and iptables again and finally reaches tap2, which leads to the target VM.

This is obviously very complex. All those bridges and interfaces add extra CPU processing and extra latency, and performance suffers.

Connection Tracking in Open vSwitch to the Rescue

All of this can be dramatically simplified. If only we could include the connection tracking directly in Open vSwitch…

And that’s exactly what happened. Recently, the connection tracking code in the kernel was decoupled from iptables and Open vSwitch got support for conntrack. Now it’s possible to match not only on flows but also on connections. Jakub Libosvar (Red Hat) made use of this new feature in Neutron.

Now VMs can connect directly to the integration bridge, and the stateful firewall is implemented using Open vSwitch rules alone.
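
For illustration, stateful rules of this kind can be written with the new ct() action and ct_state match (a simplified sketch of OVS conntrack usage, not the exact rules Neutron programs):

# send untracked IP packets through the connection tracker, then continue in table 1
ovs-ofctl add-flow br-int "table=0,priority=50,ip,ct_state=-trk,actions=ct(table=1)"
# commit new connections and forward packets of established ones
ovs-ofctl add-flow br-int "table=1,priority=50,ip,ct_state=+trk+new,actions=ct(commit),NORMAL"
ovs-ofctl add-flow br-int "table=1,priority=50,ip,ct_state=+trk+est,actions=NORMAL"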

Let’s examine the new, improved situation in the second picture below.

new ovs conntrack

A packet coming out of the first VM (tap1) is directed to br-int. It’s examined using the configured rules and either dropped or directly output to the second VM (tap2).

This substantially reduces packet processing costs and thus increases performance. The following overhead was eliminated:

  1. Packet enqueueing on the veth pair: A packet sent to a veth endpoint is put into a queue, then dequeued and processed later.
  2. Bridge processing on the per-VM bridge: Each packet traversing the bridge is subject to FDB (forwarding database) processing.
  3. ebtables overhead: We measured that just enabling ebtables, without any rules configured, costs bridge throughput. Generally, ebtables is considered obsolete and doesn’t receive much work, especially not performance work.
  4. iptables overhead: There is no concept of per-interface rules in iptables; its rules are global. This means that for every packet, the incoming interface needs to be checked and rule execution branched to the set of rules appropriate for that particular interface. This means a linear search using interface name matches, which is very costly, especially with a high number of VMs (see the sketch after this list).
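
To make the last item concrete, the hybrid firewall has to branch per bridge port with rules along these lines (a sketch using the iptables physdev match; the chain name is hypothetical):

# jump to the per-port security group chain for traffic entering the bridge via tap1
iptables -A FORWARD -m physdev --physdev-in tap1 --physdev-is-bridged -j sg-chain-tap1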

In contrast, by using Open vSwitch conntrack, items 1–3 are gone instantly. Open vSwitch also has only global rules, so we still need to match on the incoming interface, but unlike iptables the lookup is done using the port number (not the textual interface name) and, more importantly, using a hash table. The overhead in item 4 is thus completely eliminated, too.

The only remaining overhead is of the firewall rules themselves.

In Summary

Without Open vSwitch conntrack:

  • A Linux bridge needs to be inserted between a VM and the integration bridge.
  • This bridge is connected to the integration bridge by a veth pair.
  • Packets traversing the bridge are processed by ebtables and iptables, implementing the stateful firewall.
  • There’s a substantial performance penalty caused by veth, bridge, ebtables and iptables overhead.

With Open vSwitch conntrack:

  • VMs are connected directly to the integration bridge.
  • The stateful firewall is implemented directly at the integration bridge using hash tables.

Images were captured on a real system using plotnetcfg and simplified to better illustrate the points of this article.

by Marcos Garcia - Principal Technical Marketing Manager at July 22, 2016 09:55 PM

Major Hayden

Setting up a telnet handler for OpenStack Zuul CI jobs in GNOME 3

The OpenStack Zuul system has gone through some big changes recently, and one of those changes is around how you monitor a running CI job. I work on OpenStack-Ansible quite often, and the gate jobs can take almost an hour to complete at times. It can be helpful to watch the output of a Zuul job to catch a problem or follow a breakpoint.

New Zuul

In the previous version of Zuul, you could access the Jenkins server that was running the CI job and monitor its progress right in your browser. Today, you can monitor the progress of a job via telnet. It’s much easier to use and it’s a lighter-weight way to review a bunch of text.

Some of you might be saying: “It’s 2016. Telnet? Unencrypted? Seriously?”

Before you get out the pitchforks, all of the data is read-only in the telnet session, and nothing sensitive is transmitted. Anything that comes through the telnet session is content that exists in an open source repository within OpenStack. If someone steals the output of the job, they’re not getting anything valuable.

I was having a lot of trouble figuring out how to set up a handler for telnet:// URLs that I clicked in Chrome or Firefox. If I clicked a link in Chrome, it would be passed off to xdg-open. I’d press OK in the dialog and then nothing would happen.

Creating a script

First off, I needed a script that would take the URL coming from an application and actually do something with it. The script will receive a URL as an argument that looks like telnet://SERVER_ADDRESS:PORT and that must be handed off to the telnet executable. Here’s my basic script:

#!/bin/bash

# Remove the telnet:// prefix and change the colon before the
# port number to a space, leaving "SERVER_ADDRESS PORT".
TELNET_STRING=$(echo "$1" | sed -e 's/telnet:\/\///' -e 's/:/ /')

# Telnet to the remote session. Word splitting is intentional here
# so that the address and port become separate arguments.
/usr/bin/telnet $TELNET_STRING

# Don't close out the terminal unless we are done
read -p "Press a key to exit"

I saved that in ~/bin/telnet.sh. A quick test with localhost should verify that the script works:

$ chmod +x ~/bin/telnet.sh
$ ~/bin/telnet.sh telnet://127.0.0.1:12345
Trying 127.0.0.1...
telnet: connect to address 127.0.0.1: Connection refused
Press a key to exit

Linking up with GNOME

We need a .desktop file so that GNOME knows how to run our script. Save a file like this to ~/.local/share/applications/telnet.desktop:

[Desktop Entry]
Version=1.0
Name=Telnet
GenericName=Telnet
Comment=Telnet Client
Exec=/home/major/bin/telnet.sh %U
Terminal=true
Type=Application
Categories=TerminalEmulator;Network;Telnet;Internet;BBS;
MimeType=x-scheme-handler/telnet;
X-KDE-Protocols=telnet
Keywords=Terminal;Emulator;Network;Internet;BBS;Telnet;Client;

Change the path in Exec to match where you placed your script.

We need to tell GNOME how to handle the x-scheme-handler/telnet mime type. We do that with xdg utilities:

$ xdg-mime default telnet.desktop x-scheme-handler/telnet
$ xdg-mime query default x-scheme-handler/telnet
telnet.desktop

Awesome! When you click a link in Chrome, the following should happen:

  • Chrome will realize it has no built-in handler and will hand off to xdg-open
  • xdg-open will check its list of mime types for a telnet handler
  • xdg-open will parse telnet.desktop and run the command in the Exec line within a terminal
  • Our telnet.sh script runs with the telnet:// URI provided as an argument
  • The remote telnet session is connected

The post Setting up a telnet handler for OpenStack Zuul CI jobs in GNOME 3 appeared first on major.io.

by Major Hayden at July 22, 2016 07:44 PM

OpenStack Superuser

How large open source projects can develop for security

In just six years, OpenStack has grown into 58 projects that provide open source cloud computing software for public and private clouds.

That rapid expansion just might be the security professional’s worst nightmare. From network orchestration service Astara to messaging service Zaqar, there are thousands of people around the world contributing code. Code that could be full of holes, vulnerabilities and unreported bugs.

That’s what makes OpenStack earning the Core Infrastructure Initiative (CII) Best Practices Badge from the Linux Foundation even more impressive. The badge covers seven areas, from change control to quality and security; you can check out the specifics here.

Superuser asked Travis McPeak, senior security architect at IBM and OpenStack Security Project team member, about some best practices for developing large open source projects with security in mind.

What's the key to security-conscious development?

Security-conscious development requires security consideration at each stage in the software development lifecycle. If a product hasn't been designed securely it will be extremely difficult, if not impossible, to fix this later. Similarly, the most secure design can be crippled by simple insecure coding practices. Different security tools are applied at each stage.

Threat modeling, for example, is a very effective way to discover defects in design and missing system controls. Source code review and scanning tools lower the risk of implementation defects. The key is involving security at each stage. Missed steps become increasingly difficult and expensive to correct at later stages.

How do you employ them when working with large teams around the world?

Any software development can be difficult with highly distributed teams and security is no exception. To make this easier we've found it useful to adopt multiple methods of communication to ensure everybody has a way they're comfortable with.

Some teams prefer instant communication and use IRC, Slack, and other tools. Other teams like to have video conferences or calls. Still other teams are in sufficiently different time zones that email, forums, and wikis are most effective for them. Regardless of tool choice, the important part is that each team member knows who to talk with and has a way to reach that person.

You can read more about OpenStack and security with a new white paper published by the Foundation available at: www.openstack.org/software/security.

Get involved

If you want to be a part of the security team or have an idea of your own to make OpenStack more secure, head to #openstack-security on Freenode IRC or join one of our weekly meetings (1700 UTC in #openstack-meeting-alt). Alternatively, drop an email to the OpenStack Developers mailing list using the tag [security], introduce yourself and say what you’re interested in.

Cover Photo: Ryan McGuire, Gratisography.

by Nicole Martinelli at July 22, 2016 05:30 PM

Major Hayden

What’s Happening in OpenStack-Ansible (WHOA) – July 2016

This post is the second installment in the series of What’s Happening in OpenStack-Ansible (WHOA) posts that I’m assembling each month. My goal is to inform more people about what we’re doing in the OpenStack-Ansible community and bring on more contributors to the project.

July brought lots of changes for the OpenStack-Ansible project and the remaining work for the Newton release is coming together well. Many of the changes made in the Newton branch have made deployments faster, more reliable and more repeatable.

Let’s get to the report!

New releases

You can always find out about the newest releases for most OpenStack projects on the OpenStack Development mailing list, but I’ll give you the shortest summary possible here.

Kilo

The final Kilo release, 11.2.17, is in the books! If you’re on Kilo, it’s definitely time to move forward.

Liberty

The latest Liberty release is now 12.1.0. For more information on what’s included, review the release notes or view a detailed changelog.

Mitaka

Mitaka is the latest stable branch and it’s currently at version 13.2.0. It contains lots of bug fixes and a few small backported features. The latest details are always in the release notes and the detailed changelog.

Notable discussions

The OpenStack-Ansible mid-cycle is quickly approaching! It runs from August 10th through the 12th at Rackspace’s headquarters in San Antonio, Texas. All of the signup information is on the etherpad along with the proposed agenda. If you’re interested in OpenStack deployment automation with Ansible, please feel free to join us!

Support for Open vSwitch is now in OpenStack-Ansible, along with Distributed Virtual Routing (DVR). Travis Truman wrote a blog post about using the new Open vSwitch support. The support for DVR was added very recently.

We had a good discussion around standardizing how OpenStack’s python services are deployed. Some projects are now recommending the use of uwsgi with their API services. During this week’s IRC meeting, we agreed as a group that the best option would be to standardize on uwsgi if possible during the Newton release. If that’s not possible, it should be done early in the Ocata release.

Jean-Philippe Evrard was nominated to be a core developer on OpenStack-Ansible and the thread received many positive comments over the week. Congratulations, JP!

Notable developments

Lots of work is underway in the Newton release to add support for new features, squash bugs, and reduce the time it takes to deploy a cloud.

Documentation

Documentation seems to go one of two ways with most projects:

  • Sufficient documentation that is organized poorly (OpenStack-Ansible’s current state)
  • Insufficient documentation that is organized well

One of the complaints I heard at the summit was “What the heck are we thinking with chapter four?”

To be fair, that chapter is gigantic. While it contains a myriad of useful information, advice, and configuration options, it’s overwhelming for beginners and even seasoned deployers.

Work is underway to overhaul the installation guide and provide a simple, easy-to-follow, opinionated method for deploying an OpenStack cloud. This would allow beginners to start on solid ground and have a straightforward deployment guide. The additional information and configuration options would still be available in the documentation, but the documentation will provide strong recommendations for the best possible options.

Gnocchi deployments

OpenStack-Ansible can now deploy Gnocchi. Gnocchi provides a time series database as a service and it’s handy for use with ceilometer, which stores a lot of time-based information.

Multiple RabbitMQ clusters

Some OpenStack services communicate very frequently with RabbitMQ and that can cause issues for some other services. OpenStack-Ansible now supports independent RabbitMQ clusters for certain services. This allows a deployer to use a different RabbitMQ cluster for handling telemetry traffic than they use for handling nova’s messages.

PowerVM support

Lots of changes were added to allow for multiple architecture support, which is required for full PowerVM support. Some additional fixes for higher I/O performance and OVS on Power support arrived as well.

Repo server improvements

Building the repo server takes quite a bit of time as repositories are cloned, wheels are built, and virtual environments are assembled. A series of patches has merged into the project aiming to reduce the time it takes to build a repo server.

Previously, the repo server built every possible virtual environment that could be needed for an OpenStack-Ansible deployment. Today, the repo server only builds virtual environments for those services that will be deployed. This saves time during the build process and a fair amount of disk space as well.

Source code is also kept on the repo server so that it won’t need to be downloaded again for multiple architecture builds.

Additional changes are on the way to only clone the necessary git repositories to the repo server.

Ubuntu 16.04 (Xenial) support

Almost all of the OpenStack-Ansible roles in Newton have Ubuntu 16.04 support and the integrated gate job is turning green a lot more often this week. We still need some testers who can do some real world multi-server deployments and shake out any bugs that don’t appear in an all-in-one (AIO) build.

Feedback?

The goal of this newsletter is threefold:

  • Keep OpenStack-Ansible developers updated with new changes
  • Inform operators about new features, fixes, and long-term goals
  • Bring more people into the OpenStack-Ansible community to share their use
    cases, bugs, and code

Please let me know if you spot any errors, areas for improvement, or items that I missed altogether. I’m mhayden on Freenode IRC and you can find me on Twitter anytime.

The post What’s Happening in OpenStack-Ansible (WHOA) – July 2016 appeared first on major.io.

by Major Hayden at July 22, 2016 03:48 PM

Aptira

OpenStack Australia Government Day – Speaking and Sponsorship Opportunities Now Open

OpenStack Australia Government Day

Following on from the success of the first OpenStack Australia Day in Sydney this year, Canberra is set to host the second OpenStack Australia Day for 2016. Focusing on Open Source cloud technology within the Government sector, this conference will gather users, vendors and solution providers to showcase the latest technologies and share real-world experiences of the next wave of IT virtualisation. OpenStack Australia Government Day will also feature keynote presentations from industry leading figures, workshops, a panel and a networking event for a less formal opportunity to engage with the community.

Speaker and sponsorship opportunities for OpenStack Australia Government Day are now open. There is a limited number of sponsorship packages available, so all sponsorships will be confirmed on a first-come, first-served basis.

Note that sponsorships close on the 9th of September, and speaker submissions are due by the 7th of October. For more information, to request a sponsorship package or to submit a presentation, please visit the OpenStack Australia website.

Tickets will be available shortly. We hope to see you all in Canberra!

The post OpenStack Australia Government Day – Speaking and Sponsorship Opportunities Now Open appeared first on Aptira OpenStack Services in Australia Asia Europe.

by Jessica Field at July 22, 2016 05:49 AM

Carlos Camacho

TripleO deep dive session #3 (Overcloud deployment debugging)

This is the third video in a series of “Deep Dive” sessions related to TripleO deployments.

This session is related to how to troubleshoot a failed THT deployment.

This video session aims to cover the following topics:

  • Debug a TripleO failed overcloud deployment.
  • Debugging in real time the deployed resources.
  • Basic OpenStack commands to see the deployment status (see the sketch below).
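
For instance, a failed deployment can be narrowed down with the heat client along these lines (a minimal sketch; the overcloud stack is conventionally named "overcloud" and resource names vary per deployment):

$ heat stack-list
$ heat resource-list --nested-depth 5 overcloud | grep -i failed
$ heat resource-show overcloud <failed-resource-name>
$ heat deployment-show <deployment-id>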

So please, check the full session content on the TripleO YouTube channel.

<object data="http://www.youtube.com/embed/fspnjD-1DNI" style="width:100%; height:100%; height: 315px; float: none; clear: both; margin: 2px auto;"></object>

Sessions index:

    * TripleO deep dive #1 (Quickstart deployment)

    * TripleO deep dive #2 (TripleO Heat Templates)

    * TripleO deep dive #3 (Overcloud deployment debugging)

by Carlos Camacho at July 22, 2016 12:00 AM

July 20, 2016

Cloudwatt

Innovation Beta: MyCloudManager v2

logo

This version of MyCloudManager (Beta) is a different stack from everything the team has shared with you so far. It aims to bring you a set of tools to unify, harmonize and monitor your tenant. It contains a number of applications that aim to help you manage your Linux instances day to day:

  • Monitoring and Supervision
  • Log management
  • Jobs Scheduler
  • Mirror ClamAV - Antivirus
  • Repository app manager
  • Backup (snapshot or soft backup)
  • Time synchronization

MyCloudManager has been completely developed by the CAT (Cloudwatt Automation Team).

  • MyCloudManager is fully HA (highly available)
  • It is based on a CoreOS instance
  • All applications are deployed as Docker containers orchestrated by Kubernetes
  • The user interface is built with React
  • You can also install and configure, from the GUI, all the applications on your instances via Ansible playbooks
  • To keep your MyCloudManager as secure as possible, no port is exposed to the internet apart from port 22 for managing the stack’s instances and port 1723 for PPTP VPN access

Preparations

The prerequisites

Initialize the environment

Have your Cloudwatt credentials in hand and click HERE. If you are not logged in yet, you will go through the authentication screen, then the script download will start. This script lets you set up shell access to the Cloudwatt APIs.

Source the downloaded file in your shell. Your password will be requested.

$ source COMPUTE-[...]-openrc.sh
Please enter your OpenStack Password:

Once this is done, the OpenStack command-line tools can interact with your Cloudwatt user account.

Install MyCloudManager

The 1-click

MyCloudManager starts with Cloudwatt’s 1-click, via the Apps page on the Cloudwatt website. Choose the MyCloudManager app and press DEPLOY.

After entering your account login and password, the launch wizard appears:

oneclick

As you may have noticed, the 1-click wizard asks you to re-enter your OpenStack password (this will be fixed in a future version of MyCloudManager). You will also need your tenant ID, which is the same as your project ID, to complete the wizard.

By default, the wizard deploys two instances of type “small-1” to act as the master instances; these are necessary for Kubernetes HA to function properly. The nodes host all your “pods” (applications) on the stack, so they should be sized according to how you intend to use MyCloudManager; by default we propose the “n2.cw.standart-1” flavor type.

You’ll see later that three “tiny” instances are also created; they help Kubernetes keep track of the nodes and the applications deployed on the cluster.

A variety of other instance types also exists to suit your various needs, allowing you to pay only for the services you use. Instances are charged by the minute, capped at their monthly price (you can find more details on the Pricing page of the Cloudwatt website).

To persist application data, a standard volume is created in your tenant for each application deployed through Kubernetes and automatically attached to your stack, so that it holds all the data of your different applications.

By default, MyCloudManager will be deployed on two master instances, three worker instances and three etcd instances.

Press DEPLOY.

The 1-click handles the necessary calls to the Cloudwatt APIs:

  • Start all cluster instances based on CoreOS,
  • Start the toolbox-backend container,
  • Start the toolbox-frontend container,
  • Start the rethinkdb container,
  • Start the rabbitmq container,
  • Start the traefik container

The stack is created automatically. You can follow its progress by clicking on its name, which will take you to the Horizon console. When all modules turn “green”, the creation is finished.

Wait about 5 minutes for the entire stack to become available.

startopo

 Finish OpenVPN access

In order to have access to all functionalities, we have set up a VPN connection.

Here are the steps to follow :

  • First retrieve the output information of your stack,

stack

Windows 7

  • You must now create a VPN connection from your computer: go to “Control Panel > All Control Panel Items > Network and Sharing Center” and click “Set up a new connection or network”.

start vpn internet

  • Now enter the information retrieved from the stack output: first the floating IP, then the login and password provided.

info login

After following this procedure you can now start the VPN connection.

vpnstart


Windows 10

  • Go to Settings> NETWORK AND INTERNET > Virtual Private Network

configwin01

  • Now enter the information retrieved from the stack output: first the floating IP, then the login and password provided.

configipwin10 configuserwin10

After following this procedure you can now start the VPN connection.

vpnstart


You can now access the MyCloudManager administration interface via the URL http://manager.default.svc.mycloudmanager and begin to reap the benefits.

It’s (already) done!

Enjoy

Access to the interface and the various applications is via IP address or DNS names, if the user rights on your computer allow it. A SkyDNS container is launched at startup, letting you use all the DNS names in place. You can access the applications’ web interfaces by clicking Go or via a direct URL (e.g. http://10.0.1.250:30601/ or http://zabbix.default.svc.mycloudmanager/).

We attach a block volume every time you deploy an application, to hold all the data of the application’s containers. The volume is mounted on the cluster node that supports your application and automatically attached to the container. This makes the stack much more robust: if the application crashes and is rescheduled on another node, Kubernetes will unmount and re-mount the volume on the new node, so the application finds all of its data again.

Interface Overview

Here is the MyCloudManager home page; each thumbnail represents an application ready to be launched. In order to be as scalable and flexible as possible, all MyCloudManager applications are Docker containers.

accueil

A menu at the top left of the page lets you move through the different sections of MyCloudManager; we’ll detail them below.

  • Apps: the application list
  • Instances: the list of instances visible to MyCloudManager
  • Tasks: all ongoing or completed tasks
  • Audit: the list of actions performed
  • Backups: the list of all backup jobs
  • My Instances > Console: access to the Horizon console
  • My Account > Cockpit: access to the account dashboard
  • Support: lets you send mail to support or a cloud coach

menu

All of the applications in the Apps section are configurable via the Settings button settings on each thumbnail.

As you can see, we have separated them into different sections.

params

In the Info section you will find a presentation of the application with some useful links about it.

appresume

In the Environments section you can register all the environment variables used to configure the container when it launches.

paramsenv

In the Parameters section you can register all the application’s configuration settings.

paramapp

To distinguish running applications from stopped ones, we have set up a color code: a started application is surrounded by a green halo, and by a yellow halo during installation.

appstart

The Tasks section tracks the actions performed on MyCloudManager, reported in relative time.

tasks

You can cancel a task that is pending or in error in the Tasks menu by clicking horloge, which will then change into poubelle.

We also implemented an Audit section so you can see all actions performed on each of your instances and export them to Excel (.xlsx) via the xlsx button, whether for post-processing or to keep the information for audit purposes.

audit

The Backups section allows you to back up all instances managed by your MyCloudManager. The backup may be performed in two ways: via a snapshot, or via Duplicity (referred to as a “soft” backup).

  • The snapshot backup takes a picture of the instance at the moment you scheduled the backup. You can then find it in the list of images on your tenant.
  • The soft backup deploys a Duplicity container and backs up all the data in the chosen directory (/data or /config) to a Swift container, which can also be found in the Containers section of your tenant (object storage). If you want to back up a group of servers, you must select them when creating the backup. Regarding the scheduling of backups, several choices are available to you:

  • Daily: one backup per day at the desired time,
  • Weekly: one backup per week on the desired day and time,
  • Monthly: one backup per month on the desired day and time.

To start a new backup configuration you must click on the button button

Give a name to your backup configuration: newconfig

Select the servers to be added: Backupselectsrv

Now set when and how the backup of those servers will be made:

  • Snapshot: takes a “picture” of your instance and deposits it in your image library on your tenant (warning: snapshots run cold, as mentioned in the article End of the hot snapshot, place to the cold snapshot!) bkpsnapshot

  • Soft: copies all the selected directories into a Swift container bkpsoft

Once you have clicked the FINISH button, your configuration is saved:

bkpsvg

You can always change a backup configuration via the edit button bkpedit, which allows you to add or remove servers, change the backup directory, and change when it will run. The delete button bkpdelete completely removes the selected backup job.

Where there are backups, there must be restores:

Whether restoring a soft backup or a snapshot, the approach is the same. Go to the Instances menu of your MyCloudManager; a new restore button appears on all servers that have been backed up.

When you click it, a pop-up opens and you can choose the backup to restore from the list chooserestore. Once this is done, if your backup was a snapshot, the selected image is restored in place of the current instance; if the backup was soft, the selected files are restored to the restore directory of your instance.

Back to menu

Finally, we integrated two navigation paths in the MyCloudManager menu: My Instances and My Account. They are used, respectively, to access the Cloudwatt Horizon console and to manage your account via the Cockpit interface.

The Support section allows you, as the name implies, to contact the Cloudwatt support organization with a request or an incident concerning your MyCloudManager. You can also contact a cloud coach for more information about our ecosystem, or about the feasibility of projects you want to run on the Cloudwatt public cloud.

Email :

  • Choose what you need: Email Support or Contact a Cloud Coach,
  • The Type field lets you choose between a request or an incident,
  • The Reply Email Address field gives the support team or cloud coach your address so they can respond,
  • The Request / Problems Encountered field constitutes the body of the email.

supportemail

Email is sent via the sendmail button. This becomes mailsend if the email was sent, or mailfail if the server encountered an error while sending.

Add instances to MyCloudManager

To add instances to MyCloudManager, 3 steps:

  1. Attach the instance to the MyCloudManager router
  2. Run the attachment script
  3. Start the desired services

1. Attach the instance to the MyCloudManager router:

$ neutron router-interface-add $MyCloudManager_ROUTER_ID $Instance_subnet_ID

You will find all the necessary IDs by inspecting the stack’s resources via the following heat command:

$ heat resource-list $stack_name
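
For example, to spot the router and subnet resources in the output (a sketch; the exact resource names depend on your stack):

$ heat resource-list $stack_name | grep -Ei 'router|subnet'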

Once this is done, you can add your instance to MyCloudManager and instrument it.

2. Start the attachment script:

On MyCloudManager, go to the Instances menu and click the button bouton at the bottom right.

We offer a choice of two commands: a Curl command, and a Copy to Clipboard command to run the script when an instance is built.

addinstance

Once the script has been applied to the selected instance, it should appear in the Instances menu of your MyCloudManager.

appdisable

Trick: if you want to create an instance via the Cloudwatt Horizon console and declare it directly in your MyCloudManager, select the MyCloudManager network and Security Group in step 3 of the instance launch wizard, and paste the Copy to Clipboard command into the Custom Script field in step 4.

attachnetwork

launchinstance

3. Start the required services on the instance:

To help you as much as possible, we created Ansible playbooks to automatically install and configure the agents for the different applications.

To do this, simply click on the application you want to install on your machine. The corresponding Ansible playbook will run automatically. Once the application is installed, its logo switches to color, letting you identify the applications installed on each of your instances.

appenable

The MyCloudManager services provided by applications

In this section, we will present the different MyCloudManager services.

Monitoring and supervision

We have chosen to use Zabbix, the most popular application for monitoring, supervision and alerting. Zabbix is free software that monitors the status of network services, servers and other network devices, as well as the applications and software running on the supervised servers, and produces dynamic graphs of resource consumption. Zabbix uses MySQL, PostgreSQL or Oracle to store data; with a large number of machines and metrics to monitor, the choice of database greatly affects performance. Its web interface is written in PHP and provides a real-time view of the collected metrics.

To go further, here are some helpful links :

Log Management

We chose Graylog, the product of the moment for log management. Here is a short presentation: Graylog is an open source log management platform capable of manipulating and presenting data from virtually any source. This container is the one officially offered by the Graylog team.

  • The Graylog web interface is a powerful tool that allows anyone to manipulate everything Graylog has to offer through an intuitive and appealing web application.
  • At the heart of Graylog is its own strong server software. Graylog Server interacts with all other components using REST APIs, so that each component of the system can be scaled without compromising the integrity of the system as a whole.
  • Real-time search results when you want them and how you want them: Graylog provides this thanks to the tried and tested power of Elasticsearch. The Elasticsearch nodes behind the scenes give Graylog the speed that makes it a real pleasure to use.

Thanks to this impressive architecture and a large library of plugins, Graylog stands as a strong and versatile solution for managing the logs of both the instances themselves and the applications and software running on them.

To go further, here are some helpful links :

Job Scheduler

We have chosen to use Rundeck. The Rundeck application allows you to schedule and organize all the jobs that you want to run consistently across your whole estate via its web interface.

In the next version of MyCloudManager, we will give you the possibility to back up your servers, as we saw with the Duplicity bundle.

To go further, here are some helpful links :

Mirror Antivirus

This application is an Nginx server. A cron script runs every day to pick up the latest virus definitions distributed by ClamAV. The retrieved packages are exposed to your instances via Nginx, allowing ClamAV clients to update without your instances necessarily having access to the internet.

To go further, here are some helpful links :

Software repository

We have chosen to use Artifactory, an application that can serve any type of directory via Nginx. Our aim here is to offer an application that exposes a package repository to all of your instances.

To go further, here are some helpful links :

Time Synchronisation

We have chosen to use NTP. The NTP container is used here so that all of your instances, even without internet access, can synchronize to the same time from a single time server.

To go further, here are some helpful links :

The MyCloudManager versions v2 (Beta)

  • CoreOS Stable 1010.6
  • Docker 1.10.3
  • Kubernetes 1.3
  • Zabbix 3.0
  • Rundeck 2.6.2
  • Graylog 2.0
  • Artifactory 4.9.1
  • Nginx 1.11.2
  • SkyDNS 2.5.3a
  • Etcd 2.0.3

List of distributions supported by MyCloudManager

  • Ubuntu 14.04
  • Debian Jessie
  • Debian Wheezy
  • CentOS 7.2
  • CentOS 7.0
  • CentOS 6.7

Application configuration (by default)

As explained before, the Settings button settings on each thumbnail lets you enter all the application settings used to launch the container. However, the login and password cannot be changed there; that must be done inside the application once it has started.

Login and password by default of MyCloudManager applications :

  • Zabbix - Login : admin - Password : zabbix
  • Graylog - Login : admin - Password : admin
  • Rundeck - Login : admin - Password: admin

The other applications have no web interface and therefore no login/password, except Artifactory, which has no authentication.

So watt?

The goal of this tutorial is to accelerate your start. At this point you are the master of the stack.

You now have an SSH access point on your virtual machine through the floating-IP and your private keypair (default user name core).

You can access the MyCloudManager administration interface via the URL MyCloudManager

And after?

This article will acquaint you with this first version of MyCloudManager. It is available to all Cloudwatt users in Beta mode and therefore currently free.

The intention of the CAT (Cloudwatt Automation Team) is to provide improvements on a bimonthly basis. Our roadmap includes, among other things:

  • Instrumentation of Ubuntu 16.04 instances (possible today, but only with the Curl command),
  • Instrumentation of Windows instances,
  • A French version,
  • No longer having to re-enter your credentials,
  • many other things

Suggestions for improvement? Services you would like to see? Do not hesitate to contact us: apps.cloudwatt@orange.com

Have fun. Hack in peace.

The CAT

by The CAT at July 20, 2016 10:00 PM

OpenStack Superuser

Finding OpenStack security issues with Bandit

OpenStack represents one of the largest open source projects both by lines of code and number of active contributors. Although its sheer size represents tremendous growth, it also presents a unique security challenge: How can you find existing security defects in a code base of that size and prevent the introduction of new defects?

Bandit is the tool from the OpenStack Security Project born out of a necessity to respond to that challenge.

In this article, we’ll introduce a common Python security issue, show how to set up Bandit, run Bandit to find the issue and then show the proper process for reporting a security bug.

Issue

Let’s say we’re implementing a simple application programming interface (API) that stores files for users. We allow users to be created with a simple API:

POST /v1/users - Create a user (username, password)

When our application receives a request, it creates a new directory to store the user’s files:

subprocess.Popen('mkdir %s' % user_name, shell=True)

Easy enough, right? What happens if somebody creates the following user?

'something; rm -rf /'

Since subprocess is using a shell to execute the command (because of the shell=True parameter), the shell interprets “;” to be a separator character between two commands: the intended directory creation and the unintended attacker-supplied command. We use “rm” here as an example, but in reality the command could be anything the attacker chooses. This attacker-supplied command will be run with the privileges of the application.

Bandit to the rescue

But there is good news. Bandit can find this issue. The easiest way to install Bandit is to use the “pip” command to install from PyPI:

virtualenv bandit_venv
source bandit_venv/bin/activate
pip install bandit

That’s it! We can now run Bandit:

bandit -r path/to/example
>> Issue: [B602:subprocess_popen_with_shell_equals_true] 
   subprocess call with shell=True identified, security issue. 
   Security: High Confidence: High
   Location: /Users/travismcpeak/Desktop/bandit_test/test.py:6
5 
6   subprocess.Popen('mkdir %s' % user_name, shell=True)
7

Bandit quickly recognizes this as a high-severity issue and shows the offending code. Each plug-in, such as the “B602” plug-in shown above, is explained in great detail in the included Bandit documentation. In many cases, quick references to resources that show the safer way of performing these actions are included in the links section.
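
One safer pattern (our sketch, not part of the article) is to avoid the shell entirely and pass an argument list, so the user-supplied name is never parsed by a shell; the "--" guards against names beginning with a dash:

subprocess.Popen(['mkdir', '--', user_name])

For creating directories specifically, Python's own os.makedirs(user_name) avoids spawning a process at all.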

Reporting security bugs in OpenStack

Security issues should be responsibly disclosed. Before disclosing security issues, though, please take the time to understand the impact. Reporting instances of insecure coding practices is useful, but for maximum value it’s best to file a complete report. If possible the report should include things like:

  • Analysis of the program flow
  • The source of any relevant values (config file, user input, etc.)
  • Locations input has been sanitized
  • Mitigating factors

If you feel like you may have found something but require assistance with triage, please ask for assistance in the #openstack-security channel on IRC. It’s important not to publicly disclose details of possible security issues so please don’t put details in the public channel.

Once you’ve properly triaged and understood the issue, the next step is to file a “private security” bug for the relevant project in Launchpad. You should see an option to mark the bug as a “private security” bug. Please choose this.

Next steps

After reading this, you should be familiar with a basic security issue in Python code, be able to install Bandit and use it to find this (and other) issues, know the steps required to triage a security bug, and be able to responsibly disclose security issues for OpenStack projects.

Another really good resource for developers is the "Secure Development Guidelines" which are aimed at explaining common security issues and the safer way to perform common actions.

Bandit is actively developed and maintained by the OpenStack Security Project. If you have questions or you’d like to contribute to Bandit, please send an email on the OpenStack-dev mailing list (use the “[bandit]” tag) or drop by our IRC channel #openstack-security on Freenode.

Travis McPeak, who currently works at IBM, is a member of the OpenStack Security Project. This article first appeared in the print edition of Superuser magazine, distributed at the Austin Summit. If you'd like to contribute to the next one, get in touch: editor@openstack.org

Cover Photo // CC BY NC

by Travis McPeak at July 20, 2016 06:55 PM

OpenStack Days Ireland zeroes in on networking

Everyone was seeing green at the first OpenStack Days Ireland complete with local community members, users and fresh faces.

Hosted in Dublin and organized by Intel with support from the local user group, the gathering exceeded expectations by selling out five weeks before the event with 160 people from 35 different companies registered to attend. When Ruchi Bhargava, director of datacenter and cloud software engineering at Intel polled the audience to see how many attendees were new to OpenStack, almost half of the people in the room raised their hands.

Structured around OpenStack manageability, network functions virtualization (NFV), networking, service assurance and security, the agenda featured 15 sessions, including case studies, demos and updates from the Product Working Group.

<script async="async" charset="utf-8" src="http://platform.twitter.com/widgets.js"></script>

"The demand displays that Ireland is indeed at the core of this cutting edge technology as evidenced by the number of companies both presenting, attending and collaborating in real life OpenStack activities and deployments," said Haidee McMahon, one of the event organizers and SDN/NFV Orchestration Engineering Manager at Intel.

Embracing the diversity of the event, the Women of OpenStack organized a breakfast to welcome female attendees, who accounted for 15 percent of registrations.

Carol Barrett, an active leader in the Women of OpenStack community, opened the breakfast, welcoming women attendees and allies and offering tips on how to get involved in the growing community. With women's attendance at recent OpenStack Summits growing from nine percent to 12 percent at the Austin Summit, Barrett encouraged attendees to join the different programs, including the mentoring program that launched at the OpenStack Summit Austin.

<script async="async" charset="utf-8" src="http://platform.twitter.com/widgets.js"></script>

Echoing the results of the March 2016 user survey, containers and NFV were focuses of the event's breakout sessions.

<script async="async" charset="utf-8" src="http://platform.twitter.com/widgets.js"></script>

Alex Gabert, senior systems engineer at Midokura, started a game of networking Buzzword Bingo but introduced a term that was not familiar to the new Stackers attending the event: Kuryr.

"OpenStack Kuryr is a project where vendors from network overlay companies work peacefully together and try to reach the goal of bringing Docker networking closer to the Neutron plug in," he explained.

Carrying the theme of OpenStack networking, Mark McLoughlin, director of engineering, OpenStack at Red Hat and member of the OpenStack board of directors, discussed the opportunity for OpenStack and even more broadly, open source for NFV.

"The telco industry is really transforming its data centers, adopting the cloud model and changing how its applications are architected and provisioned," he said in an interview with Superuser TV. "And that's something that works really well with OpenStack."

Catch videos of the full lineup of sessions, and learn more about the first OpenStack Days Ireland and what it means for the community from McMahon and McLoughlin in the SuperuserTV clip below.

<iframe allowfullscreen="allowfullscreen" frameborder="0" height="" src="https://www.youtube.com/embed/YMAEdmmWfCc" width=""></iframe>

Cover Photo // CC BY NC

by Allison Price at July 20, 2016 06:39 PM

AppFormix

Giving Operators What They Want: Choice and Stability With Kubernetes

The beauty of Kubernetes is that it essentially frees an application developer from having to worry about the underlying infrastructure upon which the app runs. If only we could give operators the confidence that Kubernetes will run on their platform of choice—from private infrastructure to public providers—and their host operating system of choice. Oh, and while we’re at it, let’s bundle in the additional operational tooling to monitor the health of Kubernetes itself.

by Sumeet Singh (sumeet@appformix.com) at July 20, 2016 05:28 PM

The Official Rackspace Blog

Rackspace with Red Hat OpenStack Platform: Better Together

With 81 percent of senior IT professionals leveraging or planning to leverage OpenStack as their private cloud technology, OpenStack has clearly become the leading private cloud choice.

But in doing so, many have found operating a private cloud is more complicated than consuming the public cloud and requires more oversight of the datacenter/network/hardware than expected. This complexity, along with the lack of OpenStack talent available in today’s market, has created a challenge for organizations as they look to adopt OpenStack as a private cloud.

By offering OpenStack private cloud as a service, Rackspace’s Managed OpenStack Private Cloud team solved this problem for enterprises, providing customers with a scalable, production-ready private cloud in any data center.

Continuing to bring our operational expertise to best-of-breed cloud technologies, in February we launched Rackspace Private Cloud Powered by Red Hat, the first and only Red Hat certified managed private cloud, combining the support-focused practices of Rackspace and Red Hat into a managed OpenStack private cloud solution.

External network

Red Hat certified managed private cloud customers get the peace of mind that comes with Red Hat’s certification network, which includes thousands of vendors who work directly with Red Hat to validate that their software and hardware work as expected and correct any bugs that may arise. This collaboration offers the comfort of knowing that the reference architecture was co-developed (and is supported) by Red Hat and Rackspace, and it integrates with the certification network.

Since the launch, Rackspace has been working closely with Red Hat to refine and improve our Red Hat OpenStack Platform as a service offer to make it the standard for enterprise grade, production ready OpenStack deployments.

As active OpenStack contributors, Rackspace and Red Hat have written a combined 30 percent of the code in the software, and we continue to work together to improve OpenStack for the use cases enterprises demand. Part of this work has included testing OpenStack projects such as TripleO, Ceilometer, and Ironic at the OpenStack Innovation Center and bringing our results back to the community. Currently, the team is working through a scaled deployment of Ceph (30 nodes), as well as documenting the impact on database sizing as polling settings are adjusted for Ceilometer.

<figure class="wp-caption aligncenter" id="attachment_41073" style="width: 443px">OpenStack code<figcaption class="wp-caption-text">Calculated by Stackalytics</figcaption></figure>

Our combined OpenStack expertise, Rackspace’s Fanatical Support and Red Hat’s Linux expertise give customers a support experience second to none.

Rackspace proactively monitors your cloud and takes steps to keep it running, and serves as a single point of contact for any issues with your cloud, whether they’re at the OpenStack, driver or kernel level. Rackspace takes ownership of the issue and works directly with Red Hat engineering to solve the customer’s problem.

This deep partnership starts with support and continues through the entire product. From joint road mapping at the feature level to deployment engagements, Rackspace and Red Hat work together to ensure the technology is delivering the best outcomes for customers. This level of OpenStack expertise gives customers peace of mind when adopting this private cloud platform: they’ll never have to go it alone.

Consuming OpenStack as a service also offers customers economies of scale, as Rackspace is able to build and deploy a production ready cloud for customers in as little as 45 days from agreed upon requirements — giving customers valuable speed to market in today’s fast-paced world.

And as an operations partner, we don’t stop there. While customers focus on maximizing their new cloud, Rackspace and Red Hat continue to work in front of the customer to identify technological improvements and ways to improve the cloud experience while continuing to operate as a scalable, upgradable, production ready cloud.

These improvements include adding support for the newest projects in OpenStack, performance tuning changes and integration with additional third party tools such as Appformix and PLUMgrid.

In coming months, we’ll be detailing how Rackspace Private Cloud Powered by Red Hat simplifies cloud operations for our joint customers and helps them stand up a production-grade OpenStack cloud. Meanwhile, we invite you to learn more about this solution.

The post Rackspace with Red Hat OpenStack Platform: Better Together appeared first on The Official Rackspace Blog.

by Daniel Sheppard at July 20, 2016 11:00 AM

OpenStack Superuser

How PubMatic saved millions by cloudhopping with OpenStack

PubMatic, a marketing automation platform for advertising publishers that prides itself in providing real-time analytics to their users, first built their advertising platform on the AWS public cloud. Soon after, they found themselves spending up to 18 times what they had budgeted because of their high data volume, heavy network traffic that required a constant performance threshold, and a large analytics system consisting of thousands of nodes.

Udy Gold, PubMatic’s VP of data center and cloud operations, found himself in an awkward position. As the monthly cost for the public cloud soared to $400,000 and higher, PubMatic’s CFO kept knocking on Gold’s door to ask how he planned to solve the budgeting nightmare.

While he jokes about hiding under his desk to avoid the questions, Gold compares his experience with the public cloud to the most expensive taxi cab ride of his life. Gold once landed at 3 a.m. in Mumbai tired, sick and hungry, and the driver he had scheduled did not wait to pick him up. What should have been an easy taxi ride to his hotel ended up with a price tag 20 times what it should have been. Gold knew before he got into the rickshaw early that morning that he was being ripped off, but he climbed in anyhow.

“Sometimes you know what you start,” Gold said. “And sometimes you know why you even start it. I knew why I got into this rickshaw. I needed to get to a bed. But, you never know how it will end. And when you go to the public cloud, that’s exactly this.”

Here is the scenario that Gold was faced with at PubMatic, which he shared at the OpenStack Summit in Austin:

[Slide: the scenario Gold faced at PubMatic]

He knew that with the public cloud he was getting hardware, network and dashboard management, but there were a number of things he wasn’t getting that he needed. Some of these were:

  • Operating system installation and management
  • Network architecture and design
  • Security and patches
  • Ownership and responsibility
  • Application deployment and support
  • A complete data center

Gold laid out the requirements his team had when moving from a public cloud to a hybrid cloud and determining if OpenStack was the answer:

[Slide: requirements for moving from public to hybrid cloud]

PubMatic did decide to switch to a private cloud with OpenStack and was able to create a data environment that allowed the team to provision, configure, deploy and manage its own infrastructure; create a self-service environment that could be controlled by the data team instead of opening a ticket with corporate IT or the data center team to get more hardware and more VMs; and ultimately save a ton of money.

For Gold and PubMatic, the private cloud through OpenStack was the best solution. But Gold stressed that "cloudhopping" from public to private cloud is not something he recommends to every company. It's important, he said in his presentation, to evaluate what will work best for your company's individual scenario and data team. PubMatic teamed up with startup Platform9 Systems; here are the requirements he brought to the Sunnyvale-based company:

[Slide: the requirements PubMatic brought to Platform9]

They worked together to develop PubMatic's current system, in which Platform9 handles its private cloud while PubMatic maintains some public cloud services itself. Gold describes the journey from data center to hybrid data center with a public cloud as a personal one -- not a black-and-white case of best practice.

“Public cloud is not good or bad. For some it works really well, for others it doesn’t work at all for a variety of reasons.”

If your company is trying to decide about using the public cloud versus a private cloud with OpenStack, there are several factors to keep in mind that Gold spelled out:

[Slide: factors to weigh when choosing public vs. private cloud]

You can catch additional details from PubMatic’s experience in the 37-minute talk on the OpenStack Foundation Summit site.

Watch the talk: https://www.youtube.com/embed/5CzyGMRSHQA

Cover Photo // CC BY NC

by Jeanna Barrett at July 20, 2016 12:33 AM

July 19, 2016

Cloudwatt

New available OS distribution: Fedora 24

Fedora is an operating system based on the Linux kernel, developed by the community-supported Fedora Project and sponsored by Red Hat.

Cloudwatt provides an updated image of the Fedora 24 distribution, released from the Fedora labs on June 21st, with all system and security updates available as of July 12th, 2016. We encourage you to use this latest version from now on.

To all users:

  • if you have developed scripts or applications referencing the old identifier (id), you should update them with the id of this new version to ensure continuity of your service (see the lookup example after this list). The id of the image is new, and the old version of the image will be removed from our public catalog at the end of the summer.

  • If you have a snapshot of your instance made with the previous image, restoration will still work: even though the old image will be removed from the public catalog and will no longer be visible, Cloudwatt stores and keeps a history of all published images.
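
A quick way to look up the new image's id, assuming python-openstackclient is configured against your Cloudwatt tenant (the name filter is illustrative):

$ openstack image list | grep -i "Fedora 24"
$ openstack image show <image id or name> -c id -c name -c created_at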

by Florence Arnal at July 19, 2016 10:00 PM

RDO

Recent RDO blogs, July 19, 2016

Here's what RDO enthusiasts have been blogging about in the last week.

OpenStack 2016.1-1 release by Haïkel Guémar

The RDO Community is pleased to announce a new release of openstack-utils.

… read more at http://tm3.org/7x

Improving RDO packaging testing coverage by David Simard

DLRN builds packages and generates repositories in which those packages are hosted. It is the tool developed and used by the RDO community to provide the repositories on trunk.rdoproject.org, and it continuously builds packages for every commit of the projects packaged in RDO.

… read more at http://tm3.org/7y

TripleO deep dive session #2 (TripleO Heat Templates) by Carlos Camacho

This is the second video from a series of “Deep Dive” sessions related to TripleO deployments.

… watch at http://tm3.org/7z

How to build new OpenStack packages by Chandan Kumar

Building new OpenStack packages for RDO is always tough. Let's use DLRN to make our life simpler.

… read more at http://tm3.org/7-

OpenStack Swift mid-cycle hackathon summary by cschwede

Last week more than 30 people from all over the world met at the Rackspace office in San Antonio, TX for the Swift mid-cycle hackathon. All major companies contributing to Swift sent people, including Fujitsu, HPE, IBM, Intel, NTT, Rackspace, Red Hat, and Swiftstack. As always it was a packed week with a lot of deep technical discussions around current and future changes within Swift.

… read more at http://tm3.org/80

by Rich Bowen at July 19, 2016 08:21 PM

OpenStack Superuser

New York OpenStack community celebrates two 6th birthdays this year

OpenStack celebrates its 6th birthday this month. There are parties from Argentina to Vietnam; find one near you on OpenStack's worldwide list and raise a glass.

We're also celebrating with the OpenStack community through short interviews from around the world that offer a glimpse of OpenStack's impact at a local level.

Superuser's second interview features the New York City User Group, which celebrated on July 13. (The first featured one of the community's newest groups, South Africa.)

In addition to celebrating the OpenStack Foundation's birthday, however, the group also celebrated its own 6th birthday. To mark both occasions, it hosted a party with 50 members, stickers and a cake featuring both the 6th birthday and official user group logos.

In this 9-minute Superuser TV segment, Kenneth Hui, a senior technical marketing manager at Rackspace, OpenStack Foundation ambassador and co-organizer of the OpenStack New York and Philadelphia user groups, describes growing in conjunction with OpenStack, the challenges faced throughout the years and what he hopes to see from OpenStack in the future.

Hui, who has been involved with the New York City User Group for the last three years, has seen "an uptick in the number of people who are taking a look at OpenStack" during his tenure. The people involved have changed, too. "There were a lot of vendors in the beginning," he says. "As adoption grows, we're seeing more users."

Those shifts tailor the group's get-togethers. After noticing an increase in the last year or so of VMware administrators joining the New York User Group, Hui offered up a talk on the history of VMware's involvement with OpenStack.

Hui also speaks to the commitment of user group members, whether they are completely new to OpenStack or have been in the community for a while, to learning more about OpenStack. They meet nine to 10 times a year - blizzards permitting; a few storms have shut them down - and he says that's frequent enough for people to feel part of the community.

As the pace of technology accelerates, so does the work of the OpenStack community.

“There are so many new technologies that are coming down the pipe every few months instead of every few years,” he says. “The [OpenStack] community as a whole needs to figure out how to help us navigate through those new technologies...and [make OpenStack] an integration engine for these new technologies.”

Watch the segment: https://www.youtube.com/embed/45jj8hZd-to

Hui also coordinates the Philadelphia OpenStack User Group, which will celebrate OpenStack's 6th birthday on July 28 by making custom tees.

Don't see a local user group or birthday celebration? Get involved and learn how to start one in your local community.

Cover Photo CC/NC

by Superuser at July 19, 2016 06:57 PM

Major Hayden

Join me on Thursday to talk about OpenStack LBaaS and security hardening

If you want to learn more about load balancers and security hardening in OpenStack clouds, join me on Thursday for the Rackspace Office Hours podcast! Walter Bentley, Kenneth Hui and I will be talking about some of the new features available in the 12.2 release of Rackspace Private Cloud powered by OpenStack.

The release includes a tech preview of OpenStack's Load Balancer as a Service project. The new LBaaSv2 API is stable and makes it easy to create load balancers, add pools, and add members. Health monitors can watch over servers and remove them from the load balancer if they don't respond properly.
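
For a flavor of that workflow, here is a rough sketch using the neutron LBaaSv2 CLI (the names, subnet and member address are illustrative, and flags may vary slightly between client versions):

$ neutron lbaas-loadbalancer-create --name lb1 private-subnet
$ neutron lbaas-listener-create --name listener1 --loadbalancer lb1 \
      --protocol HTTP --protocol-port 80
$ neutron lbaas-pool-create --name pool1 --listener listener1 \
      --protocol HTTP --lb-algorithm ROUND_ROBIN
$ neutron lbaas-member-create --subnet private-subnet \
      --address 10.0.0.4 --protocol-port 80 pool1
$ neutron lbaas-healthmonitor-create --type HTTP --delay 5 \
      --timeout 2 --max-retries 2 --pool pool1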

I talked about the security hardening feature extensively at this year's OpenStack Summit in Austin, and it is now available in the 12.2 release of RPC.

The new Ansible role and its tasks apply over 200 security hardening configurations to OpenStack hosts (control plane and hypervisors), and it comes with extensive auditor-friendly documentation. The documentation also shows deployers how to fine-tune many of the configurations and disable the ones they don't want. Deployers also have the option to tighten some configurations depending on their industry requirements.
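
As a minimal sketch of what that fine-tuning might look like in an OpenStack-Ansible deployment's user_variables.yml (the variable names below are hypothetical stand-ins; the role's documentation lists the real option names):

# /etc/openstack_deploy/user_variables.yml
# Hypothetical: opt out of one check that conflicts with local policy
security_example_disable_check: false
# Hypothetical: tighten a default for stricter industry requirements
security_example_password_max_days: 60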

Join us this Thursday, July 21st, at 1:00 PM CDT (check your time zone) to talk more about these new features and OpenStack in general.

The post Join me on Thursday to talk about OpenStack LBaaS and security hardening appeared first on major.io.

by Major Hayden at July 19, 2016 02:09 PM

The Official Rackspace Blog

You’ve Come a Long Way, Baby: Happy 6th Birthday OpenStack

July 19 marks the 6th anniversary of the founding of the OpenStack open source project, and as co-founders, we couldn’t be prouder.

Each year around this time, we take stock of OpenStack’s progress, and this year, we have another milestone to celebrate: the first anniversary of the OpenStack Innovation Center, which we founded with Intel to more quickly and intelligently take on remaining challenges and continue to speed up enterprise adoption.

That adoption is already impressive: half of Fortune 100 companies now use OpenStack. According to a recent survey published on CIO.com, 81 percent of senior IT professionals are planning to move or have already moved to OpenStack private cloud. More than 54,000 community members and upwards of 80 global user groups now work across 179 countries, supporting more than 600 companies.

But deployment challenges remain. By its very nature, OpenStack is enormously complex, with an almost limitless number of configuration choices, varying levels of maturity across a growing number of projects and a rapid pace of new releases.

Many companies lack the operational expertise to optimally configure and then run the platform. The CIO.com survey found that half of all enterprises that tried to implement an OpenStack cloud failed. Of those who were successful, 65 percent found the implementation experience difficult.

These findings aren’t surprising, given that OpenStack is a relatively new technology, and the industry overall has a shortage of technical talent, but that also means it won’t be a quick or easy challenge for companies to solve on their own.

Luckily, they don’t have to.

As Rackspace has continued to invest in OpenStack, one thing has become crystal clear: for the vast majority of companies, OpenStack is best consumed as a managed service. We’ve come to understand that customer success isn’t about finding the right open source project; it’s about finding the right partner.

As builder and operator of the world’s largest OpenStack public cloud and some of the world’s largest OpenStack private clouds, we understand better than anyone what is required to help our customers take full advantage of this complex but powerful cloud platform. Rackspace customers rely on us to help do critical tasks such as health checks, capacity management and upgrades.

Consuming OpenStack as a managed service also eliminates the need to find operational expertise (a real bottom-line boon, given that OpenStack engineers are expensive and difficult to find, hire and retain). We eliminate this talent gap and enable our customers to focus their IT resources on developing software and features for their customers, instead of managing infrastructure.

Our unmatched stable of OpenStack experts act as extensions of our customers’ IT staff, freeing them to focus their time on the more important tasks of enabling their businesses to be successful.

Rackspace is proud to be one of the founders of OpenStack and the project’s current standard-bearer. We are humbled by the great community that has grown up around it. And as we celebrate this milestone, we continue to look forward — to watch adoption grow and the technology mature even more.

The post You’ve Come a Long Way, Baby: Happy 6th Birthday OpenStack appeared first on The Official Rackspace Blog.

by Bryan Thompson at July 19, 2016 11:00 AM

July 18, 2016

IBM OpenTech Team

Horizon Newton Midcycle Sprint Recap

The Horizon midcycle was held in San Jose, California from July 12-14, 2016. Thanks to Rob and Cisco for hosting us! We had about a dozen Horizon contributors in attendance from 7 different companies (Cisco, HP, IBM, Intel, Mirantis, Rackspace, Symantec). We covered a lot of ground in three days, starting with the most important topic: a Horizon mascot/logo. :) Vampire cats, unicorns, superheroes, and various memes were suggested. But on to serious business: here's an update on what we covered for Newton.

Schema Form
Writing a form template can be a pain, not to mention how messy, bloated, and inconsistent it can get: you need to redefine every form field and its attributes. Luckily, there is an Angular solution called Schema Form, which allows you to generate Bootstrap 3-compatible form markup from a JSON schema. This works with Horizon's custom form validation and theming, too. A first pass is in review with the Create Network form.

For more information, http://schemaform.io/.
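
To give a flavor of the approach, here is a minimal Schema Form sketch (a generic angular-schema-form example, not Horizon's actual Create Network code):

// In the controller: the form is described once, as data.
$scope.schema = {
  type: "object",
  properties: {
    name:   { type: "string",  title: "Network Name" },
    shared: { type: "boolean", title: "Shared" }
  },
  required: ["name"]
};
$scope.form = ["*"];   // render every property with its default widget
$scope.model = {};

// In the template, one directive replaces pages of hand-written markup:
// <form sf-schema="schema" sf-form="form" sf-model="model"></form>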

Resource Type Registry
We have a resource type registration system that collects all the implementation details needed for creating a user interface, including display and behavior related to that resource type. You can retrieve these details from wherever you need them – tables, detail views, forms, modals, etc. A type is created and accessed via its Heat type name, for example "OS::Cinder::Volume", which is associated with a service API call.

The things returned by this singleton instance are:

  • information about properties, for example labels and formatters for table column headers
  • itemActions*
  • batchActions*
  • globalActions*
  • detailViews*
  • tableColumns*
  • filterFacets*

Those marked with an asterisk (*) are also extensible, allowing you to add, remove, or replace existing items.

With this information, you can build out different types of UI.

The inline code documentation is comprehensive, with thorough examples, and we continue to add to and improve the registry. It has made the creation of panels very simple. One concern brought up at the midcycle was that these layers of abstraction make it difficult to figure out where a problem lies. For example, if one of your row actions is not working, where do you start debugging? We will address this with more documentation about how the framework components work together, for example by diving into the dependency chain of:

hz-resource-panel ->
hz-resource-table ->
hz-dynamic-table ->
hz-cell ->
hz-field

The Images table has been converted to use the registry; please take a look at it as the most up-to-date reference. Several others are in the review pipeline.
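
For illustration, registering display details for a resource type looks roughly like this (a simplified sketch; the exact service name and method signatures in tree may differ):

// In an Angular run block, e.g. in horizon.app.core.images
registerImageType.$inject = ['horizon.framework.conf.resource-type-registry.service'];
function registerImageType(registry) {
  registry.getResourceType('OS::Glance::Image')   // keyed by the Heat type name
    .setProperty('name', { label: gettext('Name') })
    .setProperty('size', { label: gettext('Size') });
}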

To summarize, what remains to be done with the registry is:

  • High level documentation
  • High level functional tests
  • Stability of Angular framework

NG Images
On a related note, this is the most comprehensive NG panel in tree. We focused on this one to help us establish patterns before developing other panels. Remaining issues to tackle include handling asynchronous table events via polling and supporting direct upload of multiple files (with CORS).

Quota Usage
Quota usage was originally calculated "manually" in Horizon because there were no specific service API endpoints. It was very inefficient and sometimes did not align with the backend. However, cinderclient and novaclient now support it; neutronclient does not yet. Consequently, we can improve how we fetch quotas. There is a patch that will fetch the quota calls at the same time as the table data to improve table render performance.

Theming
We are continuing to theme elements (spinner, number spinner, legacy workflow). The current default styling will be moved to a separate repo called "OpenStack Theme"; the new default will be vanilla Bootstrap.

Glance V2 Support
This is a work in progress that has encountered regression issues in the Glance API. For example, v2 no longer supports creating an image from a URL. This feature can still be enabled within Glance, but it is not discoverable from outside. If it is made discoverable, Horizon can add a setting to local_settings.py to enable the feature.
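
Such a toggle might look like the following sketch in local_settings.py (the setting name here is a hypothetical illustration; the actual name depends on what lands in Horizon):

# local_settings.py
# Hypothetical switch: expose "create image from URL" in the UI only when
# the deployment's Glance has the corresponding feature enabled.
IMAGES_ALLOW_LOCATION = True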

Filtering
We are adding more filtering. From the user’s perspective it works seamlessly whether it is client-side or server-side filtering.

Upgrading xstatic Packages
We are in the process of upgrading Angular and Bootstrap UI.

DISCUSSION TOPICS

How to Support Microversions [new]
Microversioning is used to introduce REST API changes, both bug fixes and new features, after a major version has been released. These changes are oftentimes backward incompatible. We have crude support for this in Horizon right now using if checks (as in the case of the unlock/lock instance actions available in Nova 2.9). However, we do not have a general way to solve the problem of enabling/disabling the UI based on whether or not a feature is supported by the API.
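
As a sketch of the kind of "if check" in question (simplified, not Horizon's actual code), using python-novaclient's version helpers:

# Hypothetical helper: gate a UI action on the Nova API microversion.
from novaclient import api_versions

def supports_lock_unlock(current_version):
    """True if the deployment's Nova API is at least microversion 2.9."""
    return (api_versions.APIVersion(current_version) >=
            api_versions.APIVersion("2.9"))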

Deprecation Policy [new]
We have enabled the NG Launch Instance wizard and the NG Swift panel as defaults. However, the legacy code is still in tree. We will put out a release note to let developers know that we intend to remove these panels in n cycles. We still need to decide how many cycles that will be (to give people a chance to migrate if they choose to do so), and whether we will move the legacy code into a deprecated panel plugin before we phase it out.

On a side note, removing the legacy Launch Instance code currently breaks the integration tests. We need to clean up/rewrite those tests.

Adding a utility library [new]
We are considering adding Underscore or Lodash.

Error Messaging [ongoing]
Error messaging has always been a contentious topic in Horizon. The underlying problem is that there are no standards across APIs. So we are left either showing vague, not-so-useful messages ("Volume could not be created") or parsing the error message for hardcoded keywords and then displaying a user-friendly message. We cannot simply return whatever the API gives us, because that could contain sensitive content.

To alleviate the issue, we can do better validation on the client side so we don't need to make an API call, and/or let the user know at which step the request failed ("was not able to reach service").

Dashboard Panel Organization [ongoing]
Currently, Horizon is split into Project and Admin dashboards organized as "lists." Usability studies and operator feedback tell us this leads to confusing navigation for new users, who may dig into the contents of one panel only to have to switch to another context to run an action. We are working with the OpenStack UX team to figure out the best way to improve navigation.

OSProfiler [in review]
NG Developers Panel [in review]

These are two new panels in development and on Horizon's roadmap. They will assist developers in implementing new panels and/or tracing which API calls are being made.

So that is our 3-day midcycle in a nutshell. Feel free to visit our IRC channel, #openstack-horizon, if you have any questions.

Farewell for now, catch us in Barcelona in the fall!

The post Horizon Newton Midcycle Sprint Recap appeared first on IBM OpenTech.

by Cindy Lu at July 18, 2016 11:45 PM

SUSE Conversations

OpenStack 6 years on: thriving, expanding and maturing

This week marks the 6th anniversary for OpenStack cloud software. What began in July 2010 with a first design summit in Austin, Texas, has since grown into the clear leader for open source cloud solutions. Industry analyst 451 Research sees the OpenStack market growing at an impressive 35% CAGR and predicts that revenue will exceed …

+read more

The post OpenStack 6 years on: thriving, expanding and maturing appeared first on SUSE Blog.

by Mark_Smith at July 18, 2016 11:22 PM

OpenStack Superuser

Why OpenStack is not big enough

Although the OpenStack Foundation has faced criticism for the ever-expanding number of projects in its Big Tent, executive director Jonathan Bryce believes the growth is necessary.

Bryce also talks about becoming an integration engine, changing workloads and cloud-native apps in a recent episode of OSpod, a podcast about all things OpenStack. Niki Acosta of Cisco and Jeff Dickey of Redapt fired off a host of interesting questions. We've picked out a few interesting bits, edited for clarity. You can catch the whole 55-minute episode here.

What are the criticisms you hear the most and how are you responding to these criticisms?

One from the last year has been around the Big Tent. That's been an ongoing transition in how the upstream community is governed, and a lot of people have been confused about what it means: what does it mean for OpenStack, what does it mean for what I should use or not use? I think those are all legitimate questions and concerns. We definitely deserved criticism on how we communicated around it. Ultimately, the decision to change how the upstream projects were governed was the right one, but the process for communicating and explaining it led to a lot of confusion. Two years ago, projects were defined as "integrated," which is a word with a specific meaning; it would lead you to assume something about how those projects work together and interact that wasn't fully accurate. So we were in a confusing situation already.

The different technical leaders looked at that problem and tried to solve it. They also saw that there were many other things being built in the OpenStack ecosystem that were really useful and valid but weren't being recognized, and in some cases were starving for resources because they weren't "integrated." They were developed the OpenStack way by the same developers, but companies were unwilling to use them or put resources on them because of that label, which seemed like a stamp of approval. So the goal was really to communicate more clearly what the OpenStack development community was building and then, rather than having just one label, to offer a variety of other data people can look at to see: is this project being used in production? How old is it? What kind of testing does it have? What kind of documentation does it have? That way, people who are building products and services and want to use the code directly have more information than just "integrated."

Some people say OpenStack is too big; the funny thing is that in some ways OpenStack is not big enough.

One of the things we've been doing over the last year: on the OpenStack.org website, under the software section, there's a project navigator we launched at the end of 2015, and it surfaces a lot more of that information. It also highlights the core projects around compute, storage and networking that most of the other projects build on and integrate with. It definitely doesn't solve all the problems; it's step one, and we're continuing to add more information to it.
Some people say OpenStack is too big; the funny thing is that in some ways OpenStack is not big enough. If you look at the problem set we're taking on, you need a massive set of functionality to automate all of the infrastructure in the world's data centers at scale. You're going to need a lot more than just a virtualization manager or an object storage system, which is where we started out six years ago. You need a lot more than that to do governance and orchestration and vertical integration for databases, data analytics, etc...

Traditionally, people called OpenStack a cloud platform; is it turning into an integration platform?

What is changing is the answer to the question: "What is cloud?" If you go back 10 years, cloud was really just extremely elastic virtualization; that was cloud, and the only thing you could run on it was something that could live in a highly elastic, virtualized environment. Now, there are incredible workloads being run on top of clouds.

At the OpenStack Summit in Austin, SAP talked about a production example that runs Siemens MindSphere, which is basically a control system for industrial manufacturing, on top of the SAP cloud platform, which runs on top of OpenStack. Ten years ago nobody would have thought about putting that in a cloud environment at all. The scope of cloud has changed dramatically over the last 10 years, so if we want to stay relevant, then we want to continue to meet those kinds of needs, and the scope of what we work on has to grow as well.

Cloud is changing; how are the workloads changing?

What's been really interesting to me is to see how the workloads that run on cloud have changed. There are still the standard ones, basically web services: websites, APIs, backends, mobile applications. But then you have a split. On one end, you're bringing over legacy applications, things you would traditionally run in a really stable, safe, never-changing environment, into a cloud-hosted model. On the other end, you have the cloud-native application development movement, where you're really exploding your application into microservices that each run on their own. Instead of one application, you have 30 or 40 applications that make up an end-user product, and so those are very distinct extremes...

The problem set we're addressing is very broad; this is a general-purpose technology. Look at something like Linux: the bread and butter of Linux when I started working with it in the late '90s was running a single server. That was basically what everybody I knew who was using Linux was doing, and then there were a few of us trying to use it on the desktop as well, which was really difficult. Most of the use cases were running servers. Now, it's in my phone, it's in my car, it's in the set-top box on my TV, because these are general-purpose technologies. A lot of times they start out with niche use cases and then expand into crazy things that we never imagined.

It seems like everyone's fighting right now to be the first citizen of cloud native apps. How do you see the container landscape working with OpenStack?

You're right, and I think that's a really foolish approach. There's so much opportunity out there; I really think we are just at the beginning of a completely different way of managing data and building applications… A survey showed that in the first quarter in the U.S., more of the new mobile customers that came online were cars than cell phones. It's not just about our laptops, it's not just about our phones; everything we interact with is becoming connected. That totally changes what you can build and work with from a development standpoint... If we all just scramble around wanting to be the first citizen, the one who controls this piece or that piece, we delay the true breakthrough that we should be working together to build.

At the OpenStack Summit we did do a few things that made people go, "what?" We had Gartner speak about bimodal IT… We also had Alex Polvi from CoreOS, along with the Cloud Native Computing Foundation and Google, and they showed a Kubernetes environment running the OpenStack services on top of it. We also showed people running Kubernetes on top of OpenStack...

We have to be careful that we don't hold on too tightly to our toys and miss the bigger opportunity.

Isn't one of the benefits of OpenStack that the possibilities both above and below the stack are endless?

That's one of the great strengths of OpenStack… There are APIs on top that you can build really cool and powerful applications on, but there are also APIs underneath, so you can tie in storage like Ceph or more commercial storage systems from NetApp and IBM. It's pluggable on both sides…

We've gotten to the point where, at least at the standard OpenStack infrastructure level, the technology is no longer the issue.

What we have built upstream in OpenStack is, ultimately, a way of producing software through an amazing global collaboration. If you look at where we as an industry run the risk of missing the opportunity, or delaying the inflection point where we get to incredible value, it's because there are a lot of other companies who see these technology points as a proprietary opportunity they want to control. Even if they have software underneath that's under an open source license, they don't necessarily make building the community a top priority. We've seen this for years now: those communities never end up as healthy, vibrant and diverse as when building a community is explicitly a top priority. That's part of what worries me and one of the things that I think could get in our way.

Do you think the Foundation will ever have an opinionated build?

Probably not, but one of the things we have started doing is working with different users and working groups to come up with more and more detailed reference architectures. The use cases are just so varied. However, when we look at data from our user survey, we see some pretty consistent trends. People who do big data analytics with the Sahara project mostly also run Ironic, and it makes sense: if you're going to do heavy analytics, running on bare metal gives you a performance improvement. But if you were just coming to OpenStack without a lot of knowledge or background, you might not know to look at Ironic from the get-go. So we're working on more documentation around specific use cases; hopefully when we get to Barcelona in October we'll be able to unveil some cool new documentation. So a specific distribution or build, probably not, but much more detail around what we're seeing as common patterns, yes.

There seems to be a lag in the cultural aspect of being able to develop, deploy and adopt these technologies. Do you think that enterprise adoption has lagged because of those cultural elements? What advice would you give to companies that struggle with the cultural aspect of transforming their business?

We've gotten to the point where, at least at the standard OpenStack infrastructure level, the technology is no longer the issue. It's almost all cultural, and it's almost all around how you find the right people and how you adjust years or even decades of historical procedures to this world.

I've told this story before, but it's the perfect example. A company wanted to adopt cloud to speed up its development process; it was taking 44 days to get a feature out… They built a cloud, rolled it out, and after that they were able to do it in 42 days! That's because they hadn't changed the culture or the processes. Once they went back and did that, they dropped the deployment time down to hours. To get there, they had to change who had sign-off authority for every single application change and move a lot of that to the front end, so you're validating a deployment methodology rather than a specific deployment of that application. Once you get to that point, you can follow that methodology as many times as you want, hundreds of times a week. When I talk with companies, I tell them it's really easy to get excited about some change and try to adopt it all at once, and that almost always ends up failing…

Two things are really important. One is to have buy-in from leadership for a transition like this; otherwise it's too easy for people throughout the organization to dismiss it... And before you come up with a grand strategy, try a couple of pilot projects. This is something you see with successful OpenStack projects: they almost always start with a specific purpose, then add on afterwards. Because OpenStack can do so many things, it's really easy to get paralyzed by the possibilities… It's best to pick a single application and a single development team, meet their needs, and learn in a controlled environment where you have a real-world use case and a really tight feedback loop around one specific scenario. Then add in as you go along…

Do you see public cloud as a threat to OpenStack?

It's sort of like saying that Linux is competitive with Facebook. One is a technology, and you can use it to build the other thing, or to build alternatives. Earlier this year, there was an article about OpenStack changing focus from private cloud to telcos. First of all, OpenStack isn't one person or one entity that makes a decision like that. It's used for a lot of different things; the majority of deployments are private clouds, usually in enterprises, but there are around 30 or so public clouds running OpenStack. As you look at how public cloud has evolved since Amazon first kicked off the market in 2006, 10 years ago now, it's gone from a really simple, menu-driven set of options to dozens and dozens of different services. Some of the things Amazon has added lately include email hosting and DNS hosting, things that hosting companies have done for a long time.

We're seeing diversification in the public cloud market. You'll end up seeing hyperscale clouds like Amazon, Microsoft and Google that are available for broad, general-purpose applications, the basic bread and butter of the cloud development model. But you'll also see public clouds focused on a specific region or a specific vertical, and that's where we've seen OpenStack being used. There's a public cloud in Sweden built by City Network specifically for European Union financial regulations. If you are a financial services company in the European Union, you can run on this public cloud and know that you are meeting the requirements the EU has in place, which include rules on locality and citizen data, etc… I think we'll see continued diversification in addition to the standard hyperscale public cloud model.

What is the foundation doing to address the growing need for security?

Whenever you talk about OpenStack security, it's a multi-layered question… But within OpenStack specifically there are a couple of groups focused on the security of the OpenStack code, covering ongoing vulnerability management and basic code security… Those are the two main focus areas, development and ops. We're also putting together a security white paper (editor's note: look for it in mid-July) around that content.

Watch: https://www.youtube.com/embed/VWXkm-mLU7E

Cover Photo // CC BY NC

by Nicole Martinelli at July 18, 2016 11:14 PM

Solinea

451 Research: OpenStack-related Business Models to Exceed $4BN by 2019

Solinea Launches Enterprise Platform for Management, Audit, and Monitoring for OpenStack

Download the July 2016 451 Research Impact Report on Solinea: “Solinea’s Goldstone Enterprise Targets Enterprise Openstack Users”


451 Research tracks the evolving OpenStack business models through its Market Monitor offering, a market-sizing and forecasting service that offers a bottom-up market size, share and forecast analysis for the rapidly evolving marketplace for OpenStack-related products and services. The service provides detailed information on the 56 vendors in the OpenStack marketplace, including a listing of each vendor and its products, and a view of the competitive landscape. This service tracks key market segments whose vendors support OpenStack or base their services on the OpenStack framework: service providers, IT services, distributions and training. The OpenStack Market Monitor no longer includes PaaS and cloud management vendor revenue associated with OpenStack. Those estimates are fully included in our Cloud Enabling Technologies forecasts.

The 451 Take

Our OpenStack market estimate and forecast were derived using a bottom-up analysis of each vendor's current revenue generation and growth prospects. Of the 56 companies included in our estimate, eight out of 10 have directly provided revenue guidance. Based on our research, we continue to believe the market is still in the early stages of enterprise use and revenue generation. Our Market Monitor service expects total OpenStack-related revenue to exceed $4bn by 2019. Revenue overwhelmingly comes from the service provider space, with an increasing portion coming from private cloud deployments rather than public IaaS. We expect an uptick in revenue from all sectors, especially from OpenStack products and distributions primarily targeting enterprises.

Get insights on the OpenStack market: 

    • The market's 35% CAGR
    • An overview of OpenStack service providers
    • An overview of IT Services, Training, and Distributions
    • The OpenStack market outlook

Get The Report

The post 451 Research: OpenStack-related Business Models to Exceed $4BN by 2019 appeared first on Solinea.

by Solinea at July 18, 2016 05:24 PM

RDO

OpenStack Swift mid-cycle hackathon summary

Last week more than 30 people from all over the world met at the Rackspace office in San Antonio, TX for the Swift mid-cycle hackathon. All major companies contributing to Swift sent people, including Fujitsu, HPE, IBM, Intel, NTT, Rackspace, Red Hat, and Swiftstack. As always it was a packed week with a lot of deep technical discussions around current and future changes within Swift.

There are always way more topics to discuss than time, therefore we collected topics first and everyone voted afterwards. We came up with the following major discussions that are currently most interesting within our community:

  • Hummingbird replication
  • Crypto - what's next
  • Partition power increase
  • High-latency media
  • Container sharding
  • Golang - how to get it accepted in master
  • Policy migration

There were a lot more topics, and I'd like to highlight a few of them.

H9D aka Hummingbird / Golang

This was a big topic, as expected. Rackspace has already shown that H9D significantly improves the performance of the object servers and replication compared to the current Python implementation. There were also some investigations into whether the speed could be improved using PyPy and other tweaks; however, the major problem is that Python blocks processes on file I/O, no matter if it is async I/O or not. Sam wrote a very nice summary about this earlier on [1].

NTT also benchmarked H9D and showed some impressive numbers as well. Briefly summarized, throughput increased 5-10x depending on parameters like object size. It seems disks are no longer the bottleneck; the proxy CPU is. That said, inode cache memory seems to be even more important, because with H9D one can issue many more disk requests.

Of course there were also discussions about another proposal to accept Golang within OpenStack, and those discussions will continue [2]. My personal view is that the H9D implementation has some major advantages, and hopefully (a refactored subset of) it will be accepted for merging to master.

Crypto retro & what's next

Swift 2.9.0 was released this past week and includes the merged crypto branch [3]. Kudos to everyone involved, especially Janie and Alistair! This middleware makes it possible for operators to fully encrypt object data on disk.
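
Enabling it amounts to adding the keymaster and encryption middlewares to the proxy pipeline; a minimal sketch of proxy-server.conf follows (the secret is a placeholder, and production deployments should follow the Swift documentation):

[pipeline:main]
pipeline = ... keymaster encryption proxy-logging proxy-server

[filter:keymaster]
use = egg:swift#keymaster
# placeholder only -- generate and guard a strong base64-encoded secret
encryption_root_secret = your_base64_secret_here

[filter:encryption]
use = egg:swift#encryption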

We did a retro on the work done so far; this has been the third time we used a feature branch and a final soft freeze to land a major change in Swift. There are pros and cons to this, but overall it worked pretty well again. It also made sense that reviewers stepped in late in the process, because this brought new perspectives to the whole work. Soft freezes also push more reviewers to contribute and get the change merged in the end.

Swiftstack benchmarked the crypto branch; as expected, throughput decreases somewhat with crypto enabled (especially with small objects), while proxy CPU usage increases. There were some discussions about improving the performance, and it seems the impact from checksumming is significant here.

The next steps for the crypto middleware are to work on external key master implementations (for example using Barbican) as well as key rotation.

Partition power increase

Finally there is a patch ready for review that will allow an operator to increase the partition power without downtime for end users [4].

I gave an overview of the implementation and also demoed how it works. Based on discussions during the week I spotted some minor issues that have since been fixed, and I hope to get this merged before Barcelona. We also dreamed a bit about a future Swift with automatic partition power increase, where an operator needs to think about this much less than today.

Various middlewares

There are some proposed middlewares that are important to their authors, and we discussed quite a few of them. This includes:

  • High-latency media (aka archiving)
  • symlinks
  • notifications
  • versioning

The idea behind supporting high-latency media is to use cold storage (like tape, or other public cloud object storage with a possible multi-hour latency) for less frequently accessed data, and especially to offer a low-cost, long-term archival solution based on Swift [5]. This is somewhat challenging for the upstream community, because most contributors don't have access to large enterprise tape libraries for testing. In the end this middleware needs to be supported by the community, so a stand-alone repository outside of Swift itself might make the most sense (similar to the swift3 middleware [6]).

A new proposal to implement true history-based versioning was put forward earlier on, and some open questions were talked through. This should hopefully land soon, adding an improved approach compared to today's stack-based versioning [7].

Sending out notifications based on writes to Swift has been discussed before, and thankfully Zaqar now supports temporary signed URLs, solving some of the issues we faced earlier. I'll update my patch shortly [8]. There is also the option of using oslo.messaging. All in all, the idea is to use a best-effort approach; it's simply not possible to guarantee that a notification has been delivered successfully without blocking requests.

Container sharding

As of today it's a good idea to avoid billions of objects in a single container in Swift, because writes to that container can then get slow. Matt started working on container sharding some time ago [9] and has iterated once again after hitting new problems with the previous ideas. My impression is that the new design is getting much closer to something that will eventually be merged, thanks to Matt's persistence on this topic.

Summary

There were a lot more (smaller) topics discussed, but this should give you an overview of the current work going on in the Swift community and the interesting new features we'll hopefully see soon in Swift itself. Thanks to everyone who contributed and participated, and special thanks to Richard for organizing the hackathon. It was a great week and I'm looking forward to the next months!

by cschwede at July 18, 2016 03:55 PM

How to build new OpenStack packages

Building new OpenStack packages for RDO is always tough. Let's use DLRN to make our life simpler.

DLRN is the RDO continuous delivery platform: it pulls upstream git repositories, rebuilds them as RPMs using template spec files, and ships the results in repositories consumable by CI (e.g. upstream Puppet/TripleO/Packstack CI).

We can use DLRN to build a new RDO Python package before sending it for package review.

Install DLRN

[1.] Install the required dependencies for DLRN on a Fedora/CentOS system:

$ sudo yum install git createrepo python-virtualenv mock gcc \
              redhat-rpm-config rpmdevtools libffi-devel \
              openssl-devel

[2.] Create a virtualenv and activate it

$ virtualenv dlrn-venv
$ source dlrn-venv/bin/activate

[3.] Clone the DLRN git repository from GitHub

$ git clone https://github.com/openstack-packages/DLRN.git

[4.] Install the required Python dependencies for DLRN

$ cd DLRN
$ pip install -r requirements.txt

[5.] Install DLRN

$ python setup.py develop

Now your system is ready to use DLRN.

Let us package the "congress" OpenStack project for RDO

[1.] Create a project directory "congress-distgit" and initialize it using git init

$ mkdir congress-distgit
$ cd congress-distgit
$ git init

[2.] Create a branch "rpm-master"

$ git checkout -b rpm-master

[3.] Create an openstack-congress.spec file using the RDO spec template and commit it to the rpm-master branch; a trimmed sketch of such a spec follows below.

$ git add openstack-congress.spec
$ git commit -m "<your commit message>"
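
If you have never written one, here is a heavily trimmed sketch of what such a spec might start from (illustrative only: the real RDO template carries many more BuildRequires/Requires lines and sections, and DLRN substitutes the XXX placeholders at build time):

%global pypi_name congress

Name:       openstack-%{pypi_name}
# DLRN replaces these placeholders from git metadata
Version:    XXX
Release:    XXX
Summary:    OpenStack Congress (policy as a service)
License:    ASL 2.0
URL:        https://launchpad.net/%{pypi_name}
Source0:    https://tarballs.openstack.org/%{pypi_name}/%{pypi_name}-%{version}.tar.gz
BuildArch:  noarch
BuildRequires: python2-devel
BuildRequires: python-setuptools

%description
Policy as a service for OpenStack.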

Add a package entry in rdoinfo

[1.] Copy the rdoinfo directory somewhere locally and make your changes there.

$ rdopkg info && cp -r ~/.rdopkg/rdoinfo $SOMEWHERE_LOCAL
$ cd $SOMEWHERE_LOCAL/rdoinfo

[2.] Edit the rdo.yml file and add a package entry at the end

$ vim rdo.yml
- project: congress                                  # project name
  name: openstack-congress                           # RDO package name
  upstream: git://github.com/openstack/%(project)s   # upstream source git repository
  master-distgit: <path to project spec file git repo>.git  # the congress-distgit repo created above
  maintainers:
  - <maintainer email>                               # your email address

[3.] Save rdo.yml and run

$ ./verify.py

This will check rdo.yml sanity.

Run DLRN to build the openstack-congress package

[1.] Go to the DLRN project directory.

[2.] Run the following command to build the package

$ # --info-repo points at the local rdoinfo copy, --package-name selects
$ # the package to build, --head-only builds only the latest commit
$ dlrn --config-file projects.ini \
       --info-repo $SOMEWHERE_LOCAL/rdoinfo \
       --package-name openstack-congress \
       --head-only

It will clone the "openstack-congress" project code, and the spec under the "openstack-congress_distro" folder.

[3.] Once done, you can rebuild the package locally by passing the --dev flag.

$ # --dev rebuilds the package locally, reusing the existing repo data
$ dlrn --config-file projects.ini \
       --info-repo $SOMEWHERE_LOCAL/rdoinfo \
       --package-name openstack-congress \
       --head-only \
       --dev

[4.] Once the build is complete, you can find the RPMs and SRPMs in this folder:

$ # path to the packaged rpms and srpms
$ ls <path to DLRN>/data/repos/current/

Now grab the RPMs and feel free to test them.

by chandankumar at July 18, 2016 02:05 PM

OpenStack Blog

Get a high COA score. Win a Chromebook.

Are you ready to show off your OpenStack skills? The COA high score contest has begun! From July 18th through September 11th, the COA exam taker with the highest score will win a Chromebook from the OpenStack Foundation.

 

Entry is easy:

  1. Go to openstack.org/coa and read the exam materials
  2. Register and schedule your exam!

 

That’s it!
Winners will be notified via email. See the full contest rules below. Good luck!

 

Certified OpenStack Administrator Exam Contest

The Certified OpenStack Administrator Exam Contest (the “Contest”) is designed to encourage eligible individuals (“Entrant(s)” or “You”) to take the Certified OpenStack Administrator Exam (the “Exam”). OpenStack will choose the winner, and the prize will be awarded in accordance with these Official Rules (these “Rules”).

  1. BINDING AGREEMENT: In order to enter the Contest, you must agree to the Rules. Therefore, please read these Rules prior to entry to ensure you understand and agree. You agree that submission of an entry in the Contest constitutes agreement to these Rules. You may not submit an entry to the Contest and are not eligible to receive the prize described in these Rules unless you agree to these Rules. These Rules form a binding legal agreement between you and OpenStack with respect to the Contest.
  2. ELIGIBILITY: To be eligible to enter the Contest, an Entrant must be 18 years of age or older and eligible to take the Exam as set forth in the Exam Handbook (available for download on the Exam Website). Contest is void where prohibited by law. Employees, directors, and officers of OpenStack and their immediate families (parents, siblings, children, spouses, and life partners of each, regardless of where they live) and members of the households (whether related or not) of such individuals are ineligible to participate in this Contest.
  3. SPONSOR: The Contest is sponsored by OpenStack Foundation (“OpenStack” or “Sponsor”), a Delaware non-stock, non-profit corporation with principal place of business at 1214 West 6th Street, Suite 205, Austin, TX 78703, USA.
  4. CONTEST PERIOD: The Contest begins on July 18, 2016 when posted to The OpenStack Blog (http://www.openstack.org/blog) and ends on September 11, 2016 11:59 Central Time (CT) Zone (“Contest Period”). All dates are subject to change.
  5. HOW TO ENTER: To enter the Contest, visit the Exam website located at https://www.openstack.org/coa (“Exam Website”) during the Contest Period, click “Get Started,” and follow the prompts to purchase and take the Exam.  You will be automatically entered into the Contest by completing the Exam during the Contest Period.  If you start but do not complete the Exam during the Contest Period, you will not be entered into the Contest.  Additional entry information is available on The OpenStack Blog.  If you wish to opt out of the Contest, please email us at info@openstack.org. LIMIT ONE (1) ENTRY PER ENTRANT. Subsequent entries will be disqualified.
  6. TAKING THE EXAM: To enter the Contest, you must complete the Exam.  You understand that in addition to these Rules, you must comply with all terms, conditions, rules, and requirements set by OpenStack and the Exam administrators for the Exam.  Information regarding the Exam is available on the Exam Website and in the Exam Handbook.
  7. SCORING: Exams will be scored in accordance with the Exam Handbook, available for download on the Exam Website.  The winner will be the individual that achieves the highest score on the Exam. In the event of a tie, the individual who completed the Exam in the shortest amount of time will be the winner.  For the purposes of the tie-breaker, Exam time will be measured from initial commencement of the Exam to final submission.  Time from purchase of Exam to commencement of Exam will not be considered.
  8. PRIZE: One winner will be selected.  The winner will receive a Toshiba Chromebook 2 – 2015 Edition (CB35-C3350) with an approximate retail value of $350 US Dollars.
  9. TAXES: AWARD OF PRIZE TO POTENTIAL WINNER IS SUBJECT TO THE EXPRESS REQUIREMENT THAT THEY SUBMIT TO OPENSTACK ALL DOCUMENTATION REQUESTED BY OPENSTACK TO PERMIT IT TO COMPLY WITH ALL APPLICABLE STATE, FEDERAL AND LOCAL TAX REPORTING. TO THE EXTENT PERMITTED BY LAW, ALL TAXES IMPOSED ON PRIZES ARE THE SOLE RESPONSIBILITY OF THE WINNER. In order to receive a prize, potential winner must submit tax documentation requested by OpenStack or otherwise required by applicable law, to OpenStack or a representative for OpenStack or the relevant tax authority, all as determined by applicable law. The potential winner is responsible for ensuring that they comply with all the applicable tax laws and filing requirements. If the potential winner fails to provide such documentation or comply with such laws, the prize may be forfeited and OpenStack may, in its sole discretion, select an alternate potential winner.
  10. GENERAL CONDITIONS: All federal, state and local laws and regulations apply. OpenStack reserves the right to disqualify any Entrant from the Contest if, in OpenStack’s sole discretion, it reasonably believes that the Entrant has attempted to undermine the legitimate operation of the Contest or Exam by cheating, deception, or other unfair playing practices or annoys, abuses, threatens or harasses any other entrants, or OpenStack. OpenStack retains all rights in the OpenStack products and services and entry into this Contest will in no case serve to transfer any OpenStack intellectual property rights to the Entrant.
  11. PRIVACY: Entrants agree and acknowledge that personal data submitted with an entry, including name and email address may be collected, processed, stored and otherwise used by OpenStack for the purposes of conducting and administering the Contest. All personal information that is collected from Entrants is subject to OpenStack’s Privacy Policy, located at http://www.openstack.org/privacy. Individuals submitting personal information in connection with the Contest have the right to request access, review, rectification or deletion of any personal data held by OpenStack in connection with the Contest by sending an email to OpenStack at info@openstack.org or writing to: Compliance Officer, OpenStack Foundation, P.O. Box 1903, Austin, TX 78767.
  12. PUBLICITY: By entering the Contest, Entrants agree to participate in any media or promotional activity resulting from the Contest as reasonably requested by OpenStack at OpenStack’s expense and agree and consent to use of their name and/or likeness by OpenStack. OpenStack will contact Entrants in advance of any OpenStack-sponsored media request.
  13. WARRANTY AND INDEMNITY: Entrants warrant that they will take the Exam in compliance with all terms, conditions, rules, and requirements set forth on the Exam Website and in the Exam Handbook. To the maximum extent permitted by law, Entrant indemnifies and agrees to keep indemnified Sponsor at all times from and against any liability, claims, demands, losses, damages, costs and expenses resulting from any act, default or omission of the Entrant and/or a breach of any warranty set forth herein. To the maximum extent permitted by law, Entrant agrees to defend, indemnify and hold harmless Sponsor from and against any and all claims, actions, suits or proceedings, as well as any and all losses, liabilities, damages, costs and expenses (including reasonable attorney’s fees) arising out of or accruing from: (i) any misrepresentation made by Entrant in connection with the Contest or Exam; (ii) any non-compliance by Entrant with these Rules; (iii) claims brought by persons or entities other than the parties to these Rules arising from or related to Entrant’s involvement with the Contest; (iv) acceptance, possession, misuse or use of any prize or participation in any Contest-related activity or participation in the Contest; (v) any malfunction or other problem with The OpenStack Blog or Exam Website in relation to the entry and participation in the Contest or completion of the Exam by Entrant; (vii) any error in the collection, processing, or retention of entry or voting information in relation to the entry and participation in the Contest or completion of the Exam by Entrant; or (viii) any typographical or other error in the printing, offering or announcement of any prize or winners in relation to the entry and participation in the Contest or completion of the Exam by Entrant.
  14. ELIMINATION: Any false information provided within the context of the Contest by Entrant concerning identity, email address, or non-compliance with these Rules or the like may result in the immediate elimination of the entrant from the Contest.
  15. INTERNET AND DISCLAIMER: OpenStack is not responsible for any malfunction of The OpenStack Blog or Exam Website or any late, incomplete, or mis-graded Exams due to system errors, hardware or software failures of any kind, lost or unavailable network connections, typographical or system/human errors and failures, technical malfunction(s) of any network, cable connections, satellite transmissions, servers or providers, or computer equipment, traffic congestion on the Internet or at The OpenStack Blog or Exam Website, or any combination thereof. OpenStack is not responsible for the policies, actions, or inactions of others, which might prevent Entrant from entering, participating, and/or claiming a prize in this Contest. Sponsor’s failure to enforce any term of these Rules will not constitute a waiver of that or any other provision. Sponsor reserves the right to disqualify Entrants who violate the Rules or interfere with this Contest in any manner. If an Entrant is disqualified, Sponsor reserves the right to terminate that Entrant’s eligibility to participate in the Contest.
  16. RIGHT TO CANCEL, MODIFY OR DISQUALIFY: If for any reason the Contest is not capable of running as planned, OpenStack reserves the right at its sole discretion to cancel, terminate, modify or suspend the Contest. OpenStack further reserves the right to disqualify any Entrant who tampers with the Exam process or any other part of the Contest, Exam, The OpenStack Blog, or Exam Website. Any attempt by an Entrant to deliberately damage any web site, including the The OpenStack Blog or Exam Website, or undermine the legitimate operation of the Contest or Exam is a violation of criminal and civil laws and should such an attempt be made, OpenStack reserves the right to seek damages from any such Entrant to the fullest extent of the applicable law.
  17. FORUM AND RECOURSE TO JUDICIAL PROCEDURES: These Rules shall be governed by, subject to, and construed in accordance with the laws of the State of Texas, United States of America, excluding all conflict of law rules. If any provision(s) of these Rules are held to be invalid or unenforceable, all remaining provisions hereof will remain in full force and effect. Exclusive venue for all disputes arising out of the Rules shall be brought in the state or federal courts of Travis County, Texas, USA and you agree not to bring an action in any other venue. To the extent permitted by law, the rights to litigate, seek injunctive relief or make any other recourse to judicial or any other procedure in case of disputes or claims resulting from or in connection with this Contest are hereby excluded, and Entrants expressly waive any and all such rights.
  18. WINNER’S LIST: The winner will be announced on The OpenStack Blog.  You may also request a list of winners after December 1, 2016 by sending a self-addressed stamped envelope to:

OpenStack Inc.
Attn: Contest Administrator
P.O. Box 1903
Austin, Texas 78767

(Residents of Vermont need not supply postage.)

 

by Anne Bertucio at July 18, 2016 11:00 AM

Opensource.com

Project updates, bridging the diversity gap, and more OpenStack news

Are you interested in keeping track of what is happening in the open source cloud? Opensource.com is your source for news in OpenStack, the open source cloud infrastructure project.

by Jason Baker at July 18, 2016 05:00 AM

Hugh Blemings

Lwood-20160717

Introduction

Welcome to Last week on OpenStack Dev (“Lwood”) for the week just past. For more background on Lwood, please refer here.

Basic Stats for week 11 to 17 July 2016 for openstack-dev:

  • ~388 Messages (up about 25% relative to last week)
  • ~173 Unique threads (up about 35% relative to last week)

List traffic picked up quite a bit relative to last week, but the total message count is still down around 32% relative to the long-term average of 562 messages per week since I started keeping track in late June 2015.

Note that for the next five weeks or so Lwood may arrive a little later than usual – I’m in the US and so it may not always be practical to get things out the door Sunday afternoon/evening… :)

Notable Discussions – openstack-dev

Minor technical issues with Naming Polls

Monty Taylor pointed out that there have been some minor issues with the naming polls for P and Q. These look to be resolved, and new emails are going out for Q, with P to follow shortly thereafter.

Openstack Stewardship Working Group (SWG)

Amrith Kumar wrote early in the week announcing the kick-off meeting of the OpenStack Stewardship Working Group.

As Amrith explains, the SWG was set up by the TC with the intent that this small group would “review the leadership, communication, and decision making processes of the TC and OpenStack projects as a whole, and propose a set of improvements to the TC.”  He goes on to note that anyone interested in these areas is welcome to join the Working Group.

Project Mascots

Heidi Joy Tretheway announced that the OpenStack Foundation is encouraging projects to choose a mascot to be used as a logo for the project and making an illustrator available to assist in creating them if desired.  As she clarifies in a subsequent post it’s all optional, no requirement to replace existing logos/mascots if projects already have them, and, yes, there will be stickers made available :)

This in turn kicked off a flurry of per-project threads about choosing mascots/logos, including these for Ansible, App-Catalog, Congress, Designate, Freezer, Glance, Horizon, Kolla, Mistral, Neutron, Puppet, Sahara, Vitrage and Zaqar.

Sing in the streets of Barcelona!

Well not necessarily in the streets, but as a musician I could hardly pass up mentioning Neil Jerram’s post in which he invites anyone interested in doing some singing while in Barcelona to flag their interest in the etherpad. Neil’s making this open and inclusive and urges people not to exclude themselves on the basis of style of music or ability.

In case the prospect of hearing me sing puts you off getting involved (or even attending the Summit) don’t sweat it – it’s not a given that I’m able to attend this time around and if I do I promise to sing tunefully :)

Notable Discussions – other OpenStack lists

High Performance / Parallel File Systems panel at Summit

Interested in High Performance / Parallel File Systems? Blair Bethwaite floats the idea of a panel session for Barcelona dealing with this very topic over on the OpenStack-Operators list.

Upcoming OpenStack Events

Midcycle

No new midcycle-related messages this week as far as I could see, other than minor logistics for events already mentioned in Lwood.

Don’t forget the OpenStack Foundation’s Events Page for a list of general events that is frequently updated.

People and Projects

Core nominations & changes

  • [Ansible] – Nominating Jean-Philippe Evrard for core in openstack-ansible and all openstack-ansible-* roles – Jesse Pretorius
  • [Fuel] Nominate Alexey Stepanov for fuel-qa and fuel-devops core – Andrey Sledzinskiy
  • [L2GW][Neutron] New core team member Ofer Ben-Yaakov – Sukhdev Kapur

Further Reading & Miscellanea

Don’t forget these excellent sources of OpenStack news – most recent ones linked in each case

A little plug – I’ve submitted a talk proposal for the Barcelona OpenStack Summit titled “Finding your way around the OpenStack-Dev mailing list”. If approved, in the session I will provide a bit of a guide for newcomers (and old hands) to navigating the various OpenStack-related mailing lists, openstack-dev in particular, as well as some other useful stuff – all based on my work on Lwood. When voting goes live for the Summit, I’d welcome your support if you think the proposed talk sounds worthwhile – link to follow :)

This edition of Lwood brought to you by Freddie Mercury (Barcelona) among other tunes.

 

by hugh at July 18, 2016 12:54 AM

Carlos Camacho

TripleO deep dive session #2 (TripleO Heat Templates)

This is the second video from a series of “Deep Dive” sessions related to TripleO deployments.

This session provides a THT overview for all users who want to dig into the project.

This video session aims to cover the following topics:

  • A basic introduction and overview of THT.
  • The template model used.
  • A description of the new composable services approach.
  • A code overview of the related code repositories.
  • A cloud deployment demo session.
  • A live deployment demo session covering debugging hints.

So please, check the full session content on the TripleO YouTube channel.

Video: http://www.youtube.com/embed/gX5AKSqRCiU

Sessions index:

    * TripleO deep dive #1 (Quickstart deployment)

    * TripleO deep dive #2 (TripleO Heat Templates)

    * TripleO deep dive #3 (Overcloud deployment debugging)

by Carlos Camacho at July 18, 2016 12:00 AM

July 14, 2016

OpenStack Superuser

South African OpenStack community hopes to spark technological revolution

OpenStack celebrates its 6th birthday this month. There are parties from Argentina to Vietnam; find one near you from OpenStack's worldwide list and raise a glass.

We’re also celebrating with short interviews from around the world that offer a glimpse of OpenStack’s impact on a local level.

Our first interview features one of the newest user groups. Yusuf Rajah founded DOUG, the Durban OpenStack User Group, just three months ago, but members are so excited about what the technology can bring to their country that they organized a birthday party July 8. From a custom cake and t-shirts to a raffle to give away a free Certified OpenStack Administrator exam, DOUG found creative ways to celebrate.

In a 10-minute Superuser TV segment, Rajah, a technical consultant with Tactical-DevOps, describes the difficulties faced by rural communities that need technologies but have very limited access to them, and how OpenStack, as an infrastructure on which to develop technologies, would benefit many communities in similar situations.


“In our country’s history, we’ve been through a lot when it comes to being able to communicate with each other; we were literally kept physically apart," he said. "We’ve got all of those hoops to jump through, and I think technology, especially like OpenStack is a fantastic platform to start bringing people together in a creative way.”

Though the technology runs in the cloud, he says that face-to-face contact has been his best recruiting tool so far. He went to local stores in Durban, the second most populous urban area in South Africa, and put posters up so that people would see the name and ask about it.

“There are a lot of barriers to entry for the technology, so I have to do a lot of explaining first,” he says. "Once people get it, they understand that this is something we need to pay attention to because it’s actually quite an important piece of technology that we need to be a part of.”

While Rajah's grassroots work still has many rivers to cross, he remains devoted to educating his local community about the benefits and use cases of OpenStack and, more broadly, cloud technology.

“It’s a feat of engineering. It’s not just code, this is a whole community.”

Video: https://www.youtube.com/embed/OSyS2If4eMw

Don't see a local user group or birthday celebration? Get involved and learn how to start one in your local community.

Cover Photo CC/NC

by Superuser at July 14, 2016 09:41 PM

Graham Hayes

OpenStack - A leveler playing field

I just proposed a review to the openstack/governance repo [0] that aims to make everything across OpenStack plugin based for all cross-project interaction, or else give all projects access to the same internal APIs. I wanted to give a bit of background on my motivation and how it came about.

Coming from a smaller project, I can see issues for new projects, smaller projects, and projects that may not be seen as "important".

As a smaller project trying to fit into cross project initiatives, (and yes, make sure our software looks at least OK in the Project Navigator) the process can be difficult.

A lot of projects and repositories have plugin interfaces, but they also have project integrations in tree that do not follow the plugin interface. This makes it difficult to see what a plugin can, and should, do.

When we moved to the big tent, we wanted as a community to move to a flatter model, removing the old integrated status.

Unfortunately, we still have areas where some projects are more equal than others – there is a lingering set of projects that were integrated at the point in time we moved, and they retain preferential status.

A lot of the effects are hard to see, and are not insurmountable, but do cause projects to re-invent the wheel.

For example, quotas: there is no way for a project that is not Nova, Neutron, or Cinder to hook into the standard CLI or UI for setting quotas. Quotas can be exposed either as extra commands (openstack dns quota set --foo bar) or as custom panels, but not the way other quotas get set.

Tempest plugins are another example. Approximately 30 of the 36 current plugins are using resources that are not supposed to be used and form an unstable interface. Projects in tree in Tempest are in a much better position, as any change to the internal API will have to be fixed before the gate merges, but out-of-tree plugins can be broken at any point.

None of this is meant to single out projects or teams. A lot of the projects in this situation have inordinate amounts of work placed on them by the big tent, and I can empathize with why things are this way. These were the examples that currently stick out in my mind, and I think we have come to a point where we need to make a change as a community.

By moving to a “plugins for all” model, these issues are reduced. It will undoubtedly cause new ones, but it is closer to our goal of recognizing that all our community is part of OpenStack, and differentiating projects by tags.

This won't be a change that happens tomorrow, next week, or even next cycle, but I think that, as a goal, we should start moving in this direction as soon as we can and start building momentum.

Note

This was originally posted to the openstack-dev mailing list.

by Graham Hayes at July 14, 2016 07:30 PM

OpenStack Superuser

OpenStack Newton release: what’s next for Cinder, Neutron and Nova

Each release cycle, OpenStack project team leads (PTLs) introduce themselves, talk about upcoming features for the OpenStack projects they manage, plus how you can get involved and influence the roadmap.

Superuser will feature weekly summaries of the videos; you can also catch them on the OpenStack Foundation YouTube channel. This post covers Cinder, Neutron and Nova.

Cinder

Video: https://www.youtube.com/embed/wRphXGU9VLQ?list=PLKqaoAnDyfgrnDWFRM4_7i9QF19dLLWyg

What
Cinder's goal is to implement services and libraries to provide on-demand, self-service access to block storage resources: software-defined block storage via abstraction and automation on top of various traditional backend block storage devices.

Who Sean McGinnis, PTL. Day job: Senior principal software engineer at Dell.

Burning issues


“A lot of what we discussed in Austin might not be interesting to the end users, but we had a lot of changes in Mitaka that developers working on the code need to be aware of, things like implementing microversions in the API and rolling upgrades,” he says. “There’s a lot we need to be aware of that, hopefully, as these features mature, will be useful to the end users too.”

What’s next


“I’d love to have feedback from users about what they’re looking for with replication, so we can be sure we meet their use cases,” he added.

What matters in Newton


Get involved!

“We can always use help – core reviews, documentation, you don’t need to be a programmer,” McGinnis says. “If you want to be a part of the community and get involved, we would love to have you.” He suggests dropping in on IRC and asking who might need help.

  • Use Ask OpenStack for general questions
  • For roadmap or development issues, subscribe to the OpenStack development mailing list, and use the tag [cinder]
  • Participate in the weekly meetings: held in #openstack-meeting, on Wednesdays at 16:00 UTC.

Neutron

Video: https://www.youtube.com/embed/-oksIu-uZL0?list=PLKqaoAnDyfgrnDWFRM4_7i9QF19dLLWyg

What
Neutron's goal is to implement services and associated libraries to provide on-demand, scalable, and technology-agnostic network abstraction.

Who Armando Migliaccio, PTL. Day job: software architect, HP Networking at Hewlett-Packard.

Burning issues


“We had a lot of exciting sessions in Austin,” Migliaccio says. “There’s usually a lot of yelling, but the discussions are good.” The outcome, he says, is a path forward for a number of things that were “long overdue.”

What’s next


“It’s hard to come up with just three things,” he says. “These are the top priorities — as well as challenges. Hopefully, we won’t disappoint our user base.”

What matters in Newton


“People often ask me what will happen in 10-12 months’ time and I usually struggle to give an answer because in open source, you’re never really in full control,” he says. “We’re very ambitious, so we end up over-subscribing ourselves. Some of the things we’re talking about now will spill over into Ocata.”

Get involved!

  • Use Ask OpenStack for general questions
  • For roadmap or development issues, subscribe to the OpenStack development mailing list, and use the tag [neutron]
  • To get code, ask questions, and view blueprints, see the Neutron Launchpad page
  • Neutron’s regular IRC meetings start one hour after the main OpenStack meeting, on the same #openstack-meeting channel: http://wiki.openstack.org/Network/Meetings

Nova

Video: https://www.youtube.com/embed/EHCj1DZNNWw?list=PLKqaoAnDyfgrnDWFRM4_7i9QF19dLLWyg

What Nova, OpenStack’s compute service. The project aims to implement services and associated libraries to provide massively scalable, on demand, self-service access to compute resources, including bare metal, virtual machines, and containers.

Who Matt Riedemann, PTL. He has been with IBM for over 10 years.

Burning issues


“A big thing that’s already happened in Newton has been the backlog; some of it we have been carrying since Kilo. We put a freeze on new specs up until the Summit,” Riedemann says. “We need to get a lot of this backlog cleaned up before we can add new features.”

What’s next


“It’s about redefining a lot of data models so everything’s not in a single database,” he added. “In Mitaka we laid a lot of the foundations but didn’t get a lot of code merged. In Newton we’ve already made some pretty good progress.”

What matters in Newton:


You can find all the planned updates at the Nova specs repo but Riedemann says the top priorities are the Cells v2 work, the scheduler work, the API policy defaults and the API-refs doc cleanup. Looking ahead to the Ocata release, he says that interoperability will also be a focus as the team lays the groundwork on the APIs.

Get involved!

  • Use Ask OpenStack for general questions
  • For roadmap or development issues, subscribe to the OpenStack development mailing list, and use the tag [nova]
  • Participate in the weekly meetings: Thursdays, alternating between 14:00 UTC and 21:00 UTC, in #openstack-meeting.

“You can really tell in the Newton design sessions that Nova has really naturally grown into effective sub-teams,” Riedemann says. “There are a lot of broad things that need work and people can get their feet wet with.” Check out the Nova wiki for more information.

Cover Photo CC/NC

by Superuser at July 14, 2016 05:20 PM

July 13, 2016

Percona

Using Ceph with MySQL


Over the last year, the Ceph world drew me in, partly because of my taste for distributed systems, but also because I think Ceph represents a great opportunity for MySQL specifically and databases in general. The shift from local storage to distributed storage is similar to the shift from bare-disk host configurations to LVM-managed disk configurations.

Most of the work I’ve done with Ceph was in collaboration with folks from Red Hat (mainly Brent Compton and Kyle Bader). This work resulted in a number of talks presented at the Percona Live conference in April and the Red Hat Summit San Francisco at the end of June. I could write a lot about using Ceph with databases, and I hope this post is the first in a long series on Ceph. Before I starting with use cases, setup configurations and performance benchmarks, I think I should quickly review the architecture and principles behind Ceph.

Introduction to Ceph

Inktank created Ceph a few years ago as a spin-off of the hosting company DreamHost. Red Hat acquired Inktank in 2014 and now offers it as a storage solution. OpenStack uses Ceph as its dominant storage backend. This blog, however, focuses on a more general review and isn’t restricted to a virtual environment.

A simplistic way of describing Ceph is to say it is an object store, just like S3 or Swift. This is a true statement but only up to a certain point.  There are minimally two types of nodes with Ceph, monitors and object storage daemons (OSDs). The monitor nodes are responsible for maintaining a map of the cluster or, if you prefer, the Ceph cluster metadata. Without access to the information provided by the monitor nodes, the cluster is useless. Redundancy and quorum at the monitor level are important.

Any non-trivial Ceph setup has at least three monitors. The monitors are fairly lightweight processes and can be co-hosted on OSD nodes (the other node type needed in a minimal setup). The OSD nodes store the data on disk, and a single physical server can host many OSD nodes – though it would make little sense for it to host more than one monitor node. The OSD nodes are listed in the cluster metadata (the “crushmap”) in a hierarchy that can span data centers, racks, servers, etc. It is also possible to organize the OSDs by disk types to store some objects on SSD disks and other objects on rotating disks.

With the information provided by the monitors’ crushmap, any client can access data based on a predetermined hash algorithm. There’s no need for a relaying proxy. This becomes a big scalability factor since these proxies can be performance bottlenecks. Architecture-wise, it is somewhat similar to the NDB API, where – given a cluster map provided by the NDB management node – clients can directly access the data on data nodes.

Ceph stores data in a logical container called a pool. With the pool definition comes a number of placement groups. The placement groups are shards of data across the pool. For example, on a four-node Ceph cluster, if a pool is defined with 256 placement groups (pgs), then each OSD will have 64 pgs for that pool. You can view the pgs as a level of indirection to smooth out the data distribution across the nodes. At the pool level, you define the replication factor (“size” in Ceph terminology).

The recommended values are a replication factor of three for spinners and two for SSD/Flash. I often use a size of one for ephemeral test VM images. A replication factor greater than one associates each pg with one or more pgs on the other OSD nodes.  As the data is modified, it is replicated synchronously to the other associated pgs so that the data it contains is still available in case an OSD node crashes.

So far, I have just discussed the basics of an object store. But the ability to update objects atomically in place makes Ceph different and, in my opinion, better than other object stores. The underlying object access protocol, RADOS, can update an arbitrary number of bytes in an object at an arbitrary offset, exactly as if it were a regular file. That update capability allows for much fancier usage of the object store – for things like the support of block devices (rbd), and even a network file system, CephFS.
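To make that update-in-place capability concrete, here is a minimal python-rados sketch (the pool and object names are invented for illustration) that rewrites two bytes at an arbitrary offset inside an existing object:

    import rados

    # Connect using the local Ceph configuration file.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    if not cluster.pool_exists('test-pool'):
        cluster.create_pool('test-pool')
    ioctx = cluster.open_ioctx('test-pool')

    ioctx.write_full('myobject', b'0123456789')  # create/overwrite the whole object
    ioctx.write('myobject', b'AB', 4)            # update two bytes at offset 4, in place
    print(ioctx.read('myobject'))                # b'0123AB6789'

    ioctx.close()
    cluster.shutdown()

A conventional object store would force you to re-upload the whole object for that two-byte change; RADOS applies it in place, which is what makes block devices practical on top of it.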

When using MySQL on Ceph, the rbd disk block device feature is extremely interesting. A Ceph rbd disk is basically the concatenation of a series of objects (4MB objects by default) that are presented as a block device by the Linux kernel rbd module. Functionally it is pretty similar to an iSCSI device as it can be mounted on any host that has access to the storage network and it is dependent upon the performance of the network.

The benefits of using Ceph

Agility
In a world striving for virtualization and containers, Ceph makes it easy to move database resources between hosts.

IO scalability
On a single host, you have access only to the IO capabilities of that host. With Ceph, you basically put in parallel all the IO capabilities of all the hosts. If each host can do 1000 iops, a four-node cluster could reach up to 4000 iops.

High availability
Ceph replicates data at the storage level, and provides resiliency against storage node crashes. A kind of DRBD on steroids…

Backups
Ceph rbd block devices support snapshots, which are quick to make and have no performance impacts. Snapshots are an ideal way of performing MySQL backups.

Thin provisioning
You can clone and mount Ceph snapshots as block devices. This is a useful feature to provision new database servers for replication, either with asynchronous replication or with Galera replication.
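As a rough sketch of that provisioning workflow with the python-rbd bindings (the image and snapshot names here are invented, and the parent image is assumed to have been created with layering enabled), one could snapshot a master's data volume and clone it for a new replica:

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('rbd')  # the default rbd pool

    # Snapshot the master's volume and protect the snapshot:
    # clones require a protected snapshot as their parent.
    image = rbd.Image(ioctx, 'mysql-master')
    try:
        image.create_snap('base')
        image.protect_snap('base')
    finally:
        image.close()

    # The clone is thin: it only stores blocks that diverge from the parent.
    rbd.RBD().clone(ioctx, 'mysql-master', 'base', ioctx, 'mysql-replica1',
                    features=rbd.RBD_FEATURE_LAYERING)

    ioctx.close()
    cluster.shutdown()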

The caveats of using Ceph

Of course, nothing is free. Ceph use comes with some caveats.

Ceph reaction to a missing OSD
If an OSD goes down, the Ceph cluster starts copying data to restore the specified number of copies. Although good for high availability, the copying process significantly impacts performance. This implies that you cannot run a Ceph cluster with nearly full storage; you must have enough free disk space to handle the loss of one node.

The “no out” OSD attribute mitigates this, and prevents Ceph from reacting automatically to a failure (but you are then on your own). When using the “no out” attribute, you must monitor and detect that you are running in degraded mode and take action. This resembles a failed disk in a RAID set. You can choose this behavior as default with the mon_osd_auto_mark_auto_out_in setting.

Scrubbing
Every day and every week (for deep scrubs), Ceph runs scrub operations that, although throttled, can still impact performance. You can modify the interval and the hours that control the scrub action. Once per day and once per week are likely fine, but you need to set osd_scrub_begin_hour and osd_scrub_end_hour to restrict the scrubbing to off hours. Also, scrubbing throttles itself so as not to put too much load on the nodes. The osd_scrub_load_threshold variable sets the threshold.
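For example, a ceph.conf fragment along these lines (the window and threshold values are arbitrary examples, not recommendations) confines scheduled scrubs to the early morning and skips them when the node is busy:

    [osd]
    # Only start scheduled scrubs between 01:00 and 05:00 local time
    osd scrub begin hour = 1
    osd scrub end hour = 5
    # Skip a scheduled scrub if the node's load average is above this value
    osd scrub load threshold = 0.5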

Tuning
Ceph has many parameters, so tuning it can be complex and confusing. Since distributed systems push the hardware, properly tuning Ceph might require things like distributing interrupt load among cores, pinning threads to cores, and handling NUMA zones – especially if you use high-speed NVMe devices.

Conclusion

Hopefully, this post provided a good introduction to Ceph. I’ve discussed the architecture, the benefits and the caveats of Ceph. In future posts, I’ll present use cases with MySQL. These cases include performing Percona XtraDB Cluster SST operations using Ceph snapshots, provisioning async slaves and building HA setups. I also hope to provide guidelines on how to build and configure an efficient Ceph cluster.

Finally, a note for the ones who think cost and complexity put building a Ceph cluster out of reach. The picture below shows my home cluster (which I use quite heavily). The cluster comprises four ARM-based nodes (Odroid-XU4), each with a two TB portable USB-3 hard disk, a 16 GB EMMC flash disk and a gigabit Ethernet port.

I won’t claim record breaking performance (although it’s decent), but cost-wise it is pretty hard to beat (at around $600)!

[Photo: the author’s four-node ARM-based home Ceph cluster]

https://rh2016.smarteventscloud.com/connect/sessionDetail.ww?SESSION_ID=42190&tclass=popup

 

by Yves Trudeau at July 13, 2016 05:48 PM

OpenStack Superuser

How State Grid Corporation of China cuts IT costs, eludes vendor lock-in with OpenStack

The State Grid Corporation of China (SGCC) probably isn’t a household name to most of us, but it is in fact the world’s largest electric utility company. SGCC supplies electrical power in 26 states across China, with the utility behemoth’s IT systems playing an essential role in providing safe and reliable power – and thereby furthering the economic development and livelihood of citizens throughout those regions.

SGCC also stands as the seventh largest company in the world, according to Fortune’s Global 500 ranking for 2015. Historically, SGCC’s IT architecture has included a vast array of database, middleware, and storage needs, serving over 100 IT systems, 500TB of data for each of the 26 states served, and 1,000+ hosts in each state. To manage these hefty requirements, SGCC had mainly used closed, proprietary vendor offerings as the backbone to the utility company’s IT systems.

Huanyu Zhao, database administrator and OpenStack operations team leader at SGCC, recently spoke at the OpenStack Summit in Austin. Zhao opened his presentation with before-and-after images of himself – as a young man at the onset of his career juxtaposed with his current appearance – and joked that the stresses of over 10 years of DBA experience were responsible for hair loss and never having the time to exercise. Zhao is more optimistic about the future, though, noting that since SGCC began using OpenStack he’s had the bandwidth to take better care of himself.

Zhao expanded on the technical challenges SGCC faced prior to the move to OpenStack, when the company’s IT architecture relied upon legacy solutions. The proprietary software being used was difficult and costly to scale, demanded expensive license and service fees while limiting the company’s control over their own data, and drove the company toward vendor lock-in (which only perpetuated these issues).


Inspired by new cloud-based technology and the desire to better control the company’s investments in IT systems (and free itself from vendor lock-in), Zhao’s team started work on building a self-controlled private cloud, beginning with a move from vendor-based storage to open source. Over the past three years, SGCC has executed a transition from Unix-based IT systems to OpenStack and the cloud.


The company began planning to implement OpenStack in December 2014, and broke ground on the implementation in September 2015. By February of this year, SGCC was running over 200 physical machines and more than 1,000 VMs with OpenStack, and has near-future plans to extend OpenStack to more of its IT systems. Of particular note: the utility company is working toward utilizing a big data cloud and leveraging SDX to realize software-defined networking and storage. Zhao went on to tout the major draws of the switch to OpenStack being the platform’s flexibility, its vast customization potential, and its compatibility in working with other applications and services, such as Docker, Zabbix, Ansible, and others.

For help with this transition, SGCC partnered with EasyStack, a leading OpenStack service provider in Asia that focuses on the needs of enterprise private clouds. EasyStack also made available public and private APIs that gave SGCC a 100% open and unlocked solution, with full control of their own data.


SGCC’s network logic makes use of several OpenStack projects, including Nova, Glance, Keystone, Neutron, Ceilometer (Telemetry), Swift, and Cinder. An internal network communicates between VMs, while an external network provides data service for users. SGCC’s storage network uses Ceph exclusively. The company also has a deployment network and management network for use whenever needed.

OpenStack has provided SGCC with an almost automated operation, freeing up personnel and resources for investment in more forward-looking projects (or, as Zhao joked: “creating more time for holidays.”)


SGCC’s future plans include expanding the use of OpenStack and software-defined storage to implement more than 1000 nodes in each of the states SGCC serves. Within five years, SGCC expects to have over 50,000 nodes across the company, making it the largest production OpenStack environment for the electricity industry in the world. On the software-defined storage front, SGCC plans to upgrade Ceph, implement a database cloud, and make quality-of-service improvements to the storage network.

Hear additional details of SGCC’s OpenStack implementation by viewing the 30-minute presentation on the OpenStack Summit Site.

Cover photo: Three Gorges Dam hydroelectric power plant in Hubei. // CC BY NC

by Nic Bain at July 13, 2016 04:38 PM

David Moreau Simard

Improving RDO packaging testing coverage

DLRN builds packages and generates repositories in which these packages will be hosted.

It is the tool that is developed and used by the RDO community to provide the repositories on trunk.rdoproject.org. It continuously builds packages for every commit for projects packaged in RDO.

RDO is completely open source and community driven: anyone can submit patches to improve RDO packaging.

When someone submits a patch for review on review.rdoproject.org, we have a gate job called “DLRN-rpmbuild” that will run DLRN to see if the package builds successfully.

The problem

Until now, that was essentially the extent of our testing coverage on code reviews for distgit packaging projects — and we weren’t satisfied with that.

The reality is that while the packages may build successfully, they could actually be broken. We could potentially carry bad systemd unit files, bad config, missing runtime dependencies, etc.

RDO continuously tests its trunk repositories with, amongst other things, WeIRDO and TripleO-Quickstart. These testing pipelines help us stay up to date throughout the development cycles. If we happened to inadvertently merge a bad packaging review, these jobs would eventually pick up the faulty packages and fail, letting us know that the package is broken, and we’d go back to square one.

It would be great to test the new packages even before they land in trunk repositories, though.

Testing even before the trunk repositories

I mentioned we already had the DLRN-rpmbuild job that built packages, generated repositories and even uploaded them to Swift as part of log artifact collection.

We needed a way to have integration tests use those uploaded repositories and, after a lot of tinkering and different implementations, came to a satisfying solution.

First, we needed to override the default log path provided by Zuul but we found that it didn’t work properly and we required a patch that had not yet landed upstream. Overriding the default log path was necessary — otherwise, the log path is suffixed by $ZUUL_UUID which is unique to every job.

In our context, we want a “child” job to be able to consume artifacts from a “parent” job. Since the $ZUUL_UUID of the parent job is not exposed to the child job, we are not able to “guess” where the artifacts have been uploaded without some dirty hacking. We needed the artifact upload path of the parent job to be predictable and computable by the child job.
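To illustrate the idea (the base URL and directory layout below are hypothetical, not the actual review.rdoproject.org layout), once the log path is predictable a child job can derive the parent's repository location from the standard Zuul environment variables alone:

    import os

    def parent_repo_url(base='https://logs.example.org'):
        # ZUUL_CHANGE and ZUUL_PATCHSET are standard Zuul environment
        # variables; the sharding by the last two digits of the change
        # number mimics common log-server layouts and is an assumption.
        change = os.environ['ZUUL_CHANGE']
        patchset = os.environ['ZUUL_PATCHSET']
        return '{0}/{1}/{2}/{3}/DLRN-rpmbuild/'.format(
            base, change[-2:], change, patchset)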

We’re now exposing this process through a Jenkins builder on review.rdoproject.org. This builder computes the location where there should be a DLRN repository and configures it if there is one.

This builder enables us to re-use existing WeIRDO integration gate jobs transparently and seamlessly, just by adding the builder to the jobs and making them children of DLRN-rpmbuild. There will effectively be two DLRN repositories configured:

  • the one that was just built in the gate (configured before the job runs)
  • the one from trunk.rdoproject.org (eventually configured by WeIRDO)

In our testing, we use yum-plugin-priorities. Since the date (epoch) is embedded in the package NVR (name-version-release) and because both DLRN repositories are configured with priority “1”, the package that was just built will have priority because it is more recent.
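Conceptually, the resulting yum configuration looks something like this (the repository names and the gate baseurl are placeholders):

    # hypothetical /etc/yum.repos.d/delorean-gate.repo
    [delorean-gate]
    name=DLRN repository built in the gate
    baseurl=https://logs.example.org/42/123456/7/DLRN-rpmbuild/
    enabled=1
    gpgcheck=0
    priority=1

    [delorean]
    name=DLRN trunk repository
    baseurl=https://trunk.rdoproject.org/centos7/current/
    enabled=1
    gpgcheck=0
    priority=1

With equal priorities, yum falls back to ordinary version comparison, so the freshly built, more recent package wins.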

There’s still work to do

We’re happy with the implementation and tested it successfully. Now, we need to work on configuring the right jobs for the right projects.

For example, for Cinder packaging reviews, we don’t want to be running integration jobs that don’t involve Cinder. It would be inefficient and a bad usage of our resources.

Both Packstack and Puppet-OpenStack provide great testing scenarios for different projects which are leveraged by WeIRDO.

Still, it will prove rather challenging to choose which scenario to run on which project – even if just for the sheer number of projects we have to deal with. It’s worth it, though: we will be able to boast better testing coverage as a result and end up with fewer broken packages landing in trunk repositories.

by dmsimard at July 13, 2016 02:52 PM

About

Planet OpenStack is a collection of thoughts from the developers and other key players of the OpenStack projects. If you are working on OpenStack technology you should add your OpenStack blog.

Subscriptions

Last updated:
July 29, 2016 10:03 AM
All times are UTC.

Powered by:
Planet