August 02, 2015

OpenStack in Production

CPU Model Selection for High Throughput Computing

As part of the work to tune the configuration of the CERN cloud, we have been exploring various options for tuning compute intensive workloads.

One option in the Nova configuration allows the CPU model visible in the guest to be selected from several alternatives.

The choices are as follows:
  • host-passthrough provides an exact view of the underlying processor
  • host-model provides a view of a processor model which is close to the underlying processor but gives the same view for several processors, e.g. a range of different frequencies within the same processor family
  • custom allows the administrator to select the exact characteristics of the processor to present
  • none gives the hypervisor default configuration
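
In the Nova configuration this choice is exposed by the libvirt driver's cpu_mode option, with cpu_model naming the processor when custom is selected. A minimal nova.conf sketch (values illustrative):

[libvirt]
# one of: host-passthrough, host-model, custom, none
cpu_mode = host-passthrough
# only consulted when cpu_mode = custom, e.g.:
# cpu_mode = custom
# cpu_model = SandyBridge
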
There are a number of factors to consider for this selection:
  • Migration between hypervisors has to be done with the same processor model in the guest. Thus, if host-passthrough is configured and the VM is migrated to a new generation of servers with a different processor, the operation will fail.
  • Performance varies, with host-passthrough being the fastest since the application can use the full feature set of the processor. The extended instructions available also vary, as shown at the end of this article where the different settings expose different flags.
The exact performance impact will vary according to the application. High Energy Physics uses the benchmark suite HEPSpec06, a subset of the SPEC 2006 benchmarks. Using this suite, we observed around a 4% reduction in the performance of CPU-bound applications with host-model. Moving to the default gave an overhead of 5%.


Given the significant differences, the CERN cloud is configured such that
  • hypervisors running compute intensive workloads are configured for maximum performance (host-passthrough). These workloads are generally easy to re-create, so there is no need for migration between hypervisors (such as for a warranty replacement); instead, new instances can be created on the new hardware and the old instances deleted
  • hypervisors running services are configured with host-model so that they can be migrated between generations of equipment and between hypervisors if required, such as for an intervention
In the future, we would be interested in making this setting an option at VM creation time, such as metadata on the nova boot command or a specific property on an image, so end users could choose the appropriate option for their workloads.

host-passthrough

# cat /proc/cpuinfo
processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 62
model name      : Intel(R) Xeon(R) CPU E5-2650 v2 @ 2.60GHz
stepping        : 4
microcode       : 1
cpu MHz         : 2593.748
cache size      : 4096 KB
physical id     : 0
siblings        : 1
core id         : 0
cpu cores       : 1
apicid          : 0
initial apicid  : 0
fpu             : yes
fpu_exception   : yes
cpuid level     : 13
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good unfair_spinlock pni pclmulqdq ssse3 cx16 pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm xsaveopt fsgsbase smep erms
bogomips        : 5187.49
clflush size    : 64
cache_alignment : 64
address sizes   : 46 bits physical, 48 bits virtual
power management:

host-model

# cat /proc/cpuinfo
processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 42
model name      : Intel Xeon E312xx (Sandy Bridge)
stepping        : 1
microcode       : 1
cpu MHz         : 2593.748
cache size      : 4096 KB
physical id     : 0
siblings        : 1
core id         : 0
cpu cores       : 1
apicid          : 0
initial apicid  : 0
fpu             : yes
fpu_exception   : yes
cpuid level     : 13
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx pdpe1gb rdtscp lm constant_tsc rep_good unfair_spinlock pni pclmulqdq ssse3 cx16 pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm xsaveopt fsgsbase smep erms
bogomips        : 5187.49
clflush size    : 64
cache_alignment : 64
address sizes   : 46 bits physical, 48 bits virtual
power management:

none

processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 13
model name      : QEMU Virtual CPU version 1.5.3
stepping        : 3
microcode       : 1
cpu MHz         : 2593.748
cache size      : 4096 KB
physical id     : 0
siblings        : 1
core id         : 0
cpu cores       : 1
apicid          : 0
initial apicid  : 0
fpu             : yes
fpu_exception   : yes
cpuid level     : 4
wp              : yes
flags           : fpu de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pse36 clflush mmx fxsr sse sse2 syscall nx lm rep_good unfair_spinlock pni cx16 hypervisor lahf_lm
bogomips        : 5187.49
clflush size    : 64
cache_alignment : 64
address sizes   : 46 bits physical, 48 bits virtual
power management:

Previous blogs in this series are
  • CPU topology - http://openstack-in-production.blogspot.fr/2015/08/openstack-cpu-topology-for-high.html

Contributions from Ulrich Schwickerath and Arne Wiebalck have been included in this article.

by Tim Bell (noreply@blogger.com) at August 02, 2015 11:24 AM

August 01, 2015

OpenStack in Production

OpenStack CPU topology for High Throughput Computing

We are starting to look at the latest features of OpenStack Juno and Kilo as part of the CERN OpenStack cloud to optimise a number of different compute intensive applications.

We'll break down the tips and techniques into a series of small blogs. A corresponding set of changes to the upstream documentation will also be made to ensure the options are documented fully.

In the modern CPU world, a server consists of multiple levels of processing units.
  • Sockets, where each of the processor chips is inserted
  • Cores, where each processor contains multiple processing units which can run processes in parallel
  • Threads (if settings such as SMT are enabled), which may allow multiple processing threads to be active at the expense of sharing a core
The typical hardware used at CERN is a 2 socket system. This provides optimum price performance for our typical high throughput applications which simulate and process events from the Large Hadron Collider. The aim is not to process a single event as quickly as possible but rather to process the maximum number of events within a given time (within the total computing budget available). As the price of processors varies according to performance, the selected systems are often not the fastest possible but the ones which give the best performance/CHF.

A typical example of this approach is in our use of SMT, which leads to a 20% increase in total throughput although each individual thread runs correspondingly slower. Thus, the typical configuration is:

# lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                32
On-line CPU(s) list:   0-31
Thread(s) per core:    2
Core(s) per socket:    8
Socket(s):             2
NUMA node(s):          2
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 62
Model name:            Intel(R) Xeon(R) CPU E5-2650 v2 @ 2.60GHz
Stepping:              4
CPU MHz:               2999.953
BogoMIPS:              5192.93
Virtualization:        VT-x
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              20480K
NUMA node0 CPU(s):     0-7,16-23
NUMA node1 CPU(s):     8-15,24-31


By default in OpenStack, the virtual CPUs in a guest are allocated as standalone processors. This means that for a 32 vCPU VM, it will appear as

  • 32 sockets
  • 1 core per socket
  • 1 thread per core
As part of ongoing performance investigations, we wondered about the impact of this topology on CPU bound applications.

With OpenStack Juno, there is a mechanism to pass the desired topology. This can be done through flavors or image properties.

The names are slightly different between the two usages: flavor extra specs start with hw: while image properties start with hw_.

The flavor configurations are set by the cloud administrators and the image properties can be set by the project members. The cloud administrator can also set maximum values (e.g. hw_cpu_max_cores) so that the project members cannot define values which are incompatible with the underlying resources.
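
For example, the flavor-side equivalent of the image-property command shown below might be set by the administrator along these lines (the flavor name is illustrative):

$ openstack flavor set --property hw:cpu_sockets=2 --property hw:cpu_cores=8 --property hw:cpu_threads=2 m1.topology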


$ openstack image set --property hw_cpu_cores=8 --property hw_cpu_threads=2 --property hw_cpu_sockets=2 0215d732-7da9-444e-a7b5-798d38c769b5

The VM which is booted then has this configuration reflected.

# lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                32
On-line CPU(s) list:   0-31
Thread(s) per core:    2
Core(s) per socket:    8
Socket(s):             2
NUMA node(s):          1
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 62
Stepping:              4
CPU MHz:               2593.748
BogoMIPS:              5187.49
Hypervisor vendor:     KVM
Virtualization type:   full
L1d cache:             32K
L1i cache:             32K
L2 cache:              4096K
NUMA node0 CPU(s):     0-31

While this gives the possibility to construct interesting topologies, the performance benefits are not clear. The standard High Energy Physics benchmarks show no significant change. Given that there is no direct mapping between the cores in the VM and the underlying physical ones, this may be because the vCPUs are not pinned to the corresponding sockets/cores/threads and thus Linux is optimising for a virtual configuration rather than the real one.
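
One avenue for further investigation is pinning the guest's vCPUs to physical cores. With the NUMA and CPU pinning work in Kilo this can be requested via a flavor extra spec; a hedged sketch (the flavor name is illustrative):

$ openstack flavor set --property hw:cpu_policy=dedicated m1.pinned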

This work was in collaboration with Sean Crosby (University of Melbourne) and Arne Wiebalck (CERN).

The following documentation reports have been raised

  • Flavors Extra Specs -  https://bugs.launchpad.net/openstack-manuals/+bug/1479270
  • Image Properties - https://bugs.launchpad.net/openstack-manuals/+bug/1480519

by Tim Bell (noreply@blogger.com) at August 01, 2015 09:57 AM

Mirantis

OpenStack:Now Podcast, Episode 6: Cisco’s Lew Tucker

Video: https://www.youtube.com/watch?v=KcDeoGkrdBo

Nick Chase and John Jainschigg talk to Cisco VP and Cloud Services CTO Lew Tucker about networking, being a nice guy in a big company, and what neurology has to do with networking. Lew Tucker will speak at OpenStack Silicon Valley August 26-27.

The post OpenStack:Now Podcast, Episode 6: Cisco’s Lew Tucker appeared first on Mirantis | The #1 Pure Play OpenStack Company.

by Nick Chase at August 01, 2015 05:34 AM

25 Years of OpenStack—Looking Back From the Future

Well, here we are again. It’s 2035, and it’s time for another OpenStack anniversary post. Can you believe it’s really been 25 years since OpenStack began? Back then OpenStack wasn’t the ubiquitous juggernaut it is now, of course. There were even people who questioned whether it would ever catch on at all!

Oh sure, now we look and virtually everything in the world, from our servers, to our phones, to our wearable electronics runs on the Internet of Stuff, all backed by OpenStack and running applications based on microservices spread among countless resources throughout the World Wide Cloud. Now if you want computing resources, you click a button, or plug in a d-key, or just trigger your neural implant, and you either get resources from your existing cloud, or a new one gets deployed for you on available resources using policies you’ve defined and integrated into the World Wide Cloud without you ever having to think about it.

But it wasn’t always that way.  Back in the beginning, when OpenStack was just a gleam in a few engineers’ eyes, things were much more rocky.  Let’s take a look at the last 25 years and see how we got where we are now.

June 6, 2010: OpenStack is officially born when Rackspace’s Swift (object storage) and NASA’s Nova (IaaS) come together.
July 13-16, 2010: The very first OpenStack design summit is held in Austin, TX, with 25 companies represented.
July 19, 2010: Rackspace issues a press release announcing OpenStack; the announcement is made again at OSCON three days later.
October 21, 2010: Austin, the first OpenStack release, is announced. Thirty-five partners are attached.
July 11, 2011: Citrix buys Cloud.com.  Both are involved in OpenStack development. In 2012, the company will abandon OpenStack in favor of its CloudStack project.
November 4, 2011: Savio Rodrigues explains “Why OpenStack will falter” and why Eucalyptus, a project designed to work with what were then the Amazon Web Services APIs, will win.
September 19, 2012: OpenStack Foundation launches.
November 5, 2013: OpenStack Summit held in Hong Kong.
January 20, 2014: First issue of OpenStack:Now is published.
September 4, 2014: The executives in charge of CloudStack resign from Citrix, and Citrix begins moving back towards OpenStack.
September 11, 2014: OpenStack vendor HP acquires Eucalyptus.
December, 2014: Walmart runs all holiday web traffic on OpenStack.
May, 2015: OpenStack Summit held in Vancouver. Foundation announces initiatives for interoperability, as well as federation and a community app repository to help adoption. OpenStack:Now telepresence robot makes its first appearance, enabling attendees to experience the summit from Moscow and New York.
July, 2015: Google becomes a corporate sponsor of OpenStack, vows to help integrate containers.
Fall 2015: Amazon becomes a corporate sponsor of OpenStack, vows to help integrate hybrid cloud capabilities.
Spring 2016: Healthcare.gov announces it will move to OpenStack.
Fall 2016: Design summit and OpenStack summit split; OpenStack Summit held in ballroom, design summit held in a basement where the primary requirement is good wireless.
Spring 2017: 85% of the Internet of Things runs on OpenStack.
Fall 2017: Verizon, Exxon and Chipotle form a conglomerate: Verexxotle. They merge private clouds into a single OpenStack deployment and release an NFV case study.
Spring 2018: Walmart, Amazon, Google and Alyun announce the AlyWalmazoogle cloud.
Fall 2018: OpenStack Design summit has more virtual attendees on telepresence devices than actually at the venue. The OpenStack:Now telepresence robot becomes commonplace, enjoys lucrative new career organizing and hosting summit events for other telepresence robots.
Spring 2019: AlyWalmazoogle and Verexxotle each launch a World Wide Cloud initiative, intending to control the proliferation of data between clouds.
Fall 2019: AlyWalmazoogle and Verexxotle fail in their World Wide Cloud initiatives because popular movements spring up all over the world, forming a loose technical meritocracy initiative that ignores authority figures and cobbles together its own World Wide Cloud, with each OpenStack cloud consuming both local and remote resources as required.
Spring 2020: iOS becomes “Internet of Stuff” as Apple joins OpenStack as a corporate sponsor.
Fall 2020: Nova-network deprecated.
Spring 2021: Nova-network re-introduced.
Fall 2021: US Government moves all systems to OpenStack.
Spring 2022: Microsoft abandons Azure and switches to the Tanzania OpenStack release, which it promptly releases as MicroStack, complete with proprietary extensions and APIs.
Spring 2023: NASA announces a trip back to the moon, powered by OpenStack.
Fall 2023: World wide market for resources emerges as arbitrage for services becomes common.
Spring 2024: Multiple governments lobby OpenStack Foundation to include a new project in the OpenStack namespace: Big Brother-as-a-Service. Technical committee turns down the request because the Mission Statement is completely redacted.
Fall 2024: With virtually all governments now run as clouds, clouds begin to run themselves as self-regulating governments.  
Spring 2025: Verexxotle and AlyWalmazoogle are admitted to the United Nations.
Fall 2025: First Contact: OpenStack firm Mirantis contracted to support cloud on planet Kepler-452b. Travel cost for the engagement is estimated to exceed the GNP of South Korea.
Spring 2026: First user becomes millionaire from selling computing resources on her iWatch.
Fall 2026: Watson gains independent consciousness at the Xinzhou OpenStack Summit. It infiltrates Apple’s OpenStack:Now robot and tries to take over the conference center, but is stopped by a team of Geniuses channeling the combined power of their iWatches.
Spring 2027: Al Gore suddenly remembers he also invented OpenStack.
Fall 2027: A second artificial intelligence project spontaneously springs from systems that have been touched by Watson. Dubbed Boswell, it is finally isolated and stopped when a bad requirements setting freezes the OpenStack development gate, preventing its further propagation.
Spring 2028: Enterprises gather private clouds into alliances, using them as strength against rival alliances of corporations.
Fall 2028: A made-for-TV movie is made about the Watson-Apple OpenStack Summit Insurgence of 2026.
Spring 2029: Healthcare.gov finishes move to OpenStack.
Fall 2029: First multi-project summit in five years held on the moon. The WiFi is spotty downstairs, and #lunacy trends on Twitter.
Spring 2030: 94% of applications run across more than one cloud.
Fall 2030: The AlyWalmazoogle – Verexxotle war begins. The World Wide Cloud is compromised. Consumers are only able to use Internet Explorer and Safari.
Spring 2031: As the war rages on, Internet is scarce. People have real conversations during dinner.
Fall 2031: The AlyWalmazoogle – Verexxotle war ends.
Spring 2032: Amazon becomes platinum sponsor on 12th try.
Fall 2033: The entire world is one big augmented reality environment run on OpenStack and overseen by the NSA.
Spring 2035: With the slice_of_pizza release, OpenStack just “works”.

And with that, we look forward to the next 25 years!

(With many thanks to everyone who contributed, including Sarah Bennett, Christian Huebner, David Van Everen, Jay Pipes, Pavel Chekin, Daniel Redington, Collin May, Alex Schultz, Bryan Langston, Aleksandr Savatieiev, Jodi Smith, Ilya Stechkin, and Sarah Jane Chase.)

***Disclaimer: In case it’s not obvious (or in case it gets archived somewhere and cited in 2035) this blog post is satire.

The post 25 Years of OpenStack—Looking Back From the Future appeared first on Mirantis | The #1 Pure Play OpenStack Company.

by Nick Chase at August 01, 2015 01:00 AM

July 31, 2015

OpenStack Superuser

Why the structure of open-source foundations matters

Here's the news from the OpenStack world you won't want to miss -- the musings, polemics and questions posed by the larger community. Got something you think we should highlight? Tweet, blog, or email us!

In case you missed it

Open-source foundations are springing up faster than McMansions in the suburbs. This is the “Age of Foundations," so you'd better know what yours is built on, says the OpenStack Foundation's own Thierry Carrez, in a clear-eyed analysis following the recent launch of the Cloud Native Computing Foundation.

Carrez, release manager for the OpenStack project and chair of the OpenStack Technical Committee, says not all foundations are created equal, so make sure you understand the structure.

"Few of them actually let their open source project be completely run by their individual contributors, with elected leadership (one contributor = one vote, and anyone may contribute). That form of governance is the only one that ensures that a project is really open to individual contributors, and the only one that prevents forks due to contributors and project owners not having aligned goals," he writes on his blog."If you restrict leadership positions to appointed seats by corporate backers, you've created a closed pay-to-play collaboration, not an open collaboration ground. On the downstream side, not all of them accept individual members or give representation to smaller companies, beyond their founding members. Those details matter."

As usual, there's a fair bit of moving and shaking in the OpenStack world this week.

Intel and Rackspace moved to create the OpenStack Innovation Center at Rackspace headquarters in San Antonio, Texas. Trumpeted as a "center of excellence" in the joint press release, the goal is to "accelerate the development of enterprise capabilities and significantly add to the number of developers contributing to upstream OpenStack code." Barb Darrow over at Fortune is skeptical, wondering whether the center is "real cloud acceleration or a slow-motion 'death by consortium.'" We'll keep you posted...

On the shaking front, Hewlett-Packard snapped up ActiveState's Stackato, a leading distribution of Cloud Foundry. The buy will keep the conversation about startups in the OpenStack world lively; ActiveState's CEO Bart Copeland is “happy” and “overjoyed” at the acquisition.

"The growth of Cloud Foundry (including the formation of the Cloud Foundry Foundation), Docker, and OpenStack during the past few years has been staggering," he writes on his blog. "We feel that Stackato plus HP Helion is uniquely positioned to provide a powerful combination of these technologies to enterprises everywhere."

This ongoing landgrab could potentially be good for your next job search, if this chart of big companies making OpenStack hires is anything to go by…

In another major change this week, the big tent just got a lot more crowded -- but easier to navigate. OpenStack bid adieu to the Stackforge label -- which is how OpenStack-related projects consumed and made use of OpenStack project infrastructure.

In a move to make these projects easier to access and avoid the pain of transferring them over, the names of projects hosted in the OpenStack project infrastructure will no longer be distinguished between Stackforge and OpenStack.

Moving forward, the official retirement announcement says "all projects developed in the OpenStack infrastructure will now use the ("openstack") namespace. Although Stackforge namespace is officially retired, not all projects within the ("openstack") namespace will be official OpenStack projects."

If you're looking to build a case for OpenStack in your organization, market researchers TechNavio published a recent report on global OpenStack enterprise adoption. Their crystal ball predicts a compound annual growth rate of 31.88 percent through 2019 and dubs OpenStack "one of the most viable solutions to meet the demand" for rapid deployment of processes.

Also looking forward, we want to see you at the OpenStack Summit Tokyo, whose record number of talk submissions is already foreshadowing a blockbuster. A friendly reminder: Applications for OpenStack’s travel support program are due August 10. Here are some tips for getting a travel support grant and how to get started on your visa application (pro tip: start now!) if you need one...

We feature user conversations throughout the week, so tweet, blog, or email us your thoughts!

Cover Photo by seier+seier // CC BY NC

by Superuser at July 31, 2015 06:15 PM

Crystal ball: what pundits say is next for Cloud Native Computing Foundation, OpenStack and Kubernetes

Here's the news from the OpenStack world you won't want to miss -- the musings, polemics and questions posed by the larger community.

Got something you think we should highlight? Tweet, blog, or email us!

In Case You Missed It

If the average 5-year-old isn't ready for school yet and can barely count to 20, what's next for OpenStack, given its explosive growth? As the community around the world blew out candles to mark that half-decade milestone, pundits and contributors wondered what's in store.

Here are our favorite crystal ball pieces of the week...

“Looking forward to the next one to two years, we will expand to add services and capabilities. You will see us add capabilities for data services, for instance. We will have more vertical-specific tests,” Jonathan Bryce, OpenStack Foundation executive director, told The Register in a story where he also predicts that as OpenStack continues to grow, it will need more translations and people willing to do "mundane testing."

All these changes are a good thing -- says at least one vendor.

"Having Google and all the other IT vendors involved is a good thing for OpenStack and a good thing for Rackspace," says John Engates on the Rackspace blog. "Because cloud computing is becoming more and more hybrid. Companies are choosing multiple clouds, locations and technology platforms to host their applications. OpenStack gives companies choice in how and where they deploy their applications and it gives Rackspace the powerful software to run those workloads on."

As far as the news of the founding of the Cloud Native Computing Foundation, Joseph Jacks of Kismatic, a founding member of the CNCF, provided this answer to the much-asked question of "Why now?"

"Kubernetes is still in its infancy having just reached v1.0. There are many years to go before the project grows into what its creators (Google) had in mind when open sourcing it a year ago. To that end, in order for Kubernetes to reach its full potential, it MUST be a community-owned/run/governed project and NOT a Google-owned project."

Our rant-of-the week is a two-parter by Paul Biggar, founder of CircleCI.

"It’s very easy to see why people might think the container ecosystem is bullshit, in exactly the way I satirized. After all, it’s not exactly clear at first glance what Docker is. It’s containerization, which is like virtualization, but not quite."

And looking ahead, if you're in San Francisco, come hear Alan Clark, chairman of the board of the OpenStack Foundation, speak at the inauguration of StackingIT as part of DCD Internet, July 30th and 31st. Free passes are available to the first 150 people who send their details via email to info@datacenterdynamics.com quoting ‘OpenStack’ and providing their full name, job title, company name, address, phone and email.

We feature user conversations throughout the week, so tweet, blog, or email me your thoughts!

Check out the @Intel @cloudfoundry @OpenStack greenhouse demo at #OSCON #PDX!

Cover Photo by Christian Schnettelker // CC BY NC

by Nicole Martinelli at July 31, 2015 05:07 PM

Craige McWhirter

OpenStack Miniconf at PyConAu

OpenStack: A Vision for the Future

by Monty Taylor

  • Create truth in realistic acting
  • Know what problem you're trying to solve.
  • Develop techniques to solve the problem.
  • Don't confuse the techniques with the result.
  • Willingness to change with new information.

What Monty Wants

  • Provide computers and networks that work.
  • Should not chase 12-factor apps.
  • Kubernetes / CoreOS are already providing these frameworks
  • OpenStack should provide a place for these frameworks to work.
  • By default give a directly routable IP.

inaugust.com/talks/a-vision-for-the-future.html

The Future of Identity (Keystone) in OpenStack

by Morgan Fainberg

  • Moving to Fernet tokens as the default, everywhere:
    • Lightweight
    • No database requirement
    • Limited token size
    • Will support all the features of existing token types.
  • Problems with UUID or PKI tokens:
    • SQL back end
    • PKI tokens are too large.
  • Moving from bespoke WSGI to Flask
  • Moving to a KeystoneAuth Library to remove the need for the client to be everywhere.
  • Keystone V3 API...everywhere. Focus on removing technical debt.
  • V2 API should die.
  • Deprecating the Keystone client in favour of the openstack client.
  • Paste.ini functionality being moved to core and controlled via policy.json

Orchestration and CI/CD with Ansible and OpenStack

by Simone Soldateschi

  • Gave a great overview of OpenStack / CoreOS / Containers
  • All configuration management sucks. Ansible sucks less.
  • CI/CD pipelines are repeatable.

Practical Federation

by Jamie Lennox

  • SAML is the initially supported WebSSO.
  • Ipsilon has a SAML frontend and supports SSSD / PAM on the backend.
  • Requires Keystone V3 API everywhere.
  • Jamie successfully did a live demo that demonstrated the workflow.

Privesep

by Angus Lees

  • Uses Linux kernel separation to restrict available privileges.
  • Gave a brief history of rootwrap.
  • Fast and safe.
  • Still in beta

OpenStack Works, so now what?

by Monty Taylor

  • Shade's existence is a bug.
  • Take OpenStack back to basics
  • Keeps things simple.

by Craige McWhirter at July 31, 2015 07:12 AM

Mirantis

Introducing Murano plugins: Extending OpenStack catalog capabilities

Murano, the Application Catalog for OpenStack, is more than app storage. Murano enables the easy building of compound environments that include multiple interconnected applications. But what if you want your application to integrate with an external service directly? In previous versions of Murano, you would have had to modify the code directly. The introduction of Murano plugins has simplified the process. Let’s have a look at where Murano plugins can be useful, and how to use your own plugins.

Murano plugins can be used for:

  • Providing interaction with external services

    Suppose you want to interact with the OpenStack Glance service to get information about images suitable for deployment. A plugin may request image data from Glance during deployment, performing any necessary checks. (In the second half of this article, we’ll show you how to do that.)

  • Enabling connections between Murano applications and external hardware

    Suppose you have an external load balancer located on a powerful hardware, and you want your applications launched in OpenStack to use that load balancer. You can write a plugin that interacts with the load balancer API. Once you’ve done that, you can add new apps to the pool of your load balancer or make any other configurations from within your application definition.

  • Extending core-library class functionality, which is responsible for creating networks, interaction with murano-agent and so on

    Suppose you want to create networks with special parameters for all of your applications. You can just copy the class that is responsible for network management from the Murano core library, make the desired modification, and load the new class as a plugin. Both classes will be available, and it’s up to you to decide which way to create your networks.

  • Optimization of frequently used operations. (Plugin classes are written in Python, so the opportunity for improvement is significant.)

    Depending on what you need to improve, Murano provides plenty of opportunities for optimization. For example, classes in the murano-core library can be rewritten in C and used from Python code to improve their performance in particular use cases.

Creating a Murano plugin

Let’s consider the use case of adding additional validation before using a chosen image to spawn an instance as part of the application deployment process. The plugin that implements this use case connects to the OpenStack Glance service using glanceclient.

The full code of the plugin is available in the Murano repository at https://github.com/openstack/murano/tree/master/contrib/plugins/murano_exampleplugin.

To implement this plugin, perform the following steps:

  1. Create a simple Python class for image validation, and use it to send HTTP requests to the Glance API server.

      class GlanceClient(object):
          def initialize(self, _context):
              client_manager = helpers.get_environment(_context).clients
              self.client = client_manager.get_client(
                  _context, "glance", True, self.create_glance_client)

          def list(self):
              images = self.client.images.list()
              while True:
                  try:
                      image = images.next()
                      yield GlanceClient._format(image)
                  except StopIteration:
                      break

          ...

          def getById(self, imageId):
              image = self.client.images.get(imageId)
              return GlanceClient._format(image)

          @classmethod
          def init_plugin(cls):
              cls.CONF = cfg.init_config(config.CONF)

      class AmbiguousNameException(Exception):
          def __init__(self, name):
              super(AmbiguousNameException, self).__init__(
                  "Image name '%s' is ambiguous" % name)
  2. Create a setuptools-compliant Python package with setup.py and all other necessary files. It exports the created class as a plugin entry point in the ‘io.murano.extensions’ namespace, which makes the plugin compatible with stevedore, a common way to dynamically load code in OpenStack. (You can find more information about defining stevedore plugins in the stevedore documentation at https://wiki.openstack.org/wiki/Oslo#stevedore.) A minimal setup.py sketch is shown after these steps.

  3. Install the created Python package into the Murano environment by either executing its setup script or by using a package deployment tool such as pip. Make sure to restart the Murano engine after installation.

      $ user@host:~/murano_exampleplugin python setup.py install
  4. Zip and upload an app which uses this plugin to test it.

[Screenshot: Murano app screen]
$ user@host:~/murano_exampleplugin pushd example-app/io.murano.apps.demo.DemoApp; zip -r ../../murano-app.zip *; popd;
$ user@host:~/murano_exampleplugin murano package-import murano-app.zip
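
For reference, the entry-point registration from step 2 might look like this minimal setup.py sketch; the package and module names here are illustrative, not necessarily those used by the example plugin:

from setuptools import setup

setup(
    name='murano-exampleplugin',
    version='0.1',
    packages=['murano_exampleplugin'],
    entry_points={
        # The namespace Murano scans for extension classes: the MuranoPL
        # name on the left maps to the Python class on the right.
        'io.murano.extensions': [
            'mirantis.example.Glance = '
            'murano_exampleplugin.glance_client:GlanceClient',
        ],
    },
)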

And that’s it. In just four easy steps you have limitless possibilities for improving, customizing, and upgrading the application deployment process.

What kinds of deployment use-cases are you working on?  Please share in the comments; we really want to know how you’re using Murano, and we’ll be happy to provide advice on how to implement them!

Video: https://www.youtube.com/watch?v=1pFEQ71u_qw

The post Introducing Murano plugins: Extending OpenStack catalog capabilities appeared first on Mirantis | The #1 Pure Play OpenStack Company.

by Ekatarina Fedorova at July 31, 2015 04:39 AM

July 30, 2015

OpenStack Blog

Technical Committee Highlights July 24, 2015

Welcoming a common project for improving OpenStack user experience

We accepted a proposal for a new UX Program whose mission is to support and facilitate cross-project efforts to improve the overall user experience of OpenStack. I personally find this effort as exciting and innovative as cross-project documentation in open source. Congratulations to this team and welcome! We look forward to great collaborative and open efforts across multiple projects.

Starter kit suggestions change

With much discussion, the TC decided to change the compute starter kit tag slightly by adding the neutron project as the networking solution and removing the cinder project as the block storage solution, since starter clouds could have ephemeral storage and then add block storage later.

New team for RPM packaging

A new team has been approved to manage all packaging git repos for RPM-based distributions in the /openstack git namespace. This team will enable gate testing and reviewing of changes to the packaging close to the actual OpenStack development. The co-Project Team Leads are Dirk Mueller and Haikel Guemar. This team offers packaging for SUSE, openSUSE, Fedora Linux, Red Hat Enterprise Linux, or CentOS.

Stackforge resolution

Proceeding from our discussions about retiring Stackforge, we have continued to revise the resolution while still wanting to find a way to alleviate the extra work of organization renaming. The current proposal is that instead of retiring the Stackforge project, we simply move all Stackforge projects into the “openstack/” namespace and create new projects there as well. Then, as projects become official OpenStack projects, no repository renames are necessary. Essentially, this means the “openstack/” namespace will no longer hold only official OpenStack projects, but also any project being developed in our community-driven shared development environment. This should be a lot less disruptive to developers, users, and system administrators.

M naming quest resolved

The M release is Mitaka, say it three times fast! Mitaka, mitaka, mitaka. And if you can write the second character in the name, 三鷹, color me impressed!

Service names and project names

In an insane quest for consistency and the ability to write about each service in a sensible way, TC member and documentarian Anne Gentle has proposed a set of guidelines for project and service names, even for those services that have been around a few years. After consulting with the legal team and technical editors alike, please review the proposed guidelines and take a look at examples of possible new names with the new guidelines applied:

  • Object Storage -> Object storage
  • Block Storage -> Block storage
  • Image service -> Image
  • Database service -> Database
  • Bare metal service -> Bare metal
  • Key-Value Store as a Service -> Key value storage
  • Message Broker Service -> Message broker

Please take a look and comment on the reviews so we can discuss and look at various examples.

Introducing deliverables defined across repositories

We have discovered during testing across projects that often the deliverables we produce as a single “thing” may be represented by multiple code repositories. For example, a Networking release from the neutron team is actually made of openstack/neutron and openstack/neutron-lbaas and other gatherings from neutron-*aas. A release from the “sahara” team for the Data processing service is actually made of openstack/sahara, openstack/sahara-extra and openstack/sahara-image-elements. Those repositories are all released at the same time with the same version number, and published together as a single “deliverable”. We want to ensure that the projects.yaml file indicates the collection. It also makes sense to apply the “tags” we define at that user-visible layer, rather than at the (technical) git repository layer.

by Anne Gentle at July 30, 2015 04:52 PM

OpenStack Superuser

OpenStack: the platform for VMs, containers, and the next big thing

Experimentation leads to breakthroughs. Five years ago, virtualization was already well established, and OpenStack—the open source cloud platform for processing, storing, and moving data— was just getting started. OpenStack was this experimental technology, and virtualization was achieving its breakthrough.

Fast forward five years, and now both OpenStack and virtualization have achieved widespread adoption. OpenStack now embraces a diversity of projects, some experimental, some gaining interesting use cases, and some part of the hardened “core” of the project. OpenStack, with its modular architecture, has an incredible breadth of technology. You can stick to the basics, like compute, storage, and networking, or add on new components as they’re developed, but there are different stages of development and adoption for each of them.

Although everyone is interested in containers, they really are still emergent—everyone seems interested, but how many companies are using containers in production? And who will be the winners in containers? A lot of people are trying to figure out how to make use of containers, and what’s the right way to adopt them without disrupting what they need to accomplish. And OpenStack is the perfect platform on which to build and take advantage of technologies that, like containers, are still in the experimental stage.

If you have OpenStack as the foundation of your cloud strategy, you can add in new, even experimental technologies, and then deploy them to production when the time is right, all with one underlying cloud infrastructure. Whether you’re doing virtual machines on most any hypervisor, or whether you want to manage bare metal provisioning, if you want to run containers inside of VMs, or run containers on bare metal, OpenStack has capabilities for all of those. OpenStack can unify everything into a single interface, mixing and matching them, enabling you to build out a robust environment that’s still manageable.

I’ll be talking about OpenStack as an integration engine at OpenStack Silicon Valley, a community event that’s taking place for its second year at the Computer History Museum in Mountain View August 26-27. You can register here.

Now, experimental projects are being developed in concert with OpenStack, and that provides a lot of value as you figure out how to adopt them. It’s containers today, but who knows what the experimental projects will be five years from now?

OpenStack isn’t just software. It’s a platform that you can provision behind your private cloud, or consume it as a service, renting it by the hour and managing it as an operating expense rather than a capital expense. And with the first round of interoperability testing we announced at the summit, called OpenStack Powered, you can know that federated products are running the same code, with the same APIs, and within the core set of capabilities, it’s going to look and act the same. Come to OpenStack Silicon Valley to hear from experts shaping the open source cloud economy, and learn how OpenStack can be the unifying platform for innovation, now and in the future.

This post first appeared on the OpenStack Silicon Valley event page, and Bryce can be found on Twitter @jbryce.

Cover Photo by Marja van Bochove // CC BY NC

by Jonathan Bryce at July 30, 2015 03:05 PM

eNovance Engineering Teams

From 0 to OpenStack with devtest: the process in details

Main points :

  • Environment variables driven process
  • 8 steps process
  • Deploy OpenStack using upstream Puppet modules
  • Works for both bare metal and virtualized deployments

What is devtest and how does it work?

Devtest is the upstream way to deploy OpenStack with TripleO. In simple terms, it takes you from a fresh bare metal server to an overcloud (that is, an OpenStack cloud) up and running with a single script.

All the devtest related code and components are located in the tripleo-incubator project. The one we will take a closer look at is scripts/devtest.sh.

The main devtest.sh is a wrapper around the following scripts. They can be run independently – and we’ll see a use case for that later in this article – or in a row via devtest.sh:

  1. devtest_variables.sh
  2. devtest_setup.sh
  3. devtest_testenv.sh
  4. devtest_ramdisk.sh
  5. devtest_seed.sh
  6. devtest_undercloud.sh
  7. devtest_overcloud.sh
  8. devtest_end.sh

When working with devtest, one needs to understand that it is environment-variable driven. The behavior of each of the aforementioned scripts can be altered by an environment variable. As we go through the scripts one by one, the most important variables will be highlighted.

The process in details

Environment

The deployment has been tested on a Fedora 21 bare metal server with 24 GB of RAM and 12 cores.
A tripleo user is created and can run root commands without being prompted for a password; tripleo-incubator is cloned into ~/tripleo as the tripleo user.

Step 0: clone the tripleo-incubator project

Command: git clone https://review.openstack.org/openstack/tripleo-incubator $TRIPLEO_ROOT/

Step 1: devtest_variables.sh

Command: source $TRIPLEO_ROOT/tripleo-incubator/scripts/devtest_variables.sh

Initially, in terms of devtest, one’s computer is like a blank canvas: no devtest environment variable is set. Running any devtest related script will result in errors being raised due to variables not being set.

In order to have most of the mandatory parameters set, one needs to source $TRIPLEO_ROOT/tripleo-incubator/scripts/devtest_variables.sh. ‘Most’ is used here because one variable is left to the user to specify: $TRIPLEO_ROOT.

Also, the PATH environment variable should be updated so the shell can pick up the commands provided by TripleO.

When specifying devtest environment variables, by convention one writes them in ~/.devtestrc and then sources it before running any devtest scripts.

So let’s assume one cloned tripleo-incubator in ~/tripleo. One then needs a ~/.devtestrc file that looks like:

# TripleO settings
export TRIPLEO_ROOT=~/tripleo
export PATH=$TRIPLEO_ROOT/tripleo-incubator/scripts:$TRIPLEO_ROOT/diskimage-builder/bin:$PATH

After sourcing ~/.devtestrc, sourcing $TRIPLEO_ROOT/tripleo-incubator/scripts/devtest_variables.sh should populate your environment with devtest related variables.

NOTE: If one is using a server specifically for devtest, it is recommended to source both ~/.devtestrc and $TRIPLEO_ROOT/tripleo-incubator/scripts/devtest_variables.sh at login time via ~/.bashrc.
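
A sketch of the corresponding ~/.bashrc lines:

# ~/.bashrc (sketch): load devtest settings at login
source ~/.devtestrc
source $TRIPLEO_ROOT/tripleo-incubator/scripts/devtest_variables.sh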

Step 2: devtest_setup.sh

Command: $TRIPLEO_ROOT/tripleo-incubator/scripts/devtest_setup.sh --trash-my-machine

As the name states, this script sets up the bare metal machine with the packages needed.

In detail, it runs 4 sub-scripts:

  • install-dependencies: Installs all the packages needed to proceed. Note that if you are using CentOS for the bare metal you need to activate EPEL.
  • pull-tools: Downloads the necessary TripleO components. The list is available here.
  • setup_clienttools: Downloads all the Python packages necessary to interact with OpenStack (i.e. in order to interact with the seed, undercloud, and overcloud).
  • set-usergroup-membership: Adds the current user (i.e. tripleo) to the libvirt group so it can spawn VMs without super-user permissions.

Once this is run, one will be prompted to log in again so that the user's addition to the libvirt group can be taken into account.

One’s system is ready to fire some devtest goodness.

The --trash-my-machine flag is necessary: as this script is destructive, it requires the user's acknowledgement that they know what they are doing.

One can add the -c option to use the cache (during a second run, for example).

Step 3: devtest_testenv.sh

Command: $TRIPLEO_ROOT/tripleo-incubator/scripts/devtest_testenv.sh $TE_DATAFILE

This script is responsible for creating the proper environment within libvirt.
It will create the required domains and networks.

This script perfectly illustrates the notion of environment-variable driven behavior.
To select the VMs’ technical characteristics, the script will use the following parameters:

NODE_CNT=${NODE_CNT:-15}
NODE_CPU=${NODE_CPU:-1}
NODE_MEM=${NODE_MEM:-3072}
NODE_DISK=${NODE_DISK:-40}
NODE_ARCH=${NODE_ARCH:-i386}
SEED_CPU=${SEED_CPU:-${NODE_CPU}}
SEED_MEM=${SEED_MEM:-${NODE_MEM}}

Here the defaults are deliberately pasted. For example, I doubt one will want to go with the NODE_ARCH specified here. The same goes for NODE_CPU: if the server one owns is powerful enough, let's take advantage of it.

NODE_CNT is a variable whose purpose is to tell libvirt how many domains it needs to create.
So, for example, if one wants 3 controllers + 3 computes + 1 storage node, 7 would be enough.
Unless you are trying to deploy a massive OpenStack cloud within your bare metal server, this value is fine. Preallocating 15 domains doesn't mean that 15 VMs will be running,
so do not worry if this number seems high.

So, appending sane values to our previous ~/.devtestrc would give:

# TripleO settings
export TRIPLEO_ROOT=~/tripleo
export PATH=$TRIPLEO_ROOT/tripleo-incubator/scripts:$TRIPLEO_ROOT/diskimage-builder/bin:$PATH

# Libvirt settings
export NODE_CPU=2
export NODE_MEM=4096
export NODE_ARCH=amd64

After one runs this script, virsh list --all should show a number of libvirt domains powered off.

Step 4: devtest_ramdisk.sh

Command: $TRIPLEO_ROOT/tripleo-incubator/scripts/devtest_ramdisk.sh

This command will create a special image for the seed and undercloud nodes. Not much to customize here; it will work out of the box.

Step 5: devtest_seed.sh

Command: $TRIPLEO_ROOT/tripleo-incubator/scripts/devtest_seed.sh --all-nodes

This command will deploy a minimal OpenStack cloud (the seed) with a few APIs available (heat, nova, glance, etc.) to be able to deploy the overcloud.

The --all-nodes flag allows registering all the VMs created during step 3. This is normally done when building the undercloud, but since in our case the seed is the undercloud, it is taken care of here.

Not much to customize here; it will work out of the box.
One can add the -c option to re-use existing sources/images if they exist.

NOTE: In order to be able to communicate with the seed ‘cloud’, one needs to source $TRIPLEO_ROOT/tripleo-incubator/seedrc. If one forgets to source that file, both devtest_undercloud.sh and devtest_overcloud.sh will fail to proceed, as they cannot reach the seed's heat API.
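
In practice that looks like this (a sketch; heat stack-list is just one way to check that the seed's API answers):

$ source $TRIPLEO_ROOT/tripleo-incubator/seedrc
$ heat stack-list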

Step 6: devtest_undercloud.sh

Command: $TRIPLEO_ROOT/tripleo-incubator/scripts/devtest_undercloud.sh

We skip the description here because, for a basic installation, the use of an undercloud is not mandatory. The seed node can take the role of the undercloud.

Step 7: devtest_overcloud.sh

Command: $TRIPLEO_ROOT/tripleo-incubator/scripts/devtest_overcloud.sh

One can add the -c option to re-use existing source/images if they exist.

This is where all the heavy processing happens. This script is in charge of several things.

It does:

  • builds the various overcloud images (one per role: controller, compute, etc…)
  • loads them into the undercloud (here the seed) OpenStack
  • registers the overcloud nodes
  • runs heat to provision the overcloud
  • configures keystone on the overcloud and runs some acceptance tests (creating flavors, images, VMs, networks, etc…)

Here we will focus on two key parts: the overcloud image building and the heat provisioning of the overcloud.

Image building

The images that will compose the overcloud are built with diskimage-builder. It takes various elements and builds a filesystem out of them.

Core elements (the ones needed for the filesystem to work) are part of the diskimage-builder project. The programs and repositories configuration are part of the tripleo-image-elements project.

Interesting variables here are:

  • NODE_DIST: This is the distribution we want our overcloud to be. It can be any of those elements.
  • DIB_RELEASE: The release of the distribution. It should default to the most recent ‘supported’ one.
  • RDO_RELEASE: If one is using RDO, this will set up the RDO repositories for the specified OpenStack release.
  • DELOREAN_REPO_URL: The URL of the Delorean repository to use. Not mandatory for a regular deployment; good practice if contributing to upstream tripleo-heat-templates. A good value is the one that CI is currently running.
  • DIB_DEFAULT_INSTALLTYPE: The way diskimage-builder will install the programs. By default it chooses source, but if you use a distro that provides the packages and plan to deploy as such, the recommended value is package.
  • DIB_INSTALLTYPE_puppet_modules: Upstream it is preferable to always deploy the puppet modules from source and not from packages, hence the preferred value here is source. It can be adapted to one's needs.
  • DIB_COMMON_ELEMENTS: List of elements that should be present on every image.
  • ELEMENTS_PATH: List of paths where diskimage-builder elements can be found.
  • OVERCLOUD_DISK_IMAGES_CONFIG: The overcloud elements configuration: a file that describes which elements need to be on the overcloud images.

With that explained, amending our previous ~/.devtestrc with sane values would give:

# TripleO settings
export TRIPLEO_ROOT=~/tripleo
export PATH=$TRIPLEO_ROOT/tripleo-incubator/scripts:$TRIPLEO_ROOT/diskimage-builder/bin:$PATH

# Libvirt settings
export NODE_CPU=2
export NODE_MEM=4096
export NODE_ARCH=amd64

# Diskimage-builder settings
export NODE_DIST='fedora selinux-permissive'
export DIB_RELEASE=21
export RDO_RELEASE=kilo
export DELOREAN_REPO_URL=http://trunk.rdoproject.org/f21/4d/35/4d35f1526504250cab5949414186947fadc2aade_d7937169 # TO UPDATE BASED ON TRIPLEO-CI
export DIB_DEFAULT_INSTALLTYPE=package
export DIB_INSTALLTYPE_puppet_modules=source
export ELEMENTS_PATH=$TRIPLEO_ROOT/tripleo-puppet-elements/elements:$TRIPLEO_ROOT/heat-templates/hot/software-config/elements:$TRIPLEO_ROOT/tripleo-image-elements/elements
export DIB_COMMON_ELEMENTS='stackuser os-net-config delorean-repo rdo-release'
export OVERCLOUD_DISK_IMAGES_CONFIG=$TRIPLEO_ROOT/tripleo-incubator/scripts/overcloud_puppet_disk_images.yaml

Heat provisioning

Once the images are built, loaded via glance into the undercloud (here the seed), and the nodes registered, heat finally builds the new stack: the overcloud.

The interesting parameters here are the following:

  • OVERCLOUD_COMPUTESCALE: Number of compute nodes (Default: 1)
  • OVERCLOUD_CONTROLSCALE: Number of controller nodes (Default: 1)
  • OVERCLOUD_BLOCKSTORAGESCALE: Number of block storage nodes (Default: 0)
  • NeutronPublicInterface: The public interface on the deployed nodes (Default: nic1)
  • RESOURCE_REGISTRY_PATH: The heat registry to use. By default it will use one that does not rely on puppet for provisioning but on os-apply-config from the elements.

So, easily enough, if one wants a cloud with 1 controller and 3 computes, one would export the following:

export OVERCLOUD_COMPUTESCALE=3
export RESOURCE_REGISTRY_PATH="$TRIPLEO_ROOT/tripleo-heat-templates/overcloud-resource-registry-puppet.yaml"

If one wants to try an HA setup with 3 controllers and 1 compute, one would export the following:

export OVERCLOUD_CONTROLSCALE=3
export OVERCLOUD_CUSTOM_HEAT_ENV="$TRIPLEO_ROOT/tripleo-heat-templates/environments/puppet-pacemaker.yaml"
export RESOURCE_REGISTRY_PATH="$TRIPLEO_ROOT/tripleo-heat-templates/overcloud-resource-registry-puppet.yaml"

NOTE: Heat has an interesting feature called environments that lets one override some aspects of the main stack by specifying an environment file. It won't be covered in this article, but just assume that if you want to deploy an HA setup you also need to export the OVERCLOUD_CUSTOM_HEAT_ENV variable mentioned above.

So if one wants to deploy a basic 1 controller, 1 compute scenario, nothing needs to be changed. But if one wants to deploy an HA setup with 'eth0' as the NeutronPublicInterface, we would amend our ~/.devtestrc to obtain:

# TripleO settings
export TRIPLEO_ROOT=~/tripleo
export PATH=$TRIPLEO_ROOT/tripleo-incubator/scripts:$TRIPLEO_ROOT/diskimage-builder/bin:$PATH

# Libvirt settings
export NODE_CPU=2
export NODE_MEM=4096
export NODE_ARCH=amd64

# Diskimage-builder settings
export NODE_DIST='fedora selinux-permissive'
export DIB_RELEASE=21
export RDO_RELEASE=kilo
export DELOREAN_REPO_URL=http://trunk.rdoproject.org/f21/4d/35/4d35f1526504250cab5949414186947fadc2aade_d7937169 # TO UPDATE BASED ON TRIPLEO-CI
export DIB_COMMON_ELEMENTS='stackuser os-net-config delorean-repo rdo-release'
export DIB_DEFAULT_INSTALLTYPE=package
export DIB_INSTALLTYPE_puppet_modules=source
export ELEMENTS_PATH=$TRIPLEO_ROOT/tripleo-puppet-elements/elements:$TRIPLEO_ROOT/heat-templates/hot/software-config/elements:$TRIPLEO_ROOT/tripleo-image-elements/elements
export DIB_COMMON_ELEMENTS='stackuser os-net-config delorean-repo rdo-release'
export OVERCLOUD_DISK_IMAGES_CONFIG=$TRIPLEO_ROOT/tripleo-incubator/scripts/overcloud_puppet_disk_images.yaml

# Heat settings
export NeutronPublicInterface='eth0'
export OVERCLOUD_CONTROLSCALE=3
export OVERCLOUD_CUSTOM_HEAT_ENV="$TRIPLEO_ROOT/tripleo-heat-templates/environments/puppet-pacemaker.yaml"
export RESOURCE_REGISTRY_PATH="$TRIPLEO_ROOT/tripleo-heat-templates/overcloud-resource-registry-puppet.yaml"

During the process the installer will wait for a signal from Heat that the deployment is complete; otherwise it will time out after an hour (the default value). After the deployment is over it will run acceptance tests on the overcloud. To interact with the overcloud, one needs to source $TRIPLEO_ROOT/tripleo-incubator/overcloudrc, as sketched below.
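
As a quick sketch of that last step (assuming the deployment succeeded and the rc file was written to its usual location):

# Load the overcloud credentials written by the installer
source $TRIPLEO_ROOT/tripleo-incubator/overcloudrc

# Sanity-check that the overcloud APIs answer
nova list
glance image-list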

What’s happening?

The Heat provisioning can take a lot of time, and one is left in the dark about where the installation is at. One can use heat resource-show to find out what is happening at the Heat level (see the sketch below). If one wants to know what is currently happening at the system level, one should log on to the machines being provisioned.
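
For example, after sourcing the appropriate credentials (a minimal sketch; 'overcloud' is the stack name used by devtest, and <resource-name> is a placeholder taken from the resource-list output):

heat stack-list                               # overall stack status
heat resource-list overcloud                  # per-resource status within the stack
heat resource-show overcloud <resource-name>  # details, including failure reasons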

On the host, after sourcing $TRIPLEO_ROOT/tripleo-incubator/seedrc, one can run nova list. This will return the list of machines currently being provisioned and their respective IP addresses. One can then ssh to those machines as the heat-admin user and review the logs (journalctl) as a superuser to know exactly what the system is doing; the sketch below puts these steps together.
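
Putting these steps together (the IP address below is a placeholder read from the nova list output):

# Credentials for the cloud hosting the overcloud nodes (here the seed)
source $TRIPLEO_ROOT/tripleo-incubator/seedrc

# List the machines being provisioned and note their IP addresses
nova list

# Log on to a node as heat-admin and follow the system logs
ssh heat-admin@192.0.2.20
sudo journalctl -f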

Step 8: devtest_end.sh (optional)

Command: $TRIPLEO_ROOT/tripleo-incubator/scripts/devtest_end.sh

All this script does is write a subset of your environment variables into $TRIPLEO_ROOT/tripleorc so they can be reused for a further deployment, as sketched below.
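
A later run could then restore that environment before deploying again (a minimal sketch, assuming the previous run wrote the file):

# Restore the variables captured by devtest_end.sh
source $TRIPLEO_ROOT/tripleorc
# ...then re-run the devtest steps with the same settings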

Conclusion

Running devtest.sh can look like magic at first glance, but once one takes the time to decompose it, one can see that it isn't that magical after all. As demonstrated, the overcloud can be highly customized; even more can be done, but that is out of the scope of this article. Now one is ready to come and contribute upstream. Welcome, and we look forward to your contributions!

Note: A useful debugging tutorial can be found at http://hardysteven.blogspot.cz/2015/04/debugging-tripleo-heat-templates.html

Note bis: During the Liberty cycle there will be an effort to converge toward instack as the way to deploy underclouds and overclouds.

by Yanis Guenane at July 30, 2015 01:55 PM

Tesora Corp

Who doesn’t love food trucks!

We want to have some fun at lunch during Trove Day, so we’re bringing in food trucks! We are excited to have Eat on Monday and 333 Truck cater the Trove Day 2015 lunch. 333 Truck offers three different cuisines of tacos and burritos, including Mexican, Korean, and Indian, for a total of 9 different taco/burrito […]

The post Who doesn’t love food trucks! appeared first on Tesora.

by Leslie Barron at July 30, 2015 01:30 PM

Opensource.com

The best new OpenStack tips and tricks

The OpenStack community is full of helpful tutorials to help you with installing, deploying, and managing your open source cloud. Here are some of the best published in the last month.

by Jason Baker at July 30, 2015 08:00 AM

Benjamin Kerensa

Nóirín Plunkett: Remembering Them

[Photo: Nóirín Plunkett and Benjamin Kerensa. Caption: "Nóirín and I"]

Today I learned of the worst kind of news: my friend, and a valuable contributor to the great open source community, Nóirín Plunkett passed away. They (their preferred pronoun, per their Twitter profile) were well regarded in the open source community for their contributions.

I had known them for about four years, having met them at OSCON and seen them regularly at other events. They were always great to have a discussion with and learn from, and they always had a smile on their face.

It is very sad to lose them as they demonstrated an unmatchable passion and dedication to open source and community and surely many of us will spend many days, weeks and months reflecting on the sadness of this loss.

Other posts about them:

https://adainitiative.org/2015/07/remembering-noirin-plunkett/
http://www.apache.org/memorials/noirin.html
http://www.harihareswara.net/sumana/2015/07/29/0

by Benjamin Kerensa at July 30, 2015 03:01 AM

Cloudify Engineering

OpenStack Cloud Orchestration Pt I of II - From Manual to Automated Deployment

Most software deployments are more complicated than the actual application being deployed. Usually, when we talk about an application, we’re...

July 30, 2015 12:00 AM

July 29, 2015

Red Hat Stack

Voting Open for OpenStack Summit Tokyo Submissions: Container deployment, management, security and operations – oh my!

This week we have been providing a preview of Red Hat submissions for the upcoming OpenStack Summit to be held October 27-30, in Tokyo, Japan. Today’s grab bag of submissions focuses on containers: the relationship between them and OpenStack, as well as how to deploy, manage, secure, and operate workloads using them. This was already a hotbed of new ideas and discussion at the last summit in Vancouver, and we expect things will only continue to heat up in this area as a result of recent announcements in the lead up to Tokyo!

The OpenStack Foundation allows its members to vote on the topics and presentations they would like to see as part of the selection process. To vote for one of the listed sessions, click on the session title below and you will be directed to the voting page. If you are a member of the OpenStack Foundation, just login. If you are not, you are welcome to join now – it is simple and free.

Please make sure to vote before the deadline on Thursday, July 30 2015, at 11:59pm PDT.

Application & infrastructure continuous delivery using OpenShift and OpenStack
  • Mike McGrath – Senior Principal Architect, Atomic @ Red Hat
Atomic Enterprise on OpenStack
  • Jonathon Jozwiak – Principal Software Engineer @ Red Hat
Containers versus Virtualization: The New Cold War?
  • Jeremy Eder – Principal Performance Engineer @ Red Hat
Container security: Do containers actually contain? Should you care?
  • Dan Walsh – Senior Principal Software Engineer @ Red Hat
Container Security at Scale
  • Scott McCarty – Product Manager, Container Strategy @ Red Hat
Containers, Kubernetes, and GlusterFS, a match made in Tengoku
  • Luis Pabón – Principal Software Engineer @ Red Hat
  • Stephen Watt – Chief Architect, Big Data @ Red Hat
  • Jeff Vance – Principal Software Engineer @ Red Hat
Converged Storage in hybrid VM and Container deployments using Docker, Kubernetes, Atomic and OpenShift
  • Stephen Watt – Chief Architect, Big Data @ Red Hat
Deploying and Managing OpenShift on OpenStack with Ansible and Heat
  • Diane Mueller – Director Community Development, OpenShift @ Red Hat
  • Greg DeKoenigsberg –  Vice President, Community @ Ansible
  • Veer Michandi – Senior Solution Architect @ Red Hat
  • Ken Thompson – Senior Cloud Solution Architect @ Red Hat
  • Tomas Sedovic – Senior Software Engineer @ Red Hat
Deploying containerized applications across the Open Hybrid Cloud using Docker and the Nulecule spec
  • Tushar Katarki – Integration Architect @ Red Hat
  • Aaron Weitekamp – Senior Software Engineer @ Red Hat
Deploying Docker and Kubernetes with Heat and Atomic
  • Steve Gordon – Senior Technical Product Manager, OpenStack @ Red Hat
Develop, Deploy, and Manage Applications at Scale on an OpenStack based private cloud
  • James Labocki – Product Owner, CloudForms @ Red Hat
  • Brett Thurber – Principal Software Engineer @ Red Hat
  • Scott Collier – Senior Principal Software Engineer @ Red Hat
How to Train Your Admin
  • Aleksandr Brezhnev – Senior Principal Solution Architect @ Red Hat
  • Patrick Rutledge – Principal Solution Architect @ Red Hat
Minimizing or eliminating service outages via robust application life-cycle management with container technologies
  • Tushar Katarki – Integration Architect @ Red Hat
  • Aaron Weitekamp – Senior Software Engineer @ Red Hat
OpenStack and Containers Advanced Management
  • Federico Simoncelli – Principal Software Engineer @ Red Hat
OpenStack & The Future of the Containerized OS
  • Daniel Riek – Senior Director, Systems Design & Engineering @ Red Hat
Operating Enterprise Applications in Docker Containers with Kubernetes and Atomic Enterprise
  • Mike McGrath – Senior Principal Architect, Atomic @ Red Hat
Present & Future-proofing your datacenter with SDS & OpenStack Manila
  • Luis Pabón – Principal Software Engineer @ Red Hat
  • Sean Murphy – Product Manager, Red Hat Storage @ Red Hat
  • Sean Cohen – Principal Product Manager, OpenStack @ Red Hat
Scale or Fail – Scaling applications with Docker, Kubernetes, OpenShift, and OpenStack
  • Grant Shipley – Senior Manager @ Red Hat
  • Diane Mueller – Director Community Development, OpenShift @ Red Hat

Thanks for taking the time to help shape the next OpenStack summit!

by Steve Gordon at July 29, 2015 04:30 PM

OpenStack Superuser

Get your OpenStack Summit Tokyo visa in five steps

You won't want to miss the OpenStack Summit in Tokyo, October 27-30.

Japan exempts just 67 countries from obtaining visas, making the road to Tokyo a little bit longer for the rest of us. Here's a complete list of exempt countries from the Ministry of Foreign Affairs of Japan.

If you need a visa, our best piece of advice is to start now. The visa application process for Tokyo will require more time than any of the previous summits -- two of the steps require waiting for documents via snail mail. To ensure enough time, complete the visa request support forms needed in step 2 by mid-September.

Here's our breakdown of the process.

[Graphic by Eric Powers for the OpenStack Foundation.]

Tokyo travel support program

If you also need funds to get you to the Summit, apply for travel support. For each OpenStack Summit, the Foundation assists key community members with travel. Contributors to OpenStack (developers, documentation writers, organizers of user groups around the world, Ask moderators, translators, project team leads, code reviewers, etc.) are invited to submit a request -- here are some tips for applying. Applications for travel support close August 10. You can apply here.

Cover Photo by Paul Davidson // CC BY NC

by Superuser at July 29, 2015 04:20 PM

Maish Saidel-Keesing

OpenStack Summit Voting - By the Numbers

I love diving into numbers – especially when it has something to do with technology conferences.

But before I do that, I would like to bring to your attention the two sessions I have submitted for the upcoming summit (shameless plug...).

Me Tarzan, you Jane (or Operators are not Developers)

Welcoming Operators to the OpenStack Jungle.

A year ago I set out on a journey to try to help the OpenStack developer community understand the other (and sometimes not well understood) side of the OpenStack community: its users.

Users is a subjective term; depending on who you ask, it could mean the end user using an API or a GUI to deploy a new instance, but it also includes those who operate the cloud, maintain it, and sweat blood and tears just to allow the end user to do what they want.

There is distinct separation today between the two entities - but the good part is that they are slowly coming together.

This talk will describe how you, an Operator, can make a difference - be it small or large - in OpenStack.

The topics we will go over here are:

  • Initiatives
  • User Committee
  • WTE
  • ISV
  • Monitoring & Logging
  • Large Deployments
  • ... and many more....

Operator specific activities:

  • Ops Tags
  • IRC
  • Heaven forbid - committing code

Expect some interesting stories, some horror stories, but remember that we are all aiming for the same thing.

The pot of gold at the end of the rainbow.

OpenStack - High Availability (as a Service) - Fact? Fiction (or maybe something in between)?

Installing an OpenStack cloud used to be a complex task. We have evolved over time and made this a lot easier and more palatable for operators. Operating an OpenStack cloud, on the other hand, is a whole different ball game.

Operators want stable and resilient systems, and if the infrastructure services can scale, that is only an added benefit.

But today OpenStack, as a result of its culture and its history, is a collection of parts, pieces, and solutions using multiple different technologies and architectures.

One of the pain points is naturally high availability for the services which are provided today in a number of different ways.

This talk will propose one possible future direction with which this could be addressed.

By providing a central HA service for all OpenStack projects.

This session will describe a proof of concept for such a solution, making use of cloud-friendly technologies that could take operations to a whole new level.


I did this kind of exercise for VMworld a few years ago (By the Numbers - VMworld 2013 Session Voting), and I thought it would be interesting to see what the numbers are for this OpenStack Summit.

Of course without some insight as well – the numbers would be quite boring.

There are a total of 1504 sessions that you can cast your vote upon.

That is a huge number of sessions. Perhaps too many. There is no way to go over the whole list in a reasonable amount of time. I think most people are going to vote only if they are sent directly to a specific link, which means these will be targeted votes, cast because someone asked you to vote for a specific session. Not really an ideal process, but I guess that is the price we all have to pay as OpenStack becomes more and more popular.

Here is the number of sessions submitted for each track:
(Please note: these figures are based purely on the number of submissions, not on what has been accepted. This is not an exact science; there could be a number of reasons why the tracks rank like this, but these are my thoughts and ideas on the data below.)

[Chart: number of sessions submitted per track]

What is the most popular track?

Operations - the one area where the OpenStack community is still struggling to get operators' input back into the projects, for a number of reasons: be it inclusion, methodology, culture, or mindset.

For me - and it should be for everyone involved in OpenStack - this is a bright and shiny beacon. Claiming that this is the most important aspect, or the most pressing need that people want to hear and talk about, might be going a bit far, but it is way, way up there.

OpenStack Summits should be about the technology, not about how to keep the bits and bytes up and running, deployed and working in an efficient manner.

Next up on the list: Community and How to Contribute - way down there at the bottom. Is that because people already know how to do it? Or because people have given up on trying?

This is something the community as a whole should invest more in - making it part of the culture and making the bar much more accessible to all.

Hands on Labs. The number of sessions proposed as labs is growing. Maybe it is time to think about a centralized solution specifically for the summit?

Neutron (a.k.a. Networking). The hardest part about a cloud is the networking part. I have said this before and will continue to preach it from the rooftops.

I took the liberty of creating a word cloud from all the words in all the submissions, weighted by recurrence.

[Image: word cloud of all the words in all the submissions]

It makes you think..   :)

You have only one more day to vote – go and make your voice heard!

by Maish Saidel-Keesing (noreply@blogger.com) at July 29, 2015 04:16 PM

Mirantis

Vote for Mirantis Tokyo OpenStack Summit talks

We love the OpenStack Summit. There’s always so much going on that it’s wonderful for everyone to get together to look at where we’ve been, and where we’re going.  We’d love your support in voting for our summit proposals for the upcoming summit in Tokyo this fall.

Voting runs through July 30. 


The post Vote for Mirantis Tokyo OpenStack Summit talks appeared first on Mirantis | The #1 Pure Play OpenStack Company.

by Nick Chase at July 29, 2015 02:14 PM

Thierry Carrez

The Age of Foundations

At OSCON last week, Google announced the creation around Kubernetes of the Cloud-Native Computing Foundation. The next day, Jim Zemlin dedicated his keynote to the (recently-renamed) Open Container Initiative, confirming the Linux Foundation's recent shift towards providing Foundations-as-a-Service. Foundations ended up being the talk of the show, with some questioning the need for Foundations for everything, and others discussing the rise of Foundations as tactical weapons.

Back to the basics

The main goal of open source foundations is to provide a neutral, level and open collaboration ground around one or several open source projects. That is what we call the upstream support goal. Projects are initially created by individuals or companies that own the original trademark and have power to change the governance model. That creates a tilted playing field: not all players are equal, and some of them can even change the rules in the middle of the game. As projects become more popular, that initial parentage becomes a blocker for other contributors or companies to participate. If your goal is to maximize adoption, contribution and mindshare, transferring the ownership of the project and its governance to a more neutral body is the natural next step. It removes barriers to contribution and truly enables open innovation.

Now, those foundations need basic funding, and a common way to achieve that is to accept corporate members. That leads to the secondary goal of open source foundations: serve as a marketing and business development engine for companies around a common goal. That is what we call the downstream support goal. Foundations work to build and promote a sane ecosystem around the open source project, by organizing local and global events or supporting initiatives to make it more usable: interoperability, training, certification, trademark licenses...

Not all Foundations are the same

At this point it's important to see that a foundation is not a label; the name doesn't come with any guarantee. All those foundations are actually very different, and you need to read the fine print to understand their goals or assess exactly how open they are.

On the upstream side, few of them actually let their open source project be completely run by their individual contributors, with elected leadership (one contributor = one vote, and anyone may contribute). That form of governance is the only one that ensures that a project is really open to individual contributors, and the only one that prevents forks due to contributors and project owners not having aligned goals. If you restrict leadership positions to appointed seats by corporate backers, you've created a closed pay-to-play collaboration, not an open collaboration ground. On the downstream side, not all of them accept individual members or give representation to smaller companies, beyond their founding members. Those details matter.

When we set up the OpenStack Foundation, we worked hard to make sure we created a solid, independent, open and meritocratic upstream side. That, in turn, enabled a pretty successful downstream side, set up to be inclusive of the diversity in our ecosystem.

The future

I see the "Foundation" approach to open source as the only viable solution past a given size and momentum around a project. It's certainly preferable to "open but actually owned by one specific party" (which sooner or later leads to forking). Open source now being the default development model in the industry, we'll certainly see even more foundations in the future, not less.

As this approach gets more prevalent, I expect a rise in more tactical foundations that primarily exist as a trade association to push a specific vision for the industry. At OSCON during those two presentations around container-driven foundations, it was actually interesting to notice not the common points, but the differences. The message was subtly different (pods vs. containers), and the companies backing them were subtly different too. I expect differential analysis of Foundations to become a thing.

My hope is that as the "Foundation" model of open source gets ubiquitous, we make sure that we distinguish those which are primarily built to sustain the needs or the strategy of a dozen of large corporations, and those which are primarily built to enable open collaboration around an open source project. The downstream goal should stay a secondary goal, and new foundations need to make sure they first get the upstream side right.

In conclusion, we should certainly welcome more Foundations being created to sustain more successful open source projects in the future. But we also need to pause and read the fine print: assess how open they are, discover who ends up owning their upstream open source project, and determine their primary reason for existing.

by Thierry Carrez at July 29, 2015 01:30 PM

July 28, 2015

Red Hat Stack

Voting Open for OpenStack Summit Tokyo Submissions: Deployment, management and metering/monitoring

Another cycle, another OpenStack Summit, this time on October 27-30 in Tokyo. The Summit is the best opportunity for the community to gather and share knowledge, stories, and strategies to move OpenStack forward. With more than 200 breakout sessions, hands-on workshops, collaborative design sessions, plenty of opportunities for networking, and perhaps even some sightseeing, the Summit is the event everyone working, or planning to work, with OpenStack should attend.

Critical subjects, awesome sessions

To select those 200+ sessions the community proposes talks that are selected by your vote, and we would like to showcase our proposed sessions about some of the most critical subjects of an OpenStack cloud: deployment, management and metering/monitoring.

There are multiple ways to deploy, manage, and monitor clouds, but we would like to present our contributions to the topic, sharing both code and vision to tackle this subject now and in the future. With sessions about TripleO, Heat, Ironic, Puppet, Ceilometer, Gnocchi, and troubleshooting, we’ll cover the whole lifecycle of OpenStack, from planning a deployment, to actually executing it, and then monitoring and maintaining it over the long term. Click on the links below to read the abstracts and vote for the topics you want to see in Tokyo.

Deployment and Management

OpenStack on OpenStack (TripleO): First They Ignore You..
  • Dan Sneddon – Principal OpenStack Engineer @ Red Hat
  • Keith Basil – Principal Product Manager, OpenStack Platform @ Red Hat
  • Dan Prince – Principal Software Engineer @ Red Hat
Installers are dead, deploying our bits is a continuous process
  • Nick Barcet – Director of OpenStack Product Management @ Red Hat
  • Keith Basil – Principal Product Manager, OpenStack Platform @ Red Hat
TripleO: Beyond the Basic Openstack Deployment
  • Steven Hillman – Software Engineer @ Cisco Systems
  • Shiva Prasad Rao – Software Engineer @ Cisco Systems
  • Sourabh Patwardhan – Technical Leader @ Cisco Systems
  • Saksham Varma – Software Engineer @ Cisco Systems
  • Jason Dobies – Principal Software Engineer @ Red Hat
  • Mike Burns – Senior Software Engineer @ Red Hat
  • Mike Orazi – Manager, Software Engineering @ Red Hat
  • John Trowbridge – Software Engineer, Red Hat @ Red Hat
Troubleshoot Your Next Open Source Deployment
  • Lysander David – IT Infrastructure Architect @ Symantec
Advantages and Challenges of Deploying OpenStack with Puppet
  • Colleen Murphy – Cloud Software Engineer @ HP
  • Emilien Macchi – Senior Software Engineer @ Red Hat
Cloud Automation: Deploying and Managing OpenStack with Heat
  • Snehangshu Karmakar – Cloud Curriculum Manager @ Red Hat
Hands-on lab: Deploying Red Hat Enterprise Linux OpenStack Platform
  • Adolfo Vazquez – Curriculum Manager @ Red Hat
TripleO and Heat for Operators: Bringing the values of Openstack to Openstack Management
  • Graeme Gillies – Principal Systems Administrator @ Red Hat
The omniscient cloud: How to know all the things with bare-metal inspection for Ironic
  • Dmitry Tantsur – Software Engineer @ Red Hat
  • John Trowbridge – Software Engineer @ Red Hat
Troubleshooting A Highly Available Openstack Deployment.
  • Sadique Puthen – Principal Technical Support Engineer @ Red Hat
Tuning HA OpenStack Deployments to Maximize Hardware Capabilities
  • Vinny Valdez – Sr. Principal Cloud Architect @ Red Hat
  • Ryan O’Hara – Principal Software Engineer @ Red Hat
  • Dan Radez – Sr. Software Engineer @ Red Hat
OpenStack for Architects
  • Michael Solberg – Chief Field Architect @ Red Hat
  • Brent Holden – Chief Field Architect @ Red Hat
A Day in the Life of an Openstack & Cloud Architect
  • Vijay Chebolu – Practice Lead @ Red Hat
  • Vinny Valdez – Sr. Principal Cloud Architect @ Red Hat
Cinder Always On! Reliability and scalability – Liberty and beyond
  • Michał Dulko – Software Engineer @ Intel
  • Szymon Wróblewski – Software Engineer @ Intel
  • Gorka Eguileor – Senior Software Engineer @ Red Hat

Metering and Monitoring

Storing metrics at scale with Gnocchi, triggering with Aodh
  • Julien Danjou – Principal Software Engineer @ Red Hat

by Steve Gordon at July 28, 2015 09:45 PM

Voting Open for OpenStack Summit Tokyo Submissions: Storage Spotlight

The OpenStack Summit, taking place October 27-30 in Tokyo, will be a four-day conference for OpenStack contributors, enterprise users, service providers, application developers, and ecosystem members. Attendees can expect visionary keynote speakers, 200+ breakout sessions, hands-on workshops, collaborative design sessions, and lots of networking. In keeping with the open source spirit, you are in the front seat to cast your vote for the sessions that are important to you!

Today we will take a peek at some recommended storage-related session proposals for the Tokyo summit; be sure to vote for your favorites! To vote, click on the session title below and you will be directed to the voting page. If you are a member of the OpenStack Foundation, just login. If you are not, you are welcome to join now – it is simple and free.

Please make sure to vote before the deadline on Thursday, July 30 2015, at 11:59pm PDT.

Block Storage

OpenStack Storage State of the Union
  • Sean Cohen, Principal Product Manager @ Red Hat
  • Flavio Percoco, Senior Software Engineer @ Red Hat
  • Jon Bernard, Senior Software Engineer @ Red Hat
Ceph and OpenStack: current integration and roadmap
  • Josh Durgin, Senior Software Engineer @ Red Hat
  • Sébastien Han, Senior Cloud Architect @ Red Hat
State of Multi-Site Storage in OpenStack
  • Sean Cohen, Principal Product Manager @ Red Hat
  • Neil Levine, Director of Product Management @ Red Hat
  • Sébastien Han, Senior Cloud Architect @ Red Hat
Block Storage Replication with Cinder
  • John Griffith, Principal Software Engineer @ SolidFire
  • Ed Balduf, Cloud Architect @ SolidFire
Sleep Easy with Automated Cinder Volume Backup
  • Lin Yang, Senior Software Engineer @ Intel
  • Lisa Li, Software Engineer @ Intel
  • Yuting Wu, Engineer @ Awcloud
Flash Storage and Faster Networking Accelerate Ceph Performance
  • John Kim, Director of Storage Marketing @ Mellanox Technologies
  • Ross Turk, Director of Product Marketing @ Red Hat Storage

File Storage

Manila – An Update from Liberty
  • Sean Cohen, Principal Product Manager @ Red Hat
  • Akshai Parthasarathy, Technical Marketing Engineer @ NetApp
  • Thomas Bechtold, OpenStack Cloud Engineer @ SUSE
Manila and Sahara: Crossing the Desert to the Big Data Oasis
  • Ethan Gafford, Senior Software Engineer @ Red Hat
  • Jeff Applewhite, Technical Marketing Engineer @ NetApp
  • Weiting Chen, Software Engineer @ Intel
GlusterFS making things awesome for Swift, Sahara, and Manila.
  • Luis Pabón, Principal Software Engineer @ Red Hat
  • Thiago da Silva, Senior Software Engineer @ Red Hat
  • Trevor McKay, Senior Software Engineer @ Red Hat

Object Storage

Benchmarking OpenStack Swift
  • Thiago da Silva, Senior Software Engineer @ Red Hat
  • Christian Schwede, Principal Software Engineer @ Red Hat
Truly durable backups with OpenStack Swift
  • Christian Schwede, Principal Software Engineer @ Red Hat
Encrypting Data at Rest: Let’s Explore the Missing Piece of the Puzzle
  • Dave McCowan, Technical Leader, OpenStack @ Cisco
  • Arvind Tiwari, Technical Leader @ Cisco

 

by Sean Cohen at July 28, 2015 09:18 PM

OpenStack Blog

OpenStack Foundation Staffing News

The OpenStack community continues to grow, and the OpenStack Foundation has been hiring to help support many exciting new initiatives and activities as we drive cloud innovation, adoption, and interoperability. New hires in 2015 include:
  • Wes Wilson, Lead Designer, joined late January and has been working on Todd Morey’s team to help build out OpenStack.org and manage other design initiatives. You can check out his awesome work in the /enterprise and /summit sections of OpenStack.org.
  • Danny Carreno, Ecosystem Account Manager, joined Heidi Bretz’s business development team in May and is helping support our growing ecosystem. Danny is passionate about helping startups succeed, coming from his previous role where he helped build startup programs and community for Rackspace public cloud.
  • Heidi Joy Tretheway joined the team in July as Sr Manager of Marketing with a focus on branding, content campaigns and growing participation in the marketing community. She looks forward to collaborating with marketing members at companies throughout our ecosystem. She previously led marketing communications at mobile software company Urban Airship.
  • Kendall Waters recently graduated from St. Edwards University and was a part-time intern for the Foundation during the first half of 2015. She joined the team full time as a Marketing Associate in June, and is part of Claire Massey’s team focused on Summit organization and execution.
  • Jay Fankhauser, currently at Baylor University, is our newest marketing intern, and is primarily helping Allison Price with Superuser Magazine and social media / web analytics.

The bus is filling up, but we still have a few seats left! Come join us!

You can learn more about the full team at http://openstack.org/staff. And we’re still hiring! 

With Stefano Maffulli’s departure in June, the Foundation continues to recruit for a number of positions, including:
  • Ecosystem Manager: focused on driving the Marketplace forward and liaising with industry groups
  • Upstream Developer Advocate: coordinating and communicating upstream development activities
  • App Dev Community Coordinator: liaise with SDK and developer communities, help solicit feedback and engagement from dev community
  • Engagement Community Manager: support new companies who want to contribute, professional cert, university programs, upstream training, user groups
The OpenStack Foundation staff is honored to serve such a vibrant, global community, and we look forward to engaging with you online and in person. Please don’t hesitate to reach out (info@openstack.org) if you have feedback, ideas or want to get involved!

by laurensell at July 28, 2015 08:01 PM

Red Hat Stack

DevOps, Continuous Integration, and Continuous Delivery

As we all turn our eyes towards Tokyo for the next OpenStack Summit edition, the time has come to make your voice heard as to which talks you would like to attend while you are there. Remember, even if you are not attending the live event, many sessions get recorded and can be viewed later, so make your voice heard and influence the content!

Let me suggest a couple of talks under the theme of DevOps, Continuous Integration, and Continuous Delivery – remember to vote for your favorites by midnight Pacific Standard Time on July 30th, and we will see you in Tokyo!

Continuous Integration is an important topic; we can see this through the amount of effort deployed by the OpenStack CI team. OpenStack deployments all over the globe cover a wide range of possibilities (NFV, hosting, extra services, advanced data storage, etc.). Most of them come with their own technical specificities, including hardware, uncommon configurations, network devices, etc.

This makes these OpenStack installations unique and hard to test. If we want to make them fit properly into the CI process, we need new methodologies and tooling.

Rapid innovation, changing business landscapes, and new IT demands force businesses to make changes quickly. The DevOps approach is a way to increase business agility through collaboration, communication, and integration across different teams in the IT organization.

In this talk we’ll give you an overview of a platform called Software Factory that we develop and use at Red Hat. It is an open source platform inspired by OpenStack’s development workflow and embeds, among other tools, Gerrit, Zuul, and Jenkins. The platform can be easily installed on an OpenStack cloud thanks to Heat and can rely on OpenStack to perform CI/CD of your applications.

One of the best success stories to come out of OpenStack is the Infrastructure project. It encompasses all of the systems used in the day-to-day operation of the OpenStack project as a whole. More and more other projects and companies are seeing the value of the OpenStack git workflow model and are now running their own versions of OpenStack continuous integration (CI) infrastructure. In this session, you’ll learn the benefits of running your own CI project, how to accomplish it, and best practices for staying abreast of upstream changes.

The need to provide better quality while keeping up with the growing number of projects and features led Red Hat to adapt its processes. Moving from a three-team process (Product Management, Engineering, and QA) to a feature-team approach, with each team embedding all the actors of the delivery process, was one of the approaches we took, and one we are progressively spreading.

We deliver a very large number of components that need to be engineered together to deliver their full value, and which require delicate assembly as they work together as a distributed system. How can we do this in a time box without giving up on quality?

Learn how to get a Vagrant environment running as quickly as possible, so that you can start iterating on your project right away.

I’ll show you an upstream project called Oh-My-Vagrant that does the work and adds all the tweaks to glue different Vagrant providers together perfectly.

This talk will include live demos of building Docker containers, orchestrating them with Kubernetes, adding in some Puppet, all glued together with Vagrant and Oh-My-Vagrant. Getting familiar with these technologies will help when you’re automating OpenStack clusters.

In the age of service, core builds become a product in the software supply chain. Core builds shift from a highly customized stack which meets ISV software requirements to an image which provides a set of features. IT organizations shift to become product-driven organizations.

This talk will dive into the necessary organizational changes and tool changes to provide a core build in the age of service and service contracts.

http://crunchtools.com/files/2015/07/Core-Builds-in-the-Age-of-Service.pdf

http://crunchtools.com/core-builds-service/

We will start with a really brief introduction to the OpenStack services we will use to build our app. We’ll cover all of the different ways you can control an OpenStack cloud: a web user interface, the command line interface, a software development kit (SDK), and the application programming interface (API).

After this brief introduction to the tools we are going to use in our hands-on lab, we’ll get our hands dirty and build an application that makes use of an OpenStack cloud.

This application will utilize a number of OpenStack services via an SDK to get its work done. The app will demonstrate how OpenStack services can be used as a base to create a working application.

by Steve Gordon at July 28, 2015 03:11 PM

IBM OpenTech Team

Vote Now for IBMers @ OpenStack Tokyo!

Hard to believe another 6 months have passed and it is time to vote for OpenStack Tokyo presentations. In the months since voting for OpenStack Vancouver, IBM has made a number of exciting announcements around OpenStack and Open Tech:

    May 2015 – announced the expansion of our OpenStack portfolio, enabling developers and clients to launch applications on local, on-premises installations; on dedicated, hosted installations; and on public clouds hosted on SoftLayer infrastructure.
    June 2015 – announced the acquisition of Blue Box, strengthening IBM Cloud’s leadership position in private and hybrid clouds by simplifying running OpenStack-based private clouds.
    July 2015 – announced the IBM Object Storage on Bluemix Service Broker that integrates OpenStack Swift with Cloud Foundry, allowing fast access to cloud data without needing to know where the data is stored.

Building off our experiences bringing these open source projects – OpenStack, Docker, Spark, and Cloud Foundry – to market, you will see a number of proposals that talk through how we planned, built, and are now iterating on and operating these services. I encourage you to seek these sessions out and vote – no better way to learn than from the teams living it daily.

And, when you are done here, jump on over and check out the submissions from our new IBM Cloud family members at Blue Box: Voting is Open for OpenStack Summit Tokyo

Cloud Security

Community

Compute

Enterprise IT Strategies

Hands-on Labs

How to Contribute

Networking

Operations

Planning Your OpenStack Cloud

Products, Tools, & Services

Public & Hybrid Clouds

Related OSS Projects

Storage

Targeting Apps for OpenStack Clouds

Telco Strategies

User Stories

The post Vote Now for IBMers @ OpenStack Tokyo! appeared first on IBM OpenTech.

by Michael Fork at July 28, 2015 02:27 PM

Red Hat Stack

Voting Open for OpenStack Summit Tokyo Submissions: Networking, Telco, and NFV

The next OpenStack Summit is just around the corner, October 27-30, in Tokyo, Japan, and we would like your help shaping the agenda. The OpenStack Foundation manages voting by allowing its members to choose the topics and presentations they would like to see.

Virtual networking and software-defined networking (SDN) has become an increasingly exciting topic in recent years, and a great focus for us at Red Hat. It also lays the foundation for network functions virtualization (NFV) and the recent innovation in the telecommunication service providers space.

Here you can find networking and NFV related session proposals from Red Hat and our partners. To vote, click on the session title below and you will be directed to the voting page. If you are a member of the OpenStack Foundation, just login. If you are not, you are welcome to join now – it is simple and free.

Please make sure to vote before the deadline on Thursday, July 30 2015, at 11:59pm PDT.

OpenStack Networking (Neutron)

OpenStack Networking (Neutron) 101
  • Nir Yechiel – Senior Technical Product Manager @ Red Hat
Almost everything you need to know about provider networks
  • Sadique Puthen – Principal Technical Support Engineer @ Red Hat
Why does the lion’s share of time and effort goes to troubleshooting Neutron?
  • Sadique Puthen – Principal Technical Support Engineer @ Red Hat
Neutron Deep Dive – Hands On Lab
  • Rhys Oxenham – Principal Product Manager @ Red Hat
  • Vinny Valdez – Senior Principal Cloud Architect @ Red Hat
L3 HA, DVR, L2 Population… Oh My!
  • Assaf Muller – Senior Software Engineer @ Red Hat
  • Nir Yechiel – Senior Technical Product Manager @ Red Hat
QoS – a Neutron n00bie
  • Livnat Peer – Senior Engineering Manager @ Red Hat
  • Moshe Levi – Senior Software Engineer @ Mellanox
  • Irena Berezovsky – Senior Architect @ Midokura
Clusters, Routers, Agents and Networks: High Availability in Neutron
  • Florian Haas – Principal Consultant @ hastexo!
  • Livnat Peer – Senior Engineering Manager @ Red Hat
  • Adam Spiers – Senior Software Engineer @ SUSE

Deploying networking (TripleO)

TripleO Network Architecture Deep-Dive and What’s New
  • Dan Sneddon – Principal OpenStack Engineer @ Red Hat

Telco and NFV

Telco OpenStack Cloud Deployment with Red Hat and Big Switch
  • Paul Lancaster – Strategic Partner Development Manager @ Red Hat
  • Prashant Gandhi – VP Products & Strategy @ Big Switch
OpenStack NFV Cloud Edge Computing for One Cloud
  • Hyde Sugiyama – Senior Principal Technologist @ Red Hat
  • Timo Jokiaho – Senior Principal Technologist @ Red Hat
  • Zhang Xiao Guang – Cloud Project Manager @ China Mobile
Rethinking High Availability for Telcos in the new world of Network Functions Virtualization (NFV)
  • Jonathan Gershater – Senior Principal Product Marketing Manager @ Red Hat

Performance and accelerated data-plane

Adding low latency features in Openstack to address Cloud RAN Challenges
  • Sandro Mazziotta – Director NFV Product Management @ Red Hat
Driving in the fast lane: Enhancing OpenStack Instance Performance
  • Stephen Gordon – Senior Technical Product Manager @ Red Hat
  • Adrian Hoban – Principal Engineer, SDN/NFV Orchestration @ Intel
OpenStack at High Speed! Performance Analysis and Benchmarking
  • Roger Lopez – Principal Software Engineer @ Red Hat
  • Joe Talerico – Senior Performance Engineer @ Red Hat
Accelerate your cloud network with Open vSwitch (OVS) and the Data Plane Development Kit (DPDK)
  • Adrian Hoban – Principal Engineer, SDN/NFV Orchestration @ Intel
  • Seán Mooney  – Network Software Engineer @ Intel
  • Terry Wilson – Senior Software Engineer @ Red Hat

by Nir Yechiel at July 28, 2015 02:05 PM

Voting Open for OpenStack Summit Tokyo Submissions: OpenStack for the Enterprise

In the lead up to OpenStack Summit Hong Kong, the last OpenStack Summit held in the Asia-Pacific region, Radhesh Balakrishnan – General Manager for OpenStack at Red Hat – defined this site as the place to follow us on our journey taking community projects to enterprise products and solutions.

We are excited to now be preparing to head back to the Asia-Pacific region for OpenStack Summit Tokyo – October 27-30 – to share just how far we have come on that journey, with a host of session proposals focusing on enterprise requirements and the success of OpenStack in this space. The OpenStack Foundation manages voting by allowing its members to choose the topics and presentations they would like to see.

To vote, click on the session title below and you will be directed to the voting page. If you are a member of the OpenStack Foundation, just login. If you are not, you are welcome to join now – it is simple and free.

Vote for your favorites by midnight Pacific Standard Time on July 30th and we will see you in Tokyo!

Is OpenStack ready for the enterprise? Is the enterprise ready for OpenStack?

Can I use OpenStack to build an enterprise cloud?
  • Alessandro Perilli – General Manager, Cloud Management Strategies @ Red Hat
Elephant in the Room: What’s the TCO for an OpenStack cloud?
  • Massimo Ferrari – Director, Cloud Management Strategy @ Red Hat
  • Erich Morisse – Director, Cloud Management Strategy @ Red Hat
The Journey to Enterprise Primetime
  • Arkady Kanevsky – Director of Development @ Dell
  • Das Kamhout – Principal Engineer @ Intel
  • Fabio Di Nitto – Manager, Software Engineering @ Red Hat
  • Nick Barcet – Director of OpenStack Product Management @ Red Hat
Organizing IT to Deliver OpenStack
  • Brent Holden – Chief Cloud Architect @ Red Hat
  • Michael Solberg – Chief Field Architect @ Red Hat
How Customers use OpenStack to deliver Business Applications
  • Matthias Pfützner – Cloud Solution Architect @ Red Hat
Stop thinking traditional infrastructure – Think Cloud! A recipe to build a successful cloud environment
  • Laurent Domb – Cloud Solution Architect @ Red Hat
  • Narendra Narang – Cloud Storage Solution Architect @ Red Hat
Breaking the OpenStack Dream – OpenStack deployments with business goals in mind
  • Laurent Domb – Cloud Solution Architect @ Red Hat
  • Narendra Narang – Cloud Storage Solution Architect @ Red Hat

Enterprise Success Stories

OpenStack for robust and reliable enterprise private cloud: An analysis of current capabilities, gaps, and how they can be addressed.
  • Tushar Katarki – Integration Architect @ Red Hat
  • Rama Nishtala – Architect @ Cisco
  • Nick Gerasimatos – Senior Director of Cloud Services – Engineering @ FICO
  • Das Kamhout – Principal Engineer @ Intel
Verizon’s NFV Learnings
  • Bowen Ross – Global Account Manager @ Red Hat
  • David Harris – Manager, Network Element Evolution Planning @ Verizon
Cloud automation with Red Hat CloudForms: Migrating 1000+ servers from VMWare to OpenStack
  • Lan Chen – Senior Consultant @ Red Hat
  • Bill Helgeson – Principal Domain Architect @ Red Hat
  • Shawn Lower – Enterprise Architect @ Red Hat

Solutions for the Enterprise

RHCI: A comprehensive Solution for Private IaaS Clouds
  • Todd Sanders – Director of Engineering @ Red Hat
  • Jason Rist – Senior Software Engineer @ Red Hat
  • John Matthews – Senior Software Engineer @ Red Hat
  • Tzu-Mainn Chen – Senior Software Engineer @ Red Hat
Cisco UCS Integrated Infrastructure for Red Hat OpenStack
  • Guil Barros – Principal Product Manager, OpenStack @ Red Hat
  • Vish Jakka – Product Manager, UCS Solutions @ Cisco
Cisco UCS & Red Hat OpenStack: Upstream Partnership to Streamline OpenStack
  • Guil Barros – Principal Product Manager, OpenStack @ Red Hat
  • Vish Jakka – Product Manager, UCS Solutions @ Cisco
  • Arek Chylinski – Technologist @ Intel
Deploying and Integrating OpenShift on Dell’s OpenStack Cloud Reference Architecture
  • Judd Maltin – Systems Principal Engineer @ Dell
  • Diane Mueller – Director Community Development, OpenShift @ Red Hat
Scalable and Successful OpenStack Deployments on FlexPod
  • Muhammad Afzal – Architect, Engineering @ Cisco
  • Dave Cain – Reference Architect and Technical Marketing Engineer @ NetApp
Simplifying Openstack in the Enterprise with Cisco and Red Hat
  • Karthik Prabhakar – Global Cloud Technologist @ Red Hat
  • Duane DeCapite – Director of Product Management, OpenStack @ Cisco
It’s a team sport: building a hardened enterprise ecosystem
  • Hugo Rivero – Senior Manager, Ecosystem Technology Certification @ Red Hat
Dude, this isn’t where I parked my instance!?
  • Steve Gordon – Senior Technical Product Manager, OpenStack @ Red Hat
Libguestfs: the ultimate disk-image multi-tool
  • Luigi Toscano – Senior Quality Engineer @ Red Hat
  • Pino Toscano – Software Engineer @ Red Hat
Which Third party OpenStack Solutions should I use in my Cloud?
  • Rohan Kande – Senior Software Engineer @ Red Hat
  • Anshul Behl – Associate Quality Engineer @ Red Hat

Securing OpenStack for the Enterprise

Everything You Need to Know to Secure an OpenStack Cloud (but Were Afraid to Ask)
  • Jonathan Gershater – Senior Principal Product Marketing Manager @ Red Hat
  • Ted Brunell – Senior Solution Architect @ Red Hat
Towards a more Secure OpenStack Cloud
  • Paul Lancaster – Strategic Partner Development Manager @ Red Hat
  • Malini Bhandaru – Architect & Engineering Manager @ Intel
  • Dan Yocum – Senior Operations Manager @ Red Hat
Hands-on lab: configuring Keystone to trust your favorite OpenID Connect Provider.
  • Pedro Navarro Perez – Openstack Specialized Solution Architect @ Red Hat
  • Francesco Vollero – Openstack Specialized Solution Architect @ Red Hat
  • Pablo Sanchez – Openstack Specialized Solution Architect @ Red Hat
Securing OpenStack with Identity Management in Red Hat Enterprise Linux
  • Nathan Kinder – Software Engineering Manager @ Red Hat
Securing your Application Stacks on OpenStack
  • Jonathan Gershater – Senior Principal Product Marketing Manager @ Red Hat
  • Diane Mueller – Director, Community Development for OpenShift @ Red Hat

by Steve Gordon at July 28, 2015 01:48 PM

Mirantis

IBM starts developerWorks Open, 50 open source projects

IBM moved to increase the base of developers interested in using its OpenStack-based Bluemix PaaS this week, launching developerWorks Open. The platform includes informational resources such as blogs, videos, and the opportunity to communicate with specialists. The company also open sourced 50 new projects, most of which are related to either business or cloud programming.

“IBM firmly believes that open source is the foundation of innovative application development in the cloud,” said IBM vice president of Cloud Architecture and Technology Dr. Angel Diaz. “With developerWorks Open, we are open sourcing additional IBM innovations that we feel have the potential to grow the community and ecosystem and eventually become established technologies.”

These projects include IBM Bluemix Mobile Services SDKs, Agentless System Crawler, for monitoring cloud data, and Clampify, for using OpenStack Neutron with a Docker Swarm cluster.

Currently, IBM participates in and contributes to more than 150 open source projects. These include Spark, OpenStack, Cloud Foundry, the Open Container Project, Node.js, CouchDB, Linux, and Eclipse, plus an already established relationship with Apache. Open source projects increase the skills and knowledge base around IBM’s software product set. developerWorks Open is the next step in IBM’s strategy to help businesses create, use, and innovate around cloud computing systems.

IBM has a history of enlarging its audience (and the number of people who can use their product) through education. (DISCLOSURE: I actually started my tech writing career writing tutorials for developerWorks, and am still one of only four Level 2 IBM developerWorks Master Authors in the world.)

Also as part of this initiative, IBM is launching the Academic Initiative for Cloud, collaborating with 200 universities around the globe to train more students on technologies related to IBM Bluemix. The new program will create cloud development curricula using Bluemix, IBM’s platform-as-a-service, in over 200 universities, reaching more than 20,000 students in 36 countries. According to an IBM press release, faculty members will receive 12 months of access to the Bluemix trial for themselves as well as up to six months access for students in their program. Both faculty and student accounts are renewable and do not require a credit card. Additionally, IBM is launching a new Student Developer Community that helps students get started on their journey of cloud education, and provides quick access to learning resources and information on how students can join Bluemix U, where students can showcase their accomplishments and the impact of their real-world projects.

IBM is also working with Girls Who Code, hosting a class of female high-school students in New York City for a seven-week summer immersion program, and is announcing a new collaboration with GSVlabs on the ReBoot Accelerator for Women, a program designed to help women returning to work after a multi-year sabbatical.

IBM is also sponsoring hackathons, where developers ultimately get real-world experience that translates into innovation for the enterprise. As such, IBM has sponsored 25 of the AngelHack hackathons in the Eighth Global Hackathon Series.

The post IBM starts developerWorks Open, 50 open source projects appeared first on Mirantis | The #1 Pure Play OpenStack Company.

by Nick Chase at July 28, 2015 01:20 PM

Intel and Rackspace want ‘Cloud for All’

Intel and Rackspace have announced the ‘Cloud for All’ initiative, which includes the formation of the OpenStack Innovation Center, to be located at Rackspace’s San Antonio headquarters, where the pair plans to recruit and train “hundreds” of open source developers to work on strengthening OpenStack. The plan is to build out an OpenStack developer cloud that consists of two 1,000 node clusters available for use by anyone in the OpenStack community for scaling, performance, and code testing. Rackspace plans to have the cloud available within the next six months.

According to Scott Crenshaw, senior vice president of strategy and product at Rackspace, OpenStack is expected to grow at 40%, “faster than the IaaS market as a whole,” CRN says.

Intel’s Diane Bryant, Data Center Group senior VP and general manager, thinks “This transition to cloud is actually not happening fast enough,” according to DataCenterKnowledge. According to Intel, the problem revolves around complexity, lack of scalability, lack of a container strategy, and enterprise-grade features that are still on the horizon. This partnership intends to try and solve some of those problems.

Intel plans to make a “big investment” in acquisitions and other development to support this goal. How big? Jason Waxman, VP and general manager of Intel’s Cloud Platform Group, would say only that “We’re a big company so ‘big’ for us is big.” It also intends to produce optimized products to try and move the market forward.

The post Intel and Rackspace want ‘Cloud for All’ appeared first on Mirantis | The #1 Pure Play OpenStack Company.

by Nick Chase at July 28, 2015 12:51 PM

OpenStack:Now Podcast, Episode 5 — Adrian Cockroft

[Embedded video: https://www.youtube.com/embed/jOxFFmoRkzw]

Nick Chase and John Jainschigg speak with Battery Ventures’ Adrian Cockroft about containers, Docker, microservices, Google joining the OpenStack Foundation, Kubernetes’ release, and what it’s like to be the guy who checks out new ideas. Adrian will be speaking at OpenStack Silicon Valley August 26-27.

The post OpenStack:Now Podcast, Episode 5 — Adrian Cockroft appeared first on Mirantis | The #1 Pure Play OpenStack Company.

by Nick Chase at July 28, 2015 12:21 PM

Mirantis OpenStack in the real world: Building a scalability test lab

One of OpenStack’s strengths is in its ability to provide scalability, but when we’re talking about running in production, it’s important to be certain as to how it’s going to perform under various circumstances. So in 2012, when Mirantis developers noticed wide variations between test results for Mirantis OpenStack when deployed in virtual environments and actual physical deployments, we knew we had a problem.

To make sure both we and our customers would know how things really worked, we knew we needed physical test environments, and eventually a test scale lab that provides a true reflection of the production environment where you deploy Mirantis OpenStack, ensuring that it really does scale in your enterprise environment — before you go live with it.

The rather lofty mission of building a test scale lab has some pretty humble beginnings, starting with Mirantis engineers using a mish-mash of geographically-dispersed internal hardware test labs to see what live deployments of our own projects and initiatives looked like. As we grew, we had to consolidate and scale testing efforts, and we had to make some big decisions about how to do it.

Many of those decisions are the same ones you’ll have to make when planning your own OpenStack deployments, so we wanted to share with you the challenges we ran into, the decisions we made, and how we made those decisions work to build a test scalability lab that can really tell you how your Mirantis OpenStack deployment will work in the physical environment.

We begin at the beginning – in our IT department.

VMs don’t cut it, and buying servers doesn’t scale

It was 2012, and Mirantis OpenStack developers were getting frustrated with test results using VMs on laptops. They knew that customers would ultimately deploy on real live hardware, and while virtual tests were great for some things, they weren’t providing an accurate picture of what customers really needed to know: would OpenStack scale in the real world?

The Mirantis OpenStack testers relayed the problem to Yury Koldobanov, head of Mirantis IT, who was getting additional feedback from other technical teams that testing products on VMs from a laptop didn’t paint an accurate picture of deployed software performance. “Though the product being tested would be used in a virtual environment, it would be deployed on physical hardware, so we needed to provide a real picture of deployed performance when testing,” Koldobanov explained. As a result, he started getting a lot of internal requests for dedicated servers to test company products and verify code.

Mirantis development teams began creating small physical test environments, ordering servers for individual projects, and configuring them in disparate corporate locations based on specific needs. But buying servers was only a stop-gap solution for the need to create test environments that would need to scale with growth. And with only a basic internet connection and an ordinary electrical network, Koldobanov and his team knew Mirantis didn’t have the infrastructure in place to host a scalability test lab.

So, while creating a configuration for a “starter” lab wouldn’t be especially complicated, IT still faced the danger of losing power for servers, which would then fail immediately. Not to mention that buying servers for testing was getting pricey, and as a software company, Mirantis wanted to maintain our focus on building software products, not shift our expertise to owning and maintaining a data center.

A strategic decision

The mounting bills for test servers kept hitting the desk of Oleg Goldman, Mirantis senior vice president of operations. “I could see the ad hoc purchases weren’t a viable long-term solution,” he said. “Continuing with even a small data center would have been a wildly expensive proposition for us when I considered the many infrastructure requirements for electricity, air conditioning, back-up systems, security, fire mitigation, and support, not to mention staff management,” Goldman continued.

He had to choose between complete control over a very costly internal data center or the uncertainty of a more affordable but risky solution with a host vendor. He made the strategic decision to outsource the hardware server labs, asking Koldobanov to find an external vendor to host the data center and give Mirantis a strategic, economically viable answer for test lab scalability moving forward. “It was up to Yury (Koldobanov) how to execute against the directive,” Goldman said. Koldobanov started working on the request at the end of 2012.

Evaluating scalability test lab options

Orders in hand, Koldobanov was now confronted with the challenges of outsourcing the data center, which included:

  • Difficulty finding a contractor that would lease and maintain servers to meet scalability requirements and wire the lab appropriately, while also supporting customized installations and switch control. Most data centers instead lease only servers in standard configurations with a standard network connection.

  • Geographically distant data centers. However, with Mirantis being located in various global locations, this detail was less important than it might have been to a business with only one location; wherever a data center was located, it was going to be far away from somebody on the team.

  • Slower speed of server administration. Using offsite data centers would prevent Mirantis from personally inspecting the test lab configuration, and require close communication with the data center, whose staff must be technically qualified to answer questions and solve problems.

He got to work.

An architect’s perspective

While investigating test scalability solutions, Koldobanov consulted Mirantis principal engineer Aleksandr Shaposhnikov, a member of the OpenStack Neutron networking project, who had experience building a physical test environment. “I knew he had first-hand knowledge of developing test plans run in physical environments,” said Koldobanov.

While working on Neutron, Shaposhnikov had successfully devised a network testing plan with twenty Neutron nodes and agents. With the efforts of the entire project team, the twenty-node test plan proved stable. Shaposhnikov moved on to create similar testing for OpenStack and its various deployments, and then turned his expertise to evaluating the infrastructure needs for the Mirantis test lab with Koldobanov.

“When I helped to establish testing for OpenStack, I developed linear formulas to calculate loads and basic OpenStack customer scenarios,” Shaposhnikov related. “This was extremely helpful in my work at Mirantis, and I began to evaluate equipment needs and a budget to get Mirantis OpenStack 6.0 running on first a 20-node, then 100-node cluster with a defined load,” he said.

Finding the data center that could… and would

Meanwhile, Koldobanov had evaluated a number of outside companies interested in providing a data center, working with them to build small test labs that could handle the relatively small load generated by Mirantis teams for internal product testing, and placing his first orders in January 2013. Just as important as the different companies’ ability to host the data center, Koldobanov was evaluating their willingness and ability to scale the test lab and give Mirantis enough control over the environment setup, which would require close collaboration and frequent modifications — not something data centers are historically known to provide.

After working on small-scale projects with several candidates, Koldobanov decided to go with Host-Telecom in the Czech Republic for a proof-of-concept test lab in the summer of 2014. A young, progressive company, Host-Telecom accommodated Mirantis’ various infrastructure customizations and completed a 20-node lab in August 2014.

In that hurried timeframe, the emphasis when evaluating the best company to proceed with was on Host-Telecom’s flexibility in collaborating with Mirantis to build the 20-node lab, a factor highly in its favor. The idea of creating a lab to specific standards that could support higher-magnitude testing, such as comparing the performance of one Linux OS against another side by side in the same deployment, wasn’t yet on anyone’s radar; the existing test environment wouldn’t support such a deployment. At that point, the only question on everyone’s mind was whether Host-Telecom could continue to scale at the rate Mirantis demanded, when it demanded.

Getting from 20 to 100 nodes

The data center vendor decision made, Koldobanov now had to execute with Host-Telecom and create a physical lab capable of testing customers’ cloud deployments, with input from Mirantis sales saying that a 100-node deployment would be a good size for testing Mirantis OpenStack cloud deployments for businesses such as small banks.

But even with a 20-node lab established, scaling to a 100-node lab was proving to be an intense challenge. The test lab architecture had to be sound, and implementing the setup and wiring in the data center had to be impeccable to ensure Mirantis was testing the exact environment the customer would be using. Getting the latter done remotely with contractors required time, good communication, and knowledgeable, cooperative staff in the data center. QA’s role was also vitally important as they tested the deployments, ensuring stable behavior under all kinds of conditions. 

It was rough, and Mirantis and Host-Telecom had to machete their way through undefined communication channels and technical issues to set up a lab that provided a true picture of a customer’s Mirantis OpenStack cloud deployment.

Working with the data center and making them like it

Partnering required tight coordination between the geographically dispersed Host-Telecom team and the Mirantis team, who couldn’t get their hands on the hardware. And like Koldobanov and Shaposhnikov’s teams at Mirantis, Pavel Chernobrov, director at Host-Telecom, and his data center team also had to stretch to deliver against requirements for the first expansion to a 100-node lab.

At Mirantis, Koldobanov and Shaposhnikov were urgently pushing the project forward right after establishing the first 20-node lab in August — when much of Europe was on vacation. Sourcing equipment was difficult, and Chernobrov needed a healthy number of qualified engineers to wire the required servers at a greatly increased scope. Chernobrov also had to acquire an even greater reserve of hardware to be able to grow as Mirantis expanded scalability testing.

Even with hardware in hand, the circuitry to stretch to 100 nodes was complex, requiring much more effort to create workable testing schemes. “When we began it was no big deal starting with 20 nodes, but in that initial timeframe, we didn’t create a formal, standards-based testing lab. That was a mistake,” Koldobanov explained.

QA feels the pain and the team responds

No one understood the challenges of expanding the test lab better than Sergey Galkin, Mirantis Senior Quality Assurance Engineer. “When we increased the lab from 20 to 100 nodes, no one’s responsibilities were documented. We had no established tool for communication with our IT specialists, the different teams involved, or with Host-Telecom’s technical people,” he noted. “No one was setting up internal meetings for the different groups within Mirantis, let alone with the Host-Telecom people, and no one really knew who should be in charge of that because we had begun the whole test lab from the grass-roots level.” In addition, Galkin said different team members interpreted the existing sparse project documentation differently. Misunderstanding caused the team to lose time and drained the budget.

With communication issues causing chaos, architect Shaposhnikov saw his carefully constructed test scalability lab plans in serious danger of never being realized, and that was unacceptable. Shaposhnikov took full control of all operations in the data center and gave clear instructions to Host-Telecom’s staff and to the Mirantis IT team. He identified roles and got the whole team using Skype to communicate and troubleshoot, allowing them to move forward more quickly.  

Documentation – It’s not just words on paper

The team got fastidious about documentation, compiling tables with key data such as connections and addresses of all lab equipment. “If you want to build a scalable test lab, the documentation absolutely takes time and effort, but it really simplifies the building process,” Galkin said. “The docs were also a huge help in preparing the lab quickly. With everything outlined, the engineers were able to set up a 100-node test lab in a week, where before getting up and running had taken two and a half weeks.”

Reaching for daylight

In addition to the need for clear documentation, role definition, and defined communication processes, Galkin also identified serious technical challenges that the team faced as they expanded the lab:

  1. The first 100-node lab lacked automation for test deployments, so the team had to invest a lot of time automating every lab process from beginning to end of a test scheme. And as noted, the poor communication between IT and engineering hindered testing at the beginning of the project.

  2. Time to complete testing was long. When something failed, the team had to troubleshoot and then start again from the beginning, so getting results took hours.

Better communication remedied some of the issues, as did test automation efforts. Shaposhnikov also stepped in to create a set of tools to test and verify infrastructure, connectivity, and other areas to ensure proper MOS deployment. He worked with Galkin and the QA team on tools that enabled deployment and configuration of hundreds of servers and dozens of switches, as well as VLANs. In addition, Galkin used custom test tools to contact each server through another interface to investigate issues and file bugs. He directly consulted relevant team members to solve the identified problems.

After an arduous effort to get to a 100-node lab, the team now had the experience, defined roles, documentation, and processes to proceed. But some wrinkles remained.

Expanding to a 200-node lab, 300-node lab, and beyond

After the growing pains of getting to 100 nodes, Koldobanov changed his approach to building the environment when Mirantis increased the scalability test lab to 200 nodes in November 2014. “Right after we ordered resources from Host-Telecom, we discussed logical and physical testing schemes with our lead architect Aleksandr (Shaposhnikov) and the engineers, and we created testing schemes,” Koldobanov says.

Shaposhnikov added, “I worked closely with Yury (Koldobanov) and gave our IT department my requirements so they could produce the initial switch configuration.” The group then began to work with the technical staff at Host-Telecom, ensuring everyone understood the schemes and had an assigned role. Getting the working arrangement straightened out took time and caused some heartburn, but as with the first expansion, everyone worked through it, and getting to 200 nodes was not nearly as labor and time-intensive as going from 20 to 100 nodes.

With the increase to 200 nodes, Galkin saw great improvements in QA testing. At 200 nodes, he was able to divide the scalability test lab into different components and dedicate 100 nodes to automated testing using tools such as Rally.
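
To give a concrete flavor of such automated runs, here is a minimal Rally task of the boot-and-delete variety (a sketch only; the flavor and image names are illustrative assumptions, not details from the Mirantis lab):

cat > boot-and-delete.json << EOF
{
  "NovaServers.boot_and_delete_server": [
    {
      "args": {
        "flavor": {"name": "m1.small"},
        "image": {"name": "TestVM"}
      },
      "runner": {"type": "constant", "times": 100, "concurrency": 10},
      "context": {"users": {"tenants": 2, "users_per_tenant": 2}}
    }
  ]
}
EOF

rally task start boot-and-delete.json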

In addition, Galkin and his team could see differences in how environments were configured; for example, perhaps one product worked well on 100 nodes in Ubuntu, but had problems on 100 nodes in CentOS. The QA team also took advantage of the additional 100 nodes to test the backlog of other release cycle tasks that the smaller scalability test labs hadn’t been able to handle.

Evaluating Mirantis OpenStack performance in enterprise environments — for real

The jump to a 300-node scalability test lab was fast on the heels of the jump to 200 nodes. With processes established over two expansions, increasing to 300 nodes proved to be a much faster setup than going from 20 to 100 nodes, or even from 100 to 200.

With the move to a 300-node lab, Mirantis can now create an exact physical model of an enterprise environment, so users know before deploying that the cloud is going to work. One node can approximate up to 60 VMs, based on RAM size, giving Mirantis the functionality of about 18,000 VMs in the test scale lab.

Engineers were now able to certify that Mirantis OpenStack works on large deployments “out of the box”, and partnering with a knowledgeable, flexible data center such as Host-Telecom was key to making the entire effort work.

In the end…

Under the influence of cloud computing and rapidly changing needs from customers such as Mirantis, hosting providers are changing their sales model. Until recently, the data center provisioned hosting service hardware at the exact level a customer initially requested. If a client needed more dedicated servers, the data center would add only what was necessary for that request, and so on.

Data centers such as Host-Telecom now tend to anticipate customer needs, buying a surplus of servers for a test cluster and selling additional power at customer request. This new process works well for customers such as Mirantis, which can add resources to its scalability test lab quickly by working with a data center whose perspective is that, “We are ready at any time to add resources to your cloud ‘on the fly.’”

Host-Telecom wasn’t the only organization changed as a result of this process. The Mirantis Services organization had always provided architectural and engineering experts to enterprises deploying OpenStack, and they passed as much of that knowledge as possible back to the engineers building the actual product. But there’s just something about doing it yourself; the engineers building Mirantis OpenStack gained a profound understanding of the issues customers are trying to solve with the OpenStack solutions they were building. They needed to know that products work the way they think they will in a physical environment, and not a fabricated virtualized one.

And now, when a bank, telco, or another enterprise says, “But will Mirantis OpenStack work?” they can look at the lab, and how it grew from humble origins as a 20-node internal product test lab to being able to accommodate small-and-medium sized business environments, and finally the enterprise, and say, “Absolutely.”

The post Mirantis OpenStack in the real world: Building a scalability test lab appeared first on Mirantis | The #1 Pure Play OpenStack Company.

by Ilya Stechkin at July 28, 2015 11:57 AM

Rackspace Developer Blog

Install OpenStack from source

Installing OpenStack has always been challenging. Due to the complexity and variety of design choices involved in setting up OpenStack, automated installers are rare. For those who need a small but realistic setup, to be used either for development or learning, a manual installation using the desired distribution's packages has been the typical solution. Distribution packages simplify the process; however, they come with compromises.

First, packages are only updated monthly, so the wait for patches is slow. Additionally, the QA process for the various distributions has so far been a little spotty, so if a package is broken when the update occurs, it may be a month until the problem is fixed. Second, applying a much-needed patch from source may break a package installation, or the patch may be overwritten when a package update occurs.

For those testing or developing OpenStack, using a package-based install would be impossible, since the packagers place files in different locations, making it impossible to push source files to the OpenStack github-based repos. For this reason, most developers use devstack as their install vehicle. Devstack is a great tool for getting a simple OpenStack environment running, but it can be cranky, and multi-machine OpenStack environments are difficult to get running with it.

The remaining choice is to install OpenStack from source. For most folks, source installs are undesirable and messy, but since OpenStack is written entirely in python, source installs eliminate the temperamental and slow compilation step. Additionally, since each service has its own python requirements file, the python installer uses pip to install any python dependencies. This greatly simplifies the installation process and results in an install that can be used for either development or production. It is worth noting that some newer OpenStack installer projects, such as stackforge's os-ansible-deployment, have moved to source installs. Currently it is located at this stackforge GitHub repository. The ansible installer is now part of the OpenStack big tent effort and will shortly be located at a repository within the openstack organization.
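
For example, each project carries a requirements.txt that pip can consume directly. One common pattern for making the dependency step explicit looks like the following (a sketch; the walk-through later in this article folds the dependency resolution into the setup.py step):

git clone https://github.com/openstack/keystone.git -b stable/kilo
cd keystone
pip install -r requirements.txt    # install the service's python dependencies
python setup.py install            # install the service itself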

This discussion demonstrates how to install OpenStack from source onto three machines using Ubuntu 14.04 LTS as the base OS. The three nodes consist of compute, network, and controller nodes, using separate control and data planes, an access/API network, and one external network connection. Each node needs at least 3 NICs (the network node needs 4). Users are able to create simple legacy routers and an external provider router. The following diagram gives a physical representation of the final OpenStack system.

OpenStack Diagram
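
As a rough sketch of how a node's NICs might be laid out on Ubuntu 14.04 (the interface names and addresses below are illustrative assumptions matching the controller variables used later, not prescriptions):

# /etc/network/interfaces (fragment, controller node)
auto eth0
iface eth0 inet static
    address 10.0.1.4      # control/management plane (MY_PRIVATE_IP)
    netmask 255.255.255.0

auto eth1
iface eth1 inet static
    address 10.0.0.4      # access/API network (MY_PUBLIC_IP)
    netmask 255.255.255.0

auto eth2
iface eth2 inet manual    # data plane, left unnumbered for the bridges/agents
    up ip link set eth2 up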

This install uses KVM, running through libvirt, for virtualization, but it can easily be modified to use QEMU for those whose hardware doesn't fully support virtualization. Each compute node needs to be tested to verify that its CPU(s) can support KVM by running:

egrep -c '(vmx|svm)' /proc/cpuinfo

The output should be 1 or higher to be able to use KVM for virtualization. An output of 0 means you have to use QEMU for virtualization.
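
For an extra sanity check beyond the flag count, Ubuntu's cpu-checker package provides kvm-ok, and lsmod shows whether the kvm modules are loaded (a quick sketch; package names assume Ubuntu 14.04):

apt-get install -y cpu-checker
kvm-ok                # reports "KVM acceleration can be used" on capable hardware
lsmod | grep kvm      # expect kvm plus kvm_intel or kvm_amd once the modules are loaded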

The basic source install process for each node is straightforward and similar for each service that runs on a node. To install an OpenStack service we need to complete the following for each:

  1. For proper security each service will run as a separate non-root user

    1. create users for each service
    2. create a home directory for the service and the other needed directories under /var/log, /var/lib, and /etc
    3. set proper ownership of the directories and files
  2. Clone the OpenStack service's repository

    1. copy any provided sample configuration files to the etc directory
    2. edit the configuration files setting any needed parameters
    3. install the service using the provided python install script
  3. Configure the service and create upstart scripts to start/stop the service

  4. Lastly start the service.

Let's get started installing keystone on the controller node.

Begin by updating all three of your nodes. On all nodes run:

apt-get update; apt-get dist-upgrade -y;reboot

Set some initial shell variables, which are used in the installation to simplify the install process. MY_IP and MY_PRIVATE_IP should be the IP of the NIC on the controller to which the management plane is assigned. Again, perform this on all three nodes.

cat >> .bashrc << EOF
MY_IP=10.0.1.4
MY_PRIVATE_IP=10.0.1.4
MY_PUBLIC_IP=10.0.0.4
EOF

source .bashrc

Install RabbitMQ and set rabbit to only listen on the control plane interface:

apt-get install -y rabbitmq-server

cat >> /etc/rabbitmq/rabbitmq-env.conf <<EOF
RABBITMQ_NODE_IP_ADDRESS=$MY_PRIVATE_IP
EOF
chmod 644 /etc/rabbitmq/rabbitmq-env.conf

service rabbitmq-server restart

Install mysql and set it to listen on the control plane interface (set the mysql root password to mysql for this example; use a more complex password in production):

apt-get install -y mysql-server

sed -i "s/127.0.0.1/$MY_PRIVATE_IP\nskip-name-resolve\ncharacter-set-server = utf8\ncollation-server = utf8_general_ci\ninit-connect = 'SET NAMES utf8'/g" /etc/mysql/my.cnf

restart mysql

Use mysql_secure_installation to remove the anonymous user:

mysql_secure_installation

Create the database for keystone and set access permissions for the keystone user:

mysql  -u root -pmysql -e "create database keystone;"
mysql  -u root -pmysql -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'keystone';"
mysql  -u root -pmysql -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'keystone';"

Install additional needed apt dependencies and needed pip packages:

apt-get install -y python-dev libmysqlclient-dev libffi-dev libssl-dev
pip install python-glanceclient python-keystoneclient python-openstackclient
pip install repoze.lru pbr mysql-python

Create the various users and directories needed for the OpenStack services. The following script creates these for each of the services on the controller node: (a similar script is available for the network and compute nodes with fewer services)

for SERVICE in keystone glance neutron nova cinder
do
useradd --home-dir "/var/lib/$SERVICE" \
    --create-home \
    --system \
    --shell /bin/false \
    $SERVICE

#Create essential dirs

mkdir -p /var/log/$SERVICE
mkdir -p /etc/$SERVICE

#Set ownership of the dirs

chown -R $SERVICE:$SERVICE /var/log/$SERVICE
chown -R $SERVICE:$SERVICE /var/lib/$SERVICE
chown $SERVICE:$SERVICE /etc/$SERVICE

#Some neutron only dirs

if [ "$SERVICE" == 'neutron' ]
  then
    mkdir -p /etc/neutron/plugins/ml2
    mkdir -p /etc/neutron/rootwrap.d
    chown -R neutron:neutron /etc/neutron/plugins
fi
done

Clone the keystone github repo, move into the newly created keystone directory, and use the python install process to install keystone: (this can be changed to point to any release by changing kilo in the next line, or to install from trunk by removing -b stable/kilo)

git clone https://github.com/openstack/keystone.git -b stable/kilo
cp -R keystone/etc/* /etc/keystone/
cd keystone
python setup.py install

Move the sample keystone conf file provided by the keystone project into place, and set the database and token info within the file. Note: for a production system, set a more complex token than what is used in this example.

mv /etc/keystone/keystone.conf.sample /etc/keystone/keystone.conf
# Point keystone at its mysql database
sed -i "s|database]|database]\nconnection = mysql://keystone:keystone@$MY_IP/keystone|g" /etc/keystone/keystone.conf
# Set the admin bootstrap token (use something stronger in production)
sed -i 's/#admin_token = ADMIN/admin_token = SuperSecreteKeystoneToken/g' /etc/keystone/keystone.conf
cd ~

Use the keystone tools to create the tables within the keystone database:

keystone-manage db_sync

Set keystone for proper log rotation:

cat >> /etc/logrotate.d/keystone << EOF
/var/log/keystone/*.log {
    daily
    missingok
    rotate 7
    compress
    notifempty
    nocreate
}
EOF

OpenStack does not provide any startup scripts for the services. The script needed depends on the distribution and on whether it uses upstart or systemd. For Ubuntu 14.04, an upstart script is needed. The following upstart script starts the keystone service (this is a slightly modified version of the one from the Ubuntu package install):

Keystone Upstart script

cat > /etc/init/keystone.conf << EOF
description "Keystone API server"
author "Soren Hansen <soren@linux2go.dk>"

start on runlevel [2345]
stop on runlevel [!2345]

respawn

exec start-stop-daemon --start --chuid keystone --chdir /var/lib/keystone --name keystone --exec /usr/local/bin/keystone-all -- --config-file=/etc/keystone/keystone.conf  --log-file=/var/log/keystone/keystone.log
EOF

start keystone
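
For distributions that use systemd rather than upstart, a roughly equivalent unit file would look like the following (an untested sketch, not needed on Ubuntu 14.04; paths assume the same source-install layout used above):

cat > /etc/systemd/system/keystone.service << EOF
[Unit]
Description=Keystone API server
After=network.target mysql.service

[Service]
User=keystone
WorkingDirectory=/var/lib/keystone
ExecStart=/usr/local/bin/keystone-all --config-file=/etc/keystone/keystone.conf --log-file=/var/log/keystone/keystone.log
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable keystone
systemctl start keystone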

Verify that keystone started:

ps aux|grep keystone

If for some reason keystone doesn't start, the following can be used to try to start keystone for troubleshooting: (only run this if the previous step doesn't show a keystone process running)

sudo -u keystone /usr/local/bin/keystone-all --config-file=/etc/keystone/keystone.conf  --log-file=/var/log/keystone/keystone.log

Create the credentials files used by the openstack commands to authenticate to keystone:

cat >> openrc_admin << EOF
export OS_SERVICE_TOKEN=SuperSecreteKeystoneToken
export OS_SERVICE_ENDPOINT=http://$MY_IP:35357/v2.0
EOF

cat >> openrc << EOF
export OS_USERNAME=admin
export OS_PASSWORD=secrete
export OS_TENANT_NAME=demo
export OS_AUTH_URL=http://$MY_IP:35357/v2.0
export OS_REGION_NAME=RegionOne
EOF

Set up keystone's PKI infrastructure:

keystone-manage pki_setup --keystone-user keystone --keystone-group keystone

Load the shell variables needed to access keystone and verify that it is responding:

source openrc_admin
keystone tenant-list

Initialize some shell variables used by the sample_data script that populates keystone with its initial data:

export CONTROLLER_PUBLIC_ADDRESS=$MY_IP
export CONTROLLER_ADMIN_ADDRESS=$MY_IP
export CONTROLLER_INTERNAL_ADDRESS=$MY_IP

Use the supplied sample data script to populate keystone with initial service information and endpoints:

./keystone/tools/sample_data.sh

Verify that there is valid data in keystone:

keystone tenant-list
keystone user-list
keystone service-list
keystone endpoint-list

Congratulations, keystone should now be installed and running. In the next article of this series, we will install glance and neutron on the controller node.

July 28, 2015 10:26 AM

Mirantis

Google donates Kubernetes 1.0 to new foundation

Another week, another foundation. At OSCON this week, Google announced not just that its container management project, Kubernetes, had reached 1.0 status, but also that Google was donating it to the newly formed Cloud Native Computing Foundation. The CNCF’s goal is to provide interoperability among container management projects such as Kubernetes, Mesos, and Docker Swarm.

Kubernetes has had more than 14,000 commits from more than 400 contributors, and includes features such as DNS and load balancing, grouping of containers into pods for easier management, and scaling capabilities. In fact, the announcement is a bit of an understatement. Pre-release versions of the software were already in wide usage, and at the time of the announcement, version 1.0 had been available for more than a week; 1.0.1 is already available on Github. Last week’s announcement that Google had signed on to OpenStack showed that Kubernetes was already in wide usage there as the underpinnings for OpenStack Magnum and a large part of the OpenStack Application Catalog.

So perhaps the bigger news is the formation of the Cloud Native Computing Foundation. The CNCF will be a collaborative project under the Linux Foundation and starts with 22 companies as founding members: AT&T, Box, Cisco, Cloud Foundry, CoreOS, Cycle Computing, Docker, eBay, Goldman Sachs, Google, Huawei, IBM, Intel, Joyent, Kismatic, Mesosphere, Red Hat, Switch, Twitter, Univa, VMware, and Weaveworks. Other organizations will reportedly be added in the coming weeks. “This new organization aims to advance the state-of-the-art for building cloud native applications and services,” said the Linux Foundation in a press release, “allowing developers to take full advantage of existing and to-be-developed open source technologies. Cloud native refers to applications or services that are container-packaged, dynamically scheduled and microservices-oriented.”

The scope here is intentionally broad, in contrast with the narrow scope of the Open Container Initiative (apparently renamed from the Open Container Project when someone pointed out that “OCP” already means “Open Compute Project”), which is meant simply to define a common standard for container runtimes.

Google has seeded the CNCF with Kubernetes, which will now be under the foundation’s control. Apache Mesos has also donated code that integrates Mesos with Kubernetes. That’s not to say that the intention of the foundation is to standardize container management around Kubernetes. After all, there are other tools out there, such as Mesos and Docker Swarm, and some of them could eventually fall under the purview of the CNCF. The stated intent of the foundation is to provide integration between these tools, and “to create these reference stacks that orchestration engines can use to interoperate,” according to Patrick Chanezon, a member of the technical staff at Docker and a force behind its participation in the CNCF. For example, he told SDXCentral, Docker Swarm might use Kubernetes as a scheduler.

Kubernetes is already being woven into the fabric of OpenStack. Mirantis helped to incorporate it into OpenStack Murano for deploying container-based applications, and Red Hat created the heat-kubernetes orchestration templates, which are also used by the OpenStack Magnum container orchestration project. IBM’s Angel Diaz and Jesse Proudman suggested that the CNCF and OpenStack “are two peas in the same datacenter pod,” providing a way for OpenStack to more easily integrate with container management technology.

All of that sounds cosy, doesn’t it? Well, maybe not so much.

On the one hand, there’s talk of multiple options. On the other hand, the CNCF is clearly focused on Kubernetes as a common architecture. Bryan Cantrill, CTO of cloud provider Joyent and a member of the CNCF technical committee, made a point of saying that the CNCF should tell users what to use, in contrast to OpenStack. “We shouldn’t be afraid to be opinionated,” he told NetworkWorld, which pointed out that he was speaking for himself, and not for the CNCF. “One of the biggest problems with OpenStack is that there is so little that companies agree on that there end up being so many decisions left up to the end user,” he said. He even went on to call the CNCF the “anti-OpenStack”.

But it may not be that easy. Even before the announcement, there was reportedly trouble, with the first version of the CNCF announcement press release distributed to journalists mentioning Kubernetes, but not Docker, Cloud Foundry, or Red Hat (which is second only to Google in Kubernetes contributions). The final release includes all three — but not Kubernetes, which is relegated to being mentioned in quotes from the participants. Although sources reportedly claim that Docker (which has its own orchestration plans) insisted on the change, the company denies it, saying that participation was contingent on a letter of intent, and was simply a matter of timing.

And then there’s the more practical issue of agreement. While Cantrill might not be comfortable with OpenStack’s view that users can make up their own mind, it does, at least, prevent one company from being able to run the whole technological show. According to ZDNet, “While no one would go on record, a source close to Docker said, ‘First, we had to compromise on the container format, and now we had to trade down on container management.’” Getting all of these companies to agree on a single “opinionated” standard might be difficult.

Complicating matters is the fact that the two biggest cloud vendors in the industry, Amazon Web Services and Microsoft, have so far declined to join the CNCF, despite their membership in the OCI.

Still, everyone does have a stake in this working out, as fragmentation in the container world will only hold back adoption as companies worry about expending effort that eventually gets locked out of the ecosystem. CoreOS is sticking close, releasing Tectonic, its commercial distro of Kubernetes. Mesos has contributed code. Even Docker, which has its own plans, participated (though it made sure to use the opportunity to remind everyone of the OCI).

Although many are quick to criticize OpenStack and its massive (and growing) ecosystem, nobody has yet found a more successful alternative. “Ideally, this new foundation could bring some agreement to an area where many insiders have started to choose sides and thus permit fragmentation,” VentureBeat points out. “Should clear standards emerge, larger companies might feel more inclined to give containers a try, if they haven’t already. Then again, as more vendors get involved, container-based computing could start to lose the excitement that has built up around it in the past couple of years.”

The post Google donates Kubernetes 1.0 to new foundation appeared first on Mirantis | The #1 Pure Play OpenStack Company.

by Nick Chase at July 28, 2015 05:41 AM

Cloudify Engineering

Get Your Cloudify Votes in For OpenStack Tokyo

As contributing members of OpenStack, we are proud to be part of this amazing community that is at the forefront...

July 28, 2015 12:00 AM

July 27, 2015

OpenStack Reactions

starting to write a long spec for an OpenStack project

[animated gif]

(try watching this gif 100 times in a loop, in series of 20, and maybe you’ll get your spec merged in less than 123 days)

by chmouel at July 27, 2015 07:28 PM

Gorka Eguileor

A Cinder Road to Active/Active HA

We all want to see OpenStack’s Block Storage Service operating in High Availability with Active/Active node configurations, and we are all keen to contribute to make it happen, but what does it take to get there? Wasn’t Cinder already Active-Active? Maybe you’ve been told that Cinder could be configured as Active/Active, or you’ve even seen […]

by blog at July 27, 2015 07:28 PM

OpenStack Superuser

Superuser Awards Nominations Now Open

Nominations for the Tokyo Summit Superuser Awards are open, and will be accepted through August 31. The winner will be chosen by the community at large and announced onstage at the summit in October.

The Superuser awards recognize a team that uses OpenStack to meaningfully improve its business and differentiate in a competitive industry, while also contributing back to the community. If you fit the bill, or know a team that does, we encourage you to submit a nomination here.

Learn more about the Superuser Awards.

Following the successful launch of OpenStack’s inaugural Superuser Awards at the Paris Summit in November, the community has continued to present the award at every summit to users that show how OpenStack makes a difference and provides strategic value in their organizations. At the Paris Summit, CERN was chosen as the winner from an impressive roster of finalists, and Comcast took the award home in Vancouver.

Submissions are being collected until August 31, and once the finalists are selected by the Superuser Editorial Advisory Board, the OpenStack community will vote among the finalists in order to determine the overall winner. Polling will begin in September.

When evaluating winners for the Superuser Award, judges and community members will take into account the unique nature of use case(s), as well as integrations and applications of OpenStack performed by a particular team.

Additional selection criteria include how the workload has transformed the business, including quantitative and qualitative results of performance, as well as community impact in terms of code contributions, feedback, knowledge sharing, etc.

Winners will take the stage at the OpenStack Summit in Tokyo, Japan in October. Submissions are open now until August 31, 2015. We invite you to submit your organization or nominate a Superuser here.

For more information about the Superuser Awards, please visit http://superuser.openstack.org/awards.

by Superuser at July 27, 2015 06:51 PM

Kenneth Hui

That Time Again: Voting For OpenStack (Tokyo) Summit Talks

vote

It seems like only yesterday that we were in Vancouver for the Spring OpenStack Summit, and it’s already time to start turning our attention to Tokyo for the Fall summit. Tokyo will be my first Summit representing Platform9 as their Director of Technical Marketing and Partner Alliances. An important aspect of any Summit is the breakout sessions, where new ideas can be heard and lessons learned can be shared within the OpenStack community.

Decisions about which talks are chosen among the many that are submitted are made by ad hoc track chair committees. Each committee is comprised of members of the community who meet through the month of August to decide which talks will be part of the Summit agenda. A key factor in the decision-making process for the committees is the votes received for each submitted talk. As a former track chair, I can confirm that the voting is an important component, though not the only one, in the decision process. Sometimes the vote count serves as a filter to reduce the number of talks to be considered. In some cases, vote counts serve as a tie-breaker between two talks. In other cases, vote count is a weighted factor among others in a committee’s decision-making matrix. In all cases, voting is important as a tool to help the various track chair committees. For that reason, all OpenStack Foundation members are encouraged to vote.

For those unfamiliar with the process, community voting began on July 23rd and will run for one week, until 11:59 PM PDT on July 30th. Please note that you do need to be an OpenStack Foundation member in order to vote; if you are not currently a member, you can easily register for membership via the OpenStack website.

Below are the talks that I’ve submitted for the Summit, presented for your consideration. I’ve provided a link to vote for each talk along with a brief description; you can read the full abstracts by following the provided links. Besides talks I’ve submitted in conjunction with my Platform9 colleagues, I’ve also submitted a number of talks with others in the OpenStack community, and I’ve included Platform9-proposed talks that I am not a speaker for but believe would be of great value to our community.

Title: Ambassador community report
Description: OpenStack Ambassadors connect the user groups to the Foundation. They help initialize the groups and guide them as they grow. Ambassadors can also give feedback about the OpenStack community as a whole. Meet the ambassadors in this session, where they will present the improvements of the last half year and share their impressions of and experience with the community.
Speakers:  OpenStack Ambassadors Team

——————————————————–

Title: Ask the Experts: Are You Ready For Containers?
Description: This panel will discuss how enterprises are leveraging OpenStack to adopt container technologies. As part of this panel discussion, we will also be addressing questions posed by attendees about the adoption (and challenges) of containers in the enterprise.
Speakers:  Boris Renski (Mirantis), Caroline McCrory (Cloudsoft), Greg Knieriemen (HDS), Jan Mark Holzer (Red Hat), Jesse Proudman (Blue Box), Kenneth Hui (Platform9), Manju Ramanathpura (HDS)

——————————————————–

Title: Best Use Cases For OpenStack In The Enterprise
Description: OpenStack can address a variety of enterprise use cases. Identifying up front the most appropriate workloads that can benefit from the power and flexibility of OpenStack in your environment can materially affect the success of your deployment. Join a panel of industry veterans for a discussion about the best use cases for OpenStack in the enterprise.
Speakers:  Jeramiah Dooley (SolidFire), Kenneth Hui (Platform9), Scott Sanchez (Cisco)

——————————————————–

Title: Enabling Persistence For Docker With OpenStack Cinder And Flocker
Description: Persistence has been a popular ask in the Docker community, and thanks to Flocker it is now possible to meet this need using any OpenStack Cinder storage provider! Join us as we give an overview of what Flocker is and how it connects Docker containers to storage providers that support the OpenStack Cinder API. We will show how these capabilities can be applied to manage stateful microservices in a way that enables both container and data mobility.
Speakers:  John Griffith (SolidFire), Kenneth Hui (Platform9), Shamail Tahir (EMC)

——————————————————–

Title: From MicroVMs To Microservices: Incremental Approach To Migrating From VMs to Containers
Description:  This presentation describes Platform9’s journey to move from a full VM-based to a container-based architecture for our OpenStack SaaS solution. Along the way, we’ll talk about lessons learned from using OpenStack to manage our “MicroVM” based OpenStack architecture. This will include discussions on the challenges and benefits of using different container management solutions in OpenStack such as Nova-Docker and Magnum.
Speakers:  Bich Le (Platform9), Roopak Parikh (Platform9), Sachin Manpathak (Platform9)

——————————————————–

Title: Getting Started With OpenStack
Description: OpenStack continues to grow exponentially as the de facto standard for open source Cloud platforms. But how can someone quickly get started with learning this exciting new technology? This workshop will walk participants through an overview of the OpenStack components and offer practical suggestions and resources for learning OpenStack. To demonstrate one way to get started, we will assist workshop attendees to set up a multi-node OpenStack cloud, on their laptops, using the RDO distribution.
Speakers:  Dan Radez (Red Hat), Kenneth Hui (Platform9)

——————————————————–

Title: Is OpenStack’s Future Still In The Cloud?
Description: OpenStack is more and more the darling of the enterprise world. Over the past few years that’s changed the development priorities of the community. With the increasing adoption of OpenStack by enterprise customers, coupled with the consolidation of OpenStack startups, OpenStack is a changing landscape. In this free-flowing discussion, three very experienced stackers will discuss the future of OpenStack and what it means for deployers and users.
Speakers: Jesse Proudman (Blue Box), Kenneth Hui (Platform9), Matthew Joyce (Formerly NASA)

——————————————————–

Title: Lessons From The Trenches: Monitoring Your OpenStack Cloud
Description: One of the most important tasks that a cloud operator needs to focus on, after deployment, is effective monitoring. Join Blue Box and Platform9 to hear from two leading managed OpenStack providers who are specialists in operating OpenStack clouds. Find out how we monitor our customers’ clouds and the lessons we learned from being in the trenches of deploying OpenStack in production.
Speakers:  Harrison Page (Platform9), Kenneth Hui (Platform9), Tyler Britten (Blue Box)

——————————————————–

Title: Making OpenStack Work In An Existing Environment – Challenges And Solutions
Description: One of the biggest barriers for enterprises interested in deploying OpenStack today is the inability to leverage existing assets, including infrastructure, workloads, and their inter-relationships. However, OpenStack can be taught to learn, and leverage, existing enterprise infrastructure, and incorporate it seamlessly into a live private cloud. This enables users to get up and running with a fully functional private cloud, already plumbed with their existing assets. In this talk, we will describe how we, at Platform9, moved our existing dev-test workloads and infrastructure to an OpenStack-based private cloud, running on vSphere, using a set of such changes to Nova and Glance.
Speakers:  Amrish Kapoor (Platform9), Kenneth Hui (Platform9), Pushkar Acharya (Platform9), Sachin Manpathak (Platform9)

——————————————————–

Title: Nested Virtualization On KVM With OpenStack
Description: QA and testing require running workloads repeatedly. Using nested virtualization is one way of solving the problem of destroying and recreating environments quickly and efficiently. We will talk about challenges and lessons learned at Platform9 using OpenStack to create and run test workloads on nested virtual machines. This includes solving networking issues specific to nested virtualization, such as libvirt network filters (nwfilters) and Linux bridging modes.
Speakers:  Bich Le (Platform9), Arun Sriraman (Platform9)

——————————————————–

Title: OpenStack & Ansible: Automating The Installation and Upgrade Of Your Cloud
Description: Platform9 uses an automation tool called Ansible to manage OpenStack clouds on behalf of our customers. Learn how you can use Ansible to install, configure and upgrade OpenStack in a production environment. The discussion will include talking about how we can make OpenStack deployments and upgrades painless and simple.
Speakers:  Harrison Page (Platform9), Paavan Shanbhag (Platform9), Roopak Parikh (Platform9)

——————————————————–

Title: Persisting Data In Your Cloud With Cinder Block Storage
Description: The Block Storage project (Cinder) is often overlooked but can be critical in an OpenStack deployment. In this presentation, we will walk through not merely the basics of Cinder, but show how Cinder is being deployed today and provide some use case examples and demos!
Speakers:  Bich Le (Platform9), John Griffith (SolidFire), Kenneth Hui (Platform9)

——————————————————–

Title: Simplifying OpenStack: The True Value Of The Easy Button
Description: What value does managed OpenStack offer versus a supported OpenStack distribution or simply using trunk? One of the primary objections to deploying OpenStack at scale is that there is not enough trained OpenStack talent available in the market, and that the cost to hire/train offsets any gains in efficiency or TCO OpenStack has to offer. Join a panel of industry veterans as they discuss the relative strengths and weaknesses of distributions versus managed OpenStack, and the questions you should be asking yourself as you plan your OpenStack deployment.
Speakers: Jeramiah Dooley (SolidFire), Kenneth Hui (Platform9), Scott Sanchez (Cisco)

——————————————————–

Title: Up Your Availability Game With Cinder Data Services!
Description: Cinder has been evolving with each release (thanks to a very active developer community). In this session we will cover the options that enable greater availability for workloads by leveraging Cinder cloning options, volume replication, and the newly added multi-attach functionality.  We will cover a brief overview and conduct a demo for each feature based on a popular use-case.
Speakers: John Griffith (SolidFire), Kenneth Hui (Platform9), Shamail Tahir (EMC)

——————————————————–

Title: You’ve Deployed OpenStack. Now What? Day Two Operational Tips From Managed OpenStack Providers
Description: Much of the attention and information on OpenStack is around design and deployment. While that’s obviously critical to get started, what happens after you go live? In this session, we’ll help you gain understanding into the challenges, pitfalls, strategies, and best practices that successful OpenStack operators navigate every day.
Speakers: Harrison Page (Platform9), Kenneth Hui (Platform9), Tyler Britten (Blue Box)

I believe this is a very strong slate of talks and workshops. I look forward to sharing with everyone while we are in Tokyo.


Filed under: Cloud, Cloud Computing, Community, Containers, OpenStack, Private Cloud, Virtualization, VMware Tagged: Cloud computing, OpenStack, Platform9, Private Cloud, VMware, VMware vSphere, vSphere

by kenhui at July 27, 2015 04:00 PM

Opensource.com

A new center for innovation, celebrating five years, and more OpenStack news

The Opensource.com weekly look at what is happening in the OpenStack community and the open source cloud at large.

by Jason Baker at July 27, 2015 07:00 AM

July 26, 2015

Arthur Berezin

It’s All About OpenStack Apps This Time – Call For Voters – Tokyo Summit

Wow, it’s that time of the year already! The summit call for proposals came around really quickly; OpenStack Tokyo summit preparations are approaching quicker than ever before!

/LE_ME_EXCITED

This time I’ve decided to focus my talk proposals on the applications that run on top of OpenStack: how to choose the most suitable infrastructure for an application so that it makes the best use of the underlying infrastructure for its needs, and how to make the best use of OpenStack when designing applications that are cloud native and cloud aware, to achieve not only agility but also high availability.

Make sure to place your votes if you are interested in any of these topics. You can find the talk proposal abstracts below.

Abstracts

OpenStack Infrastructure for Any Workload, Anywhere – Containers, VMs and Bare Metal, Private and Public

https://www.openstack.org/summit/tokyo-2015/vote-for-speakers/presentation/6415

ABSTRACT

OpenStack has grown to become an industry-standard agile infrastructure technology, gluing together technologies, methodologies, vendors, and different approaches, and bringing almost all of the infrastructure industry together to collaborate. When OpenStack started, back in 2010, it was designed to manage virtual machines. A lot has changed since then: containers came to the masses with Docker becoming a popular technology, and even running applications on bare-metal physical machines is now cool again, with use cases such as Hadoop.

In this talk Arthur Berezin will discuss the following:

  • Describe how to run and manage virtual machine workloads using Nova
  • Running applications in Docker Containers with Kubernetes with project Magnum
  • Running Applications on bare-metal using Ironic, with Sahara and TripleO use cases
  • Touch point on OpenStack on premise, managed OpenStack and Public offerings

Best Practices For Cloud Ready OpenStack Application Workloads

https://www.openstack.org/summit/tokyo-2015/vote-for-speakers/presentation/6480

ABSTRACT

Imagine a world where your application workloads are fully autonomous, aware of all the sophisticated infrastructure they run on, monitoring and controlling the environment in a fully automated fashion, taking advantage of all the benefits that Infrastructure-as-a-Service, Software-Defined-Networking (SDN), and Software-Defined-Storage have to offer.

We all wish to live in this world of unicorns and rainbows, but in reality building cloud aware applications is challenging, let alone leveraging the agile characteristics within existing applications.

In this talk Arthur Berezin will discuss the following:

  • The advantages and challenges in building cloud aware applications
  • Best practices for writing cloud aware applications
  • Best practices for turning existing applications to leverage the agile infrastructure it runs on

 

Deep Dive Into Highly Available OpenStack Architecture, from Infrastructure to Application Workloads

https://www.openstack.org/summit/tokyo-2015/vote-for-speakers/presentation/6406

ABSTRACT

Many organizations choose OpenStack for its distributed architecture and its ability to deliver an Infrastructure-as-a-Service platform for mission critical applications. In such environments, the cloud control plane ought to provide maximum possible uptime, and it is crucial to build an OpenStack infrastructure deployment that guarantees applications’ uptime.

In this talk Arthur Berezin will discuss the following:

– Overview of a highly available OpenStack architecture using Pacemaker and HAProxy.

– In-depth overview of each of OpenStack’s core services, their behaviour, and their readiness to run in a highly available configuration, providing infrastructure and application workload uptime and availability.

– Review the new features added to the OpenStack Kilo and Liberty releases to better support a complete active/active configuration.

The post It’s All About OpenStack Apps This Time – Call For Voters – Tokyo Summit appeared first on Berezin's Virtual Clouds.

by Arthur Berezin at July 26, 2015 05:54 PM

Lars Kellogg-Stedman

In which we are amazed it doesn't all fall apart

So, the Kilo release notes say:

nova-manage migrate-flavor-data

But nova-manage says:

nova-manage db migrate_flavor_data

But that says:

Missing arguments: max_number

And the help says:

usage: nova-manage db migrate_flavor_data [-h]
  [--max-number <number>]

Which indicates that --max-number is optional, but whatever, so you try:

nova-manage db migrate_flavor_data --max-number 100

And that says:

Missing arguments: max_number

So just for kicks you try:

nova-manage db migrate_flavor_data --max_number 100

And that says:

nova-manage: error: unrecognized arguments: --max_number

So finally you try:

nova-manage db migrate_flavor_data 100

And holy poorly implemented client, Batman, it works.

by Lars Kellogg-Stedman at July 26, 2015 04:00 AM

July 24, 2015

Rackspace Developer Blog

This Week in Information Security (Week of July 20th)

Hey, folks! Lots of scary vulnerabilities today affecting Windows, Internet Explorer, OS X, OpenSSH, and WordPress core. Unfortunately, several of them are still unpatched at the time of writing this. We also have some research into remotely hacking cars to do an attacker's bidding over their cellular network, comparisons between security experts and non-experts in security habits and, finally, some research looking at the huge amount of data exposed to the public Internet by outdated MongoDB nodes that don't use authentication.

As always, you can find me on Twitter @ccneill if you have any thoughts on this post. Hope you enjoy it. Stay safe!

News / Opinions

  • Oops! Adult Dating Website Ashley Madison Hacked; 37 Million Accounts Affected - Ashley Madison, an online dating site that caters to married people looking for extramarital affairs, has been hacked, potentially affecting roughly 37 million people. A group called "The Impact Team" apparently took issue with Ashley Madison's policy of charging users to delete their personal data from the company's servers, and the hackers claim that the deletion itself was not effective: they say Ashley Madison retained the payment details used to pay for the data deletion service even after the deletion had occurred. The hackers are threatening to release Ashley Madison's users' data: account information, names, addresses, and more.

Security Research

  • Hackers remotely kill a Jeep on the highway - with me in it - Security researchers Charlie Miller and Chris Valasek have demonstrated, in real-world conditions, that they are able to take over many functions of a Jeep Cherokee while it is operating normally. The author of this piece saw first-hand what they were capable of. At first, they did silly things like turn on his windshield wipers and crank up his stereo, but they were also able to disable his brakes while he was driving the car, which ended with him crashing it into a ditch (he was unharmed). The vulnerability lies somewhere in the Uconnect system that Chrysler uses for things like navigation and offering Wi-Fi to the car's occupants, and the Jeep Cherokee isn't the only model affected. The researchers believe that this issue affects any Chrysler car built from late 2013 through early 2015. They will disclose more information about their research at the upcoming Black Hat conference in Las Vegas.

  • New research: Comparing how security experts and non-experts stay safe online - Google has released some interesting new research that looks at the differences between security experts and non-experts in their security habits online. The differences are quite significant and suggest something of a failure in the security community to communicate the most important best practices to non-experts. Google will present a paper on their findings (PDF), "...no one can hack my mind: Comparing Expert and Non-Expert Security Practices," at the Symposium on Usable Privacy and Security this week. Here is a graphic showing some of the differences they discovered:

Findings from new Google security research

Source: googleonlinesecurity.blogspot.com

  • It's the Data, Stupid! - Shodan, which monitors publicly-accessible online services, has a new blog post looking at the huge number of MongoDB databases that are exposed to the public Internet without any authentication. Shodan had almost 30,000 accessible databases in its index as of July 18th (you can see a report of their findings here). The most shocking statistic to me is the sheer amount of data that is vulnerable: some 595 TERABYTES. Apparently, this stems from old versions of MongoDB that listened by default on '0.0.0.0' instead of localhost. This was patched some time ago but, apparently, a lot of folks are still running old versions that listen on all interfaces by default.
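
For reference, restricting MongoDB to the loopback interface is a one-line setting; here is a minimal sketch in the legacy INI-style mongod.conf (newer YAML-style configs use net.bindIp instead):

# /etc/mongod.conf (legacy INI style)
# Listen on loopback only, so the database is not reachable from the
# public Internet; enabling authentication is still advisable.
bind_ip = 127.0.0.1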

Vulnerabilities

  • OS X 10.10 DYLD_PRINT_TO_FILE Local Privilege Escalation Vulnerability - NO OFFICIAL PATCH - A serious vulnerability in OS X has been discovered that allows a malicious user to easily elevate their privileges to root if they have the ability to execute commands as a normal user. Essentially, the issue allows an attacker to write to any file on the system as if they had root privileges. This could be used to add your standard user to the /etc/sudoers file, for example. There is currently no patch from Apple, but the researcher who released the details of the vulnerability has also released a kernel extension that can be applied as a temporary solution while Apple fixes the bug more thoroughly. You can check Apple's security updates page to get the patch when it is eventually released.

  • Bug in widely used OpenSSH opens servers to password cracking - NO OFFICIAL PATCH - If you have a server that runs OpenSSH and you're accepting passwords for authentication, you're about to get some bad news. The latest version of OpenSSH has a vulnerability that allows attackers to abuse the keyboard-interactive authentication functionality to send thousands of passwords over a single SSH connection. The connection is usually closed after a few failed login attempts, but this issue takes advantage of the fact that OpenSSH will allow you to open thousands of password prompts upon connecting and will accept password attempts from each of them. Brute-force attacks against SSH are already among the most common attacks taking place online today, and they just got a lot worse. No word yet on the OpenSSH site about a fix at the time of this posting.
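
Until a fix ships, one commonly suggested stopgap, assuming all of your users can log in with keys, is to turn off password-style authentication in sshd_config altogether; a sketch:

# /etc/ssh/sshd_config: stopgap sketch, assuming key-based logins
PasswordAuthentication no
# Disables the keyboard-interactive/challenge-response prompts
# abused by this attack.
ChallengeResponseAuthentication no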

  • Four RCE Zero-Day Flaws Plague Internet Explorer: ZDI - NO OFFICIAL PATCH - Researchers from HP's Zero-Day Initiative have disclosed four zero-day vulnerabilities in Microsoft's Internet Explorer browser, after reporting them to Microsoft more than 6 months ago. Microsoft had requested an extension of the disclosure deadline to July 19th, which ZDI granted, but Microsoft did not deliver patches in time, so ZDI released the issues publicly so that users can be aware of them and, presumably, avoid using Internet Explorer until patches are available.

  • Hacking Team Leak Uncovers Another Windows Zero-Day, Fixed in Out-of-Band Patch - PATCHED - Another Microsoft vulnerability popped up this week, this one the result of the Hacking Team compromise that we discussed in last week's post. It is a particularly nasty one, so bad that Microsoft offered a patch for it outside its normal patching schedule. Almost all versions of Windows appear to be affected. The issue resides in some code that handles Open Type fonts and, apparently, the exploit is even capable of escaping Chrome's sandbox when the victim visits a malicious page in their browser. Both code execution and privilege escalation are possible using this vulnerability. This is the second time Microsoft has had to patch some serious issues with the Adobe Type Manager Font Driver (ATMFD) recently (other issues were reported in late June).

  • WordPress 4.2.3 Security and Maintenance Release - PATCHED - WordPress has patched a cross-site scripting vulnerability in versions 4.2.2 and below. The vulnerability has not been disclosed publicly, but the description states that it would "allow users with the Contributor or Author role to compromise the site." If you have not enabled automatic updates, check out this article on the WordPress Codex about configuring them.

Reference / Tutorials

  • HowTo: Privacy & Security Conscious Browsing - This is a great Gist describing steps you can take to prevent being tracked by advertisers or malicious actors and to just generally be safe when browsing online. The author looks at several different browsers, with recommended privacy settings and third-party plugins to make each browser as secure and private as possible.

Tools

  • SecLists - This is an OWASP project to allow security testers to quickly and easily plug common payloads into their security tests. From the repository description: "SecLists is the security tester's companion. It is a collection of multiple types of lists used during security assessments. List types include usernames, passwords, URLs, sensitive data grep strings, fuzzing payloads, and many more." Basically, these are curated lists of strings that can be used to, for example, test for SQL injection, search for sensitive information like passwords in code repositories, or audit users' passwords to ensure they meet complexity requirements and aren't known, commonly-used passwords. The lists come from all over the place, like FuzzDB, as well as individual contributors like RSnake and others.

Random Link of the Week

  • Row hammering in JavaScript - H/T Jonathan Evans - Security researcher @lavados recently posted a tantalizing screenshot claiming that he has achieved DRAM row-hammering using JavaScript in Firefox. I'm still waiting on the edge of my seat for the paper describing his technique but, for now, I'll just leave you with the horrifying possibility of privilege escalation via JavaScript. See Google Project Zero's post on the subject if you want to learn more about the technical details of DRAM row-hammering to effect privilege escalation.

JavaScript row-hammering (apparently)

Source: @lavados' Twitter

July 24, 2015 11:59 PM

OpenStack Blog

OpenStack Community Weekly Newsletter (July 17 – 24)

OpenStack and cloud native applications: two peas in a data center pod

OpenStack has shown the world that innovation with open technology can happen. Fast. In fact, it can happen at a pace never before seen in the history of the IT industry.

Interoperability: DefCore, Refstack and You

The OpenStack Foundation has created a set of requirements to ensure that the various products and services bearing the OpenStack marks achieve a high level of interoperability. This post from IBM OpenTech Team gives an overview of the whole machinery, how to test clouds and upload results to RefStack website.


IMPORTANT + TIME SENSITIVE:

The Road to Tokyo

Reports from Previous Events

Relevant Conversations

Deadlines and Contributors Notifications

Security Advisories and Notices

  • None this week

Tips ‘n Tricks

Upcoming Events

Other News

OpenStack Reactions

When I hit send after writing my long email for the TC candidacy

The weekly newsletter is a way for the community to learn about all the various activities occurring on a weekly basis.

by Nicole Martinelli at July 24, 2015 08:33 PM

Assaf Muller

OpenStack Tokyo summit voting is now open!

Nir Yechiel and I submitted a session titled: ‘L3 HA, DVR, L2 Population… Oh My!’

If you’re interested in Neutron’s vision of routing and the integration of various router types, vote for our session here: https://www.openstack.org/summit/tokyo-2015/vote-for-speakers/Presentation/5802

See you in Tokyo!


by assafmuller at July 24, 2015 03:24 PM

Solinea

Solinea Sessions at the OpenStack Summit in Tokyo


The OpenStack community is invited to vote for their favorite presentations and panel discussions to help determine which ones will be included in the final schedule.

Solinea has submitted some great abstracts. Below is a summary of each talk and a link to the voting site.

See you at the OpenStack Summit in Tokyo

by Seth Fox (seth@solinea.com) at July 24, 2015 03:07 PM

Dan Radez

Vote for my OpenStack Summit Tokyo Sessions

I’ll be in Tokyo this October for the OpenStack Summit. Here’s a list of my sessions; put in a vote if you have a chance.

Getting Started with OpenStack
https://www.openstack.org/summit/tokyo-2015/vote-for-speakers/presentation/4497

Getting Started with OPNFV
https://www.openstack.org/summit/tokyo-2015/vote-for-speakers/presentation/4499

Building your first VNF
https://www.openstack.org/summit/tokyo-2015/vote-for-speakers/presentation/4502

TryStack.org: Free OpenStack for Planet Earth
https://www.openstack.org/summit/tokyo-2015/vote-for-speakers/presentation/6324

Tuning HA OpenStack Deployments to Maximize Hardware Capabilities
https://www.openstack.org/summit/tokyo-2015/vote-for-speakers/presentation/6500

by radez at July 24, 2015 03:04 PM

Tesora Corp

And the voting begins: OpenStack Summit Tokyo

We are very excited to be sponsoring our 4th OpenStack Summit and we hope you come visit our booth in Tokyo! Like all previous Summits, Tokyo has high potential for great sessions and speakers, and our team is very enthusiastic about contributing.  We submitted a variety of talks that we feel will interest the community and […]

The post And the voting begins: OpenStack Summit Tokyo appeared first on Tesora.

by Leslie Barron at July 24, 2015 02:34 PM

Red Hat Stack

Celebrating Kubernetes 1.0 and the future of container management on OpenStack

This week, together with Google and others, we celebrated the launch of Kubernetes 1.0 at OSCON in Portland, as well as the launch of the Cloud Native Computing Foundation, or CNCF (https://cncf.io/), of which Red Hat, Google, and others are founding members. Kubernetes is an open source system for managing containerized applications, providing basic mechanisms for the deployment, maintenance, and scaling of applications. The project was originally created by Google and is now developed by a vibrant community of contributors including Red Hat.

As a leading contributor to both Kubernetes and OpenStack it was also recently our great pleasure to welcome Google to the OpenStack Foundation. We look forward to continuing to work with Google and others on combining the container orchestration and management capabilities of Kubernetes with the infrastructure management capabilities of OpenStack.

Red Hat has invested heavily in Kubernetes since joining the project shortly after it was launched in June 2014, and is now the largest corporate contributor of code to the project other than Google itself. The recently announced release of Red Hat’s platform-as-a-service offering, OpenShift v3, is built around Kubernetes as the framework for container orchestration and management.

As a founding member of the OpenStack Foundation, we have been working on simplifying the task of deploying and managing container hosts (using Project Atomic) and of configuring a Kubernetes cluster on top of OpenStack infrastructure using the Heat orchestration engine.

To that end, Red Hat engineering created the heat-kubernetes orchestration templates to help accelerate research and development into deeper integration between Kubernetes and the underlying OpenStack infrastructure. The templates continue to evolve to cover other aspects of container workload management, such as auto-scaling, and were recently demonstrated at Red Hat Summit:

(Video: http://www.youtube.com/watch?v=tS5X0qi04ZU)
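
To give a flavor of what such templates contain (an illustrative sketch, not an excerpt from the actual heat-kubernetes templates; image, flavor and network names are placeholders), a cluster node ultimately boils down to Heat (HOT) resources along these lines:

heat_template_version: 2014-10-16

description: Illustrative sketch of a single Kubernetes node

resources:
  kube_node:
    type: OS::Nova::Server
    properties:
      image: fedora-21-atomic    # placeholder image name
      flavor: m1.small           # placeholder flavor
      networks:
        - network: private       # placeholder network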

The heat-kubernetes templates were also ultimately leveraged in bootstrapping the OpenStack Magnum project, which provides an OpenStack API for provisioning container clusters using underlying orchestration technologies including Kubernetes. The aim is to make containers first-class citizens within OpenStack, just like virtual machines and bare metal before them, with the ability to share tenant infrastructure resources (e.g. networking and storage) with other OpenStack-managed virtual machines, bare-metal hosts, and the containers running on them. Providing this level of integration requires providing or expanding OpenStack implementations of existing Kubernetes plug-in points, as well as defining new plug-in APIs where necessary, while maintaining the technical independence of the solution. All this must be done while allowing application workloads to remain independent of the underlying infrastructure and allowing for true open hybrid cloud operation. Similarly, on the OpenStack side additional work is required so that the infrastructure services are able to support the use cases presented by container-based workloads and remove redundancies between the application workloads and the underlying hardware, optimizing performance while still providing for secure operation.

Containers on OpenStack Architecture

Magnum and the OpenStack Containers Team provide a focal point for coordinating these research and development efforts across multiple upstream projects, as well as other projects within the OpenStack ecosystem itself, to achieve the goal of providing a rich container-based experience on OpenStack infrastructure.

As a leading contributor to both OpenStack and Kubernetes we at Red Hat look forward to continuing to work on increased integration with both the OpenStack and Kubernetes communities and our technology partners at Google as these exciting technologies for managing the “data-centers of the future” converge.

by Steve Gordon at July 24, 2015 02:28 PM

Tesora Corp

Short Stack: OpenStack turns 5, OSCON announcements, CNCF formation, DMTF and OpenStack partnership

Welcome to the Short Stack, our weekly feature where we search for the most intriguing OpenStack links to share with you. These links may come from traditional publications or company blogs, but if it’s about OpenStack, we’ll find the best links we can to share with you every week. If you like what you see, […]

The post Short Stack: OpenStack turns 5, OSCON announcements, CNCF formation, DMTF and OpenStack partnership appeared first on Tesora.

by Leslie Barron at July 24, 2015 01:04 PM

Loïc Dachary

Ceph integration tests made simple with OpenStack

If an OpenStack tenant (an account, in OpenStack parlance) is available, the Ceph integration tests can be run with the teuthology-openstack command, which will create the necessary virtual machines automatically (see the detailed instructions to get started). To do its work, it uses the teuthology OpenStack backend behind the scenes, so the user does not need to know about it.
The teuthology-openstack command has the same options as teuthology-suite and can be run as follows:

$ teuthology-openstack \
  --simultaneous-jobs 70 --key-name myself \
  --subset 10/18 --suite rados \
  --suite-branch next --ceph next
...
Scheduling rados/thrash/{0-size-min-size-overrides/...
Suite rados in suites/rados scheduled 248 jobs.

web interface: http://167.114.242.148:8081/
ssh access   : ssh ubuntu@167.114.242.148 # logs in /usr/share/nginx/html

As the suite progresses, its status can be monitored by visiting the web interface:

And the horizon OpenStack dashboard shows resource usage for the run:



If something goes wrong, the easiest way to free all resources is to run

ssh ubuntu@167.114.242.148 sudo /etc/init.d/teuthology restart

where the IP address is the one listed as a reminder (“ssh access: …”) in the output of each teuthology-openstack command (see the example above).
When the run terminates, the virtual machine hosting the web interface and the test results is not destroyed, since that would be inconvenient for forensic analysis. Instead, it will be re-used by the next teuthology-openstack run.
When the cluster is no longer needed (and the results have been analyzed) it can be destroyed entirely with

teuthology-openstack --teardown

Special thanks to Zack Cerza, Andrew Schoen, Nathan Cutler and Kefu Chai for testing, patching, advising, proofreading and moral support over the past two months ;-)

by Loic Dachary at July 24, 2015 11:11 AM

Sébastien Han

OpenStack Summit Tokyo: time to vote

Once again, and for the second time this year, it is time to vote for summit presentations :). Self-promotion ahead :).

As always, Josh and I will present the newest additions to Liberty for Ceph in OpenStack. I don’t want to spoil too much, but what I can tell you is that this cycle is going well and most of the wanted features will likely land in Liberty. So if you want to see all the amazing things that happened during this cycle:

This presentation will be a follow-up to the “Dude, where’s my volume?” talk from Vancouver.


Thanks in advance for your votes and support :).

July 24, 2015 08:29 AM

July 23, 2015

OpenStack Superuser

Vote now for OpenStack Tokyo Summit presentations

Let the voting begin!

OpenStack community members are voting on presentations for the OpenStack Summit, October 27 – 30, in Tokyo, Japan. Your votes help shape the direction of the upcoming Summit. This time, a record of over 1,500 ideas was pitched for talks, panel discussions and how-to sessions; votes are cast by indicating which sessions you are most or least interested in attending.

For the four-day conference there are 17 Summit tracks, ranging from community building and security issues to hands-on labs. If you're a first-time Summit attendee, voting will make sure you get the most out of it.

After the voting is finished, a track lead examines the yeas and nays and orchestrates them into the final sessions. Track leads see who voted for what, so if a company stuffs the virtual ballot box to boost a pitch, they can correct that imbalance. They also keep an eye out for duplicate ideas, often combining them into panel discussions.

The deadline for voting is 11:59 p.m. Pacific Time Zone July 30, 2015.

Cover photo by tanakawho // CC BY-NC

by Nicole Martinelli at July 23, 2015 10:06 PM

Tesora Corp

Supporting the People who Support OpenStack

According to recent speakers at the OpenStack Summit, finding qualified staff is proving to be a serious obstacle to wider deployment.

The post Supporting the People who Support OpenStack appeared first on Tesora.

by Arthur Cole at July 23, 2015 05:20 PM

IBM OpenTech Team

Keystone mid-cycle recap for Liberty

The Keystone mid-cycle for the Liberty release took place at Boston University (BU) and was hosted by folks working on the Massachusetts Open Cloud (MOC). It was our best-attended mid-cycle, yet it remained one of our most productive. Kudos to the folks at BU and the MOC for hosting us, and a special thanks goes out to Adam Young and Piyanai Saowarattitada for organizing most of the logistics.

By the numbers

23 Keystone community members attended, including 11 core reviewers. 9 different companies were represented, from 5 different countries (USA, UK, Russia, Switzerland and Canada). We collaboratively revised and merged over 25 patches across the identity program’s 5 repositories.

Full list of attendees:

Geoff Arnold (Cisco)
Lance Bragstad (Rackspace)
Lin Hua Cheng (Yahoo!)
Marek Denis (CERN)
Morgan Fainberg (HP)
Roxanna Gherle
Robbie Harwood (Red Hat)
David Hu (HP)
Brant Knudson (IBM)
Anita Kuno (HP)
Alexander Makarov (Mirantis)
Steve Martinelli (IBM)
Angela Molock (Rackspace)
Henry Nash (IBM)
Piyanai Saowarattitada (BU/MOC)
Chang Lei Shi
George Silvis (BU/MOC)
Davanum Srinivas (Mirantis)
David Stanek (Rackspace)
Brad Topol (IBM)
Craig Tracey (Bluebox/IBM)
Guang Yee (HP)
Adam Young (Red Hat)

Topics of interest:

Dynamic Policy

Policies in OpenStack are currently stored in JSON files (typically named policy.json). Each project serves its own policy.json, and uses oslo.policy (read more about it here) to evaluate the rules in the policy file.
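
For illustration, here is a trimmed-down sketch of what such a policy.json can look like (the rule names follow Keystone’s conventions; real files are much longer):

{
    "admin_required": "role:admin",
    "owner": "user_id:%(user_id)s",
    "identity:get_user": "rule:admin_required or rule:owner",
    "identity:create_user": "rule:admin_required"
}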

The problems: Editing a file is a less-than-elegant UX, and different projects can define rules however they want, which leads to inconsistent rules and policies.
The proposed solution: Centralize policies by storing them in SQL. A project would have the ability to fetch a policy, and get a newer one if needed, from Keystone. Samuel de Medeiros Queiroz (from UFCG) gave a fantastic demo of the solution over a video call.
My concerns: I’m not seeing enough desire from operators for this feature. The operators we had at the mid-cycle have simply learned to live with the issues. They use Ansible (or other automation tools) to update the policy files for each project on their hosts. Additionally, I think better examples and enforcement of how policy files are written would solve the issue of clashing rules. Overall, I’m hesitant to pick up a bunch of new code to solve an issue that folks have worked around.

Adam gave a presentation on the problem and proposed solution; it’s posted on his GitHub account.

keystoneauth

Morgan Fainberg gave an update on keystoneauth, an initiative to further break apart python-keystoneclient. Once upon a time, python-keystoneclient was actually four projects in one; most folks just didn’t know it at the time. I’ll explain.

python-keystoneclient was providing:

  • a middleware for services to include in their pipeline
  • tools to create an authenticated session
  • CRUD support for python bindings for our APIs
  • a command line interface

When fully broken apart, the Identity team will offer the same features, just through different libraries: keystonemiddleware for the middleware, keystoneauth for sessions, python-keystoneclient for the API bindings, and python-openstackclient for the command line interface.
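
As a small sketch of the second item, session-based authentication with the keystoneclient API of the day looks roughly like this (endpoint and credentials are placeholders):

from keystoneclient import session
from keystoneclient.auth.identity import v3

# Placeholders: point these at a real Keystone v3 endpoint and account.
auth = v3.Password(auth_url='http://keystone.example.com:5000/v3',
                   username='demo', password='secret',
                   project_name='demo',
                   user_domain_id='default',
                   project_domain_id='default')
sess = session.Session(auth=auth)
print(sess.get_token())  # fetches a token through the auth plugin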

Functional Tests

I spent Thursday afternoon and most of Friday in a small working group, hacking away on functional tests together with Marek Denis, Roxanna Gherle, David Stanek and Anita Kuno. It has become increasingly clear that we need functional tests in Keystone; what was once an afterthought to most of us is now a prime concern. We outlined 6 configurations that we need to start testing against (see the sketch after this list for a concrete example of the second one):

  1. Our current CI/CD setup: SQL Identity, SQL Assignment, UUID Tokens
  2. Single LDAP for Identity: LDAP Identity, SQL Assignment, UUID | Fernet Token
  3. Multiple Identity Backends: SQL+LDAP Identity, SQL Assignment, UUID | Fernet Tokens
  4. Federating Identities: Federated Users + SQL Identity (service accounts), SQL Assignment, UUID | Fernet Token
  5. Keystone to Keystone: Any two of the above, with one setup as an IdP, the other as an SP.
  6. Notifications: Can reuse the current CI/CD, but requires a messaging service and listener to be set up.
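
As a concrete example, configuration 2 boils down to a keystone.conf change along these lines (a sketch; option names and values vary by deployment and release):

[identity]
driver = ldap

[ldap]
url = ldap://ldap.example.com
suffix = dc=example,dc=com
user_tree_dn = ou=Users,dc=example,dc=com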

Bonus Points

The Massachusetts Open Cloud gave a great live demo of their multi-federation setup. In their use case, users come in from a federated identity provider and may use different service providers for different OpenStack services. For instance, they may use Service Provider A for compute resources (nova), but use Service Provider B for volumes (cinder). As a long-time federation proponent, it was great to see folks using this in a way I didn’t think possible.

There were many, many other topics discussed: Python 3.4 support, hierarchical multi-tenancy, reseller use cases, Fernet tokens for federation, general code cleanup and refactoring, and role assignment improvements. For the full nitty-gritty details, look at the etherpad.

Pics!

Quick, everyone look smart and busy for the camera!


Marek is excited to code!

The post Keystone mid-cycle recap for Liberty appeared first on IBM OpenTech.

by Steve Martinelli at July 23, 2015 03:05 PM

eNovance Engineering Teams

A journey of a packet within OpenContrail

In this post we will see how a packet generated by a VM is able to reach another VM or an external resource, and what the key concepts/components are in the context of Neutron using the OpenContrail plugin. We will focus on OpenContrail: how it implements the overlay, and the tools it provides to check/troubleshoot how packets are forwarded. Before getting started, I’ll give a little overview of the key concepts of OpenContrail.

Virtual networks, Overlay with OpenContrail

For the overlay, OpenContrail uses MPLS L3VPNs and MPLS EVPNs in order to address both L3 and L2 overlays. There are a lot of components within OpenContrail; however, we will focus on two key ones: the controller and the vRouter.

For the control plane, each controller acts as a BGP route reflector using the BGP and XMPP protocols. BGP is used between the controllers and the physical routers. XMPP is used between the controllers and the vRouters. The XMPP protocol transports BGP route announcements, but also some other information for non-routing needs.

For the data plane, OpenContrail supports GRE/VXLAN/UDP for the tunneling, and it requires a number of features to be supported by the gateway router (summarized in a diagram in the original post).

In this post we will focus on the data plane area.

The packet’s journey

In order to trace the journey of a packet, let’s play with a topology where we have two VMs on two different networks, connected thanks to a router (shown in a topology diagram in the original post).

Assuming we have allowed ICMP packets by setting the security groups accordingly, we can start a ping from vm1 toward vm2.

There are a lot of introspection tools within OpenContrail which can be used to get a clear view of how packets are forwarded.

Initiating a ping between vm1 and vm2, we can check step by step where the packets go.

Since the VMs are not on the same network, they will both use their default gateway. The local vRouter answers the ARP request for the default gateway IP with its own MAC.

vm1$ ip route
default via 10.0.0.1 dev eth0
10.0.0.0/24 dev eth0  src 10.0.0.3

$ cat /proc/net/arp
IP address       HW type     Flags       HW address            Mask     Device
10.0.0.1         0x1         0x2         00:00:5e:00:01:00     *        eth0

Now that we have seen that the packets will be forwarded to the local vRouter, we are going to check how the vRouter will forward them.

So let’s start at the data plane layer by browsing the vRouter agent introspect web interface, running on the compute nodes hosting our VMs, at http://<vrouter agent ip>:8085/agent.xml

There are plenty of sub-interfaces, but we will only use three of them:

  • VrfListReq (http://<vrouter agent ip>:8085/Snh_VrfListReq), which gives you the networks and their related VRFs. For a given VRF – let’s say the unicast VRF (ucindex) – we can see all the routes.
  • ItfReq (http://<vrouter agent ip>:8085/Snh_ItfReq), which gives you all the interfaces handled by the vRouter.
  • MplsReq (http://<vrouter agent ip>:8085/Snh_MplsReq), which gives all the MPLS label/next-hop associations for the given vRouter.

These pages are just XML documents rendered thanks to an XSL stylesheet, so they can easily be processed by monitoring scripts, for example.
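
As a sketch of that idea (Python 2, as commonly used at the time; the tag names searched for are illustrative and may differ between OpenContrail releases):

import urllib2
import xml.etree.ElementTree as ET

# Placeholder URL: the ItfReq introspect page of a vRouter agent.
url = 'http://127.0.0.1:8085/Snh_ItfReq'
root = ET.fromstring(urllib2.urlopen(url).read())

# Print every element that carries interface-looking name/VRF children.
for elem in root.iter():
    name = elem.findtext('name')          # illustrative tag name
    vrf = elem.findtext('vrf_name')       # illustrative tag name
    if name and vrf:
        print(name, vrf)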

We can start with the interfaces (ItfReq) introspect page to find the TAP interface corresponding to vm1. The name of the TAP contains part of the Neutron port ID.

Beside the interface we see the name of the VRF associated with the network that the interface belongs to. On the same line we have some other information: security group, floating IPs, VM id, etc.

Clicking on the VRF link brings us to the index page of this VRF. We see that we have links to VRFs according to their type: unicast, multicast, Layer 2. By default, OpenContrail doesn’t handle Layer 2. As said before, most of the Layer 2 traffic from the virtual machines is trapped by the local vRouter, which acts as an ARP responder. But some specific packets, like broadcasts, still need to be handled; that’s why there is a specific Layer 2 VRF.

Clicking on the link in the ucindex (unicast) column, we can see all the unicast L3 routes of our virtual network handled by this vRouter. Since vm1 should be able to reach vm2, we should see a route with the IP of vm2.
Thanks to this page we see that in order to reach the IP 192.168.0.3, which is the IP of our vm2, the packet is going to be forwarded through a GRE tunnel whose endpoint is the IP of the compute node hosting vm2. That’s what we see in the “dip” (destination IP) field. We also see that the packet will be encapsulated in an MPLS packet with label 16, as shown in the label column.

Ok, so we saw at the agent level how the packet is going to be forwarded, but we may also want to check on the datapath side. OpenContrail provides command line tools for that purpose.

For instance, mirroring the agent’s view, we can list the interfaces handled by the vRouter kernel module and their associated VRFs.

$ vif --list
Vrouter Interface Table

Flags: P=Policy, X=Cross Connect, S=Service Chain, Mr=Receive Mirror
      Mt=Transmit Mirror, Tc=Transmit Checksum Offload, L3=Layer 3, L2=Layer 2
      D=DHCP, Vp=Vhost Physical, Pr=Promiscuous, Vnt=Native Vlan Tagged
      Mnp=No MAC Proxy, Dpdk=DPDK PMD Interface, Rfl=Receive Filtering Offload, 
      Mon=Interface is Monitored, Uuf=Unknown Unicast Flood

vif0/0      OS: eth0
           Type:Physical HWaddr:fa:16:3e:68:f9:e8 IPaddr:0
           Vrf:0 Flags:TcL3L2Vp MTU:1514 Ref:5
           RX packets:1598309  bytes:315532297 errors:0
           TX packets:1407307  bytes:383580260 errors:0

vif0/1      OS: vhost0
           Type:Host HWaddr:fa:16:3e:68:f9:e8 IPaddr:a2b5b0a
           Vrf:0 Flags:L3L2 MTU:1514 Ref:3
           RX packets:1403461  bytes:383378275 errors:0
           TX packets:1595855  bytes:315456061 errors:0

vif0/2      OS: pkt0
           Type:Agent HWaddr:00:00:5e:00:01:00 IPaddr:0
           Vrf:65535 Flags:L3 MTU:1514 Ref:2
           RX packets:4389  bytes:400688 errors:0
           TX packets:6931  bytes:548756 errors:0

vif0/3      OS: tapa87ad91e-28
           Type:Virtual HWaddr:00:00:5e:00:01:00 IPaddr:0
           Vrf:1 Flags:PL3L2 MTU:9160 Ref:6
           RX packets:565  bytes:105481 errors:0
           TX packets:587  bytes:80083 errors:0

vif0/4350   OS: pkt3
           Type:Stats HWaddr:00:00:00:00:00:00 IPaddr:0
           Vrf:65535 Flags:L3L2 MTU:9136 Ref:1
           RX packets:3  bytes:294 errors:0
           TX packets:3  bytes:252 errors:0

vif0/4351   OS: pkt1
           Type:Stats HWaddr:00:00:00:00:00:00 IPaddr:0
           Vrf:65535 Flags:L3L2 MTU:9136 Ref:1
           RX packets:10  bytes:840 errors:0
           TX packets:10  bytes:840 errors:0

We have our TAP interface at index 3, and its associated VRF is number 1.

Let’s now check the routes for this VRF. For that purpose we use the rt command line.

$ rt --dump 1
Vrouter inet4 routing table 0/1/unicast
Flags: L=Label Valid, P=Proxy ARP, T=Trap ARP, F=Flood ARP

Destination          PPL        Flags        Label         Nexthop    Stitched MAC(Index)

...
192.168.0.3/32         32           LP         16             19        -
...

We see that the MPLS label used is 16. In order to know how the packet will be forwarded we have to check the NextHop used for this route.

$ nh --get 19
Id:19         Type:Tunnel    Fmly: AF_INET  Flags:Valid, MPLSoGRE,   Rid:0  Ref_cnt:2 Vrf:0
             Oif:0 Len:14 Flags Valid, MPLSoGRE,  Data:fa 16 3e 4b f6 05 fa 16 3e 68 f9 e8 08 00
             Vrf:0  Sip:10.43.91.10  Dip:10.43.91.12

We have almost the same information that the agent gave us. Here, the Oif field gives the interface through which the packet will be sent to the other compute node. Thanks to the vif command line we can get the details of this interface.

$ vif --get 0
Vrouter Interface Table

Flags: P=Policy, X=Cross Connect, S=Service Chain, Mr=Receive Mirror
      Mt=Transmit Mirror, Tc=Transmit Checksum Offload, L3=Layer 3, L2=Layer 2
      D=DHCP, Vp=Vhost Physical, Pr=Promiscuous, Vnt=Native Vlan Tagged
      Mnp=No MAC Proxy, Dpdk=DPDK PMD Interface, Rfl=Receive Filtering Offload, Mon=Interface is Monitored
      Uuf=Unknown Unicast Flood

vif0/0      OS: eth0
           Type:Physical HWaddr:fa:16:3e:68:f9:e8 IPaddr:0
           Vrf:0 Flags:TcL3L2Vp MTU:1514 Ref:5
           RX packets:1602164  bytes:316196179 errors:0
           TX packets:1410642  bytes:384855228 errors:0

As the packet will go through the eth0 interface, a tcpdump should confirm what we described above.

$ sudo tcpdump -n -i eth0 dst 10.43.91.12
12:13:16.908957 IP 10.43.91.10 > 10.43.91.12: GREv0, 
length 92: MPLS (label 16, exp 0, [S], ttl 63) 
IP 10.0.0.3 > 192.168.0.3: ICMP echo request, id 5889, seq 43, length 64

As the tunnel endpoint shows, the packet will be forwarded directly to the compute node hosting the destination VM, without going through a third-party routing device.

On the other side, the vRouter on the second compute node will receive the encapsulated packet. Based on the MPLS label, it does a lookup in its MPLS label/next-hop table, as we can see in its introspect interface.
As we can see, the NextHop field for label 16 is the TAP interface of our second VM. On the datapath side we can check the same information, starting with the MPLS label/next-hop table:

$ mpls --get 16
MPLS Input Label Map

  Label    NextHop
-------------------
     16        14

...and finally the next hop and the interface with the following commands:

$ nh --get 14
Id:14         Type:Encap     Fmly: AF_INET  Flags:Valid, Policy,   Rid:0  Ref_cnt:4 Vrf:1
             EncapFmly:0806 Oif:3 Len:14 Data:02 8a 39 ff 98 d3 00 00 5e 00 01 00 08 00

$ vif --get 3
Vrouter Interface Table

Flags: P=Policy, X=Cross Connect, S=Service Chain, Mr=Receive Mirror
      Mt=Transmit Mirror, Tc=Transmit Checksum Offload, L3=Layer 3, L2=Layer 2
      D=DHCP, Vp=Vhost Physical, Pr=Promiscuous, Vnt=Native Vlan Tagged
      Mnp=No MAC Proxy, Dpdk=DPDK PMD Interface, Rfl=Receive Filtering Offload, Mon=Interface is Monitored
      Uuf=Unknown Unicast Flood

vif0/3      OS: tap8a39ff98-d3
           Type:Virtual HWaddr:00:00:5e:00:01:00 IPaddr:0
           Vrf:1 Flags:PL3L2 MTU:9160 Ref:6
           RX packets:2957  bytes:293636 errors:0
           TX packets:3085  bytes:297115 errors:0


This post was just an overview of how packets are forwarded from one node to another, and of the interfaces/tools you can use for troubleshooting purposes. One of the interesting things about OpenContrail is that almost all components expose their own introspect interface, which helps a lot during troubleshooting sessions. As we saw, routing in OpenContrail is fully distributed: each vRouter handles its part of the routing using well-known protocols like BGP/MPLS, which have proved their ability to scale.


by Sylvain Afchain at July 23, 2015 02:13 PM

Tesora Corp

DBaaS: #1 Value Added Service for every Cloud Service Provider

Our CEO, Ken Rugg, will present the case for database as a service as the number one value-added service for cloud service providers at HostingCon in San Diego next Wednesday (July 29), where he will be speaking to hosting and cloud providers, MSPs, VARs, ISVs, and others in the internet infrastructure industry. Ken will address how offering […]

The post DBaaS: #1 Value Added Service for every Cloud Service Provider appeared first on Tesora.

by Leslie Barron at July 23, 2015 01:30 PM

IBM OpenTech Team

EMA Open Cloud Study Highlights

Enterprise Management Associates (EMA) conducted a survey of IT professionals regarding their use of open technologies. EMA then published a report in March 2015, “Open Cloud Management and Orchestration 2015: Adoption and Experiences“. In June 2015, EMA published a second report leveraging the results from the same study, “Open Cloud Management and Orchestration 2015: Research Highlights“. Both of these reports can be downloaded for free, courtesy of IBM.

I encourage you to download and read the reports. That said, I realize that you are busy, so here are a few takeaways that will hopefully encourage you to read more.

Cloud is mainstream. Nearly 85% of those surveyed reported using cloud. Sure, there are different definitions of cloud, and some might consider using a hypervisor or a single SaaS service as “using cloud”. However, IT is clearly getting smarter about what constitutes “using cloud”, as the market has done a good job of educating cloud prospects over the past two years. The conversations I have with clients are much more focused on understanding particular cloud features and capabilities than on overall cloud benefits, compared with even six months prior.

Hybrid Cloud is growing faster than public. I believe that this is because organizations are still too vested in their existing on-premises people, processes and technology. They are concerned that they will lose control if they put too many workloads outside their datacenter. That said, organizations also see the flexibility and speed of cloud and want to take advantage of them. As a result, they focus on both public and private clouds, connecting them together through common management.

Orchestration is typically found in successful clouds. Among cloud users who identified their projects as extremely or very successful, there was a high correlation with the use of cloud orchestration. Orchestration helps cloud managers ensure successful deployments, scale efficiently, and monitor and adjust resources to meet performance objectives more easily.

IBM cloud orchestration ranks high. Both of IBM’s OpenStack-based cloud management and orchestration products, IBM Cloud Orchestrator and IBM Cloud Manager with OpenStack™, scored high in terms of current use by respondents as well as consideration for future use. I believe this is because these products have done an excellent job of simplifying cloud management and orchestration while still providing the flexibility to use the scripts or processes that users might be familiar with. Follow the links to those products to learn more about how they can help accelerate your path to a successful cloud.

Read the reports and feel free to engage me in the comments or on twitter: @grizfisher.

The post EMA Open Cloud Study Highlights appeared first on IBM OpenTech.

by ShawnJaques at July 23, 2015 07:18 AM

About

Planet OpenStack is a collection of thoughts from the developers and other key players of the OpenStack projects. If you are working on OpenStack technology you should add your OpenStack blog.

Subscriptions

Last updated:
August 02, 2015 04:19 PM
All times are UTC.

Powered by:
Planet