December 19, 2015

Tesora Corp

Short Stack: OpenStack guides and howtos, User story, Predictions in the OpenStack market


Welcome to the Short Stack, our weekly feature where we search for the most intriguing OpenStack links to share with you. These links may come from traditional publications or company blogs, but if it's about OpenStack, we'll find the best links we can to share with you every week.

If you like what you see, please consider subscribing.

Check out this week's articles:

OpenStack User Story: Pass the Mic: Matt Haines, Time Warner Cable | Superuser blog

Cable operator Time Warner explains how OpenStack fits into their infrastructure.  OpenStack helps them improve agility while reducing operating costs for their video services. 

Pigs Fly and VMware Vaults Into The Top-3 OpenStack Vendors | Readwrite Web

Boris Renski from Mirantis talks about what their partnership and reference architecture means to OpenStack.  He also makes some bold predictions about VMware and Red Hat's relative positions in the OpenStack market. 

OpenStack Trove Today and Tomorrow: An Interview with Doug Shelley, Tesora VP of R&D | Tesora Blog

In this Q&A, Doug Shelley from Tesora shares his thoughts on the progress and future of the OpenStack Trove project.

Advance your OpenStack with new guides and howtos | Opensource.com 

Opensource.com compiled some of the latest tutorials and how-to guides for people working with OpenStack.

Embracing OpenStack | Network Computing

This article discusses the tough decisions that many major IT vendors have made around building their next generation of cloud services.  One such question is whether to build proprietary cloud platforms or to embrace OpenStack.

by 1 at December 19, 2015 05:00 AM

December 20, 2014

OpenStack Blog

OpenStack Community Weekly Newsletter (Dec 13 – 20)

The Way Forward

The OpenStack project structure has been under heavy discussion over the past months. There was a long email thread, a lot of opinionated blogposts, a cross-project design summit session in Paris, and various strawmen proposed to our governance repository. Based on all that input, the OpenStack Technical Committee worked on a clear specification of the problems we are trying to solve, and the proposed way to fix them. Thierry Carrez posted an excerpt of the approved resolution. To enjoy with a cup of your favorite brew, away from distractions.

OpenStack Object Storage Swift 2.2.1 released

The work of 28 contributors (including 8 first-time contributors), this release is definitely operator-centric. Upgrade is recommended; as always you can upgrade to this release with no customer downtime.

Relevant Conversations

Deadlines and Development Priorities

Security Advisories and Notices

Tips ‘n Tricks

Upcoming Events

2015 OpenStack Foundation Events Plan and Calls for Papers Available. The 2015 events plan is now available on the Global Events Calendar wiki. The detailed spreadsheet covers industry events, Summits (dates and locations), as well as regional OpenStack Days organized by the community.  If you’re interested in sponsoring or speaking at a planned OpenStack Day, please reach out to the contact listed in the spreadsheet.

Other News

Got Answers?

Ask OpenStack is the go-to destination for OpenStack users. Interesting questions waiting for answers:

Welcome New Reviewers and Developers

Matt Borland
Erwan Velu
sarat inuguri
Aleksey
Tom Swanson
yuntong
Peter Penchev
lu huichun
Moshe Levi
Ph. Marek
Jie Li
Shaoquan Chen
Jean-Frédéric
watanabe.isao
Erik Wilson
Putta Challa
Chris Gacsal
Brian Ruff
Kai Qiang Wu
Jing Zeng

OpenStack Reactions


Trying to argue about how to implement a spec

The weekly newsletter is a way for the community to learn about all the various activities occurring on a weekly basis. If you would like to add content to a weekly update or have an idea about this newsletter, please leave a comment.

by Stefano Maffulli at December 20, 2014 12:44 AM

December 19, 2014

SUSE Conversations

Towards Zero Downtime OpenStack Clouds

For businesses of all kinds, holiday cheer can quickly go from Ho! Ho! Ho! to Oh No! No! No! Because when it comes to downtime, failure is not an option for enterprise IT organizations. Whether caused by operational failures, natural disasters or other events, periods of downtime can have significant consequences for business. As companies move …


The post Towards Zero Downtime OpenStack Clouds appeared first on Conversations.

by Douglas Jarvis at December 19, 2014 06:24 PM

SwiftStack Team

Swift 2.2.1 Released

Below is an email I sent this morning to the OpenStack mailing list.

I'm happy to announce the release of Swift 2.2.1. The work of 28 contributors (including 8 first-time contributors), this release is definitely operator-centric. I recommend that you upgrade; as always you can upgrade to this release with no customer downtime.

Below I've highlighted a few of the more significant updates in this release.

  • Swift now rejects object names with unicode surrogates. These unicode code points are not able to be encoded as UTF-8, so they are now formally rejected.

  • Storage node error limits now survive a ring reload. Each Swift proxy server tracks errors when talking to a storage node. If a storage node sends too many errors, no further requests are sent to that node for a time. Previously, however, this error tracking was cleared by a ring reload, and a ring reload could happen frequently if servers were being gradually added to the cluster. Now the error tracking is not lost on ring reload, and error tracking is aggregated across storage policies. Basically, this means that the proxy server has a more accurate view of the health of the cluster, and your cluster will be less stressed when you have failures and capacity adjustments at the same time.

  • Empty account and container partition directories are now cleaned up. This keeps the system healthy and prevents a large number of empty directories from (significantly) slowing down the replication process.

  • Swift now includes a full translation for Simplified Chinese (zh_CN locale).

I'd like to thank all of the Swift contributors for helping with this release. I'd especially like to thank the first-time contributors listed below:

  • Cedric Dos Santos
  • Martin Geisler
  • Filippo Giunchedi
  • Gregory Haynes
  • Daisuke Morita
  • Hisashi Osanai
  • Shilla Saebi
  • Pearl Yajing Tan

Thank you, and have a happy holiday season.

John

December 19, 2014 06:00 PM

Opensource.com

Advance your OpenStack with new guides and howtos

The cloud is the future, and now is the time to start learning more about how you can use OpenStack to solve your organization's IT infrastructure needs. Fortunately, we're here to help with that. Every month, we compile the very best of recently published how-tos, guides, tutorials, and tips into this handy collection.

by Jason Baker at December 19, 2014 08:00 AM

December 18, 2014

OpenStack Blog

2015 OpenStack Foundation Events Plan and Calls for Papers Available

The 2015 events plan is now available on the Global Events Calendar wiki.  The detailed spreadsheet covers industry events, Summits (dates and locations), as well as regional OpenStack Days organized by the community.  If you’re interested in sponsoring or speaking at a planned OpenStack Day, please reach out to the contact listed in the spreadsheet.  If you are interested in organizing an OpenStack Day in your region, please reference http://www.openstack.org/brand/event-policy/ and contact events@openstack.org.

 

We’d like your input on the industry events plan. The events calendar is fully editable, so you can update the following criteria:

  • Share if your organization is attending, sponsoring or exhibiting (COLUMN G)
  • Provide feedback or ideas on events (COLUMN H)
  • Add vendor-independent industry events to the calendar (insert in chronological order and complete ALL criteria)

 

The Foundation is sponsoring two industry events with Pavilions, offering the ecosystem the opportunity to exhibit and/or speak in an OpenStack context, at a much lower cost than direct sponsorship.  Pavilions with speaking opportunities are available for these international events.  Commitments must be in place by January 31, 2015.  If you’re interested in joining us, please begin your internal authorization process today.

  • Cloud Expo Europe, March 11-12, 2015, London
    • For 4500 British Pounds, participants will receive:
      • A kiosk in the OpenStack section of the Open Cloud Park in the exhibit hall, to include a plasma display, wireless Internet, backwall graphics, power and lighting.  Lead retrieval is extra.
      • A 20 minute speaking session in the in-booth theater (you can scan attendees)
      • Your logo on event website, promotional emails, and name and description in conference guide.
      • Personalized promotion provided for your marketing effort
      • VIP telemarketing to 20 of your provided prospects
  • CONNECT 2015 April 21-22, 2015, Melbourne
    • For 8,000-10,000 Australian Dollars + 10% VAT, participants will receive:
      • Silver sponsorship and branding
      • Speaking slot in full day for-fee OpenStack Conference on April 21
      • Counter in Pavilion including:
        • Graphics, wireless internet, power and lighting
        • AV, lead retrieval is extra
      • A 25 minute presentation in the CONNECT ICT theater on expo floor
      • 1 full conference pass

Check out all of the upcoming CFP deadlines - the below deadlines that are bolded are approaching soon, so act quickly!

  • Cloud Expo Europe, March 11-12, 2015
    • OpenStack has two on-expo-floor theater slots for community speakers to represent the Foundation
    • Also offering a co-sponsorship package including expo floor Open Cloud Park theater spot.
    • Contact kathyc@openstack.org by January 31, 2015
  • Interop
    • April 27-May 1, 2015 in Las Vegas, NV
    • Asking for ecosystem input on sponsored (e.g. paid) sessions or vendor tech panels
    • Contact kathyc@openstack.org by January 31, 2015
  • SDN & NFV Europe
    • April 29, 2015 in London, England, UK
    • OpenStack is offered a theater speaking slot
    • Contact kathyc@openstack.org if interested, by January 31, 2015
    • In addition, see proposed agenda/topics then contact Ruki Rehman at rrehman@whitehallmedia.co.uk
  • Cloud Expo East, DevOps Summit, Big Data Expo, IoT Expo, SDDC Expo (all co-located, different tracks), June 9-11, 2015 in New York City, NY
    • Deadline December 31, 2014
    • http://www.cloudcomputingexpo.com/general/papers2015east.htm
  • DevNation, June 21-25, 2015 in Boston, MA
  • OSCON, July 20-24, 2015 in Portland, OR
    • Deadline February 2, 2015
    • http://www.oscon.com/open-source-2015/public/cfp/360
  • CloudOpen
    • No deadline listed
    • events.linuxfoundation.org/events/cloudopen-north-america/program/cfp
  • Supercomputing 15, November 15-20, 2015 in Austin, TX

If you have any questions on industry events, please contact Kathy Cacciatore at kathyc@openstack.org.  Thank you for your continued support for OpenStack Events!

by Allison Price at December 18, 2014 04:42 PM

Adam Spiers

How to build an OpenStack cloud from SUSEcon’s free USB stick handouts

Once again, SUSEcon was a blast! Thanks to everyone who helped make it such a great success, especially all our customers and partners who attended.

If you attended the final Thursday keynote, you should have been given a free USB stick preloaded with a bootable SUSE Cloud appliance. And if you missed out or couldn’t attend, download a copy here! This makes it possible for anyone to build an OpenStack cloud from scratch extremely quickly and easily. (In fact, it’s almost identical to the appliance we used a few weeks ago to win the “Ruler of the Stack” competition at the OpenStack summit in Paris.)

Erin explained on stage at a high level what this appliance does, but below are some more specific technical details which may help in case you haven’t yet tried it out.

The appliance can be booted on any physical or virtual 64-bit x86 machine … but before we start! – if you would like to try running the appliance in a VM using either KVM or VirtualBox, then there is an even easier alternative which uses Vagrant to reduce the whole setup to a one-line command. If you like the sound of that, stop reading and go here instead. However if you want to try it on bare metal or with a different hypervisor such as VMware or HyperV, read on!

Requirements

You’ll need the following:

  • At least three physical or virtual 64-bit x86 machines, each with at least 2GB RAM and 8GB disk, and no valuable data on any of the attached disks:

    • one admin node running the Crowbar deployment tool which will provision the other nodes from scratch,
    • at least one controller node which will run OpenStack infrastructure services, and
    • at least one compute node which will host VM instances within the cloud. (If this compute node is a VM, then the VM instances in the cloud will have to be run either using KVM nested virtualization, or QEMU software virtualization which is slower but good enough for “kicking the tires”.)
  • A private IPv4 network which all the machines must be connected to. Setting this up is the only potentially tricky bit of the whole exercise (one possible libvirt setup is sketched just after this list). By default the network in question needs to be 192.168.124.0/24 with no DHCP server enabled, so:

    • if you are installing the appliance in a VM, you should be able to set up a NAT or host-only virtual network and configure your hypervisor so that it does not serve DHCP on that network, or
    • if you are installing on bare metal, ensure there is no DHCP server active on that L2 segment.
  • Another machine (physical or virtual) with a modern web browser on the same private IPv4 network.
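
If you are using KVM with libvirt, one quick way to satisfy the private-network requirement above is to define an isolated virtual network with no DHCP section and attach each VM's NIC to it. This is only a sketch under that assumption: the network and bridge names below are placeholders, and you should adjust the addressing if you later change Crowbar's admin network.

# Sketch only: names are placeholders. Omitting the <dhcp> element means libvirt's
# dnsmasq will not hand out addresses, leaving DHCP on this network to Crowbar.
cat > crowbar-admin-net.xml <<'EOF'
<network>
  <name>crowbar-admin</name>
  <bridge name='virbr-crowbar' stp='on' delay='0'/>
  <ip address='192.168.124.1' netmask='255.255.255.0'/>
</network>
EOF
virsh net-define crowbar-admin-net.xml
virsh net-start crowbar-admin
virsh net-autostart crowbar-admin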

Installing the appliance

Attach the bootable USB media (physical or virtual), and boot the machine. This will automatically install a SUSE Cloud Admin Node onto the disk. Caution: this will wipe any pre-existing OS, so only use it on a spare machine or freshly-created VM, with no valuable data on any of the attached disks!

disk-destroying confirmation dialog box

After confirming you are OK to wipe all existing data on the disk, the appliance will be written to disk and then booted. Shortly after, YaST will appear, allowing you to configure:

  • which keyboard layout you want,
  • what password to use for the root user,

root password configuration dialog box

  • what hostname and domain name to use (the defaults are fine),

hostname/DNS configuration dialog box

  • the network setup (the default of 192.168.124.10/24 is recommended, otherwise you will also have to configure Crowbar prior to installation),
  • the clock and time zone, and finally
  • the NTP configuration (supplying an upstream NTP server is recommended but not required).

Logging in to the Admin Node

Log in as root (with the password you specified above) either on the console or via ssh root@192.168.124.10 if you have another machine configured to be on the same subnet.

Press q and then y to accept the beta EULA, which highlights that the appliance is partially based on unreleased code. Please do not use it for production deployments!

Configuring Crowbar (optional)

Crowbar is very powerful and flexible in terms of network configuration. If you have other traffic on the L2 network segment (e.g. if you are using bare metal hardware and a physical network rather than a dedicated virtual network) then you should check that its default networks don’t conflict with your existing traffic. To do this, type yast crowbar:

how to launch the YaST Crowbar module

and select the Networks tab:

Crowbar network configuration dialog

From here you can examine and change the networks Crowbar will use. Some are only used when various options are selected later on, but at a minimum, admin, nova_fixed, and nova_floating will all be used. For more information, see the Networking section of the SUSE Cloud Deployment Guide.

Installing SUSE Cloud

From the shell prompt, type screen go to initiate the installation of SUSE Cloud. This will take several minutes.

Exploring the Crowbar web interface

On another machine on the same network as the machine running the now-installed appliance, start a browser, and navigate to http://192.168.124.10:3000/ (adjust the IP accordingly if you changed the admin network above).

Crowbar web UI

PXE-boot some other nodes

Now simply PXE-boot your other nodes. (Typically this requires PXE-booting to be enabled in the BIOS, and/or manually selecting it from the BIOS boot menu which is often accessible by hitting the F11 key or similar during boot.)

This will run a small inventory ramdisk image on each one to detect its hardware and report the discovery back to Crowbar, without touching the node’s local disk(s). Each node will then appear in the Crowbar web UI, and sit in an idle loop whilst awaiting task allocation via Crowbar:

Crowbar web UI with new node discovered

Clicking on that node will show the results of the automatic hardware inventorying, and give us the option to allocate the node:

viewing the newly discovered node in Crowbar's web UI

When editing the node, we can give it a more human-friendly alias (e.g. node1), and then click Allocate to install a minimal SLES OS:

editing the newly discovered node in Crowbar's web UI

A full OS will be automatically installed on the node via AutoYaST:

autoyast installation in progress

When OS installation has finished, the console looks like this:

autoyast installation completed

and then the node turns green in the web UI indicating that it’s ready to have roles assigned to it:

Crowbar's web UI showing node1 as ready

Multiple nodes can be installed via PXE/AutoYaST at the same time.

Deploying OpenStack via Crowbar barclamps

By this point you should have at least two freshly-installed nodes managed by Crowbar (excluding the admin node itself which Crowbar runs on), in which case you are ready to deploy OpenStack via Crowbar’s barclamps, which can be found via the Barclamps drop-down in Crowbar’s web interface:

navigating to the OpenStack barclamps in Crowbar's web UI

This process is relatively straightforward, and a full explanation can be found in the corresponding chapter of the SUSE Cloud 4 Deployment Guide.

However, it is also possible to automate the whole deployment from this point on, using Crowbar’s batch subcommand and an appropriately crafted .yaml file. There are three sample .yaml files in /root on the admin node. The simplest configuration reflected by these three files is simple-cloud.yaml, which assumes a single controller node and a single compute node, with aliases controller1 and compute1 respectively:

simple 2-node cloud scenario shown by Crowbar's UI

In this case, assuming your nodes are given the above aliases, you could set up the entire OpenStack cloud with this single command run as root on the admin node:

crowbar batch --timeout 1800 build simple-cloud.yaml

starting crowbar batch build

It takes a while to apply all the barclamps, as can be seen from the timestamps whilst it’s running:

crowbar batch build on cinder

(The other two sample .yaml files are for a highly available control plane which assumes you have three nodes aliased controller1, controller2, and compute1.)

Once the barclamps are all applied, they should show as green in the Crowbar UI view:

Crowbar UI showing barclamps successfully applied

Exploring OpenStack

From the main Nodes dashboard in the Crowbar web UI, click the controller1 node (or whichever one you deployed OpenStack’s Dashboard to), and you will see a couple of links to the OpenStack Dashboard (a.k.a “Horizon”):

Crowbar UI showing barclamps successfully applied

Click on OpenStack Dashboard (admin) and it will take you to the OpenStack Dashboard, where you can log in as the admin user with a password of (by default) crowbar:

OpenStack Dashboard login page

OpenStack Dashboard compute overview page

Congratulations! You have set up a full OpenStack cloud from scratch! Now you can start reading the SUSE Cloud Admin and End User guides to learn more about how to use OpenStack.

Support

Whilst this bootable appliance is partially based on unreleased, unsupported code, we are still very interested to hear feedback from our customers and partners. So while we (obviously!) cannot offer unlimited free support for it, if you post any questions / issues to the SUSE Cloud web forum, we will try to respond on a best-effort basis. (And of course full commercial support for the released version of SUSE Cloud is available if you want it ;-) )

As we say in the SUSE world, have a lot of fun!


by Adam at December 18, 2014 01:26 PM

Thierry Carrez

The Way Forward

The OpenStack project structure has been under heavy discussion over the past months. There was a long email thread, a lot of opinionated blogposts, a cross-project design summit session in Paris, and various strawmen proposed to our governance repository. Based on all that input, the OpenStack Technical Committee worked on a clear specification of the problems we are trying to solve, and the proposed way to fix them. What follows is an excerpt of the approved resolution.

Problem description

Our project structure is currently organized as a ladder. Developers form teams, work on a project, then apply for incubation and ultimately graduate to be part of the OpenStack integrated release. Only integrated projects (and the few horizontal efforts necessary to build them) are recognized officially as "OpenStack" efforts. This creates a number of issues, which were particularly visible at the Technical Committee level over the Juno development cycle.

First, the integrated release as it stands today is not a useful product for our users. The current collection of services in the integrated release spans cloud native APIs (swift, zaqar in incubation), base-level IaaS blocks (nova, glance, cinder), high-level aaS (sahara, trove), and lots of things that span domains. Some projects (swift, ironic...) can be used quite well outside of the rest of the OpenStack stack, while others (glance, nova) really don't function in a different context. Skilled operators aren't deploying "the integrated release": they are picking and choosing between components they feel are useful. New users, however, are presented with a complex and scary "integrated release" as the thing they have to deploy and manage: this inhibits adoption, and this inhibits the adoption of a slice of OpenStack that could serve their need.

Second, the integrated release being the only and ultimate goal for projects, there is no lack of candidates, and the list is always-growing. Why reject Sahara if you accepted Trove? However, processes and services are applied equally to all members of the integrated release: we gate everything in the integrated release against everything else, we do a common, time-based release every 6 months, we produce documentation for all the integrated release components, etc. The resources working on those integrated horizontal tasks are very finite, and complexity grows non-linearly as we add more projects. So there is outside pressure to add more projects, and internal pressure to resist further additions.

Third, the binary nature of the integrated release results in projects outside the integrated release failing to get the recognition they deserve. "Non-official" projects are second- or third-class citizens which can't get development resources. Alternative solutions can't emerge in the shadow of the blessed approach. Becoming part of the integrated release, which was originally designed to be a technical decision, quickly became a life-or-death question for new projects, and a political/community minefield.

In summary, the "integrated release" is paradoxically too large to be effectively integrated, installed or upgraded in one piece, and too small to express the diversity of our rich ecosystem. Its slow-moving, binary nature is too granular to represent the complexity of what our community produces, and therefore we need to reform it.

The challenge is to find a solution that addresses all three of those issues: embrace the diversity of our ecosystem while making sure that what we produce is easily understandable and consumable by our downstream users (distributions, deployers, end users), all without putting more stress on the already overworked horizontal teams providing services to all OpenStack projects, and without limiting the current teams' access to common, finite resources.

Proposed change

Provide a precise taxonomy to help navigate the ecosystem

We can't add any more "OpenStack" projects without dramatically revisiting the information we provide. It is the duty of the Technical Committee to help downstream consumers of OpenStack understand what each project means to them, and provide them with accurate statuses for those projects.

Currently the landscape is very simple: you're in the integrated release, or you're not. But since there was only one category (or badge of honor), it ended up meaning different things to different people. From a release management perspective, it meant what we released on the same day. From a CI perspective, it meant what was co-gated. From an OpenStack distribution perspective, it meant what you should be packaging. From some operator perspective, it meant the base set of projects you should be deploying. From some other operator perspective, it meant the set of mature, stable projects. Those are all different things, and yet we used a single category to describe it.

The first part of the change is to create a framework of tags to describe more accurately and more objectively what each project produced in the OpenStack community means. The Technical Committee will define tags and the objective rules to apply them. This framework will allow us to progressively replace the "integrated release" single badge with a richer and more nuanced description of all "OpenStack" projects. It will allow the Technical Committee to provide more precise answers to the Foundation Board of Directors questions about which set of projects may make sense for a given trademark license program. It will allow our downstream users to know which projects are mature, which are security-supported, which are used in more than one public cloud, or which are really massively scalable.

Recognize all our community is a part of OpenStack

The second part of the change is recognizing that there is more to "OpenStack" than a finite set of projects blessed by the Technical Committee. We already have plenty of projects that are developed on OpenStack infrastructure, follow the OpenStack way of doing things, have development discussions on the openstack-dev mailing-list and use #openstack-meeting channels for their team meetings. Those are part of the OpenStack community as well, and we propose that those should be considered "OpenStack projects" (and be hosted under openstack git namespaces), as long as they meet objective criteria for inclusion (to be developed as one of the work items below). This might include items such as:

  • They align with the OpenStack Mission: the project should help further the OpenStack mission, by providing a cloud infrastructure service, or directly building on an existing OpenStack infrastructure service

  • They follow the OpenStack way: open source (licensing), open community (leadership chosen by the contributors to the project), open development (public reviews on Gerrit, core reviewers, gate, assigned liaisons), and open design (direction discussed at Design Summit and/or on public forums)

  • They ensure basic interoperability (API services should support at least Keystone)

  • They submit to the OpenStack Technical Committee oversight

These criteria are objective, and therefore the Technical Committee may delegate processing applications to another team. However, the TC would still vote to approve or reject applications itself, based on the recommendations and input of any delegates, but without being bound to that advice. The TC may also decide to encourage collaboration between similar projects (to reduce unnecessary duplication of effort), or to remove dead projects.

This proposed structure will replace the current program-driven structure. We'll still track which team owns which git repository, but this will let multiple different "OpenStack" teams potentially address the same problem space. Contributors to projects in the OpenStack git namespace will all be considered ATCs and participate in electing the Technical Committee.

Transition

As for all significant governance changes, we need to ensure a seamless transition and reduce the effect of the reform on the current development cycle. To ensure this seamless transition, the OpenStack taxonomy will initially define one tag, "integrated-release", which will contain the integrated projects for the Kilo cycle. To minimize disruption, this tag will be used throughout the Kilo development cycle and for the Kilo end release. This tag may be split, replaced or redefined in the future, but that will be discussed as separate changes.

Next steps

I invite you to read the full text of this Technical Committee resolution to learn more about the proposed implementation steps or the impact on current projects.

It's important to note that most of the work and decisions are still ahead of us: those proposed changes are just the base foundational step, enabling the future evolution of our project structure and governance. Nevertheless, it still is a significant milestone to clearly describe the issues we are working to solve, and to agree on a clear way forward to fix those.

The next step now is to communicate more widely about the direction we are going, and start the discussion on some more difficult and less consensual details (like the exact set of objective rules applied to judge new entrants, or the need for - and a clear definition of - a compute-base tag).

by Thierry Carrez at December 18, 2014 01:15 PM

December 17, 2014

Stefano Maffulli

How to get better at IRC-based meetings

Marvel’s Professor X sure knows how hard IRC meetings are

When people approach discussions via IRC for the first time, most of the time they get overwhelmed. Since IRC is the most common way to hold meetings in OpenStack, all new contributors should be encouraged to start practicing written communication on IRC immediately.

I still remember the first time I went on IRC: I felt like I was in a room where everyone was speaking loudly and I could hear every single person in the room. It wasn’t like being in a loud crowd where all you hear is noise and your brain is perfectly trained/evolved to filter it out: I felt like comic book characters may feel when they actually hear and comprehend everything everybody is saying. It was scary. It took me a while to learn how to filter the written noise and follow multiple threads in real time, writing an answer to someone while reading a question from someone else.

In the past few days I’ve been working closely with people in teams that had no practice holding conversations on IRC. In particular, some of them had no experience holding meetings on IRC. I often hear the complaint that written chat meetings are too slow, but I believe it’s the opposite: they’re too fast, and they require skills and training that may take months of daily practice to master. IRC has strong advantages though: the speed at which discussions can progress with 10-15 people or more is sweet, so sweet that when I am stuck in audio-video conferences, everything seems too slow and distractions abound.

To all people new to IRC, I suggest you practice and practice and practice some more. It’s really not that hard to join an IRC channel and just spend a few minutes a day reading the history and trying to make sense of it. It’s a habit: if your job description includes writing specs for OpenStack, then an IRC client should start every time you log in to your machine. Even if you think you have nothing to say: get started when you don’t need to say anything, just start to get your brain trained for IRC super-hero powers, like those of telepath Professor X. Learn to filter the background, follow multiple threads if needed, improve your typing speed. The OpenStack wiki page has some basic pointers to get started. If you have suggestions, feel free to edit that page.

by stefano at December 17, 2014 12:35 AM

December 16, 2014

Red Hat Stack

IBM and Red Hat Join Forces to Power Enterprise Virtualization

Adam Jollans is the Program Director  for Cross-IBM Linux and Open Virtualization Strategy
IBM Systems & Technology Group

IBM and Red Hat have been teaming up for years. Today, Red Hat and IBM are announcing a new collaboration to bring Red Hat Enterprise Virtualization to IBM’s next-generation Power Systems through Red Hat Enterprise Virtualization for Power.

A little more than a year ago, IBM announced a commitment to invest $1 billion in new Linux and open source technologies for Power Systems. IBM has delivered on that commitment with the next-generation Power Systems servers incorporating the POWER8 processor which is available for license and open for development through the OpenPOWER Foundation. Designed for Big Data, the new Power Systems can move data around very efficiently and cost-effectively. POWER8’s symmetric multi-threading provides up to 8 threads per core, enabling workloads to exploit the hardware for the highest level of performance.

Red Hat Enterprise Virtualization combines hypervisor technology with a centralized management platform for enterprise virtualization. Red Hat Enterprise Virtualization Hypervisor, built on the KVM hypervisor, inherits the performance, scalability, and ecosystem of the Red Hat Enterprise Linux kernel for virtualization. As a result, your virtual machines are powered by the same high-performance kernel that supports your most challenging Linux workloads.

Enterprise organizations seeking to optimize their virtualization environments must be able to centrally manage the full range of virtual machine dependencies, including storage and networking. Red Hat Enterprise Virtualization includes Red Hat Enterprise Virtualization Manager, a centralized management console that can manage hundreds of hosts and tens of thousands of virtual machines.  Through the management interface, Red Hat Enterprise Virtualization provides the flexibility of managing a mixture of x86 and Power Systems. While the Red Hat Enterprise Virtualization management server runs on an x86 architecture platform, it can now manage clusters of Power architecture hosts, as well as separate clusters of x86 architecture hosts – all from a single pane of glass.

In addition to the benefits of centralized management of the virtualized infrastructure, the availability of Red Hat Enterprise Virtualization for Power provides simplified access to some of the more advanced functionality of KVM:

  1. High Availability – If one host were to go down or lose the ability to run virtual machines, Red Hat Enterprise Virtualization quickly migrates those virtual machines to other hosts within the environment to minimize downtime.
  2. Live Migration and Storage Live Migration – Red Hat Enterprise Virtualization can move a virtual machine from one host to another for preventive maintenance without downtime. This means end users continue to enjoy access to their virtual machines if it is necessary to deploy patches or install updates to a host.
  3. Intelligent Load Balancing – Because of the shared nature of virtualization, users want to avoid one VM affecting the performance of another. In the event that one VM begins to impact the performance of another, Red Hat Enterprise Virtualization balances the workloads to mitigate the potential impact so that operations continue smoothly.
  4. Centralized Template Management – Red Hat Enterprise Virtualization provides the ability to build and manage templates for virtual machines and provision them to another host with a few mouse clicks, enhancing the ability to provision new virtual machines rapidly.
  5. Self-Service Portal for Quick Provisioning – Users who consume the virtual infrastructure services – particularly in a lab or test and development environment – need to be able to spin up virtual machines quickly. Red Hat Enterprise Virtualization’s full self-service portal allows these users to log in, provision their virtual machines, shut them down, and have control over the part of the environment that they have been allocated without having to go through IT staff requests for provisioning.

Together, the integration of IBM POWER8 – with its capabilities for high performance – and Red Hat Enterprise Virtualization’s enterprise virtualization and management features provide a strong combination – particularly for larger enterprise deployments and mission-critical applications.

The Value of Red Hat Enterprise Virtualization and Power Systems

IBM Linux on Power Systems customers that have not yet fully virtualized their infrastructure will now be able to deploy Red Hat Enterprise Virtualization and easily leverage the opportunities that virtualization provides. And, for users that move into open applications and frameworks with Red Hat Enterprise Linux, this provides a great opportunity to have access to Power and the flexibility that the next-generation POWER8 architecture provides. All software for Red Hat Enterprise Virtualization for Power will be provided through Red Hat, with tier 1 through tier 3 support available.

In addition, Red Hat has just released the beta of Red Hat Enterprise Linux 7.1 which includes a version for Power processors running in little endian mode. This enables users and ISVs to easily move Linux on x86 applications to Linux on Power Systems with minimal or no porting, and is just another example of Red Hat and IBM working closely to provide better features and functionality to our joint customers. To learn more or sign up for the beta, visit https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/.

And stay tuned for more in 2015. We expect the adoption of Red Hat Enterprise Virtualization for Power as a  supported platform to evolve and expand over time. We see this as only the beginning of a larger Red Hat collaboration with IBM around POWER8.

 

by adamjollans at December 16, 2014 05:47 PM

Percona

OpenStack Live tutorials & sessions to bring OpenStack users up to speed

I attended the OpenStack Paris summit last month (Percona had a booth there). It was my first opportunity to meet face-to-face with this thriving community of developers and users. I’m proud that Percona is part of this open source family and look forward to reconnecting with many of the developers and users I met in Paris – as well as meeting new faces – at OpenStack Live in Silicon Valley April 13-14.

OpenStack Live 2015: Sneak peek of the April conference

OpenStack summits, generally held twice a year, are the place where (for the most part) developers meet and design “in the open,” as the OpenStack organization says. OpenStack Live 2015, held in parallel with the annual Percona Live MySQL Conference and Expo, will be a unique opportunity for users and enthusiasts to learn from leading OpenStack experts in the field about top cloud strategies, improving overall cloud performance, and operational best practices for managing and optimizing OpenStack and its MySQL database core.

OpenStack Live will also provide some serious classroom-style learning. Percona announced the OpenStack Live tutorial sessions a couple days ago. Most sessions are three hours long and, because they really are “hands-on,” require that you bring your laptop – and a power cord (not to be confused with a “power chord,” though those are also welcome).

Let’s take a closer look at the OpenStack Live tutorial sessions.


“Barbican: Securing Your Secrets.” Join Rackspace gurus Douglas Mendizábal, Chelsea Winfree and Steve Heyman on a tour through the magical world of Barbican (yes, they are dedicated members of the Barbican project).

Don’t be intimidated if you don’t have any previous experience with Barbican (and if you’ve never heard of it, all the more reason to attend!). A basic understanding of security components (such as keys and certificates) and a basic understanding of ReST is helpful, but not required.

By the end of the class you will know (a minimal ReST example follows this list):
1)   Importance of secret storage
2)   How to store & retrieve secrets with Barbican
3)   How to submit an order with Barbican
4)   How to create a container
5)   Use cases for Barbican / Examples
6)   The future of Barbican – Ordering SSL Certs
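
To give a flavor of what storing and retrieving a secret looks like, here is a minimal sketch against Barbican's ReST API using curl. The endpoint, token and payload are placeholders, and the tutorial itself may use different tooling or client libraries.

# Hypothetical endpoint and token; Barbican listens on port 9311 by default.
curl -X POST http://barbican.example.com:9311/v1/secrets \
     -H "X-Auth-Token: $TOKEN" \
     -H "Content-Type: application/json" \
     -d '{"name": "db-password", "payload": "s3cr3t", "payload_content_type": "text/plain"}'
# The response contains a secret_ref URL; fetch the payload back with:
curl -H "X-Auth-Token: $TOKEN" -H "Accept: text/plain" \
     http://barbican.example.com:9311/v1/secrets/<secret-uuid>/payload
# (Older Barbican releases return the payload from the secret URI itself, using the same Accept header.)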


“Deploying, Configuring and Operating OpenStack Trove.” As the title suggests, these three hours focus squarely on Trove. The tutorial – led by Tesora founder & CTO Amrith Kumar, along with Doug Shelley, the company’s vice president of product development – will begin with a quick overview of OpenStack and the various services.

If you attend this tutorial you’ll actually deploy your own OpenStack environment – and create and manage a Nova (compute) instance using a command line and a graphical user interface (Horizon). And the fun continues! You’ll then install and configure Trove, and create and manage a single MySQL instance. Finally, pupils will create and operate a simple replicated MySQL instance pair and ensure that data is being properly replicated from master to slave.
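
As a rough illustration of the command-line half of that exercise, the sequence looks something like the following with the python-troveclient CLI. This is a hedged sketch, not the tutorial's actual lab script: the flavor ID, volume size and instance names are placeholders, and exact flags vary between releases.

# Assumes your OpenStack credentials are already sourced (e.g. from an openrc file).
trove create mysql-1 2 --size 5 --datastore mysql --datastore_version 5.6
trove list                                            # wait for the instance to go ACTIVE
trove create mysql-2 2 --size 5 --replica_of mysql-1  # Juno-era replication: build a replica
trove show mysql-2                                    # confirm it is attached as a replica of mysql-1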


“Essential DevStack.” DevStack is an opinionated script to quickly create an OpenStack development environment. It can also be used to demonstrate starting/running OpenStack services and provide examples of using them from a command line. Much of the power of DevStack lies in a few small tricks that, once understood, can hugely improve contribution effectiveness, quality and turnaround time. This three-hour tutorial will be led by Red Hat senior software engineer Swapnil Kulkarni.
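
If you want a head start before the tutorial, a minimal DevStack run looks roughly like this. Treat it as a sketch: the passwords are placeholders and DevStack's defaults and repository location change over time.

# Sketch only; run on a disposable VM, not a production machine.
git clone https://git.openstack.org/openstack-dev/devstack
cd devstack
cat > local.conf <<'EOF'
[[local|localrc]]
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
EOF
./stack.sh    # installs and starts the OpenStack services on this machine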


OpenStack Networking Introduction,” with PLUMgrid’s Valentina Alaria and Brendan Howes. Buckle your seat belts! Designed for IT professionals looking to expand their OpenStack “networking” (no, not the LinkedIn sort of networking) knowledge, OpenStack Networking Fundamentals will be a comprehensive and fast-paced course.

This half-day training provides an overview of OpenStack, its components and then dives deep into OpenStack Networking – the features and plugin model and its role in building an OpenStack Cloud. The training is concluded with a hands-on lab to bring all the concepts together.

OpenStack Networking (Neutron) Introduction [1 hour]
– Goal of Neutron
– Architecture of Neutron
– Plugin Architecture
– Use cases for Neutron
– What’s new in Juno & what’s planned for Kilo

OpenStack Networking (Neutron) Advanced [1 hour]
– Interaction with other OpenStack components (Compute & Storage)
– Designing Neutron for HA
– Installing Neutron
– Troubleshooting Neutron

Hands-on Lab [1 hour]
– Creation of tenant networks
– Configuration of external connectivity
– Advanced Features Configuration (see the command sketch below)
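
For context, the hands-on lab items above map roughly onto commands like these, sketched with the Juno-era neutron CLI. The network names and CIDRs are placeholders and the lab's actual exercises may differ.

# Tenant network plus external connectivity, assuming credentials are already sourced.
neutron net-create demo-net
neutron subnet-create demo-net 10.0.0.0/24 --name demo-subnet
neutron net-create ext-net --router:external True
neutron subnet-create ext-net 203.0.113.0/24 --name ext-subnet --disable-dhcp
neutron router-create demo-router
neutron router-interface-add demo-router demo-subnet
neutron router-gateway-set demo-router ext-net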


Percona’s director of conferences, Kortney Runyan, offered a sneak peek at the OpenStack sessions last week. Attendees of the Percona Live MySQL Conference and Expo 2015 (April 13-16, 2015) with full-pass access are also welcome to attend OpenStack Live sessions. The two conferences are running in parallel, which is very exciting since there will be crossover opportunities between them.

I hope to see you next April! And be sure to take advantage of Early Bird pricing for OpenStack Live (register here).
And if you are an organizer of an OpenStack (or MySQL) Meetup and need some financial help, Percona is happy to chip in as a sponsor. Just let me know and I’ll work with you to set that up! You can drop me a note in the comments and I’ll contact you offline.


The post OpenStack Live tutorials & sessions to bring OpenStack users up to speed appeared first on MySQL Performance Blog.

by Tom Diederich at December 16, 2014 05:28 PM

The Official Rackspace Blog » OpenStack

OpenPOWER: Opening The Stack, All The Way Down

Rackspace has publicly announced our affiliation with the OpenPOWER Foundation, and we are now an official member. OpenPOWER is a community dedicated to opening access to the lowest-level parts of servers: chips, buses, boards, firmware, and so forth. We anticipate that this movement will bring increased freedom and value to two other communities that we participate in: OpenStack and the Open Compute Project. We think that by working within this new community, Rackspace can deliver improved performance, value, and features for our customers.

That’s the news in a nutshell, but perhaps you’d like a little more detail. Why now? Why OpenPOWER? And what’s next? Those are good questions. The answers may surprise you.

Why now?

Rackspace has actually been involved with OpenPOWER for more than 18 months. We worked amongst the founders for months before the foundation launched, and remained engaged after launch, but chose to remain quiet. We’ve been evaluating the technology and the movement for quite some time. Both the current results and future potential are so promising that we are preparing to build an OpenPOWER-based, Open Compute platform. And it will run OpenStack services.

We intend to work alongside our partners in all three communities to achieve this goal. Today, we publicly communicate that intent.

Why OpenPOWER?

In the world of servers, it’s getting harder and more costly to deliver the generational performance and efficiency gains that we used to take for granted. There are increasing limitations in both the basic materials we use, and the way we design and integrate our systems. In hindsight, one could say that Rackspace started to address these issues by starting at the top of the stack, and moving down; first with OpenStack, then with Open Compute. As we were building OnMetal, our single-tenant, bare-metal Cloud Servers, we began to delve into firmware for Basic Input/Output System (BIOS) and systems management, a still-closed frontier. Moving forward, as we consider the performance levels we want to provide customers with future cloud offerings, we’ll need to start moving into chips, memory, and storage.

And we don’t want to do development in this realm alone. We want the open source community to be involved. OpenStack developers may not think much about these things today, but they will in the future. Linux and Open Compute developers have been encountering these challenges for a while.

The OpenPOWER community has grown steadily over the last year, along with supported applications, operating systems and peripherals. Joining OpenPOWER puts us in great company. Currently, 80 organizations are represented, including Google, IBM, Canonical, Nvidia, Samsung, and Mellanox. Key OpenPOWER members have made meaningful contributions to OpenStack, and we hope to help OpenPOWER build fruitful partnerships within the Open Compute Project. OpenPOWER is also partnered with the Linux Foundation.

OpenPOWER brings an increasingly open firmware stack, and deeper access to chips, memory, and storage than anywhere else. This is unprecedented, and it invites the open source community to participate at all layers.

What’s next?

In the coming months, we’ll engage with our partners in the community to design and build an OpenPOWER-based, Open Compute platform. We want to see that platform contributed to Open Compute, complete with a highly-functional open source firmware set.

We aim to put that platform into production at Rackspace, integrated with OpenStack cloud services.

The way we use servers is already changing. We’re already seeing the lines beginning to blur between conventional processors, memory, and storage. End users will continue to ask for more, and we need the software and solutions to enable them. We need solutions that sweep the whole stack, from hardware, to firmware, to operating systems, to applications. And we want them to be open. OpenStack and Open Compute have an opportunity to get involved early, and drive this change.

It’s our vision that OpenPOWER enables OpenStack and Open Compute developers to work all the way down the stack. Where Open Compute opened and revolutionized data center hardware and OpenStack opened up cloud software and infrastructure-as-a-service, OpenPOWER is doing the same for the last black boxes in our servers: chips, buses, and firmware.

We want our systems open, all the way down. This is a big step in that direction.

by Aaron Sullivan at December 16, 2014 05:00 PM

Tesora Corp

OpenStack Trove Today and Tomorrow: An Interview with Doug Shelley, Tesora VP of R&D

We recently connected with Doug Shelley, our VP of R&D, to get his point of view on the world of OpenStack Trove.  Over the past year, his team has directly contributed to the Trove project. This gives them a unique perspective on the successes and challenges of bringing Database as a Service (DBaaS) to the OpenStack party.

What makes you excited about contributing to the OpenStack Trove project?

Doug: At Tesora, we’ve been thinking about and working on database-as-a-service technologies for several years and being able to apply this experience to benefit the OpenStack Trove project is very rewarding. Combining our deep database experience with the experiences of the other community members is generating an offering within the OpenStack eco-system that is world class.

For example, we worked with many of the members of the community to develop the MySQL replication feature during the Juno cycle. All involved contributed their ideas and experience to deliver a feature that is robust and production-ready.

What do you see as the most significant events in the Trove Project over the past year?

Doug: I would highlight three things:

  1. The OpenStack Icehouse release represented Trove’s official coming out as an Integrated project – this was very significant for the project. This blessing from the Technical Committee was an important signal to customers that the project is now ready to use within their OpenStack deployments.
  2. Starting in February 2014, the community initiated mid-cycle meetups – these between-Summit face-to-face meetings are a great opportunity for community members to get together to discuss priorities, challenges and the project vision and roadmap.
  3. Trove Day in Cambridge in August 2014 – This mini-conference dedicated to Trove and OpenStack-related projects was very well attended and gave operators and contributors a chance to come together and focus on DBaaS needs

What do you see as risks to the widespread adoption of Trove in the OpenStack community?

Doug: The biggest risk I see is that Trove gets viewed merely as a way to provision databases and therefore could be replaced with Orchestration or some type of PaaS solution. The Trove community needs to continue to reinforce the vision and current functionality. Trove is much more than provisioning; it encompasses aspects of life-cycle management and over time will continue to evolve by adding more database management and monitoring capabilities.

We’re now two releases into Trove and while there has been some real progress, there is also much work to do. What do you see as important things the Trove community needs to achieve in the Kilo and “L” releases?

Doug: Continuing to evolve the capabilities from a “Dev/Test” orientation towards Production use-cases. As the leading contributor to the Trove project, we look forward to helping the community address items such as these:

  • Providing for scalability and availability is critical for production database usage, so adding clustering and replication support beyond MongoDB and MySQL is needed.
  • Developing a model and process for supporting commercial databases. Products such as Oracle, DB2 and the Enterprise versions of Mongo and DataStax are widely used in production. Currently there are challenges around adding support for these products due to commercial licensing and the impact on community-based installation and testing.
  • Developing and maintaining a solid and comprehensive test suite in Tempest. Production use cases require a quality and stability that needs to be “tested in”. While the current “int-tests” provide a reasonable amount of coverage, OpenStack developers and operators have standardized on Tempest so the Trove community needs to continue to move towards that.

As a Canadian, what does it mean to you to have the next OpenStack Summit in Vancouver?

Doug: It is certainly an honour (note the proper spelling of that word) – given that the Summit has only been outside the US twice. It is a great city with lots to offer – including many venues for awesome evening social events. Hopefully, it will be a chance for more locals to attend and add more “Canadian flair” to the event and to the OpenStack community, eh. Plan to book your igloo soon so you’re not disappointed…

 

 

by 1 at December 16, 2014 04:17 PM

December 15, 2014

Tesora Corp

Database as a Service Offers a Golden Opportunity for Cloud Providers

The cloud industry is in a very peculiar position when it comes to the enterprise.

The dominant players like Amazon, Google and Dropbox have established positions as providers of bulk, commodity resources, namely storage, compute and increasingly, networking. They have done this primarily by appealing directly to users in a never-ending race to the bottom in terms of cost. This has made the acquisition of things like storage cheap and convenient for the user, but it leaves the enterprise in a difficult position when it comes to managing infrastructure. And if left unchecked, it could very well zero out the profit margins of the top providers, or even send them into the red.

The obvious win-win-win solution for users, the enterprise and cloud provider alike, is to tailor offerings more toward  organizations than individuals. Of course, the enterprise is interested in more than basic resources, preferring instead to craft cloud-based, application-focused data environments that can be customized to their specific needs. And it is here that even small providers can carve a wedge into the dominant positions of the top players.

One way to do this is through cloud platforms like OpenStack that can be tailored for key applications, such as database support. Database as a Service (DBaaS) is not a new technology, but it is one of the key elements in what some are calling the new app-centric economy. As mobile applications in particular become more sophisticated, they require database support that is both more flexible and scalable than can be mounted in a traditional data center. DBaaS meets these requirements by virtue of its rapid deployment characteristics and ease of management, plus the ability to handle multiple data streams and provide instances that are optimized for particular use cases.

It is important to realize, however, that many organizations are converting legacy data systems into internal, private clouds, some of which will be used for database support. But as research house Celent notes, this is not without its challenges. While large enterprises in particular have the ability to acquire cloud technology, gathering the skillsets to effectively manage them is another matter, and the same issues like data migration and security that are present on the public cloud exist in private as well.

Small and medium-sized cloud providers, therefore, are in a unique position to steal the march on the top-tier firms by offering advanced, customizable services for the enterprise. DBaaS on OpenStack is one such opportunity, but it should be accompanied by a wide range of Platform- and Software-as-a-Service offerings as well.

But they’ll have to move quickly: the Amazons and Googles of the world are eager to shore up the professional market as well, accompanied by the likes of Microsoft and IBM, who are no strangers to the particular needs of the enterprise.

And once this opportunity is past, there won’t be another one like it for a generation.

by 1521 at December 15, 2014 03:35 PM

Opensource.com

Best open source tutorials of 2014

A look back at the best open source tutorials and how-tos on Opensource.com in 2014.

by ScottNesbitt at December 15, 2014 12:30 PM

10,000 OpenStack questions, debunking myths, and more

The Opensource.com weekly look at what is happening in the OpenStack community and the open source cloud at large.

by Jason Baker at December 15, 2014 08:00 AM

December 14, 2014

Michael Still

How are we going with Nova Kilo specs after our review day?

Time for another summary I think, because announcing the review day seems to have caused a rush of new specs to be filed (which wasn't really my intention, but hey). We did approve a fair few specs on the review day, so I think overall it was a success. Here's an updated summary of the state of play:



API



API (EC2)

  • Expand support for volume filtering in the EC2 API: review 104450.
  • Implement tags for volumes and snapshots with the EC2 API: review 126553 (fast tracked, approved).


Administrative

  • Actively hunt for orphan instances and remove them: review 137996 (abandoned); review 138627.
  • Check that a service isn't running before deleting it: review 131633.
  • Enable the nova metadata cache to be a shared resource to improve the hit rate: review 126705 (abandoned).
  • Implement a daemon version of rootwrap: review 105404.
  • Log request id mappings: review 132819 (fast tracked).
  • Monitor the health of hypervisor hosts: review 137768.
  • Remove the assumption that there is a single endpoint for services that nova talks to: review 132623.


Block Storage

  • Allow direct access to LVM volumes if supported by Cinder: review 127318.
  • Cache data from volumes on local disk: review 138292 (abandoned); review 138619.
  • Enhance iSCSI volume multipath support: review 134299.
  • Failover to alternative iSCSI portals on login failure: review 137468.
  • Give additional info in BDM when source type is "blank": review 140133.
  • Implement support for a DRBD driver for Cinder block device access: review 134153.
  • Refactor ISCSIDriver to support other iSCSI transports besides TCP: review 130721 (approved).
  • StorPool volume attachment support: review 115716.
  • Support Cinder Volume Multi-attach: review 139580 (approved).
  • Support iSCSI live migration for different iSCSI target: review 132323 (approved).


Cells



Containers Service



Database



Hypervisor: Docker



Hypervisor: FreeBSD

  • Implement support for FreeBSD networking in nova-network: review 127827.


Hypervisor: Hyper-V



Hypervisor: Ironic



Hypervisor: VMWare

  • Add ephemeral disk support to the VMware driver: review 126527 (fast tracked, approved).
  • Add support for the HTML5 console: review 127283.
  • Allow Nova to access a VMWare image store over NFS: review 126866.
  • Enable administrators and tenants to take advantage of backend storage policies: review 126547 (fast tracked, approved).
  • Enable the mapping of raw cinder devices to instances: review 128697.
  • Implement vSAN support: review 128600 (fast tracked, approved).
  • Support multiple disks inside a single OVA file: review 128691.
  • Support the OVA image format: review 127054 (fast tracked, approved).


Hypervisor: libvirt



Instance features



Internal

  • A lock-free quota implementation: review 135296.
  • Automate the documentation of the virtual machine state transition graph: review 94835.
  • Fake Libvirt driver for simulating HW testing: review 139927 (abandoned).
  • Flatten Aggregate Metadata in the DB: review 134573 (abandoned).
  • Flatten Instance Metadata in the DB: review 134945 (abandoned).
  • Implement a new code coverage API extension: review 130855.
  • Move flavor data out of the system_metadata table in the SQL database: review 126620 (approved).
  • Move to polling for cinder operations: review 135367.
  • PCI test cases for third party CI: review 141270.
  • Transition Nova to using the Glance v2 API: review 84887.
  • Transition to using glanceclient instead of our own home grown wrapper: review 133485 (approved).


Internationalization

  • Enable lazy translations of strings: review 126717 (fast tracked).


Networking



Performance

  • Dynamically alter the interval nova polls components at based on load and expected time for an operation to complete: review 122705.


Scheduler

  • A nested quota driver API: review 129420.
  • Add a filter to take into account hypervisor type and version when scheduling: review 137714.
  • Add an IOPS weigher: review 127123 (approved, implemented); review 132614.
  • Add instance count on the hypervisor as a weight: review 127871 (abandoned).
  • Allow extra spec to match all values in a list by adding the ALL-IN operator: review 138698 (fast tracked, approved).
  • Allow limiting the flavors that can be scheduled on certain host aggregates: review 122530 (abandoned).
  • Allow the remove of servers from server groups: review 136487.
  • Convert get_available_resources to use an object instead of dict: review 133728 (abandoned).
  • Convert the resource tracker to objects: review 128964 (fast tracked, approved).
  • Create an object model to represent a request to boot an instance: review 127610 (approved).
  • Decouple services and compute nodes in the SQL database: review 126895 (approved).
  • Enable adding new scheduler hints to already booted instances: review 134746.
  • Fix the race conditions when migration with server-group: review 135527 (abandoned).
  • Implement resource objects in the resource tracker: review 127609.
  • Improve the ComputeCapabilities filter: review 133534.
  • Isolate Scheduler DB for Filters: review 138444.
  • Isolate the scheduler's use of the Nova SQL database: review 89893.
  • Let schedulers reuse filter and weigher objects: review 134506 (abandoned).
  • Move select_destinations() to using a request object: review 127612 (approved).
  • Persist scheduler hints: review 88983.
  • Refactor allocate_for_instance: review 141129.
  • Stop direct lookup for host aggregates in the Nova database: review 132065 (abandoned).
  • Stop direct lookup for instance groups in the Nova database: review 131553 (abandoned).
  • Support scheduling based on more image properties: review 138937.
  • Trusted computing support: review 133106.


Scheduling



Security

  • Make key manager interface interoperable with Barbican: review 140144 (fast tracked, approved).
  • Provide a reference implementation for console proxies that uses TLS: review 126958 (fast tracked, approved).
  • Strongly validate the tenant and user for quota consuming requests with keystone: review 92507.


Service Groups



Scheduler

  • Add soft affinity support for server group: review 140017 (approved).


Tags for this post: openstack kilo blueprint spec nova
Related posts: Specs for Kilo; One week of Nova Kilo specifications; Compute Kilo specs are open; Specs for Kilo; Juno nova mid-cycle meetup summary: slots; Juno nova mid-cycle meetup summary: nova-network to Neutron migration


December 14, 2014 11:15 PM

Soft deleting instances and the reclaim_instance_interval in Nova

I got asked the other day how the reclaim_instance_interval in Nova works, so I thought I'd write it up here in case it's useful to other people.

First off, there is a periodic task run by the nova-compute process (or the compute manager, as a developer would know it), which runs every reclaim_instance_interval seconds. It looks for instances in the SOFT_DELETED state which don't have any tasks running at the moment for the hypervisor node that nova-compute is running on.

For each instance it finds, it checks if the instance has been soft deleted for at least reclaim_instance_interval seconds. From my reading of the code, this has the side effect that an instance needs to be deleted for at least reclaim_instance_interval seconds before it will be removed from disk, but that the instance might be up to approximately twice that age (if it was deleted just as the periodic task ran, it would be skipped on that run and therefore not be removed for almost two intervals).

Once these conditions are met, the instance is deleted from disk.
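For reference, the interval is just a nova.conf option on the compute nodes; a minimal sketch, with an arbitrary one-hour window, looks something like this:

[DEFAULT]
# A value greater than zero turns deletes into soft deletes; the nova-compute
# periodic task then reclaims an instance from disk once it has been
# SOFT_DELETED for at least this many seconds.
reclaim_instance_interval = 3600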

Tags for this post: openstack nova instance delete
Related posts: One week of Nova Kilo specifications; Specs for Kilo; Juno nova mid-cycle meetup summary: nova-network to Neutron migration; Juno Nova PTL Candidacy; Juno nova mid-cycle meetup summary: scheduler; Juno nova mid-cycle meetup summary: ironic


December 14, 2014 09:51 PM

IBM OpenStack Team

A guide to testing in OpenStack

As you begin creating patches for OpenStack, you have two choices: you can run some unit tests yourself and try to figure out your errors before taking them to the community, or you can write the code and then just throw it out there, hoping that reviewers and Jenkins will catch all your bugs. I highly recommend the first option. The second option will only make community members annoyed with you because they have to read your buggy code!

So, where should you start when running tests?

The OpenStack code base is written in Python. You could, of course, write your own automation to kick off tests, but let’s make it a little easier. There are two types of OpenStack tests: the OpenStack integration test suite, called Tempest, and unit tests in each particular project (Nova, Neutron, Swift and so forth).

Before you start, you’ll want to set up a virtual environment. This will isolate your working environment. If you’re using DevStack, the good news is this sets things up for you. If not, you might have to do more on your own. See this doc for a good guide.

Now you’re ready to run your tests. It’s a good idea to run the project’s unit tests first, especially the ones that pertain to your code. Then you can run some of the Tempest tests that pertain to your patch. The next two sections are various ways I’ve found to run these types of tests. At the end of the post I’ve put some tips and tricks I’ve learned and links to more information about testing within projects.

Running unit tests

Tox
First, install tox:
pip install tox

To configure, you’ll need to set up a tox.ini file. For most OpenStack projects, this is already created. It covers what the available environments are and what commands to use to actually run the tests.
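If you're curious what that looks like, here is a stripped-down sketch loosely modelled on the tox.ini files OpenStack projects carry (illustrative, not any particular project's actual file):

[tox]
envlist = py27,pep8

[testenv]
deps = -r{toxinidir}/requirements.txt
       -r{toxinidir}/test-requirements.txt
commands = python setup.py testr --slowest --testr-args='{posargs}'

[testenv:pep8]
commands = flake8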

Before tox would run in my environment (Ubuntu 14.04), I had to do:
sudo apt-get install libmysqlclient-dev
sudo apt-get install pkg-config
sudo apt-get install libvirt-dev

To run tests against the Python 2.7 level:
tox -epy27

You can run a single test or let tox run the full suite. At the end of the test, you’ll get a nice printout of what exactly was run. Note that you don’t have to specify the full file path.

(Screenshot: example of a tox test.)
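For example, a single test module can be selected by name, assuming the project's tox.ini passes positional arguments through to testr (the module path here is illustrative):
tox -epy27 nova.tests.test_hooks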

Test Repository

Test Repository (testr) manages the running of your tests. Tox can also use it to run tests. Note that before you run testr for the first time, you’ll need to run tox to initialize the virtual environment properly.

To configure, you’ll need to first set up your virtual environment:
source .tox/py27/bin/activate

You’ll then notice that your command line prompt changes:
ubuntu@emily:/opt/stack/nova$ source .tox/py27/bin/activate
(py27)ubuntu@emily:/opt/stack/nova$

You’ll also want to initialize testr to set up the .testrepository directory:
testr init

To run all tests:
testr run --parallel

To run a single test file:
(Screenshot: example of a testr test.)
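For example, using a dotted module path rather than a file path (module name illustrative):
testr run --parallel nova.tests.test_hooks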

Note that you should use “.” instead of “/” to specify the file path. You also don’t add the “.py” at the end of the file name.

The detailed runlog is available under the .testrepository directory and the runlog ID.
You can also use regular expressions to choose a subset of tests to run:
testr run --parallel test_name_regex

If you use testr to run a bunch of tests, you can then only re-run the tests that failed:
testr run --failing

When done, you can deactivate your virtual environment:
deactivate

Testtools

Testtools is a set of extensions to the unit testing framework, which you can use to run tests.

It’s especially useful when you’re using a debugger.

To configure, set up your virtual environment:
source .tox/py27/bin/activate

To run:
(Screenshot: example of a testtools test.)
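For example (module name illustrative):
python -m testtools.run nova.tests.test_hooks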

When done, you can deactivate your virtual environment:
deactivate

run_tests.sh

This is a test running script available in some projects, including Tempest, Horizon, Nova and Cinder. It sets up the virtual environment and runs the test for you.

To configure, you should remove the .venv, .testrepository and .tox folders, if you’ve run other tests too.

The first time through, it will do a lot of installation of dependencies and setup; subsequent runs will go faster.

To run a test file:

(Screenshot: example of a run_tests test.)
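For example, passing just the test module name as a filter (name illustrative):
./run_tests.sh test_hooks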

Note that, like tox, you can just specify the file name instead of the whole file path.

Running Tempest tests

The main functional and integration test suite for OpenStack is the Tempest project. It offers you a few more options for running tests.

run_tempest.sh

This script is like run_tests.sh, but for the tests in the tempest repository. Note that if you want to run tests that test Tempest itself, there is a run_tests.sh available too.

To run a test file:

(Screenshot: example of a run_tempest test.)
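For example, selecting one of Tempest's API test packages (path illustrative):
./run_tempest.sh tempest.api.compute.servers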

pretty_tox.sh

This runs tox (and under the covers, testr) but gives more information about how long the tests took to run and how they were distributed across workers.

To run a test file:

(Screenshot: example of a pretty_tox test.)

pretty_tox_serial.sh

This is similar to pretty_tox, but will always run on one worker, so the tests are run serially.

To run a test file:
(Screenshot: example of a pretty_tox_serial test.)
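Both wrapper scripts take a testr selection expression, just like testr itself; for example, from the root of a Tempest checkout (paths illustrative):
./tools/pretty_tox.sh tempest.api.compute.servers
./tools/pretty_tox_serial.sh tempest.api.compute.servers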

Tips and tricks for running tests
I like to start by running a style test. This ensures that my indents are correct, my lines aren't too long, and that there aren't any glaring errors, like a missing parenthesis or quote. An easy way to run these is:
tox -epep8

It’s sometimes useful to step through a test line-by-line, especially when the failing output is vague, or so long you can’t find the error! For this you can use a debugger such as pdb.

To set up your test file, put this line right before what you want to trace:
from pdb import set_trace; set_trace()

Then call your test using the testtools command above in your virtual environment and use the pdb commands to step through it.

Always be a little careful when running debuggers in time-sensitive code; they can cause timeout errors that weren't there before.

One other debugger to note is pudb. It is a more GUI-ish (that's a technical term) debugger. To install:
sudo apt-get install python-pudb

Then you can call your test (outside of a virtual environment this time) and use the pdb commands to step through it.
python -m pudb.run nova/tests/test_hooks.py

 

I’d love to hear about your progress, or any tips or tricks that you’ve come up with on your own. Leave a comment below to start the conversation, and happy testing!

Resources

Testing in Python

VirtualEnvironments

OpenStack testing wiki

Using testr with OpenStack

Questions about using run_tests in Nova

The pep8 style guide

Links to test instructions for specific projects

Testing Nova

Testing Keystone

Testing Horizon

Testing Cinder

Testing Ceilometer

Testing Trove

Testing Sahara

Testing Ironic

Testing Marconi

Running Tempest tests

The post A guide to testing in OpenStack appeared first on Thoughts on Cloud.

by Emily Hugenbruch at December 14, 2014 01:00 PM

December 13, 2014

OpenStack Blog

OpenStack Community Weekly Newsletter (Dec 6 – 13)

OpenStack DefCore Enters Execution Phase. Help!

OpenStack DefCore Committee has established the principles and first artifacts required for vendors using the OpenStack trademark. Over the next release cycle, we will be applying these to the Icehouse and Juno releases. Rob will be doing two sessions about DefCore next week (will be recorded): Tues Dec 16 at 9:45 am PST- OpenStack Podcast #14 with Jeff Dickey and Thurs Dec 18 at 9:00 am PST – Online Meetup about DefCore with Rafael Knuth.

Keeping up to date with OpenStack Blueprints

OpenStack is a living product – and because it is community driven – changes are being proposed almost constantly. The specifications approved in each project are published on http://specs.openstack.org as html pages and RSS feeds. Maish Saidel-Keesing compiled an OPML file with all the current projects that you can add to your favorite RSS reader.

Korea User Group got “The community of the Year” award

Great news for the whole OpenStack community: OpenStack Korea User Group received “The community of the Year” award from Korea Open Source Software Association. Congratulations to all for the amazing work you’ve been doing all these years. The Korean user group doesn’t rest on their laurels and they’re busy organizing the OpenStack Day Korea 2015: check the sponsorship opportunities.

Ask OpenStack passes 10,000 questions

Recently, our Q&A site, Ask OpenStack passed an important milestone – more than 10,000 questions. Since it was launched by community manager Stefano Maffulli with the help of our tame AskBot developer Evgeny Fadeev early last year, we’ve had great support from our entire community – users and vendor support staff alike getting on there and helping each other out. Everybody can give answers: read how to give answers and head out to answer questions on Ask OpenStack.

Relevant Conversations

Deadlines and Development Priorities

Security Advisories and Notices

Tips ‘n Tricks

Reports from Previous Events

Upcoming Events

Other News

Got Answers?

Ask OpenStack is the go-to destination for OpenStack users. Interesting questions waiting for answers:

Welcome New Reviewers and Developers

Chris Gacsal Ankur Rishi
Babu Shanmugam Alger Wang
Abhishek G M Tom Pollard
Bogun Dmitriy Dan Ritchie
Cosmin Poieana
wangtianfa
Roman Dobosz
Park
Moshe Levi
Matt Borland
Toshiaki Higuchi
Dexter Fryar
Chris Gacsal
Hironori Shiina

OpenStack Reactions


waiting for reviews on my spec

The weekly newsletter is a way for the community to learn about all the various activities occurring on a weekly basis. If you would like to add content to a weekly update or have an idea about this newsletter, please leave a comment.

by Stefano Maffulli at December 13, 2014 12:17 AM

December 12, 2014

OpenStack Blog

Ask OpenStack passes 10,000 questions

Recently, our Q&A site, Ask OpenStack passed an important milestone – more than 10,000 questions. Since it was launched by community manager Stefano Maffulli with the help of our tame AskBot developer Evgeny Fadeev early last year, we’ve had great support from our entire community – users and vendor support staff alike getting on there and helping each other out.

At points like this one, it is wise to reflect on whether the endeavour has been successful. Anecdotally, we hear good things, and there seems to be a reasonably self-sustaining community around the site.

We have pulled the data to compare the proportion of questions asked to those answered, and compared it with other forums to gauge where we stand. Although there were fluctuations during some periods, the volunteer team was able to diagnose the problems and move forward.

During some months of this year – luckily unseen by most – we suffered from a serious spam problem. This is an example of something that can seriously turn members away, though the moderation tools and volunteer team were able to keep it in check. A potential problem as serious as a spam infestation is if users' questions don't get answered. So we did some quick math and googling to try and quantify how we're doing.

There are about 1,700 unanswered questions out of over 10,000 in total. Based on rough comparisons with similarly sized and typed communities, we're essentially in a best-of-field position. Now, let's look at whether we're keeping it there.

This time series has been tracking our unanswered questions for 494 days. We started with 150 unanswered questions, and on average over this period we have added 3.12 unanswered questions per day. This is already good, because it means that if we can push just 25 people to answer an extra question every week we'll start going in the other direction.

Now, to look at whether the trend is going up. Dividing the space into two 247 day segments, we can calculate the relative average over these two chunks of time (basically 2/3rds of a year each). In the first chunk, from Jul 2013 to Mar 2014 the average is 2.6 new unanswered questions per day. In the second chunk it’s an average of 3.5 new unanswered questions per day.

So, the trend is increasing by about 23% over this 2/3rd of a year chunk. If it increases to 4/day, which will take about a year at the current rate of increase, that means that rather than getting 25 new people involved, we’d need to have 30+. Still not so bad.

Of course, this analysis doesn’t take into account that some of the unanswered questions are just unanswerable – but in theory the moderation team should be weeding those out.

 


Final idea – which might put us firmly in the “best practice” bucket. What about a global 24 hour Ask OpenStack Question Answering day? Do you think such a thing would work? Everybody can give answers: read how to give answers and head out to answer questions on Ask OpenStack.

by OpenStack at December 12, 2014 07:29 PM

Mirantis

Tips on Overcoming Skeptics and Other Obstacles to Production OpenStack Clouds

Big changes don’t come easy, and as your organization embarks upon an OpenStack project, you will likely face a range of skeptics among internal stakeholders. You might find that other departments don’t think agile methodology is a fit for SLA-driven IT operations. Your organization’s operations team might be focused on hardware instead of user experience. Network engineers might worry that your new networking topologies conflict with existing security policies. Procurement teams accustomed to proprietary equipment might resist transitioning to commodity hardware. So what’s an OpenStack advocate to do?


After helping to build some of the largest OpenStack clouds in the world, Mirantis has encountered a fair share of skeptics at diverse companies, as well as a range of other obstacles that can potentially delay or interfere with an OpenStack project from reaching production.

For example, you may find that key incumbent stakeholders express doubts about your OpenStack project, in terms of its approach, execution, effect on operations, etc. I mean, you know this can work, but they’re not on board.

There’s no easy answer. Skeptical stakeholders and their differing expectations need to be addressed at multiple levels. Some crucial actions you should take include:

  • Secure executive level buy-in from Day 1 to make sure you have a strong advocate with enough weight to pull in detractors.
  • Agree on a Minimum Viable Product (MVP) that consists of a tangible application with the minimum features that can handle core workloads and create value; it should be the first major milestone of your OpenStack project.
  • Plan for gradual organizational changes, especially when working with other departments less experienced with agile methodologies.
  • Identify and communicate risks throughout the project, so relevant plans and/or processes can be adjusted as needed.
  • Execute in an agile fashion but speak waterfall (this is easier than you may think).

OpenStack changes frequently. Since no two environments and their workloads are perfectly alike, it is inevitable that compatibility issues and unexpected bugs arise after deploying the latest OpenStack version.

While it may not be a popular position with the hard-working community engineers building the Kilo release, in production deployments, it’s our experience that you are better off to favor stability over the cutting-edge — don’t feel compelled to use the newest OpenStack recipe and upgrade every six months. It’s better to work with what’s tried and true, otherwise you potentially risk encountering expensive downtime and sleepless nights from unforeseen issues. By the way, many of our customers choose Mirantis OpenStack for exactly this reason; our distro pushes the upstream trunk code through a pretty significant battery of use cases and configuration tests to help prevent surprises.

There’s an old saying that good judgement comes from experience, and experience comes from the other kind of judgement. Since we want you to be successful with OpenStack, we are truly eager to share our experience, and help you go into your OpenStack deployments with eyes open.

Of course, there’s lots more. I will be discussing these topics and more in “The OpenStack Production Checklist,” a live webcast on December 18 where my colleague, Nick Chase, and I will share tips and recommend best practices from the point of view of executives, project managers, and engineers alike.

Register for the Webcast

The post Tips on Overcoming Skeptics and Other Obstacles to Production OpenStack Clouds appeared first on Mirantis | The #1 Pure Play OpenStack Company.

by Michelle Yakura at December 12, 2014 06:49 PM

Tesora Corp

Short Stack: OpenStack gains traction, unique opportunities, Percona announces initial speakers


Welcome to the Short Stack, our weekly feature where we search for the most intriguing OpenStack links to share with you. These links may come from traditional publications or company blogs, but if it's about OpenStack, we'll find the best links we can to share with you every week.

If you like what you see, please consider subscribing.

Check out this week's articles:

OpenStack Gains Traction--Progress Being Made But Still Some Work To Be Done | Forbes

Ben Kepes from Forbes summarizes his observations from OpenStack Summit.  While there were more meaningful case studies featured at this event, he still raised questions about the scale and maturity of OpenStack for users such as BMW, CERN and Comcast.

Percona Announces Initial Speakers for OpenStack Live 2015 Conference | Percona

Percona shared the first batch of speakers for its upcoming OpenStack Live event that is taking place in April in Santa Clara, CA.  This event will focus on cloud strategies, cloud performance, and operational best practices for managing and optimizing OpenStack and MySQL databases. 

Self-Service Database Administration: Not as Scary as it Sounds | Tesora Blog

In this post, Arthur Cole explores the benefits and key considerations for deploying database as a service (DBaaS) in a private cloud environment like OpenStack.

Mirantis Chairman: OpenStack Creates Unique Opportunities For The Channel | CRN

While OpenStack presents serious challenges to the business models of software and hardware vendors, it will also create a host of new opportunities for The Channel.  Alex Freedland, co-founder and chairman of Mirantis, suggests that OpenStack can free resellers from declaring allegiance to a specific vendor's channel and empower them to leverage a more diverse array of software for clients.

Selling a Non-Product: The Multifaceted OpenStack | LinuxInsider

OpenStack is more than just a product.  In this summary of a panel discussion from OpenStack Summit, seven developers debate the best approach to deploying OpenStack (i.e., as a distribution or as a software service).

 

by 1 at December 12, 2014 03:51 PM

Rafael Knuth

Recorded Online Meetup: Automating OpenStack clouds and beyond w/ StackStorm

Recording Deck   Please register for upcoming online meetup: StackStorm & MoogSoft -...

December 12, 2014 12:49 PM

December 11, 2014

Rob Hirschfeld

OpenStack DefCore Enters Execution Phase. Help!

OpenStack DefCore Committee has established the principles and first artifacts required for vendors using the OpenStack trademark.  Over the next release cycle, we will be applying these to the Icehouse and Juno releases.

Learn more?  Hear about it LIVE!  Rob will be doing two sessions about DefCore next week (will be recorded):

  1. Tues Dec 16 at 9:45 am PST- OpenStack Podcast #14 with Jeff Dickey
  2. Thurs Dec 18 at 9:00 am PST – Online Meetup about DefCore with Rafael Knuth (optional RSVP)

At the December 2014 OpenStack Board meeting, we completed laying the foundations for the DefCore process that we started April 2013 in Portland. These are a set of principles explaining how OpenStack will select capabilities and code required for vendors using the name OpenStack. We also published the application of these governance principles for the Havana release.

The OpenStack Board approved DefCore principles that explain:

  1. the landscape of core, including test driven capabilities and designated code (approved Nov 2013)
  2. the twelve criteria used to select capabilities (approved April 2014)
  3. the creation of component and framework layers for core (approved Oct 2014)
  4. the ten principles used to select designated sections (approved Dec 2014)

To test these principles, we’ve applied them to Havana and expressed the results in JSON format: Havana Capabilities and Havana Designated Sections. We’ve attempted to keep the process transparent and community focused by keeping these files as text and using the standard OpenStack review process.

DefCore’s work is not done and we need your help!  What’s next?

  1. Vote about bylaws changes to fully enable DefCore (change from projects defining core to capabilities)
  2. Work out going forward process for updating capabilities and sections for each release (once authorized by the bylaws, must be approved by Board and TC)
  3. Bring Havana work forward to Icehouse and Juno.
  4. Help drive Refstack process to collect data from the field

by Rob H at December 11, 2014 10:56 PM

Sébastien Han

DevStack and remote Ceph cluster

Introducing the ability to connect DevStack to a remote Ceph cluster. DevStack won't bootstrap any Ceph cluster; it will simply connect to an existing one.

The patch is currently under review. Sometimes we want to run benchmarks on virtual machines that will be backed by a Ceph cluster. The first idea that comes to mind is to use DevStack to quickly get an OpenStack environment up and running, but how do we configure DevStack to use that remote cluster? There is currently no way to automatically connect DevStack to another cluster.

Thanks to the above patch it is now possible to use an existing Ceph cluster. In this case DevStack just needs two things:

  • the location of the Ceph config file (by default DevStack will look for /etc/ceph/ceph.conf)
  • the admin key of the remote Ceph cluster (by default DevStack will look for /etc/ceph/ceph.client.admin.keyring)

DevStack will then create the necessary pools, users and keys and will connect the OpenStack environment as usual. During the unstack phase all pools, users and keys will be deleted on the remote cluster, while local files and the ceph-common package will be removed from the current DevStack host.

To enable this mode simply add REMOTE_CEPH=True to your localrc file. To specify a different path for the admin key, set REMOTE_CEPH_ADMIN_KEY_PATH=/etc/ceph/ceph.client.admin.keyring
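Putting it together, the relevant localrc fragment looks something like this (the key path shown is simply the default mentioned above):

REMOTE_CEPH=True
REMOTE_CEPH_ADMIN_KEY_PATH=/etc/ceph/ceph.client.admin.keyring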

December 11, 2014 10:05 PM

Rafael Knuth

Online Meetup: DefCore - making OpenStack standard and interoperable

Join OpenStack Board member, Rob Hirschfeld, talking about how the OpenStack community is working...

December 11, 2014 02:49 PM

Opensource.com

What OpenStack women are saying about the community

It's a problem that the tech industry struggles with in general, and OpenStack is no different: How do we create an environment that is open, inviting, and friendly to women, and how do we get more women involved?

by Jason Baker at December 11, 2014 08:00 AM

December 10, 2014

Mirantis

What’s new in Mirantis OpenStack 5.1.1

We are pleased to announce the general availability of Mirantis OpenStack 5.1.1. This maintenance release includes numerous defect fixes and improvements.

Key Improvements in Mirantis OpenStack 5.1.1

  • Icehouse: includes the latest OpenStack Icehouse 2014.1.3 release.
  • HA Deployment: numerous fixes and improvements to high availability (HA) deployments
  • Ceph Firefly: supports recent maintenance release of Ceph 0.80.7 “Firefly”
  • Fuel Master Upgrade: supports Fuel Master Node 5.1.1 upgrade and we’ve reduced the size of the upgrade download.

Additional improvements and fixes are described in our Release Notes.

For those of you who have already installed Mirantis OpenStack 5.1 and 5.0.1, you can upgrade your Fuel Master Node in place and retain management of your existing 5.0.1 and 5.1 environments. No need to redeploy! Just download the Upgrade Package and follow the instructions. After the upgrade, the Fuel Master Node can deploy a new Mirantis OpenStack 5.1.1 environment and manage existing 5.1 and 5.0.x environments.

We work openly in the community, and contribute our improvements; for more details about fixes you can visit our launchpad pages:

MOS 5.1.1 Launchpad bugs report (61 issues)
Fuel 5.1.1 Launchpad bugs report (19 issues)

The Mirantis QA team runs numerous test scenarios, derived from our customers and partners, across a broad variety of categories:

  • System tests. Confirm functionality and health of Mirantis OpenStack environment
    • Deployment Tests. Use Fuel to deploy OpenStack in a variety of different configurations and confirm functionality
    • Resiliency, Scalability and Operability Tests. Confirm cloud functionality (e.g. add a compute node), high-availability, reliability, scale, and load testing
    • Interoperability: Virtual deployment, bare-metal, hardware compatibility, partner integration testing
  • Scenario Tests. Corner cases and tricky scenarios
  • Defect fix verification. Confirm that previously reported issues are resolved by their fixes.

We are constantly adding new categories and scenarios as we work closely with our customers and partners to deploy OpenStack in production.

Encounter a defect? Have a suggestion? We want to know about it!

PS: If you are new to Mirantis OpenStack 5.1, you can read about What’s New in 5.1 in the blog post that accompanied the 5.1 release.

The post What’s new in Mirantis OpenStack 5.1.1 appeared first on Mirantis | The #1 Pure Play OpenStack Company.

by Nathan Trueblood at December 10, 2014 09:54 PM

The Official Rackspace Blog » OpenStack

Cloud Predictions For 2015

For the past few years I’ve used this holiday period as a chance to do some prognostication. It’s a fun exercise – looking back at trends from the previous 12 months and trying to predict what’s next.

Looking back at my 2014 cloud predictions, I’m fairly pleased with how I did. Wearable technologies took a huge step forward with the Apple Watch announcement, specialized clouds based on workload continued to enter the conversation, open source grew and container technology emerged as a force. Looking back, I’d give myself a B.

So what can we expect in 2015? Below are my predictions.

John Engates spoke more about his cloud predictions for 2015 in a live Google Hangout. Watch a full recording here:
http://youtu.be/Aw90WOAhALo


Cloud Choices Abound
Businesses will be faced with more choices in the cloud than ever before. Do you go for cheap? Do you go for the most bells and whistles? And how do you know which one you’ll need in five years? 2015 sees the emergence of a multi-cloud world where you’ll be able to string together cloud services from various providers based on your specific workloads. But with more choices comes more responsibility. Price isn’t the only factor to consider. “Oh, it’s cheap!” will no longer sell cloud. Cloud providers must differentiate themselves in stronger ways. Customers will gravitate toward the solutions that best address their specific business needs, whether that’s based on support, a mix of bare metal and public cloud, private clouds or myriad other options. For example, Rackspace now offers three distinct types of private clouds – OpenStack, Microsoft and VMware – to give customers the choice of how they proceed with private cloud.


It’s Not The Cost That Counts
Along with the vast number of cloud choices, the focus on what drives the most value will be a key theme for 2015. Value isn’t just about cost – it’s also about the time and energy you spend managing and scaling your environment. So, while providers will become more agnostic, the importance of a trusted partner will grow stronger, whether you rely on that partner to manage your public or private cloud, automate your DevOps or keep tabs on your apps, like Google Apps for Work. Companies will have to ask themselves if they want to swell their payrolls hiring the resources needed to manage all of their tools and technologies. This will force companies to determine what matters most to them – focusing on IT management or on their business – and decide who they can partner with to deliver the most value.


OpenStack Gets Boring
In 2015, OpenStack will celebrate its fifth birthday. And that birthday will be boring. That’s a good thing. When a technology matures, it becomes less and less exciting. That’s where we see OpenStack going. Forrester research agrees. In a Quick Take from OpenStack Summit Paris, Forrester wrote: “At its Paris summit, the OpenStack Foundation celebrated the 10th release of the platform (code name: Juno). What stood out about this latest iteration and the progress of its ever-growing ecosystem of vendors, users, and service providers was the lack of excitement that comes with maturity. The Juno release addressed many challenges holding back enterprise adoption to this point and showed signs that 2015 may be the year its use shifts over from mostly test and development to mainstream production deployments.” We hope that maturity brings with it simplification – if we make OpenStack as easy as possible to use, manage and scale, more and more users will adopt it.


Bigger Data
For the past few years we’ve predicted a major upswing in the importance of big data and the tools and skills needed to manage it at scale. In 2015, that need for big data will get, well, bigger. Enterprises are catching on to the big data wave, but they don’t have the skills needed in house to pull necessary value out of that data. The explosion of connected devices will further fuel the flames, creating more data that needs to be extracted and analyzed. Data scientists will be a hot commodity, but much like the DevOps revolution of 2014, top data talent will be difficult to come by and expensive to hire. But big data won’t slow down – look for as many as five disruptive new big data technologies to emerge in the coming year. At the same time, current technologies like Hadoop will become even more ingrained into your business. To navigate the increasingly complex big data landscape, companies will outsource their big data needs to remove the burden and cost of doing big data internally. Enterprises will demand a shortcut.


IT Gets Sensored
Cars, smartwatches, tablets, smartphones – sensors will be embedded in EVERYTHING! (Well, maybe not every single thing, but most things.) If we so choose, we can be connected to the Internet 24x7x365, and every product we buy will have some sort of embedded sensor to collect, transmit or distribute data. Look how Apple Pay is already disrupting the payment industry by giving users zero-click buying power in their pocket. And Apple Watch is going to take that one step further. This goes well beyond the BYOD conversations of yesteryear. Embedded devices and sensors are taking us into uncharted territory. With all of these devices generating all of this data, IT will have to exact some level of control to ensure that security and data integrity are not compromised. But it’s a delicate balance, as end users won’t want to be restricted in what devices and solutions they use.


Containers, Containers, Containers
Next year, container technologies and adoption will grow immensely. With Docker leading the pack, container use in production environments will continue to grow. Docker won’t be alone. A number of container-focused alternatives will emerge in an attempt to knock Docker out of the water. The technology big dogs themselves will likely look to launch their own container solutions – either through acquisition or homegrown technology. Containers’ speed and portability took hold in 2014. 2015 will be the year the ship really sets sail.


The CMO Turns Into A CMT
Gartner predicted that by 2017, marketing will spend more on technology than on IT. Meanwhile, according to Forrester Research, roughly 40 percent of marketing leaders rank technology as the No. 1 area for improvement in their departments. Marketers will rely more heavily on tools like collaboration software, CRM, automation, CMS and social. In 2015, marketers will be increasingly called upon to make decisions about how to adopt these new technologies. They’ll have to add more technology chops as the demand for tech know-how continually increases, thus forcing the Chief Marketing Officer (CMO) to become the Chief Marketing Technologist (CMT).


Secure It
Nearly every company I’ve met with in the past year mentions security. In 2015, security will evolve more into a service than a series of products daisy-chained together to build a solution. The NSA spying on citizens has caused businesses to rethink how they go about security and reconsider where their data lives. In 2014, more options emerged, whether it was the reemergence of single-tenant bare metal servers or a boom in interest in private clouds to secure sensitive information and applications. Security has to be always on, and delivering security as a service is something we’ll see bubble up to the surface next year.


Alternative Silicon Rising
People have long predicted that new processing architectures were just about to take off in the hyper-scale datacenter world. We’ve heard buzz around ARM-based solutions for years, but have seen products that only fit niche applications. Low performance and a fragmented software ecosystem were a big problem. That changed this year. 2014 saw big news around Google’s involvement with IBM and OpenPOWER, and Cavium bringing out a server-grade ARM platform. 2015 will be the year that alternative silicon really begins to rise. Watch this space.

Want to hear more from John Engates about his cloud predictions for 2015? Tune in Thursday, December 11 at 1 p.m. CST for a live Google Hangout during which John will look into the crystal ball and predict the cloud’s future. RSVP and watch live on Google+: https://plus.google.com/events/cd7irvp96tcu9ebsh4q0eo4q8c0 or view it live on YouTube: http://youtu.be/Aw90WOAhALo.

by John Engates at December 10, 2014 02:35 PM

Rafael Knuth

Online Meetup: StackStorm & MoogSoft - Automated Remediation

Join StackStorm and MoogSoft as they demonstrate end-to-end identification and resolution of complex...

December 10, 2014 02:14 PM

Lars Kellogg-Stedman

Cloud-init and the case of the changing hostname

Setting the stage

I ran into a problem earlier this week deploying RDO Icehouse under RHEL 6. My target systems were a set of libvirt guests deployed from the RHEL 6 KVM guest image, which includes cloud-init in order to support automatic configuration in cloud environments. I take advantage of this when using libvirt by attaching a configuration drive so that I can pass in ssh keys and a user-data script.

Once the systems were up, I used packstack to deploy OpenStack onto a single controller and two compute nodes, and at the conclusion of the packstack run everything was functioning correctly. Running neutron agent-list showed all agents in good order:

+--------------------------------------+--------------------+------------+-------+----------------+
| id                                   | agent_type         | host       | alive | admin_state_up |
+--------------------------------------+--------------------+------------+-------+----------------+
| 0d51d200-d902-4e05-847a-858b69c03088 | DHCP agent         | controller | :-)   | True           |
| 192f76e9-a816-4bd9-8a90-a263a1d54031 | Open vSwitch agent | compute-0  | :-)   | True           |
| 3d97d7ba-1b1f-43f8-9582-f860fbfe50df | Open vSwitch agent | controller | :-)   | True           |
| 54d387a6-dca1-4ace-8c1b-7788fb0bc090 | Metadata agent     | controller | :-)   | True           |
| 92fc83bf-0995-43c3-92d1-70002c734604 | L3 agent           | controller | :-)   | True           |
| e06575c2-43b3-4691-80bc-454f501debfe | Open vSwitch agent | compute-1  | :-)   | True           |
+--------------------------------------+--------------------+------------+-------+----------------+

A problem rears its ugly head

After rebooting the system, I found that I was missing an expected Neutron router namespace. Specifically, given:

# neutron router-list
+--------------------------------------+---------+-----------------------------------------------------------------------------+
| id                                   | name    | external_gateway_info                                                       |
+--------------------------------------+---------+-----------------------------------------------------------------------------+
| e83eec10-0de2-4bfa-8e58-c1bcbe702f51 | router1 | {"network_id": "b53a9ecd-01fc-4bee-b20d-8fbe0cd2e010", "enable_snat": true} |
+--------------------------------------+---------+-----------------------------------------------------------------------------+

I expected to see:

# ip netns
qrouter-e83eec10-0de2-4bfa-8e58-c1bcbe702f51

But the qrouter namespace was missing.

The output of neutron agent-list shed some light on the problem:

+--------------------------------------+--------------------+------------------------+-------+----------------+
| id                                   | agent_type         | host                   | alive | admin_state_up |
+--------------------------------------+--------------------+------------------------+-------+----------------+
| 0832e8f3-61f9-49cf-b49c-886cc94d3d28 | Metadata agent     | controller.localdomain | :-)   | True           |
| 0d51d200-d902-4e05-847a-858b69c03088 | DHCP agent         | controller             | xxx   | True           |
| 192f76e9-a816-4bd9-8a90-a263a1d54031 | Open vSwitch agent | compute-0              | :-)   | True           |
| 3be34828-ca8d-4638-9b3a-4e2f688a9ca9 | L3 agent           | controller.localdomain | :-)   | True           |
| 3d97d7ba-1b1f-43f8-9582-f860fbfe50df | Open vSwitch agent | controller             | xxx   | True           |
| 54d387a6-dca1-4ace-8c1b-7788fb0bc090 | Metadata agent     | controller             | xxx   | True           |
| 87b53741-f28b-4582-9ea8-6062ab9962e9 | Open vSwitch agent | controller.localdomain | :-)   | True           |
| 92fc83bf-0995-43c3-92d1-70002c734604 | L3 agent           | controller             | xxx   | True           |
| e06575c2-43b3-4691-80bc-454f501debfe | Open vSwitch agent | compute-1              | :-)   | True           |
| e327b7f9-c9ce-49f8-89c1-b699d9f7d253 | DHCP agent         | controller.localdomain | :-)   | True           |
+--------------------------------------+--------------------+------------------------+-------+----------------+

There were two sets of Neutron agents registered using different hostnames -- one set using the short name of the host, and the other set using the fully qualified hostname.

What's up with that?

In the cc_set_hostname.py module, cloud-init performs the following operation:

(hostname, fqdn) = util.get_hostname_fqdn(cfg, cloud)
try:
    log.debug("Setting the hostname to %s (%s)", fqdn, hostname)
    cloud.distro.set_hostname(hostname, fqdn)
except Exception:
    util.logexc(log, "Failed to set the hostname to %s (%s)", fqdn,
                hostname)
    raise

It starts by retrieving the hostname (both the qualified and unqualified version) from the cloud environment, and then calls cloud.distro.set_hostname(hostname, fqdn). This ends up calling:

def set_hostname(self, hostname, fqdn=None):
    writeable_hostname = self._select_hostname(hostname, fqdn)
    self._write_hostname(writeable_hostname, self.hostname_conf_fn)
    self._apply_hostname(hostname)

Where, on a RHEL system, _select_hostname is:

def _select_hostname(self, hostname, fqdn):
    # See: http://bit.ly/TwitgL
    # Should be fqdn if we can use it
    if fqdn:
        return fqdn
    return hostname

So:

  • cloud-init sets writeable_hostname to the fully qualified name of the system (assuming it is available).
  • cloud-init writes the fully qualified hostname to /etc/sysconfig/network.
  • cloud-init sets the hostname to the unqualified hostname

The result is that your system will probably have a different hostname after your first reboot, which throws off Neutron.
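As an illustration (using a hypothetical host named controller), right after the first boot the persistent and running hostnames disagree:

$ grep HOSTNAME /etc/sysconfig/network
HOSTNAME=controller.localdomain
$ hostname
controller

On the next reboot the init scripts read /etc/sysconfig/network, the running hostname becomes the fully qualified form, and Neutron ends up with a second set of agents registered under the new name.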

And they all lived happily ever after?

It turns out this bug was reported upstream back in October of 2013 as bug 1246485, but it has been marked as "low" priority and has not yet been fixed. There are patches attached to the bug report that purport to fix the problem.

by Lars Kellogg-Stedman at December 10, 2014 05:00 AM

December 09, 2014

Rob Hirschfeld

Ironic + Crowbar: United in Vision, Complementary in Approach

This post is co-authored by Devananda van der Veen, OpenStack Ironic PTL, and Rob Hirschfeld, OpenCrowbar Founder.  We discuss how Ironic and Crowbar work together today and into the future.

Normalizing the APIs for hardware configuration is a noble and long-term goal.  While the end result, a configured server, is very easy to describe, the differences between vendors' hardware configuration tools are substantial.  These differences make it challenging to create repeatable operations automation (DevOps) on heterogeneous infrastructure.

Illustration to show potential changes in provisioning control flow over time.

The OpenStack Ironic project is a multi-vendor community solution to this problem at the server level.  By providing a common API for server provisioning, Ironic encourages vendors to write drivers for their individual tooling such as iDRAC for Dell or iLO for HP.

Ironic abstracts configuration and expects to be driven by an orchestration system that makes the decisions of how to configure each server. That type of orchestration is the heart of Crowbar's physical ops magic [side note: 5 ways that physical ops is different from cloud].

The OpenCrowbar project created extensible orchestration to solve this problem at the system level.  By decomposing system configuration into isolated functional actions, Crowbar can coordinate disparate configuration actions for servers, switches and between systems.

Today, the Provisioner component of Crowbar performs similar functions as Ironic for operating system installation and image lay down.  Since configuration activity is tightly coupled with other Crowbar configuration, discovery and networking setup, it is difficult to isolate in the current code base.  As Ironic progresses, it should be possible to shift these activities from the Provisioner to Ironic and take advantage of the community-based configuration drivers.

The immediate synergy between Crowbar and Ironic comes from accepting two modes of operation for OpenStack: bootstrapping infrastructure and multi-tenant server allocation.

Crowbar was designed as an operational platform that seeds an OpenStack ready environment.  Once that environment is configured, OpenStack can take over ownership of the resources and allow Ironic to manage and deliver “hypervisor-free” servers for each tenant.  In that way, we can accelerate the adoption of OpenStack for self-service metal.

Physical operations is messy and challenging, but we’re committed to working together to make it suck less.  Operators of the world unite!


by Rob H at December 09, 2014 06:00 PM

Cloudscaling Corporate Blog

The EMC Federation Joins the OpenStack Foundation

Last week a major set of milestones was reached for the EMC Federation's involvement with OpenStack. First, EMC and its affiliated companies and brands (VMware, VCE, Pivotal, RSA, Cloudscaling) determined a cohesive strategy for engagement with the OpenStack Foundation Board. Second, EMC appointed a VMware employee, Sean Roberts (@sarob), as the official representative of EMC and hence the EMC Federation generally. This means that I am no longer the EMC (Cloudscaling) OpenStack Foundation Gold Director.

The why of this may be confusing so I will briefly explain the background and then provide some more details on what exactly transpired.

Background
By and large the OpenStack bylaws have stood the test of time quite well at this point. Most of the upcoming proposed changes are simply things we could only have known in hindsight. One area that I think the bylaws got right are the articles that limit participation by “Affiliated” companies:

2.5 Affiliation Limits. Gold Members and Platinum Members may not belong to an Affiliated Group. An Affiliated Group means that for Members that are business entities, one entity is “Controlled” by the other entity. “Controlled” or “Control” means one entity owns, directly or indirectly, more than 50% of the voting securities of the Controlled entity which vote for the election of the board of directors or other managing body of an entity, or which is under common control with the Controlled entity. An Affiliated Group does not apply to government agencies, academic institutions or individuals.

What this means, in essence, is that if there are two companies with a relationship like parent/child or joint venture, in which one owns more than 50% of the other, only ONE of the companies can join the OpenStack Foundation as a Gold or Platinum Member. This is a good measure to prevent a group of companies from “stacking the deck” within the OpenStack Foundation and using that as leverage to control or dominate OpenStack, which is something no one wants. I also need to note that any company may also have one to two Individual Members represent them. Two Directors from any single affiliated group is the maximum representation on the OpenStack Board of Directors. This works out to one Gold or Platinum Director plus one Individual Director OR two Individual Directors. This is why I am allowed to run as an Individual Director in 2015. Of course, I would very much appreciate your support in this endeavour!

So, things became very interesting upon EMC's acquisition of Cloudscaling, as EMC inherited the Gold Member status of Cloudscaling while VMware also retained their Gold Member status, creating an edge case in which the Bylaws were technically violated. This required EMC and VMware to work closely with the Foundation staff to resolve the situation.

This is why VMware resigned their Gold Member status and why EMC appointed a VMware employee as a representative for EMC and hence the EMC Federation.

Which means we should quickly explain what the EMC Federation is.

EMC Federation
The EMC Federation is composed of a number of different entities, from security companies, to storage, to Platform-as-a-Service, big data, virtualization, converged infrastructure, and now OpenStack via the Cloudscaling acquisition. Members of the EMC Federation are already representatives on the OpenStack Foundation Board of Directors, OpenStack Foundation Gold Members, OpenStack Foundation Corporate Sponsors, and have deepening ties to OpenStack generally.

In April of 2013, EMC and VMware launched Pivotal and created a federation of its businesses. EMC is the majority owner, by a large margin, of VMware and Pivotal, and RSA is a wholly owned subsidiary. Recently, VCE, the leader in converged infrastructure, joined the Federation. Federation messaging and joint solutions were prominent during EMC World 2014. The following diagram gives you some idea of how the Federation is organized.

(Diagram: EMC Federation and OpenStack.)

When asked about why the Federation model is needed and what differentiates the companies from competitors, the answer is “choice”. While VMware is the leading hypervisor, EMC also desires the opportunity to forge alliances and solutions with Microsoft, Citrix, and others. Conversely, VMware desires to support and work with a variety of storage and security solutions.

Similarly, members of the Federation desire to operate and support OpenStack’s mission in different manners (converged infrastructure, appliance models, and software distributions) while also supporting the joint goals of empowering and promoting OpenStack within the enterprise.

Wikibon covers the EMC Federation Model extensively here:

http://wikibon.org/wiki/v/Primer_on_the_EMC_Federation

The EMC Federation OpenStack Strategy
As a group, the EMC Federation strongly desires to play by the rules of the OpenStack community, while deepening our commitments and contributions. As a group we are already the #6 contributor to the latest release and we aspire to go even further. OpenStack is a critical strategy for the Federation as a whole, even for members like Pivotal, who see a significant increase in the number of enterprises that wish to run Cloud Foundry on top of OpenStack.

What this meant for us when resolving the Bylaws issue is that we wanted the entire EMC group represented as a whole, such that others like VMware, VCE, and Pivotal could all be a part of the picture. The Bylaws, however, require that the Gold Member selected be an actual legal entity.

Our final resolution was to have VMware resign its Gold Membership, have EMC retain the Cloudscaling Gold Membership, and, to show EMC Federation coordination, have EMC appoint Sean Roberts to represent EMC, and hence the entire Federation, as our Gold Member representative. Finally, all of the branding on the OpenStack Foundation website will be Federation-oriented branding (EMC2).

Meanwhile, behind the scenes, I’m working closely with Sean Roberts of VMware, Josh McKenty of Pivotal, Jay Cuthrell of VCE, and others to make sure that we have cohesion across the Federation.

Hopefully this helps explain these recent changes.

by Randy Bias at December 09, 2014 04:00 PM

Steve Dake

Isn’t it Atomic on OpenStack Ironic, don’t you think?

OpenStack Ironic is a bare metal as a service deployment tool.  Fedora Atomic is a µOS consisting of a very minimal installation of Linux (a kernel.org kernel), Kubernetes, and Docker.  Kubernetes is an endpoint manager and container scheduler, while Docker is a container manager.  The basic premise of running Fedora Atomic on Ironic is to provide a lightweight launching mechanism for OpenStack.

The first step in launching Atomic is to make Ironic operational.  I used devstack for my deployment.  The Ironic developer documentation is actually quite good for a recently integrated OpenStack project.  I followed the instructions for devstack.  I used the pxe+ssh driver rather than agent+ssh.  The pxe+ssh driver virtualizes bare-metal deployment for testing purposes, so only one machine is needed.  The machine should have 16GB+ of RAM.  I find 16GB a bit tight, however.

I found it necessary to hack devstack a bit to get Ironic to operate.  The root cause of the issue is that libvirt can’t write the console log to the home directory as specified in the localrc. To solve the problem I just hacked devstack to write the log files to /tmp. I am sure there is a more elegant way to solve this problem.

The diff of my devstack hack is:

[sdake@bigiron devstack]$ git diff
diff --git a/tools/ironic/scripts/create-node b/tools/ironic/scripts/create-node
index 25b53d4..5ba88ce 100755
--- a/tools/ironic/scripts/create-node
+++ b/tools/ironic/scripts/create-node
@@ -54,7 +54,7 @@ if [ -f /etc/debian_version ]; then
fi
if [ -n "$LOGDIR" ] ; then
- VM_LOGGING="--console-log $LOGDIR/${NAME}_console.log"
+ VM_LOGGING="--console-log /tmp/${NAME}_console.log"
else
VM_LOGGING=""
fi

My devstack localrc contains:

SCREEN_LOGDIR=$DEST/logs
DATABASE_PASSWORD=123456
RABBIT_PASSWORD=123456
SERVICE_TOKEN=123456
SERVICE_PASSWORD=123456
ADMIN_PASSWORD=123456

disable_service horizon
disable_service rabbit
disable_service quantum
enable_service qpid
enable_service magnum

# Enable Ironic API and Ironic Conductor
enable_service ironic
enable_service ir-api
enable_service ir-cond

# Enable Neutron which is required by Ironic and disable nova-network.
disable_service n-net
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron

# Create 3 virtual machines to pose as Ironic's baremetal nodes.
IRONIC_VM_COUNT=5
IRONIC_VM_SSH_PORT=22
IRONIC_BAREMETAL_BASIC_OPS=True

# The parameters below represent the minimum possible values to create
# functional nodes.
IRONIC_VM_SPECS_RAM=1024
IRONIC_VM_SPECS_DISK=20

# Size of the ephemeral partition in GB. Use 0 for no ephemeral partition.
IRONIC_VM_EPHEMERAL_DISK=0
VIRT_DRIVER=ironic

# By default, DevStack creates a 10.0.0.0/24 network for instances.
# If this overlaps with the hosts network, you may adjust with the
# following.
NETWORK_GATEWAY=10.1.0.1
FIXED_RANGE=10.1.0.0/24
FIXED_NETWORK_SIZE=256

# Log all output to files
LOGFILE=$HOME/devstack.log
SCREEN_LOGDIR=$HOME/logs

It took me two days to sort out the project in this blog post, and during the process I learned a whole lot about how Ironic operates through code inspection and debugging.  I couldn’t find much documentation about the deployment process, so I thought I’d share a nugget of information about it:

  • Nova contacts Ironic to allocate an Ironic node providing the image to boot
  • Ironic pulls the image from glance and stores it on the local hard disk
  • Ironic boots a virtual machine via SSH with a PXE-enabled SeaBIOS
  • The SeaBIOS code asks Ironic’s TFTP server for a deploy ramdisk and kernel
  • The deployed node starts the deploy kernel and ramdisk
  • The deploy ramdisk does the following:
    1. Starts tgtd to present the root device as an iSCSI disk on the network
    2. Contacts the Ironic ReST API to initiate iSCSI transfer of the image
    3. Waits on port 10000 for a network connection to indicate the iSCSI transfer is complete
    4. Reboots the node once port 10000 has been opened and closed by a process
  • Once the deploy ramdisk contacts Ironic to initiate iSCSI transfer of the image Ironic does the following:
    1. uses iscsiadm to connect to the iSCSI target on the deploy hardware
    2. spawns several dd processes to copy the local disk image to the iSCSI target (conceptually like the commands shown after this list)
    3. Once the dd processes exit successfully, Ironic contacts port 10000 on the deploy node
  • Ironic changes the PXEboot configuration to point to the user’s actual desired ramdisk and kernel
  • The deploy node reboots into SEABIOS again
  • The node boots the proper ramdisk and kernel, which load the disk image that was written via iSCSI
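
For readers unfamiliar with the iSCSI handoff in the middle of that list, the conductor-side copy is conceptually similar to the following manual commands. This is purely illustrative: the target IQN, portal address, paths and block device are assumptions, and Ironic drives all of this through its own code rather than a shell.

# Attach the deploy node's exported root disk (illustrative values)
iscsiadm -m node -T iqn.2008-10.org.openstack:deploy-node -p 10.1.0.4:3260 --login

# Copy the instance image onto the block device that appears after login,
# then log out so the node can reboot into the freshly written image
dd if=/opt/stack/data/ironic/images/instance.img of=/dev/sdX bs=1M oflag=direct
iscsiadm -m node -T iqn.2008-10.org.openstack:deploy-node -p 10.1.0.4:3260 --logout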

Fedora Atomic does not ship images that are suitable for use with the Ironic model.  Specifically what is needed is a LiveOS image, a ramdisk, and a kernel.  The LiveOS image that Fedora Cloud does ship is not the Atomic version.  Clearly it is early days for Atomic and I expect these requirements will be met as time passes.

But I wanted to deploy Atomic now on Ironic, so I sorted out making a PXE-bootable Atomic Live OS image.

First a bit about how the Atomic Cloud Image is structured:

[sdake@bigiron Downloads]$ guestfish

Welcome to guestfish, the guest filesystem shell for
editing virtual machine filesystems and disk images.

Type: 'help' for help on commands
'man' to read the manual
'quit' to quit the shell

><fs> add-ro Fedora-Cloud-Atomic-20141203-21.x86_64.qcow2
><fs> run
><fs> list-filesystems
/dev/sda1: ext4
/dev/atomicos/root: xfs

The Atomic cloud image has /dev/sda1 containing the contents of the /boot directory.  The /dev/sda2 partition contains an LVM physical volume.  There is a logical volume called atomicos/root which contains the root filesystem.

Building the Fedora Atomic images for Ironic is as simple as extracting the ramdisk and kernel from /dev/sda1 and extracting /dev/sda2 into an image for Ironic to dd to the iSCSI target.  One complication is that the fstab must have the /boot entry removed.  Determining how to do this was a bit of a challenge, but I wrote a script to automate the Ironic image generation process.
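
For reference, a minimal sketch of the conversion steps might look like the following. This is not the actual convert.sh; the file names, partition offsets and libguestfs commands are illustrative assumptions only.

#!/bin/bash
# Rough sketch of converting a Fedora Atomic cloud image for Ironic.
# Assumes qemu-img and libguestfs-tools (guestfish) are installed.
set -e
IMAGE=Fedora-Cloud-Atomic-20141203-21.x86_64.qcow2

# 1. Pull the contents of the /boot partition (/dev/sda1); the kernel and
#    ramdisk live under the ostree deployment directory inside it.
guestfish --ro -a "$IMAGE" -m /dev/sda1 tar-out / fedora-atomic-boot.tar

# 2. The real script also deletes the /boot line from the deployed root's
#    /etc/fstab (its exact path depends on the ostree deployment) so the
#    image does not try to mount a partition that no longer exists.

# 3. Convert to raw and carve out the second partition for Ironic to dd
#    over iSCSI; the sector offset comes from the partition table.
qemu-img convert -O raw "$IMAGE" fedora-atomic.raw
START=$(sfdisk -d fedora-atomic.raw | awk 'END {gsub(",", "", $4); print $4}')
dd if=fedora-atomic.raw of=fedora-atomic-base bs=512 skip="$START"
rm -f fedora-atomic.raw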

The first step is to test that Ironic actually installs via devstack using the above localrc:

[sdake@bigiron devstack]$ ./stack.sh
bunch of output from devstack omitted
Keystone is serving at http://192.168.1.124:5000/v2.0/
Examples on using novaclient command line is in exercise.sh
The default users are: admin and demo
The password: 123456
This is your host ip: 192.168.1.124

Next, take a look at the default image list which should look something like:

[sdake@bigiron devstack]$ source ./openrc admin admin
[sdake@bigiron devstack]$ glance image-list
+---------------------------------+-------------+------------------+-----------+
| Name                            | Disk Format | Container Format | Size      |
+---------------------------------+-------------+------------------+-----------+
| cirros-0.3.2-x86_64-disk        | qcow2       | bare             | 13167616  |
| cirros-0.3.2-x86_64-uec         | ami         | ami              | 25165824  |
| cirros-0.3.2-x86_64-uec-kernel  | aki         | aki              | 4969360   |
| cirros-0.3.2-x86_64-uec-ramdisk | ari         | ari              | 3723817   |
| Fedora-x86_64-20-20140618-sda   | qcow2       | bare             | 209649664 |
| ir-deploy-pxe_ssh.initramfs     | ari         | ari              | 95220206  |
| ir-deploy-pxe_ssh.kernel        | aki         | aki              | 5808960   |
+---------------------------------+-------------+------------------+-----------+

In this case, we want to boot the UEC image. Ironic expects the image to carry kernel_id and ramdisk_id properties, which are the UUIDs of cirros-0.3.2-x86_64-uec-kernel and cirros-0.3.2-x86_64-uec-ramdisk respectively.

Running image-show, we can see these properties:

[sdake@bigiron devstack]$ glance image-show cirros-0.3.2-x86_64-uec 
+-----------------------+--------------------------------------+
| Property              | Value                                |
+-----------------------+--------------------------------------+
| Property 'kernel_id'  | c11bd198-227f-4156-9195-40b16278b65c |
| Property 'ramdisk_id' | 5e6839ef-daeb-4a1c-be36-3906ed4d7bd7 |
| checksum              | 4eada48c2843d2a262c814ddc92ecf2c     |
| container_format      | ami                                  |
| created_at            | 2014-12-09T14:56:05                  |
| deleted               | False                                |
| disk_format           | ami                                  |
| id                    | 259ca231-66ad-439d-900b-3dc9e9408a0c |
| is_public             | True                                 |
| min_disk              | 0                                    |
| min_ram               | 0                                    |
| name                  | cirros-0.3.2-x86_64-uec              |
| owner                 | 4b798efdcd5142509fe87b12d89d5949     |
| protected             | False                                |
| size                  | 25165824                             |
| status                | active                               |
| updated_at            | 2014-12-09T14:56:06                  |
+-----------------------+--------------------------------------+

Now that we have validated the cirros image is available, the next step is to launch one from the demo user:

[sdake@bigiron devstack]$ source ./openrc demo demo
[sdake@bigiron devstack]$ nova keypair-add --pub-key ~/.ssh/id_rsa.pub steak
[sdake@bigiron devstack]$ nova boot --flavor baremetal --image cirros-0.3.2-x86_64-uec --key-name steak cirros_on_ironic
[sdake@bigiron devstack]$ nova list
+--------------------------------------+------------------+--------+------------+-------------+------------------+
| ID                                   | Name             | Status | Task State | Power State | Networks         |
+--------------------------------------+------------------+--------+------------+-------------+------------------+
| 9e64804d-264d-40d2-88f4-e858efe69557 | cirros_on_ironic | ACTIVE | -          | Running     | private=10.1.0.4 |
+--------------------------------------+------------------+--------+------------+-------------+------------------+
[sdake@bigiron devstack]$ ssh cirros@10.1.0.4
$ uname -a
Linux cirros-on-ironic 3.2.0-60-virtual #91-Ubuntu SMP Wed Feb 19 04:13:28 UTC 2014 x86_64 GNU/Linux

If this part works, that means you have a working Ironic devstack setup. The next step is to get the Atomic images and convert them for use with Ironic.

[sdake@bigiron fedora-atomic-to-liveos-pxe]$ ./convert.sh
Mounting boot and root filesystems.
Done mounting boot and root filesystems.
Removing boot from /etc/fstab.
Done removing boot from /etc/fstab.
Extracting kernel to fedora-atomic-kernel
Extracting ramdisk to fedora-atomic-ramdisk
Unmounting boot and root.
Creating a RAW image from QCOW2 image.
Extracting base image to fedora-atomic-base.
cut: invalid byte, character or field list
Try 'cut --help' for more information.
sfdisk: Disk fedora-atomic.raw: cannot get geometry
sfdisk: Disk fedora-atomic.raw: cannot get geometry
12171264+0 records in
12171264+0 records out
6231687168 bytes (6.2 GB) copied, 29.3357 s, 212 MB/s
Removing raw file.

The sfdisk: cannot get geometry warnings can be ignored.

After completion you should have fedora-atomic-kernel, fedora-atomic-ramdisk, and fedora-atomic-base files. Next we register these with glance:

[sdake@bigiron fedora-atomic-to-liveos-pxe]$ ls -l fedora-*
-rw-rw-r-- 1 sdake sdake 6231687168 Dec  9 08:59 fedora-atomic-base
-rwxr-xr-x 1 root  root     5751144 Dec  9 08:59 fedora-atomic-kernel
-rw-r--r-- 1 root  root    27320079 Dec  9 08:59 fedora-atomic-ramdisk
[sdake@bigiron fedora-atomic-to-liveos-pxe]$ glance image-create --name=fedora-atomic-kernel --container-format aki --disk-format aki --is-public True --file fedora-atomic-kernel
+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum         | 220c2e9d97c3f775effd2190199aa457     |
| container_format | aki                                  |
| created_at       | 2014-12-09T16:47:12                  |
| deleted          | False                                |
| deleted_at       | None                                 |
| disk_format      | aki                                  |
| id               | b8e08b02-5eac-467d-80e1-6c8138d0bf57 |
| is_public        | True                                 |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | fedora-atomic-kernel                 |
| owner            | a28b73a4f29044f184b854ffb7532ceb     |
| protected        | False                                |
| size             | 5751144                              |
| status           | active                               |
| updated_at       | 2014-12-09T16:47:12                  |
| virtual_size     | None                                 |
+------------------+--------------------------------------+
[sdake@bigiron fedora-atomic-to-liveos-pxe]$ glance image-create --name=fedora-atomic-ramdisk --container-format ari --is-public True --disk-format ari --file fedora-atomic-ramdisk
+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum         | 9ed72ddc0411e2f30d5bbe6b5c2c4047     |
| container_format | ari                                  |
| created_at       | 2014-12-09T16:48:31                  |
| deleted          | False                                |
| deleted_at       | None                                 |
| disk_format      | ari                                  |
| id               | a62f6f32-ed66-4b18-8625-52d7262523f6 |
| is_public        | True                                 |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | fedora-atomic-ramdisk                |
| owner            | a28b73a4f29044f184b854ffb7532ceb     |
| protected        | False                                |
| size             | 27320079                             |
| status           | active                               |
| updated_at       | 2014-12-09T16:48:31                  |
| virtual_size     | None                                 |
+------------------+--------------------------------------+
[sdake@bigiron fedora-atomic-to-liveos-pxe]$ glance image-create --name=fedora-atomic --container-format ami --disk-format ami --is-public True --property ramdisk_id=b2f60f33-9c8e-4905-a64b-90997d3dcb92 --property kernel_id=0e687b76-31d0-4351-a92a-a2d348482d42 --file fedora-atomic-base
+-----------------------+--------------------------------------+
| Property              | Value                                |
+-----------------------+--------------------------------------+
| Property 'kernel_id'  | 0e687b76-31d0-4351-a92a-a2d348482d42 |
| Property 'ramdisk_id' | b2f60f33-9c8e-4905-a64b-90997d3dcb92 |
| checksum              | 6a25f8bf17a94a6682d73b7de0a13013     |
| container_format      | ami                                  |
| created_at            | 2014-12-09T16:52:45                  |
| deleted               | False                                |
| deleted_at            | None                                 |
| disk_format           | ami                                  |
| id                    | d4ec78d7-445a-473d-9b7d-a1a6408aeed2 |
| is_public             | True                                 |
| min_disk              | 0                                    |
| min_ram               | 0                                    |
| name                  | fedora-atomic                        |
| owner                 | a28b73a4f29044f184b854ffb7532ceb     |
| protected             | False                                |
| size                  | 6231687168                           |
| status                | active                               |
| updated_at            | 2014-12-09T16:53:16                  |
| virtual_size          | None                                 |
+-----------------------+--------------------------------------+

Next we configure Ironic’s PXE boot config options and restart the Ironic conductor in devstack. To restart the Ironic conductor, use screen -r, find the appropriate conductor screen, press CTRL-C, then up arrow, then ENTER. This will reload the configuration.

/etc/ironic/ironic.conf should be changed to have this config option:

pxe_append_params = nofb nomodeset vga=normal console=ttyS0 no_timer_check rd.lvm.lv=atomicos/root root=/dev/mapper/atomicos-root ostree=/ostree/boot.0/fedora-atomic/a002a2c2e44240db614e09e82c7822322253bfcaad0226f3ff9befb9f96d315f/0

Next we launch the fedora-atomic image using Nova’s baremetal flavor:

[sdake@bigiron ~]$ source /home/sdake/repos/devstack/openrc demo demo
[sdake@bigiron Downloads]$ nova boot --flavor baremetal --image fedora-atomic --key-name steak fedora_atomic_on_ironic
[sdake@bigiron Downloads]$ nova list
+--------------------------------------+-------------------------+--------+------------+-------------+----------+
| ID                                   | Name                    | Status | Task State | Power State | Networks |
+--------------------------------------+-------------------------+--------+------------+-------------+----------+
| e7f56931-307d-45a7-a232-c2fa70898cae | fedora-atomic_on_ironic | BUILD  | spawning   | NOSTATE     |          |
+--------------------------------------+-------------------------+--------+------------+-------------+----------+

Finally login to the Atomic Host:

[sdake@bigiron ironic]$ nova list
+--------------------------------------+-------------------------+--------+------------+-------------+------------------+
| ID                                   | Name                    | Status | Task State | Power State | Networks         |
+--------------------------------------+-------------------------+--------+------------+-------------+------------------+
| d061c0ef-f8b7-4fff-845b-8272a7654f70 | fedora-atomic_on_ironic | ACTIVE | -          | Running     | private=10.1.0.5 |
+--------------------------------------+-------------------------+--------+------------+-------------+------------------+
[sdake@bigiron ironic]$ ssh fedora@10.1.0.5
[fedora@fedora-atomic-on-ironic ~]$ uname -a
Linux fedora-atomic-on-ironic.novalocal 3.17.4-301.fc21.x86_64 #1 
SMP Thu Nov 27 19:09:10 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

I found determining how to create the images from the Fedora Atomic Cloud images a bit tedious. The diskimage builder tool would likely make this easier, if it supported RPM-ostree and Atomic.

Ironic needs some work to allow the PXE options to override the “root” parameter passed to the initrd. Ideally, a glance image property could be specified to override and extend the boot options. I’ve filed an Ironic blueprint for such an improvement.


by sdake at December 09, 2014 03:34 PM

Sébastien Han

OpenStack: import existing Ceph volumes in Cinder

This method can be useful while migrating from one OpenStack to another.


Imagine you have operating system instances configured with a legacy application that can only run once. Imagine that you want to run them in Ceph using Cinder, booting from volumes. Then this is probably how you will import them; a worked example follows the steps below.


  1. If you only need a single instance of that virtual machine then you should not bother to convert it in the first place. No matter the original format, keep it as it is. For Ceph, RAW is recommended when doing COW clones, but it is not mandatory.
  2. Evaluate the size of your image (du)
  3. Create a Cinder volume with the corresponding size
  4. Get the uuid of the volume
  5. Import the volume with rbd -p volumes --image-format 2 import <your-image-file> <volume-uuid>
  6. Flag the volume as bootable: cinder set-bootable <volume> True
  7. Boot from volume
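
Put together, and assuming the common Cinder RBD configuration where volumes live in a pool named volumes and are stored as volume-<uuid> (the names, sizes and flavor below are purely illustrative), the sequence looks roughly like this:

# 1. Check how big the image is and create a Cinder volume at least that size
du -h legacy-app.img
cinder create --display-name legacy-app 20

# 2. Grab the volume UUID and import the image directly into Ceph
VOLUME_ID=$(cinder show legacy-app | awk '/ id /{print $4}')
rbd -p volumes --image-format 2 import legacy-app.img volume-${VOLUME_ID}

# 3. Mark the volume bootable and boot an instance from it
cinder set-bootable ${VOLUME_ID} True
nova boot --flavor m1.small --boot-volume ${VOLUME_ID} legacy-app-instance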

December 09, 2014 03:14 PM

December 08, 2014

Craige McWhirter

Managing KVM Console Logs for Nova

Update

It turns out that this DOES NOT work around bug 832507. Please do not use this thinking that it does.

There's a problem with console logs. That was a hard sentence to word. I wanted to say a number of versions of it, but each made it sound like the problem was with either OpenStack, Nova, or libvirt / qemu-kvm. I didn't particularly feel like pointing the finger as the solution appears to need to come from a number of directions...

So the problem is that it is entirely possible, when running KVM hypervisors with OpenStack Nova, to have the compute nodes' disks fill up when instances' console logs get a little chatty.

It's not a desirable event and the source will catch you by surprise.

There's currently no way to manage these KVM console logs via either qemu-kvm or via OpenStack / Nova so I wrote manage_console_logs.sh (github) (bitbucket) to do this.

manage_console_logs.sh operates as follows:

  • Creates a lock file using flock to ensure that the script is not already running.
  • Checks the size of each console log
  • If it's greater than the nominated size, it's truncated using tail.

That's it. A pretty straightforward method for ensuring your compute node disks do not fill up (sketched below).
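
Here is a minimal, illustrative sketch of that approach; it is not the actual manage_console_logs.sh, and the log path and size threshold are assumptions.

#!/bin/bash
# Illustrative sketch: truncate oversized KVM console logs for Nova instances.
LOCKFILE=/var/run/manage_console_logs.lock
LOG_DIR=/var/lib/nova/instances      # where Nova keeps per-instance directories
MAX_BYTES=$((10 * 1024 * 1024))      # truncate anything larger than 10MB

(
    # Make sure only one copy of the script runs at a time
    flock -n 9 || exit 0

    for log in "${LOG_DIR}"/*/console.log; do
        [ -f "$log" ] || continue
        size=$(stat -c %s "$log")
        if [ "$size" -gt "$MAX_BYTES" ]; then
            # Keep only the tail of the log, writing it back into the same file
            tail -c "$MAX_BYTES" "$log" > "${log}.tmp"
            cat "${log}.tmp" > "$log"
            rm -f "${log}.tmp"
        fi
    done
) 9>"$LOCKFILE"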

You should schedule this via cron at your desired frequency and add a monitoring check to ensure it's doing its job as expected.

by Craige McWhirter at December 08, 2014 11:02 PM

Maish Saidel-Keesing

Keeping up to date with OpenStack Blueprints

OpenStack is a living product, and because it is community driven, changes are being proposed almost constantly.

So how do you keep up with all of these proposed changes? And even more so why would you?

The answer to the second question is that if you are interested in the projects then you should be following what is going on. In addition, there could be cases where you see that a proposed blueprint could break something that you currently use, or is in direct contradiction to what you are trying to do, and you should leave your feedback.

OpenStack wants you to leave your feedback – so please do!

As for the first question, the answer is http://specs.openstack.org. This is an aggregate of the new blueprints (specs) for each of the projects as they are approved.

I use the RSS feeds available for the blueprints, which help me stay up to date as soon as a new blueprint is added.

I have compiled an OPML file with all the current projects that you can add to your favorite RSS reader.
You can download it in the link below.

file

I hope this will be as useful to you as it is to me.

As always, comments, suggestions and thoughts are always welcome.

by Maish Saidel-Keesing (noreply@blogger.com) at December 08, 2014 01:10 PM

Opensource.com

Landing your first contribution, OpenStack dominates open source cloud, and more

The Opensource.com weekly look at what is happening in the OpenStack community and the open source cloud at large.

by Jason Baker at December 08, 2014 08:00 AM

December 06, 2014

OpenStack @ NetApp

Using Packstack to install Cinder with NetApp Storage Backends

It is now possible to have Packstack configure Cinder to use NetApp storage devices as backends for providing block storage capabilities to OpenStack users. The NetApp driver for Cinder supports:

  • NetApp Clustered Data ONTAP (NFS/iSCSI)
  • NetApp Data ONTAP in 7-Mode (NFS/iSCSI)
  • NetApp E-Series (iSCSI)

If Packstack is not already installed, follow the RDO Quickstart guide to get it installed.

Important: For information regarding best practices using NetApp storage with Cinder and other OpenStack services, see the Deployment and Operations Guide. If you have any questions, you can get in touch with us on the NetApp OpenStack Community page or join us on IRC in the #openstack-netapp channel on Freenode!

Configure via answer file

Generate a Packstack answer file:

packstack --gen-answer-file=~/packstack-answer.txt

Help text is provided above each parameter telling whether it's required for a given NetApp configuration. For more information about the specific parameters and examples of their use, see the NetApp unified driver docs and choose the page detailing your storage family and protocol.

The Packstack parameters are the same as those found in the docs, but in all caps and with CONFIG_CINDER_ prepended. For example, entering the following into the answer file:

...
CONFIG_CINDER_BACKEND=netapp
CONFIG_CINDER_NETAPP_STORAGE_FAMILY=ontap_cluster
CONFIG_CINDER_NETAPP_STORAGE_PROTOCOL=nfs
CONFIG_CINDER_NETAPP_VSERVER=openstack-vserver
CONFIG_CINDER_NETAPP_HOSTNAME=myhostname
CONFIG_CINDER_NETAPP_SERVER_PORT=80
CONFIG_CINDER_NETAPP_LOGIN=username
CONFIG_CINDER_NETAPP_PASSWORD=password
CONFIG_CINDER_NETAPP_NFS_SHARES_CONFIG=/etc/cinder/shares.conf
...

generates the following in cinder.conf:

volume_driver=cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family=ontap_cluster
netapp_storage_protocol=nfs
netapp_vserver=openstack-vserver
netapp_server_hostname=myhostname
netapp_server_port=80
netapp_login=username
netapp_password=password
nfs_shares_config=/etc/cinder/shares.conf

When the answer file has been edited with your specific environment variables, run Packstack with:

packstack --answer-file=~/packstack-answer.txt

Packstack will alert you if any required parameters are missing or have incorrect values and then start the install.

If you have any trouble creating Cinder volumes after the installation has finished, the log file found at /var/log/cinder/volume.log is a good place to begin troubleshooting. If you have any questions, drop us a line!

Note: If you're using the NFS protocol, the file specified in the nfs_shares_config parameter must be created before running Packstack. The contents of the file should define each NFS export on a new line, following this format:

[ip address]:/[export name]
[hostname]:/[export name]
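
For example, a shares file listing two hypothetical exports (the addresses and export names are illustrative) could contain:

192.168.10.50:/cinder_flexvol_1
netapp-svm1.example.com:/cinder_flexvol_2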

Configure via command line arguments

Add NetApp arguments when installing:

packstack --arg1=... --argN=... --cinder-netapp-storage-family=ontap_cluster

Again, refer to the NetApp unified driver docs for details about specific parameters.

The Packstack parameters are the same as those found in the docs, but with cinder- prepended and all underscores turned into hyphens. For example, running Packstack with these arguments:

packstack --cinder-backend=netapp --cinder-netapp-storage-family=ontap_cluster --cinder-netapp-storage-protocol=nfs --cinder-netapp-vserver=openstack-vserver --cinder-netapp-hostname=myhostname --cinder-netapp-server-port=80 --cinder-netapp-login=username --cinder-netapp-password=password --cinder-netapp-nfs-shares-config=/etc/cinder/nfs_shares

is functionally equivalent to the answer file in the previous section.

December 06, 2014 03:00 PM

IBM OpenStack Team

Jumpgate: SoftLayer’s free library for OpenStack cloud compatibility

Nathan Beittenmiller explains in a recent blog entry that Jumpgate, a free and open source library, provides a compatibility layer between the OpenStack Application Programming Interface (API) and a proprietary API.

Jumpgate does a decent job of translating the IBM SoftLayer proprietary API into the OpenStack API, which means that SoftLayer can now easily be used as a cloud provider for products and solutions that speak the OpenStack API.

IBM Cloud Orchestrator provides an orchestration layer for cloud environments supporting OpenStack API. By using Jumpgate, we can configure SoftLayer as a cloud region in IBM Cloud Orchestrator. That opens up opportunities to explore the value of IBM Cloud Orchestrator cloud management capabilities on top of SoftLayer:

Patterns: IBM Cloud Orchestrator allows for easy identification of workload patterns. For example, Edwin Schouten explains the different types of patterns in his blog post.

Workflows: This is the key differentiator of IBM Cloud Orchestrator from other IBM products, like PureApp or PureSystem. As IBM Cloud Orchestrator ships a business process layer, the user can either write his or her own workflows or re-use the workflows available in the IBM Cloud Marketplace.

Hybrid cloud: IBM Cloud Orchestrator supports many cloud environments, like VMware or KVM. Now with Jumpgate, we can create a hybrid cloud solution, combining clouds running on-premises with SoftLayer, as an off-prem, public cloud provider.

Now, you might think, “Why do I need another API to access SoftLayer?” SoftLayer exposes all its functionalities in a comprehensive proprietary API, so in theory there is no need for a layer like Jumpgate. The key aspect here is proprietary, as users would need to know the details of the SoftLayer API.

The OpenStack value comes from many providers exposing their compute resources through a pre-defined API. This way, similar compute resources can be requested and managed uniformly. For example, with the OpenStack Nova API, a user can request KVM, Xen, VMware, Hyper-V and now SoftLayer resources in the same way, without worrying about the particularities of each hypervisor.
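
For instance, the same request (the flavor, image and key names here are purely illustrative) provisions a server whether the region is backed by KVM, VMware, Hyper-V or, via Jumpgate, SoftLayer:

nova boot --flavor m1.small --image fedora-21 --key-name mykey demo-instance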

Ultimately, Jumpgate allows users to combine the benefits of using SoftLayer within OpenStack, enabling products like IBM Cloud Orchestrator to be integrated with SoftLayer.

Jumpgate is free, open source and available on GitHub. You can find more information here.

The post Jumpgate: SoftLayer’s free library for OpenStack cloud compatibility appeared first on Thoughts on Cloud.

by Eduardo Patrocinio at December 06, 2014 01:00 PM

OpenStack Blog

OpenStack Community Weekly Newsletter (Nov 28 – Dec 6)

December 2014 OpenStack Infrastructure User Manual Sprint

During this week the Infrastructure team released a significant milestone for the Infrastructure User Manual. The manual consolidates documentation for Developers, Core Reviewers and Project Drivers, which was spread across wiki pages, project-specific documentation files and general institutional knowledge. The manual is starting to look great, and a lot of wiki pages have since been declared obsolete.

Third-party CI account creation is now self-serve

This means that new third-party accounts will need to follow a new, leaner process. There are some changes for existing accounts. Read the full announcement.

My Key Learning from Paris

Matt Fischer “learned a bunch of things in Paris: French food is amazing, always upgrade OVS” and “All operators have the same problems.” The result of his and others’ realization is the beginnings of an Operations Project.

Relevant Conversations

Deadlines and Development Priorities

Stable Release

Tips ‘n Tricks

Reports from Previous Events

Upcoming Events

Other News

Got Answers?

Ask OpenStack is the go-to destination for OpenStack users. Interesting questions waiting for answers:

Welcome New Reviewers and Developers

Kai Qiang Wu Midun Kumar
Jing Zeng Jukka Lehtniemi
Marco Fargetta Sunil Kumar
Sam Yaple chenglch
Qi Zhang Yoni
Richard Hedlind Kyle Rockman
Hironori Shiina qiaomin032
yoan desbordes Stefano Canepa
Surojit Pathak Sam Yaple
Rohit Jaiswal Chris Cannon

OpenStack Reactions


Starting a VM on my newly deployed OpenStack cloud

The weekly newsletter is a way for the community to learn about all the various activities occurring on a weekly basis. If you would like to add content to a weekly update or have an idea about this newsletter, please leave a comment.

by Stefano Maffulli at December 06, 2014 12:26 AM

December 05, 2014

Opensource.com

Who are the OpenStack Ambassadors?

Learn about the OpenStack Ambassador program, which was designed to create a global framework of community leaders who can help expand the reach of OpenStack.

by Jason Baker at December 05, 2014 10:00 AM

December 04, 2014

Percona

MySQL and OpenStack deep dive: Dec. 10 webinar

Fact: MySQL is the most commonly used database in OpenStack deployments. Of course that includes a number of MySQL variants – standard MySQL by Oracle, MariaDB, Percona Server, MySQL Galera, Percona XtraDB Cluster, etc.

However, there are many misconceptions and myths around the pros and cons of these MySQL flavors. Join me and my friend Jay Pipes of Mirantis next Wednesday (Dec. 10) at 10 a.m. Pacific and we’ll dispel some of these myths and provide a clearer picture of the strengths and weaknesses of each of these flavors.

This free Percona webinar, titled “MySQL and OpenStack Deep Dive,” will also illuminate the pitfalls to avoid when migrating between MySQL flavors – and what architectural information to take into account when planning your OpenStack MySQL database deployments.

We’ll also discuss replication topologies and techniques, and explain how the Galera Cluster variants differ from standard MySQL replication.

Finally, in the latter part of the session, we’ll take a deep dive into MySQL database performance analysis, diving into the results of a Rally run showing a typical Nova workload. In addition, we’ll use Percona Toolkit’s famed pt-query-digest tool to determine if a synchronously replicated database cluster like the free Percona XtraDB Cluster is a good fit for certain OpenStack projects.

The webinar is free but I encourage you to register now to reserve your spot. See you Dec. 10! In the meantime, learn more about the new annual OpenStack Live Conference and Expo which debuts April 13-14 in the heart of Silicon Valley. If you register now you’ll save with Early Bird pricing. However, one lucky webinar attendee will win a full pass! So be sure to register for next week’s webinar now for your chance to win! The winner will be announced at the end of the webinar.

The post MySQL and OpenStack deep dive: Dec. 10 webinar appeared first on MySQL Performance Blog.

by Peter Boros at December 04, 2014 09:59 PM

Tesora Corp

Self-Service Database Administration: Not as Scary as it Sounds

For the past half-century or more, the overriding purpose of IT has been control and management of the data environment. Provisioning new resources, integrating disparate data environments, overseeing network flow and client connectivity: these are all important tasks, but frankly they have become repetitive and mundane, doing a disservice to the skillsets that modern data center technicians have acquired and the salaries they command.

This is all starting to change with the proliferation of cloud platforms like OpenStack, however, as self-service portals and new generations of intuitive software remove much of the complexity involved in resource and infrastructure management. This is particularly true in the case of database management, with new Database as a Service (DBaaS) platforms promising to both streamline infrastructure and vastly reduce the cost of database support, even when resources are scaled up and out in the drive to capitalize on Big Data and the Internet of Things.

While the benefits of self-service are becoming well known, it can still be a little unnerving for the front office. If everyone is going around provisioning their own resources and overseeing their own database environments, how can the enterprise maintain compatibility between data sets and applications? And how can it prevent costs from spinning out of control as workers take it upon themselves to buy whatever resources they need whenever they need them?

The truth of the matter, however, is that with proper safeguards and governance in place, self-service can not only improve current database management, but it can overcome many of the incompatibilities that are endemic in today’s silo-based data center infrastructure and provide better, more centralized security and oversight.

One of the key advantages to DBaaS is that it improves the application development and deployment process. By allowing the creation of dev/test environments that virtually mirror actual production environments, new apps can be created, tested and deployed in record time. Self-service plays a key role in this process by improving data flow throughout the application lifecycle and enabling developers to make rapid changes to the work environment in order to test new code. And perhaps even more importantly, it allows for rapid decommissioning of resources, so the enterprise doesn’t end up paying for what it no longer uses.

Even setting up the self-service component of the DBaaS environment is becoming easier with technologies like OpenStack Trove. When attempted through a traditional database platform, the process involves templates, schemas, various plug-ins, clone management functions and a number of other steps. However, with integrated self-service capabilities starting to crop up in leading DBaaS platforms, the establishment of a service-based environment is vastly simplified and can be tailored to suit the legacy environment of the enterprise’s choice.
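
As a point of reference, a self-service database request through Trove can be as simple as the following CLI calls (the flavor ID, size and names are illustrative):

# Provision a 5 GB MySQL instance with one database and one user
trove create my-db 2 --size 5 --databases appdb --users appuser:s3cr3t

# Watch it come online, then retrieve its details
trove list
trove show my-db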

A key question in all of this, though, is what will happen to the database administrator (DBA) if users begin building and managing their own environments? According to MongoDirector.com founder Dharshan Rangegowda, the days of the “point-and-click” administrator are over. In the self-service DBaaS world, mundane tasks like setting up the DB, writing scripts, monitoring the environment and the like will be taken over by software. This means today’s DBA needs to focus higher up the stack, on functions that are more difficult to perform on an automated basis: things like performance and query analysis, as well as overarching policy and governance issues.

As for cost containment, one of the key tools in this endeavor is the charge-back. As long as those who deploy the service – whether they be developers, team leaders or the heads of business units – possess adequate budgeting and management skills, there is no reason why database and infrastructure support costs cannot be treated as another line item in the annual budget. And in fact many cloud platforms feature alerts that notify higher ups if users are starting to push cost guidelines.

Self-service, then, is nothing to be afraid of. In fact, it is fair to say that organizations that do not employ a healthy dose of self-service database functionality using a technology like OpenStack will find themselves behind the curve when it comes to developing and implementing the proper tools to handle forthcoming workloads.

by 1521 at December 04, 2014 09:11 PM

Zane Bitter

Three Flavours of Infrastructure Cloud

A curious notion that seems to be doing the rounds of the OpenStack traps at the moment is the idea that Infrastructure-as-a-Service clouds must by definition be centred around the provisioning of virtual machines. The phrase ‘small, stable core’ keeps popping up in a way that makes it sound like a kind of dog-whistle code for the idea that other kinds of services are a net liability. Some members of the Technical Committee have even got on board and proposed that the development of OpenStack should be reorganised around the layering of services on top of Nova.

Looking at the history of cloud computing reveals this as a revisionist movement. OpenStack itself was formed as the merger of Nova and the object storage service, Swift. Going back even further, EC2 was the fourth service launched by Amazon Web Services. Clearly at some point we believed that a cloud could mean something other than virtual machines.

Someone told me a few weeks ago that Swift was only useful as an add-on to Nova; a convenience exploited only by the most sophisticated modern web application architectures. This is demonstrably absurd: you can use Swift to serve an entire static website, surely the least sophisticated web application architecture possible (though no less powerful for it). Not to mention all the other potential uses that revolve around storage and not computation, like online backups. Entire companies, including SwiftStack, exist only to provide standalone object storage clouds.
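
To make that concrete, with Swift's staticweb middleware enabled, the entire "application" can be a handful of CLI commands (the container and file names are illustrative):

# Create a publicly readable container and tell staticweb which object is the index
swift post website --read-acl ".r:*,.rlistings"
swift post website -m "web-index:index.html"

# Upload the site content; it is now served directly out of Swift
swift upload website index.html styles.css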

You could in theory tell a similar story for an asynchronous messaging service. Can you imagine an application in which two devices with intermittent network connectivity might want to communicate in a robust way? (Would it help if I said one was in your pocket?) I can, and in case you didn’t get the memo, the ‘Internet of Things’ is the new ‘Cloud’—in the sense of being a poorly-defined umbrella term for a set of loosely-related technologies whose importance stems more from the diversity of applications implemented with them than from any commonality between them. You heard it here first. What you need here is a cloud in the original sense of the term: an amorphous blob that is always available and abstracts away the messier parts of end-to-end communication and storage. A service like Zaqar could be a critical piece of infrastructure for some of these applications. I am not aware of a company which has been successful deploying a service of this type standalone, though there have certainly been attempts (StormMQ springs to mind). Perhaps for a reason, or perhaps they were just ahead of their time.

Of course things get even better when you can start combining these services, especially within the framework of an integrated IaaS platform like OpenStack, where things like Keystone authentication are shared. Have really big messages to send? Drop them into object storage and include a pointer in the message. Want to process a backlog of messages? Fire up some short-lived virtual machines to churn through them. Want tighter control of access to your stored objects? Proxy the request through a custom application running on a Nova server.

Those examples are just the tip of the iceberg of potential use cases that can be solved without even getting into the Nova-centric ones. Obviously the benefits to modern, cloud-native applications of accessing durable, scalable, multi-tenant storage and messaging as services are potentially huge as well.


Nova, Zaqar and Swift are the Peanut Butter, Bacon and Bananas of your OpenStack cloud sandwich: each is delicious on its own, or in any combination. The 300 pound Elvis of cloud will naturally want all three, but expect to see every possible permutation deployed in some organisation. Part of the beauty of open source is that one size does not have to fit all.

Of course providing stable infrastructure to move legacy applications to a shared, self-service model is important, and it is no surprise to see users clamouring for it in this still-early stage of the cloud transition. However if the cloud-native applications of the future are written against proprietary APIs then OpenStack will have failed to achieve its mission. Fortunately, I do not believe those goals are in opposition. In fact, I think they are complementary. We can, and must, do both. Stop the madness and embrace the tastiness.

by Zane Bitter at December 04, 2014 04:14 PM

Mirantis

Mirantis Releases Free Developer Edition of Mirantis OpenStack Express

Accelerates Developer Adoption with OpenStack Tutorials

MOUNTAIN VIEW, CA – December 4, 2014 – To support developer adoption of OpenStack, Mirantis, the pure-play OpenStack company, today released a free Developer Edition of Mirantis OpenStack Express and a dozen new free OpenStack tutorials. Mirantis OpenStack Express is the fastest and easiest way to get an OpenStack cloud. Its Developer Edition is tailored for solo developers and solution providers looking to try OpenStack, and offers an on-demand, hosted development environment for Mirantis OpenStack, along with full 24/7 support, at no cost for the first year.

To enable OpenStack newcomers to quickly become productive with OpenStack, Mirantis’ Developer Edition comes with a dozen new online tutorials based on Mirantis Training for OpenStack. The tutorials cover common use cases such as adding images, launching VMs, and using the Murano OpenStack application catalogue, all of which are available in the Developer Edition.

Mirantis is deeply invested in the OpenStack community. It is a top three contributor to OpenStack, offers the leading OpenStack distribution, and provides the most popular OpenStack training on the market. This year, Mirantis’ Driverlog initiative catalogued 106 vendor solutions that plug in underneath OpenStack, giving customers a single, consolidated list of compatible hardware.

“The success of OpenStack is all about the solutions running on top of it,” said Mirantis CEO Adrian Ionel. “Hundreds of thousands of developers are now familiar with cloud services and we want to enable them to ride the wave of OpenStack adoption. The free Developer Edition is their invitation to learn OpenStack and see its benefits, at no cost or risk.”

A number of solutions vendors have already started using Mirantis OpenStack Express for application development, including RealStatus, Tata Consultancy Services and Avi Networks. “With Mirantis OpenStack Express, Avi Networks can bring up a robust application delivery solution in minutes, not hours or days,” said Guru Chahal, VP Product Management, Avi Networks. “The technology simplifies OpenStack setup and configuration, saving us valuable engineering resources, and bringing instant value to our customers.”

The Developer Edition of Mirantis OpenStack Express will be free for the first 12 months, and $39.99 per month thereafter. It includes an OpenStack tenant with a quota of 4 virtual CPUs, 4 GB RAM, 100 GB of storage, and two floating IP addresses, plus access to OpenStack APIs. Developers can add resources on demand, starting with quotas of 2 vCPU, 2 GB RAM, and 50 GB of storage at $19.99 per month.

In addition to the Developer Edition, Mirantis offers an affordable Team Edition of Mirantis OpenStack Express for mid-sized teams looking to develop, test and deploy their applications on OpenStack, and a customizable Enterprise Edition of Mirantis OpenStack Express for large enterprises looking to migrate enterprise workloads to OpenStack. To learn more about each edition of Mirantis OpenStack Express, visit: https://express.mirantis.com/.

About Mirantis
Mirantis is the world’s leading OpenStack company. Mirantis delivers all the software, services, training and support needed for running OpenStack. More customers rely on Mirantis than any other company to get to production deployment of OpenStack at scale. Among the top three companies worldwide in contributing open source software to OpenStack, Mirantis has helped build and deploy some of the largest OpenStack clouds in the world at companies such as Cisco, Comcast, Ericsson, Expedia, NASA, NTT Docomo, PayPal, Symantec, Samsung, and WebEx.

Mirantis is venture-backed by Insight Venture Partners, August Capital, Ericsson, Red Hat, Intel Capital, Sapphire Ventures and WestSummit Capital, with headquarters in Mountain View, California. Follow us on Twitter at @mirantisit.

Learn more:
Mirantis: www.mirantis.com
Mirantis OpenStack: https://software.mirantis.com/
Mirantis OpenStack Express: https://express.mirantis.com/
Mirantis Training for OpenStack: http://training.mirantis.com/

Contact Information:
Sarah Bennett
PR Manager, Mirantis
sbennett@mirantis.com

The post Mirantis Releases Free Developer Edition of Mirantis OpenStack Express appeared first on Mirantis | The #1 Pure Play OpenStack Company.

by Sarah Bennett at December 04, 2014 02:00 PM

OpenStack Blog

Studying Midcycle Sprints and Meetings

Hard to believe that we gathered in Paris just a month ago for the Design Summit. I’m still snacking on the chocolate and cheese from that fine place.

Since the Summit we’ve had questions and posts about midcycle meeting planning, so I started gathering information from the various teams and we discussed it at a recent cross-project meeting. Here are the findings about patterns, guidelines, and pitfalls for midcycle meetings. I’ve posted these to the OpenStack wiki also so we can continue to evolve and refine as our teams self-organize.

Patterns we’re seeing for current midcycle meetings:

  • About 10-30 people attend.
  • Feedback by attendees is positive; midcycle meetings are generally considered productive events.
  • Only a handful of groups tend to hold them; for this cycle we’re seeing some teams choose not to. Often subteams form around a specific task to complete.
  • Meeting space is donated by companies, so location is often based on available free space, and dates then depend on the location's availability.
  • Travel costs are the responsibility of the individual attending (their employer most likely).
  • Sometimes companies will host meals (not often).

Guidelines we want teams to stick to:

  • The highest return on investment is in the early days of joining or forming the team, when social bonds and trust need to be established.
  • Best quality meetings tend to be sprint-oriented with real work getting done and a specific outcome in mind; hence the mid-cycle sprint name.
  • Organization responsibility lands on PTL or a designated delegate.
  • Multiple sprints can be co-hosted. However, productivity also lies in the small, quiet environment, and social bonding is easier in smaller groups.
  • Unless there are good cross-pollination opportunities between co-hosted teams, teams should favor separate sprints.
  • The Design Summit should be the first choice for gathering the whole team for decisions and roadmap alignment.

Known risks to be aware of:

  • A pitfall can be a feeling of “required” attendance (social or actual) causing hard feelings; also, some types of contributors will be marginalized despite best efforts, such as those without a corporate sponsor, family caretakers, and people who need visas to travel.
  • Often there’s a choice implied in choosing what an individual travels to; adding a midcycle sprint means a choice has to be made.
  • Virtual aspects of a midcycle require additional support such as open source tooling, or if using non-open source tooling, you must get agreement from participants.

From my own experience, the docs team had a boot camp over a year ago where we had a great time focusing on our team, but since then we haven’t needed to meet separately from the Design Summit. Here’s Michael Still giving his “I love you” sign at our docs team meetup.
Doc boot camp

If you have any questions about midcycle sprints, please ask Stefano Maffulli. To get a list of upcoming sprints, both in person and virtual, go to the Sprints wiki page.

by Anne Gentle at December 04, 2014 05:46 AM

December 03, 2014

Elizabeth K. Joseph

December 2014 OpenStack Infrastructure User Manual Sprint

Back in April, the OpenStack Infrastructure project created the Infrastructure User Manual. This manual sought to consolidate our existing documentation for Developers, Core Reviewers and Project Drivers, which was spread across wiki pages, project-specific documentation files and general institutional knowledge that was mostly just in our brains.

Books

In July, at our mid-cycle sprint, Anita Kuno drove a push to start getting this document populated. There was some success here; we had a couple of new contributors. Unfortunately, after the mid-cycle, reviews only trickled in and vast segments of the manual remained empty.

At the summit, we had a session to plan out how to change this and announced an online sprint in the new #openstack-sprint channel (see here for scheduling: https://wiki.openstack.org/wiki/VirtualSprints). We hosted the sprint on Monday and Tuesday of this week.

Over these 2 days we collaborated on an etherpad so no one was duplicating work and we all did a lot of reviewing. Contributors worked to flesh out missing pieces of the guide and added a Project Creator’s section to the manual.

We’re now happy to report that, with the exception of the Third Party section of the manual (to be worked on collaboratively with the broader Third Party community at a later date), our manual is looking great!

The following are some stats about our sprint gleaned from Gerrit and Stackalytics:

Sprint start

  • Patches open for review: 10
  • Patches merged in total repo history: 13

Sprint end:

  • Patches open for review: 3, plus 2 WIP (source)
  • Patches merged during sprint: 30 (source)
  • Reviews: Over 200 (source)

We also have 16 patches for documentation in flight that were initiated or reviewed elsewhere in the openstack-infra project during this sprint, including the important reorganization of the git-review documentation (source)

Finally, thanks to sprint participants who joined me for this sprint, sorted chronologically by reviews: Andreas Jaeger, James E. Blair, Anita Kuno, Clark Boylan, Spencer Krum, Jeremy Stanley, Doug Hellmann, Khai Do, Antoine Musso, Stefano Maffulli, Thierry Carrez and Yolanda Robla

by pleia2 at December 03, 2014 04:30 PM

Tesora Corp

Short Stack: New Surveys Show OpenStack Growth, Comcast Reflects on Summit

Welcome to the Short Stack, our weekly feature where we search for the most intriguing OpenStack links to share with you. These links may come from traditional publications or company blogs, but if it's about OpenStack, we'll find the best links we can to share with you every week.

If you like what you see, please consider subscribing.

Here we go with this week's links:

Accelerating the Pace of Open Source Innovation to Make the World Move Faster | Comcast Voices

At this past OpenStack Summit in Paris, Comcast was a featured case study and a finalist for the Superuser of the Year award. In this post, Comcast's Victor Howard shares what he found exciting about the Summit.

Survey: OpenStack Dominates Open Source Cloud | Virtualization Review 

The "2014 State of the Open Source Cloud" survey by Zenoss supports what many of us have been seeing recently in the open source cloud space. OpenStack still has great momentum and continues to grow its market share at the expense of CloudStack and Eucalyptus.

The State Of OpenStack Video | Rackspace Blog!

Rackspace's Jim Curry and Van Lindberg share their points of view on OpenStack and its future.

Getting OpenStack Ready for the Enterprise | ZDNet

In this survey by Gigaom Research and CipherCloud, we see even more enterprises considering OpenStack with 64 percent of IT Managers including it on their technology roadmaps.

Delivering Public Cloud Functionality in #OpenStack | VMblog

This post explores how one can deliver public cloud-like packaging, metering and billing to the enterprise with Red Hat Enterprise Linux and Talligent, leveraging Ceilometer.

by 1 at December 03, 2014 01:51 PM

Arx Cruz

OpenStack 3rd Party CI - Part III - Configuring your puppet recipes

Last time, we talked about Puppetboard. Now let’s start to work with recipes to install our services.
For that I’ve created a GitHub project called openstack-puppet-recipes.

I will continue to update the github as we progress in this series of posts.

If you remember my first post, we have two environments, production and development, and we have a tree hierarchy that I'll assume you are following.

So, let’s connect to our puppet-master server and clone openstack-puppet-recipes:

git clone https://github.com/arxcruz/openstack-puppet-recipes

Now just run install_recipes.sh:

./install_recipes.sh

The install_recipes.sh is a very simple script; it just automates creating the symbolic link to puppet-config, generating the puppet.conf, and a few other things.
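
The script itself isn't reproduced here, but a minimal sketch of what it roughly does might look like the following. This is only an approximation based on the description above, not the actual script, and it assumes you run it from the cloned openstack-puppet-recipes directory:

#!/bin/bash
# Hypothetical sketch of install_recipes.sh: link the repo's puppet-config into
# /etc/puppet and point manifests, hiera.yaml and puppet.conf at it.
set -e
REPO_DIR=$(pwd)

sudo rm -rf /etc/puppet/manifests /etc/puppet/puppet.conf
sudo ln -sfn "$REPO_DIR/puppet-config" /etc/puppet/puppet-config
sudo ln -sfn /etc/puppet/puppet-config/manifests /etc/puppet/manifests
sudo ln -sfn /etc/puppet/puppet-config/hiera.yaml /etc/puppet/hiera.yaml
sudo ln -sfn /etc/puppet/puppet-config/puppet.conf /etc/puppet/puppet.conf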

Okay, now you should have this in your /etc/puppet directory:

(The resulting directory listing is published as a gist: https://gist.github.com/arxcruz/d740d9e30775c92c1d74)

Now let’s restart the puppetmaster service:

sudo service puppetmaster restart

Or if you’re running puppet-passenger:

sudo service apache2 restart

And you’re done for now. In the next post in this series, we will create our first service: Zuul.

by Arx Cruz at December 03, 2014 09:26 AM

Welcome to Postach.io!

Postach.io is the blogging platform that's powered by your Evernote documents.

Creating a Post

Postach.io creates blog posts and pages from your notes. To create a post, write a note and tag it as "published" in this notebook. Then click the "Sync" button. Bam! Your note is published on your site, just like that!

Creating a Page

Creating a page works the same way, except you add an additional tag, "page". This tells Postach.io to create your note as a page instead of a post.

Editing a Post

Updating a note is just as easy. Try making a change to this note, and click the "Sync" button. Visit your site, and you'll see it's been updated.

Deleting a Post

To remove a note from your site, simply remove the "published" tag and re-sync. You could also delete the note itself. It's that easy!

How do I change the profile photo on my site?

To upload a custom profile photo, click "Choose File" in your site settings and select the image you want to use. That's it! For those who use Gravatar, we support that by default too.

Creating Additional Sites?

To create a site, click the "Create New Site" button above. You'll be taken to Evernote to authenticate your account. You'll then be able to select an Evernote notebook and a unique subdomain, along with other details such as author, Twitter username and Google Analytics code.

How can I format the style of my posts and pages?

There are several ways to format your notes. The easiest is to use the Evernote browser version. Although Evernote has desktop and mobile versions, the browser app has more formatting options for fonts, resizing and aligning images, and inserting other elements. The second option is to use Markdown, which is basically a way of using regular text elements to format your notes. You can learn more about Markdown here. The last, and most advanced, approach is to use HTML directly. Postach.io supports basic HTML elements and styles.

How can I add comments to my blog posts?

Postach.io uses Disqus for comments. It's a widely used and excellent tool for adding threaded comments to your site quickly. You'll need to create an account, which is free, to start using it.

I still have questions!

We'd be glad to help! Hit us up by clicking the "?" icon at the bottom right-hand corner of your Postach.io screen. Alternatively, we post lots of tutorials and tips on our blog and Facebook Group.

Example Postach.io Blogs

Postach.io can be used to create many variations of blogs and documentation sites! Here are some ideas to get you started.

  • A tumble blog of your life and adventures.
  • The internet needs more funny cat photos. Go!
  • As an educator you can use it for teaching.
  • Start a food blog of your favourite places to eat, with a short review.
  • Documentation for your product or app.
  • Create a link blog of interesting articles.
  • Share your notebook with friends to make a collaborative wiki.

Moving Forward

We're really excited about Postach.io and are happy you've decided to join us for the ride! If you have any suggestions, questions or feedback, we're all ears! We want to make this thing awesome! We can't do it without you.

Don't forget to follow us on Twitter or Facebook

by Arx Cruz at December 03, 2014 09:26 AM

OpenStack 3rd Party CI - Part I

So, today I will start a series of posts about how to set up your own 3rd party OpenStack CI.
To have a minimal infrastructure, we need the following services:
  • Puppetmaster (not strictly a requirement, but it will make your life easier)
  • Puppetboard (also optional, but good for keeping track of all our changes)
  • Zuul
  • Nodepool
  • Jenkins
The coolest thing is that you can use OpenStack itself to build your 3rd Party CI. For example, I use OpenStack to create the VMs that provide these services to our 3rd Party CI, and these VMs, like nodepool, connect to the same OpenStack to spawn the VMs that will test OpenStack projects (nova, tempest, neutron, etc.). It also makes backups and snapshots easier to manage: if your Zuul service VM stops working, just create a new one from a snapshot, or create a new VM, run puppet, and voilà! So yeah… Inception feelings here.
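
For example, with the nova command-line client you can snapshot a service VM and rebuild it later. This is just a sketch; the server, image and flavor names are illustrative:

# Snapshot the running Zuul VM
nova image-create zuul-server zuul-server-snapshot

# Later, boot a replacement from the snapshot...
nova boot --flavor m1.medium --image zuul-server-snapshot zuul-server-new

# ...or boot a fresh image and let puppet configure it
nova boot --flavor m1.medium --image ubuntu-12.04 zuul-server-new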

Today we will start with puppetmaster.

Why puppetmaster?

Well, puppet will enable us to keep up to date with OpenStack CI upstream, as well as make it easy to deploy our services (Zuul, Nodepool and Jenkins).

Before we start

I’m assuming you’re using Ubuntu 12.04. I had problems with newer versions of Ubuntu; feel free to test with the latest version and let me know if it works.

Installing puppet-master

Well, after you have your VM or machine up and running, installing puppet-master is pretty easy: you just need to install the puppetmaster package. But since our focus here is to create our 3rd Party CI, let’s do a few things differently.
First of all, get the config repository from the openstack-infra GitHub:
git clone https://github.com/openstack-infra/config
cd config
./install_puppet.sh
./install_modules.sh

This will install all the necessary modules from OpenStack.

Now, you can install the puppetmaster itself:

sudo apt-get install puppetmaster

Now, let’s explain how this puppetmaster will work:

We will use the recipes from openstack-infra/config together with our own recipes. The great thing about puppet is that you can inherit a recipe or sometimes override some values in the classes/recipes. For example, in openstack-infra/config/manifests/site.pp we have this:

node 'jenkins.openstack.org' {
  class { 'openstack_project::jenkins':
    jenkins_jobs_password   => hiera('jenkins_jobs_password', 'XXX'),
    jenkins_ssh_private_key => hiera('jenkins_ssh_private_key_contents', 'XXX'),
    ssl_cert_file_contents  => hiera('jenkins_ssl_cert_file_contents', 'XXX'),
    ssl_key_file_contents   => hiera('jenkins_ssl_key_file_contents', 'XXX'),
    ssl_chain_file_contents => hiera('jenkins_ssl_chain_file_contents', 'XXX'),
    sysadmins               => hiera('sysadmins', ['admins']),
    zmq_event_receivers     => ['logstash.openstack.org',
    ],
  }
}

We can change this to something that fits our jenkins server (don’t worry, I will show how to install your own jenkins server later):

node 'myjenkins.mycompany.com' {
  class { 'openstack_project::jenkins':
    jenkins_jobs_password   => hiera('jenkins_jobs_password', 'XXX'),
    jenkins_ssh_private_key => hiera('jenkins_ssh_private_key_contents', 'XXX'),
    ssl_cert_file_contents  => hiera('jenkins_ssl_cert_file_contents', 'XXX'),
    ssl_key_file_contents   => hiera('jenkins_ssl_key_file_contents', 'XXX'),
    ssl_chain_file_contents => hiera('jenkins_ssl_chain_file_contents', 'XXX'),
    sysadmins               => hiera('sysadmins', ['admins']),
    zmq_event_receivers     => ['myzmq_events.mycompany.com',
                                'mynodepool.mycompany.com',
    ],
  }
}

And it “should" work. I say should because, although there’s an effort in the community to make these recipes as customizable as possible, there is sometimes still project-specific hardcoded information in them. For example, the puppetmaster server is hardcoded in all openstack-infra/config recipes, so you have to modify it manually since there’s no option to pass this information as an argument.
Or you can always help the community by writing a patch to enable this.
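
Before patching or editing anything, it can help to find where such values live in your checkout. The search string below is only an example of the kind of value to look for; the exact strings depend on the revision of openstack-infra/config you are using:

# Look for project-specific values hardcoded in the upstream recipes
cd config
grep -rn "openstack.org" manifests/ modules/openstack_project/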

Okay, let’s get back to our puppetmaster configuration.

Configuring puppetmaster

So, what I normally do is keep a git repository with my own puppet files. This is good because I have a cron job running every hour that checks this git repository, so I never need to manually change anything on my puppetmaster, and I can track all my changes (a sample crontab entry is sketched a little further below). In this repository, I have the following structure:

puppet-config -> The root
puppet-config/manifests -> Where I keep my site.pp containing the recipes for all my services (zuul, nodepool, status, jenkins, etc.)
puppet-config/hieradata -> Contains my hiera data for both production and development
puppet-config/hieradata/production/common.yaml -> hiera data for production
puppet-config/hieradata/development/common.yaml -> hiera data for development
puppet-config/hiera.yaml -> hiera configuration pointing to production and development
puppet-config/puppet.conf -> Yes, we're going to need our own puppet.conf, and I will show you how to create it.

I keep this git repository with this structure in /etc/puppet and create symbolic links as follows:

cd /etc/puppet
rm -rf manifests
rm -rf puppet.conf
ln -sf puppet-config/manifests manifests
ln -sf puppet-config/hiera.yaml hiera.yaml
ln -sf puppet-config/puppet.conf puppet.conf
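
The hourly update I mentioned above can be a plain cron entry. This is a sketch; it assumes the repository is a normal git checkout living in /etc/puppet/puppet-config:

# /etc/cron.d/puppet-config: pull my puppet-config repository every hour
0 * * * * root cd /etc/puppet/puppet-config && git pull --quiet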

Now let’s see the content of the hiera.yaml:

:backends:
  - yaml
:yaml:
  :datadir: "/etc/puppet/puppet-config/hieradata/%{::environment}"
:hierarchy:
  - "%{::clientcert}"
  - "%{::custom_location}"
  - common

Notice the %{::environment} here. You can have as many environments as you want; in my case I have two, production and development, so you only need a single puppetmaster to rule them all.
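
With that in place you can point an agent at either environment explicitly; for example, to do a one-off test run against development:

sudo puppet agent --test --environment development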

Now let’s see the puppet.conf (I’m adding comments where I think it’s necessary):

[main]
# this is the name of your puppetmaster
server = puppet-master.mycompany.com
[master]
# The maximum time to delay before runs. Defaults to being the same as the
# run interval.
# The default value is '$runinterval'.
splaylimit = 1800

# The log file for puppet agent. This is generally not used.
# The default value is '$logdir/puppetd.log'.
puppetdlog = /var/log/puppet/puppetd.log

# The directory in which client-side YAML data is stored.
# The default value is '$vardir/client_yaml'.
clientyamldir = /var/lib/puppet/client_yaml

# The server to which puppet agent should connect
# The default value is 'puppet'.
server = puppet-master.mycompany.com

# The file in which puppet agent stores a list of the resources
# associated with the retrieved configuration.
# The default value is '$statedir/resources.txt'.
resourcefile = /var/lib/puppet/state/resources.txt

# The port to communicate with the report_server.
# The default value is '$masterport'.
report_port = 8140

# Where puppet agent stores the last run report summary in yaml format.
# The default value is '$statedir/last_run_summary.yaml'.
lastrunfile = /var/lib/puppet/state/last_run_summary.yaml

# Where to store dot-outputted graphs.
# The default value is '$statedir/graphs'.
graphdir = /var/lib/puppet/state/graphs

# Where puppet agent caches the local configuration. An
# extension indicating the cache format is added automatically.
# The default value is '$statedir/localconfig'.
localconfig = /var/lib/puppet/state/localconfig

# The directory in which serialized data is stored on the client.
# The default value is '$vardir/client_data'.
client_datadir = /var/lib/puppet/client_data

# (Deprecated for 'report_server') The server to which to send transaction reports.
# The default value is '$server'.
reportserver = puppet-master.mycompany.com

# The server to send facts to.
# The default value is '$server'.
inventory_server = puppet-master.mycompany.com

# The server to use for certificate
# authority requests. It's a separate server because it cannot
# and does not need to horizontally scale.
# The default value is '$server'.
ca_server = puppet-master.mycompany.com

# Where puppet agent stores the last run report in yaml format.
# The default value is '$statedir/last_run_report.yaml'.
lastrunreport = /var/lib/puppet/state/last_run_report.yaml

# The explicit value used for the node name for all requests the agent
# makes to the master. WARNING: This setting is mutually exclusive with
# node_name_fact. Changing this setting also requires changes to the default
# auth.conf configuration on the Puppet Master. Please see
# The default value is '$certname'.
node_name_value = puppet-master.mycompany.com

# Where puppet agent and puppet master store state associated
# with the running configuration. In the case of puppet master,
# this file reflects the state discovered through interacting
# with clients.
# The default value is '$statedir/state.yaml'.
statefile = /var/lib/puppet/state/state.yaml

# Where FileBucket files are stored locally.
# The default value is '$vardir/clientbucket'.
clientbucketdir = /var/lib/puppet/clientbucket

# The file in which puppet agent stores a list of the classes
# associated with the retrieved configuration. Can be loaded in
# the separate `puppet` executable using the `--loadclasses`
# option.
# The default value is '$statedir/classes.txt'.
classfile = /var/lib/puppet/state/classes.txt

# The server to send transaction reports to.
# The default value is '$server'.
report_server = puppet-master.mycompany.com

# The port to communicate with the inventory_server.
# The default value is '$masterport'.
inventory_port = 8140

# The port to use for the certificate authority.
# The default value is '$masterport'.
ca_port = 8140

# A lock file to temporarily stop puppet agent from doing anything.
# The default value is '$statedir/puppetdlock'.
puppetdlockfile = /var/lib/puppet/state/puppetdlock

# Where SSL certificates are kept.
# The default value is '$confdir/ssl'.
ssldir = /var/lib/puppet/ssl

# From where to retrieve plugins. The standard Puppet `file` type
# is used for retrieval, so anything that is a valid file source can
# be used here.
# The default value is 'puppet://$server/plugins'.
pluginsource = puppet://puppet/plugins

# The private key directory.
# The default value is '$ssldir/private_keys'.
privatekeydir = /var/lib/puppet/ssl/private_keys

# Where Puppet should look for facts. Multiple directories should
# be separated by the system path separator character. (The POSIX path separator is ':', and the Windows path separator is ';'.)
# The default value is '$vardir/lib/facter:$vardir/facts'.
factpath = /var/lib/puppet/lib/facter

# Where individual hosts store and look for their certificate requests.
# The default value is '$ssldir/csr_$certname.pem'.
hostcsr = /var/lib/puppet/ssl/csr_puppet-master.mycompany.com.pem

# Where individual hosts store and look for their public key.
# The default value is '$publickeydir/$certname.pem'.
hostpubkey = /var/lib/puppet/ssl/public_keys/puppet-master.mycompany.com.pem

# Should usually be the same as the facts terminus
# The default value is '$facts_terminus'.
inventory_terminus = yaml

# The public key directory.
# The default value is '$ssldir/public_keys'.
publickeydir = /var/lib/puppet/ssl/public_keys

# Where Puppet PID files are kept.
# The default value is '$vardir/run'.
rundir = /var/run/puppet

# The directory where Puppet state is stored. Generally,
# this directory can be removed without causing harm (although it
# might result in spurious service restarts).
# The default value is '$vardir/state'.
statedir = /var/lib/puppet/state

# Where the client stores private certificate information.
# The default value is '$ssldir/private'.
privatedir = /var/lib/puppet/ssl/private

# Where Puppet should store facts that it pulls down from the central
# server.
# The default value is '$vardir/facts/'.
factdest = /var/lib/puppet/facts/

# Where individual hosts store and look for their certificates.
# The default value is '$certdir/$certname.pem'.
hostcert = /var/lib/puppet/ssl/certs/puppet-master.mycompany.com.pem

# The YAML file containing indirector route configuration.
# The default value is '$confdir/routes.yaml'.
route_file = /etc/puppet/routes.yaml

# Where each client stores the CA certificate.
# The default value is '$certdir/ca.pem'.
localcacert = /var/lib/puppet/ssl/certs/ca.pem

# The name to use when handling certificates. Defaults
# to the fully qualified domain name.
# The default value is 'puppet-master.mycompany.com'.
certname = puppet-master.mycompany.com

# Where the puppet agent web server logs.
# The default value is '$logdir/http.log'.
httplog = /var/log/puppet/http.log

# The certificate directory.
# The default value is '$ssldir/certs'.
certdir = /var/lib/puppet/ssl/certs

# Where Puppet should store plugins that it pulls down from the central
# server.
# The default value is '$libdir'.
plugindest = /var/lib/puppet/lib

# The Puppet log directory.
# The default value is '$vardir/log'.
logdir = /var/log/puppet

# Where host certificate requests are stored.
# The default value is '$ssldir/certificate_requests'.
requestdir = /var/lib/puppet/ssl/certificate_requests

# An extra search path for Puppet. This is only useful
# for those files that Puppet will load on demand, and is only
# guaranteed to work for those cases. In fact, the autoload
# mechanism is responsible for making sure this directory
# is in Ruby's search path
# The default value is '$vardir/lib'.
libdir = /var/lib/puppet/lib

# Where puppet agent stores the password for its private key.
# Generally unused.
# The default value is '$privatedir/password'.
passfile = /var/lib/puppet/ssl/private/password

# From where to retrieve facts. The standard Puppet `file` type
# is used for retrieval, so anything that is a valid file source can
# be used here.
# The default value is 'puppet://$server/facts/'.
factsource = puppet://puppet/facts/

# Where individual hosts store and look for their private key.
# The default value is '$privatekeydir/$certname.pem'.
hostprivkey = /var/lib/puppet/ssl/private_keys/puppet-master.mycompany.com.pem

# The configuration file that defines the rights to the different
# namespaces and methods. This can be used as a coarse-grained
# authorization system for both `puppet agent` and `puppet master`.
# The default value is '$confdir/namespaceauth.conf'.
authconfig = /etc/puppet/namespaceauth.conf

# Where the host's certificate revocation list can be found.
# This is distinct from the certificate authority's CRL.
# The default value is '$ssldir/crl.pem'.
hostcrl = /var/lib/puppet/ssl/crl.pem

# Where the puppet master web server logs.
# The default value is '$logdir/masterhttp.log'.
masterhttplog = /var/log/puppet/masterhttp.log

# Where FileBucket files are stored.
# The default value is '$vardir/bucket'.
bucketdir = /var/lib/puppet/bucket

# The header containing the status
# message of the client verification. Only used with Mongrel. This header must be set by the proxy
# to 'SUCCESS' if the client successfully authenticated, and anything else otherwise.
# The default value is 'HTTP_X_CLIENT_VERIFY'.
ssl_client_verify_header = SSL_CLIENT_VERIFY

# Configure the backend terminus used for StoreConfigs.
# By default, this uses the ActiveRecord store, which directly talks to the
# database from within the Puppet Master process.
# The default value is 'active_record'.
storeconfigs_backend = puppetdb

# The search path for modules, as a list of directories separated by the system path separator character. (The POSIX path separator is ':', and the Windows path separator is ';'.)
# The default value is '$confdir/modules:/usr/share/puppet/modules'.
modulepath = /etc/puppet/modules:/usr/share/puppet/modules:/usr/share/puppet/config/modules:/etc/puppet/puppet-config/modules

# The list of reports to generate. All reports are looked for
# in `puppet/reports/name.rb`, and multiple report names should be
# comma-separated (whitespace is okay).
# The default value is 'store'.
reports = store,puppetdb

# Where the fileserver configuration is stored.
# The default value is '$confdir/fileserver.conf'.
fileserverconfig = /etc/puppet/fileserver.conf

# The entry-point manifest for puppet master.
# The default value is '$manifestdir/site.pp'.
manifest = /etc/puppet/manifests/site.pp

# The configuration file that defines the rights to the different
# rest indirections. This can be used as a fine-grained
# authorization system for `puppet master`.
# The default value is '$confdir/auth.conf'.
rest_authconfig = /etc/puppet/auth.conf

# The directory in which YAML data is stored, usually in a subdirectory.
# The default value is '$vardir/yaml'.
yamldir = /var/lib/puppet/yaml

# The directory in which to store reports
# received from the client. Each client gets a separate
# subdirectory.
# The default value is '$vardir/reports'.
reportdir = /var/lib/puppet/reports

# The configuration file for master.
# The default value is '$confdir/puppet.conf'.
config = /etc/puppet/puppet.conf

# Where puppet master logs. This is generally not used,
# since syslog is the default log destination.
# The default value is '$logdir/puppetmaster.log'.
masterlog = /var/log/puppet/puppetmaster.log

# Whether to store each client's configuration, including catalogs, facts,
# and related data. This also enables the import and export of resources in
# the Puppet language - a mechanism for exchange resources between nodes.
# By default this uses ActiveRecord and an SQL database to store and query
# the data; this, in turn, will depend on Rails being available.
# You can adjust the backend using the storeconfigs_backend setting.
storeconfigs = true

# The directory in which serialized data is stored, usually in a subdirectory.
# The default value is '$vardir/server_data'.
server_datadir = /var/lib/puppet/server_data

# The URL used by the http reports processor to send reports
# The default value is 'http://localhost:3000/reports/upload'.
reports = store, http
# The pid file
# The default value is '$rundir/$name.pid'.
pidfile = /var/run/puppet/master.pid

# Where puppet master looks for its manifests.
# The default value is '$confdir/manifests'.
manifestdir = /etc/puppet/manifests

# Where the CA stores certificate requests
# The default value is '$cadir/requests'.
csrdir = /var/lib/puppet/ssl/ca/requests

# Where the serial number for certificates is stored.
# The default value is '$cadir/serial'.
serial = /var/lib/puppet/ssl/ca/serial

# Whether to enable autosign. Valid values are true (which
# autosigns any key request, and is a very bad idea), false (which
# never autosigns any key request), and the path to a file, which
# uses that configuration file to determine which keys to sign.
# The default value is '$confdir/autosign.conf'.
autosign = /etc/puppet/autosign.conf

# The CA certificate.
# The default value is '$cadir/ca_crt.pem'.
cacert = /var/lib/puppet/ssl/ca/ca_crt.pem

# The certificate revocation list (CRL) for the CA. Will be used if present but otherwise ignored.
# The default value is '$cadir/ca_crl.pem'.
cacrl = /var/lib/puppet/ssl/ca/ca_crl.pem

# Where the CA stores signed certificates.
# The default value is '$cadir/signed'.
signeddir = /var/lib/puppet/ssl/ca/signed

# A Complete listing of all certificates
# The default value is '$cadir/inventory.txt'.
cert_inventory = /var/lib/puppet/ssl/ca/inventory.txt

# The name to use the Certificate Authority certificate.
# The default value is 'Puppet CA: $certname'.
ca_name = Puppet CA: puppet-master.mycompany.com

# The CA private key.
# The default value is '$cadir/ca_key.pem'.
cakey = /var/lib/puppet/ssl/ca/ca_key.pem

# Where the CA stores private certificate information.
# The default value is '$cadir/private'.
caprivatedir = /var/lib/puppet/ssl/ca/private

# Where the CA stores the password for the private key
# The default value is '$caprivatedir/ca.pass'.
capass = /var/lib/puppet/ssl/ca/private/ca.pass

# The root directory for the certificate authority.
# The default value is '$ssldir/ca'.
cadir = /var/lib/puppet/ssl/ca

# The CA public key.
# The default value is '$cadir/ca_pub.pem'.
capub = /var/lib/puppet/ssl/ca/ca_pub.pem

# Where Puppet looks for template files. Can be a list of colon-separated
# directories.
# The default value is '$vardir/templates'.
templatedir = /etc/puppet/templates

# The mapping between reporting tags and email addresses.
# The default value is '$confdir/tagmail.conf'.
tagmap = /etc/puppet/tagmail.conf

# How often RRD should expect data.
# This should match how often the hosts report back to the server.
# The default value is '$runinterval'.
rrdinterval = 1800

# The directory where RRD database files are stored.
# Directories for each reporting host will be created under
# this directory.
# The default value is '$vardir/rrd'.
rrddir = /var/lib/puppet/rrd

# During an inspect run, the file bucket server to archive files to if archive_files is set.
# The default value is '$server'.
archive_file_server = puppet-master.mycompany.com

# During an inspect run, whether to archive files whose contents are audited to a file bucket.
# archive_files = false

# Path to the device config file for puppet device
# The default value is '$confdir/device.conf'.
deviceconfig = /etc/puppet/device.conf

# The root directory of devices' $vardir
# The default value is '$vardir/devices'.
devicedir = /var/lib/puppet/devices

# The directory into which module tool data is stored
# The default value is '$vardir/puppet-module'.
module_working_dir = /var/lib/puppet/puppet-module

# Document all resources
# document_all = false

[production]
modulepath = /etc/puppet/modules:/usr/share/puppet/modules:/usr/share/puppet/production/config/modules:/etc/puppet/puppet-config/modules

[development]
modulepath = /etc/puppet/modules:/usr/share/puppet/modules:/usr/share/puppet/development/config/modules:/etc/puppet/puppet-config/modules


Notice the [production] and [development] sections. Here's what happens: openstack-infra/config changes constantly, so I have two environments in my infrastructure: one with an openstack-infra/config version that I know works fine (production), and one with the latest changes from the project (development), which may or may not work. That way I can always keep track of what's going on upstream (here I use a cron job to update it from time to time, sketched below) without messing up my production environment. I track all of this with Puppetboard, which I will show how to configure later.

If you’re following along step by step, you can copy and paste the puppet.conf above and replace puppet-master.mycompany.com with your puppetmaster address.

As you might have figured out already, you need to create two directories in /usr/share/puppet/, development and production, and put a copy of openstack-infra/config in both (then you can keep updating development, check what works and what doesn't, fix it on your side or upstream with a patch, and then upgrade your production environment):

sudo mkdir /usr/share/puppet/development/
cd /usr/share/puppet/development/
sudo git clone https://github.com/openstack-infra/config
sudo mkdir /usr/share/puppet/production
cd /usr/share/puppet/production
sudo git clone https://github.com/openstack-infra/config
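
To keep the development copy tracking upstream automatically (the cron job mentioned above), something like this sketch works; adjust the path and frequency to taste:

# /etc/cron.d/openstack-infra-config-dev: refresh the development copy once a day
0 4 * * * root cd /usr/share/puppet/development/config && git pull --quiet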

Now you just need to start your puppet-master service and you’re ready to rock!
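
For instance, assuming the stock init script, you can restart the service and confirm it is listening on the master port and that the CA is answering:

sudo service puppetmaster restart

# Check that the master is listening on port 8140
sudo netstat -lntp | grep 8140

# List any pending agent certificate requests (Puppet 3 syntax)
sudo puppet cert list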

The next step is to create your recipes and mix them with upstream openstack-infra/config. We're going to do this for each service we need, so in every post I will show the puppet recipes necessary to configure that service.

by Arx Cruz at December 03, 2014 09:26 AM