May 29, 2020


Exciting Features of OpenStack’s 21st Release: Ussuri

OpenStack has pioneered the concept of open infrastructure since 2010, delivering two new releases a year to give users the best in services and experience. With more than 24,000 code changes by over 1,000 developers from more than 50 countries and 188 organizations, the 21st OpenStack release, Ussuri, is here.

VEXXHOST couldn’t be more excited to be among the contributors of OpenStack Ussuri. Working with the community is always a pleasure. It is even more so when our efforts materialize in the form of new features and improvements.

Ussuri brings changes in two significant areas, the core infrastructure layer and security and encryption, along with plenty of other exciting features serving various use cases.

Reliability of the Core Infrastructure Layer

Nova – the compute service, added support for cold migration and resizing of servers across cells. Live cross-cell migration is not supported yet, but the magic of an open source community is that contributions can be made toward it. Another big addition is the new API policy, which introduces new default roles with scope_type capabilities. This change enhances security and manageability by offering richer access control at both the system and project levels.
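The scope_type idea can be sketched in a few lines of Python; the rule names, roles, and scopes below are illustrative placeholders, not Nova’s actual policy defaults:

```python
# A minimal sketch of scope-aware policy checks, loosely modelled on the
# scope_type concept described above. Rule names, roles, and scopes are
# illustrative, not Nova's real policy rules.

RULES = {
    # rule name: (roles allowed, token scopes allowed)
    "list_all_hypervisors": ({"admin"}, {"system"}),
    "create_server": ({"member", "admin"}, {"project"}),
}

def is_authorized(rule, role, scope):
    """Allow a call only when both the caller's role and the token's
    scope match what the rule demands."""
    roles, scopes = RULES[rule]
    return role in roles and scope in scopes
```

The point of scope_type is the second check: even an admin role is refused a system-level operation when the token carries only project scope.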

Ironic – the bare metal provisioning service, has added support for a hardware retirement workflow to enable automated hardware decommissioning in managed clouds.

Kuryr – the networking bridge, added support for IPv6 and improved policy support.

Security and Encryption Enhancement

Octavia – the load balancing service, now allows you to specify the Transport Layer Security (TLS) ciphers acceptable for listeners and pools. This feature lets load balancers enforce security compliance requirements. Another awaited update that became a part of Ussuri is support for deployment in specific availability zones, allowing load balancing capabilities to be deployed to edge environments. An interesting thing that took place during these contributions was the mentorship of college students to familiarize them with OpenStack. The learning-by-doing experience is very conducive to growing minds!
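The cipher-restriction idea amounts to an allow-list check on the requested cipher string; the compliance list and validation logic below are assumptions for illustration, not Octavia’s actual implementation (the cipher names are standard OpenSSL identifiers):

```python
# Illustrative sketch: reject listener configurations that request
# ciphers outside a compliance allow-list. Not Octavia's real code.

COMPLIANT_CIPHERS = {
    "ECDHE-ECDSA-AES256-GCM-SHA384",
    "ECDHE-RSA-AES256-GCM-SHA384",
    "ECDHE-ECDSA-AES128-GCM-SHA256",
    "ECDHE-RSA-AES128-GCM-SHA256",
}

def validate_cipher_string(cipher_string):
    """Validate an OpenSSL-style colon-separated cipher list,
    raising if any requested cipher is outside the allow-list."""
    requested = cipher_string.split(":")
    rejected = [c for c in requested if c not in COMPLIANT_CIPHERS]
    if rejected:
        raise ValueError("non-compliant ciphers: " + ", ".join(rejected))
    return requested
```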

Neutron – the networking service, brings several security improvements with this release. Support for stateless security groups and Role Based Access Control (RBAC) for address scopes and subnet pools are among the significant improvements.

Other Important Features

The series of advancements in the new update continues, and here are some other essential features that are a part of the 21st release.

Cinder added support for Glance multistore. It also supports image data colocation when uploading a volume to the image service. The latest features also include some new backend drivers. The work on volume-local-cache has started and is expected to continue in the next release, Victoria.

Swift adds a new system namespace for the service, a versioning API, and S3 versioning.

Improvements in Glance let you decompress images, import a single image into or copy existing images across multiple stores, and delete images from a single store.

User experience is improved through Keystone’s additional features: you can be given concrete role assignments without relying on the mapping API. You benefit most when using the federated authentication method.

A brand new feature for creating shares from snapshots across storage pools has been made available with Manila.

Kolla is the containerized deployment service of OpenStack. This project has added initial support for TLS encryption of backend API services, providing end-to-end encryption of API traffic.

Magnum has added support in two areas. First is Kubernetes version upgrade support. Second is the ability to upgrade the operating system of a Kubernetes cluster, including master and worker nodes.

OpenStack Ussuri and VEXXHOST

“The extensive list of new features shows just how active the OpenStack community is and the VEXXHOST team is excited to be a part of such a progressive upstream community. As is expected of us, we are bringing the new release to our old and new clients as part of our OpenStack Upgrade Solution, and we hope for all OpenStack users to make the most of it”, said Mohammed Naser, CEO of VEXXHOST.

Come and upgrade to OpenStack Ussuri with us. VEXXHOST is here to guide you and consult with you in your OpenStack deployments every step of the way. Get a seamless experience while transitioning from an older release as our engineers will do the heavy lifting for you. Are you looking to make the most of the benefits that come with every release? Get in touch with our experts for more information on how we can help you get started on your OpenStack Upgrade journey.

Would you like to know more about OpenStack Cloud? Then download our white paper and get reading!

Conquer the Competition with OpenStack Cloud

The post Exciting Features of OpenStack’s 21st Release: Ussuri appeared first on VEXXHOST.

by Samridhi Sharma at May 29, 2020 06:12 PM

Thomas Goirand

A quick look into Storcli packaging horror

So, Megacli is to be replaced by Storcli, both being proprietary tools for configuring RAID cards from LSI.

So I went to download what’s provided by Lenovo, available here:

It’s very annoying, because they force users to download a .zip file containing a deb file, instead of providing a Debian repository. Well, OK, at least there’s a deb file there. Let’s have a look at what’s inside using my favorite tool before installing (i.e. let’s run Lintian).
Then it’s a horror story. Not only is the packaging obviously wrong, with the package shipping stuff in /opt, everything statically linked with embedded copies of libm and ncurses, and the package marked arch: all instead of arch: amd64 (in fact, the package contains both i386 and amd64 binaries…), but there are also some really wrong things going on:

E: storcli: arch-independent-package-contains-binary-or-object opt/MegaRAID/storcli/storcli
E: storcli: embedded-library opt/MegaRAID/storcli/storcli: libm
E: storcli: embedded-library opt/MegaRAID/storcli/storcli: ncurses
E: storcli: statically-linked-binary opt/MegaRAID/storcli/storcli
E: storcli: arch-independent-package-contains-binary-or-object opt/MegaRAID/storcli/storcli64
E: storcli: embedded-library opt/MegaRAID/storcli/storcli64: libm
E: storcli: embedded-library … use --no-tag-display-limit to see all (or pipe to a file/program)
E: storcli: statically-linked-binary opt/MegaRAID/storcli/storcli64
E: storcli: changelog-file-missing-in-native-package
E: storcli: control-file-has-bad-permissions postinst 0775 != 0755
E: storcli: control-file-has-bad-owner postinst asif/asif != root/root
E: storcli: control-file-has-bad-permissions preinst 0775 != 0755
E: storcli: control-file-has-bad-owner preinst asif/asif != root/root
E: storcli: no-copyright-file
E: storcli: extended-description-is-empty
W: storcli: essential-no-not-needed
W: storcli: unknown-section storcli
E: storcli: depends-on-essential-package-without-using-version depends: bash
E: storcli: wrong-file-owner-uid-or-gid opt/ 1000/1000
W: storcli: non-standard-dir-perm opt/ 0775 != 0755
E: storcli: wrong-file-owner-uid-or-gid opt/MegaRAID/ 1000/1000
E: storcli: dir-or-file-in-opt opt/MegaRAID/
W: storcli: non-standard-dir-perm opt/MegaRAID/ 0775 != 0755
E: storcli: wrong-file-owner-uid-or-gid opt/MegaRAID/storcli/ 1000/1000
E: storcli: dir-or-file-in-opt opt/MegaRAID/storcli/
W: storcli: non-standard-dir-perm opt/MegaRAID/storcli/ 0775 != 0755
E: storcli: wrong-file-owner-uid-or-gid … use --no-tag-display-limit to see all (or pipe to a file/program)
E: storcli: dir-or-file-in-opt opt/MegaRAID/storcli/storcli
E: storcli: dir-or-file-in-opt … use --no-tag-display-limit to see all (or pipe to a file/program)
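Lintian output like the above follows a regular `severity: package: tag [detail]` shape, so triaging a long report is easy to script; this small parser is a hypothetical helper, not part of Lintian:

```python
import re

# Parse Lintian report lines into structured records: severity (E/W/I),
# package name, tag, and optional detail. Sketch for triage only.
LINTIAN_LINE = re.compile(
    r"^(?P<severity>[EWI]): (?P<package>\S+): (?P<tag>[\w-]+)(?: (?P<detail>.+))?$"
)

def parse_lintian(output):
    findings = []
    for line in output.splitlines():
        match = LINTIAN_LINE.match(line.strip())
        if match:
            findings.append(match.groupdict())
    return findings
```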

Some of the above are grave security problems, like the wrong Unix modes on folders and the preinst script being installed with non-root ownership.
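The danger of those 0775 modes combined with uid/gid 1000 ownership is easy to demonstrate: the group-write bit means any local process running with that gid could replace a binary that root later executes. A minimal check:

```python
import stat

# Why 0775 on /opt/MegaRAID matters: the group-write bit (S_IWGRP)
# lets any process with the packaged gid (1000 here) replace storcli,
# a binary typically run by root.
def is_group_or_world_writable(mode):
    return bool(mode & (stat.S_IWGRP | stat.S_IWOTH))
```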
I always wonder why this type of tool needs to be proprietary. They clearly don’t know how to get packaging right, so they’d better just provide the source code and let us (the Debian community) do the work for them. I don’t think there’s any secret they are keeping by hiding how to configure the cards, so it’s not in the vendor’s interest to keep everything closed. Or maybe they are just hiding really bad code in there that they are ashamed to share? Either way, they’d be better off providing no package at all than this pile of dirt (and I’m trying to stay polite here…).

by Goirand Thomas at May 29, 2020 10:56 AM

May 28, 2020

OpenStack Superuser

Annual Superuser Awards Open!

Nominations are open for the annual Superuser Awards. The deadline is September 4. Nominees will nominate their organization under the Open Infrastructure use case that fits it best:

  • AI / Machine Learning
  • Containers
  • CI/CD
  • Edge Computing
  • Data Center

This year, we will be recognizing award recipients by use case category.

All nominees will be reviewed by the community, and the Superuser editorial advisors will determine the winners. The nominees and winners will be announced in October by the OpenStack Foundation and the previous winner, Baidu.

Open Infrastructure provides resources to developers and users by integrating various open source components. The benefits are obvious, whether that infrastructure is in a private or a public context: the absence of lock-in, the power of interoperability opening up new possibilities, and the ability to look under the hood, tinker with and improve the software, and contribute your changes back.

The Superuser Awards recognize teams using Open Infrastructure to meaningfully improve business and differentiate in a competitive industry, while also contributing back to the open source communities.  They aim to cover the same mix of open technologies as our publication, namely OpenStack, Kubernetes, Kata Containers, Airship, StarlingX, Ceph, Cloud Foundry, OVS, OpenContrail, Open Switch, Zuul, OPNFV and more.

Teams of all sizes are encouraged to apply. If you fit the bill, or know a team that does, we encourage you to submit a nomination here.

After the community has reviewed all nominees, the Superuser editorial advisors will select winning organization(s).

When evaluating a winner for the Superuser Awards, advisors take into account the unique nature of use case(s), as well as integrations and applications of a particular team. Questions include how this team innovates with open infrastructure, for example working with container technology, NFV, and other unique workloads.

Additional selection criteria include how the workload has transformed the company’s business, including quantitative and qualitative results of performance as well as community impact in terms of code contributions, feedback and knowledge sharing.

Winners will be recognized in a ceremony presented by the OpenStack Foundation and the previous winner, Baidu. Submissions are open now until September 4, 2020. You’re invited to nominate your team or someone you’ve worked with, too.

Launched at the Paris Summit in 2014, the community has continued to award users who show how open infrastructure is making a difference and providing strategic value in their organization. Past winners include AT&T, CERN, City Network, Comcast, NTT Group, Tencent TStack, and VEXXHOST.

Wonder what these organizations are doing with open infrastructure now? Superuser reached out to previous Award recipients to find out. We’ll be posting them for the next couple of weeks as a part of our “Where are they now?” series, leading up to our celebration of 10 years of OpenStack in July.

For more information about the Superuser Awards, please visit

The post Annual Superuser Awards Open! appeared first on Superuser.

by Helena Spease at May 28, 2020 01:00 PM


RDO Ussuri Released

The RDO community is pleased to announce the general availability of the RDO build for OpenStack Ussuri for RPM-based distributions, CentOS Linux and Red Hat Enterprise Linux. RDO is suitable for building private, public, and hybrid clouds. Ussuri is the 21st release from the OpenStack project, which is the work of more than 1,000 contributors from around the world.

The release is already available on the CentOS mirror network at

The RDO community project curates, packages, builds, tests and maintains a complete OpenStack component set for RHEL and CentOS Linux and is a member of the CentOS Cloud Infrastructure SIG. The Cloud Infrastructure SIG focuses on delivering a great user experience for CentOS Linux users looking to build and maintain their own on-premise, public or hybrid clouds.

All work on RDO and on the downstream release, Red Hat OpenStack Platform, is 100% open source, with all code changes going upstream first.

PLEASE NOTE: At this time, RDO Ussuri provides packages for CentOS 8 only. Please use the previous release, Train, for CentOS 7 and Python 2.7.

Interesting things in the Ussuri release include:
  • Within the Ironic project, a bare metal service that is capable of managing and provisioning physical machines in a security-aware and fault-tolerant manner, UEFI and device selection is now available for Software RAID.
  • The Kolla project, the containerised deployment of OpenStack used to provide production-ready containers and deployment tools for operating OpenStack clouds, streamlined the configuration of external Ceph integration, making it easy to go from Ceph-Ansible-deployed Ceph cluster to enabling it in OpenStack.
Other improvements include:
  • Support for IPv6 is available within the Kuryr project, the bridge between container framework networking models and OpenStack networking abstractions.
  • Other highlights of the broader upstream OpenStack project may be read via
  • A new Neutron driver, networking-omnipath, has been included in the RDO distribution, which enables the Omni-Path switching fabric in OpenStack clouds.
  • The OVN Neutron driver has been merged into the main Neutron repository from networking-ovn.
During the Ussuri cycle, we saw the following new RDO contributors:
  • Amol Kahat 
  • Artom Lifshitz 
  • Bhagyashri Shewale 
  • Brian Haley 
  • Dan Pawlik 
  • Dmitry Tantsur 
  • Dougal Matthews 
  • Eyal 
  • Harald Jensås 
  • Kevin Carter 
  • Lance Albertson 
  • Martin Schuppert 
  • Mathieu Bultel 
  • Matthias Runge 
  • Miguel Garcia 
  • Riccardo Pittau 
  • Sagi Shnaidman 
  • Sandeep Yadav 
  • SurajP 
  • Toure Dunnon 

Welcome to all of you and Thank You So Much for participating!

But we wouldn’t want to overlook anyone. A super massive Thank You to all 54 contributors who participated in producing this release. This list includes commits to rdo-packages and rdo-infra repositories:

  • Adam Kimball 
  • Alan Bishop 
  • Alan Pevec 
  • Alex Schultz 
  • Alfredo Moralejo 
  • Amol Kahat 
  • Artom Lifshitz 
  • Arx Cruz 
  • Bhagyashri Shewale 
  • Brian Haley 
  • Cédric Jeanneret 
  • Chandan Kumar
  • Dan Pawlik
  • David Moreau Simard 
  • Dmitry Tantsur 
  • Dougal Matthews 
  • Emilien Macchi 
  • Eric Harney 
  • Eyal 
  • Fabien Boucher 
  • Gabriele Cerami 
  • Gael Chamoulaud 
  • Giulio Fidente 
  • Harald Jensås 
  • Jakub Libosvar 
  • Javier Peña 
  • Joel Capitao 
  • Jon Schlueter 
  • Kevin Carter 
  • Lance Albertson 
  • Lee Yarwood 
  • Marc Dequènes (Duck) 
  • Marios Andreou 
  • Martin Mágr 
  • Martin Schuppert 
  • Mathieu Bultel 
  • Matthias Runge 
  • Miguel Garcia 
  • Mike Turek 
  • Nicolas Hicher 
  • Rafael Folco 
  • Riccardo Pittau 
  • Ronelle Landy 
  • Sagi Shnaidman 
  • Sandeep Yadav 
  • Soniya Vyas
  • Sorin Sbarnea 
  • SurajP 
  • Toure Dunnon 
  • Tristan de Cacqueray 
  • Victoria Martinez de la Cruz 
  • Wes Hayutin 
  • Yatin Karel
  • Zoltan Caplovic
The Next Release Cycle
At the end of one release, focus shifts immediately to the next, Victoria, which has an estimated GA the week of 12-16 October 2020. The full schedule is available at

Twice during each release cycle, RDO hosts official Test Days shortly after the first and third milestones; therefore, the upcoming test days are 25-26 June 2020 for Milestone One and 17-18 September 2020 for Milestone Three.

Get Started
There are three ways to get started with RDO.

To spin up a proof of concept cloud, quickly, and on limited hardware, try an All-In-One Packstack installation. You can run RDO on a single node to get a feel for how it works.

For a production deployment of RDO, use the TripleO Quickstart and you’ll be running a production cloud in short order.

Finally, for those that don’t have any hardware or physical resources, there’s the OpenStack Global Passport Program. This is a collaborative effort between OpenStack public cloud providers to let you experience the freedom, performance and interoperability of open source infrastructure. You can quickly and easily gain access to OpenStack infrastructure via trial programs from participating OpenStack public cloud providers around the world.

Get Help
The RDO Project participates in a Q&A service at We also have our for RDO-specific users and operators. For more developer-oriented content we recommend joining the mailing list. Remember to post a brief introduction about yourself and your RDO story. The mailing list archives are all available at You can also find extensive documentation on

The #rdo channel on Freenode IRC is also an excellent place to find and give help.

We also welcome comments and requests on the CentOS devel mailing list and the CentOS and TripleO IRC channels (#centos, #centos-devel, and #tripleo on, however we have a more focused audience within the RDO venues.

Get Involved
To get involved in the OpenStack RPM packaging effort, check out the RDO contribute pages, peruse the CentOS Cloud SIG page, and inhale the RDO packaging documentation.

Join us in #rdo and #tripleo on the Freenode IRC network and follow us on Twitter @RDOCommunity. You can also find us on Facebook and YouTube.

by Iury Gregory Melo Ferreira at May 28, 2020 08:49 AM

Michael Still

Introducing Shaken Fist


The first public commit to what would become OpenStack Nova was made ten years ago today — at Thu May 27 23:05:26 2010 PDT to be exact. So first off, happy tenth birthday to Nova!

A lot has happened in that time — OpenStack has gone from being two separate open source projects to a whole ecosystem, developers have come and gone (and passed away), and OpenStack has weathered the cloud wars of the last decade. OpenStack survived its early growth phase by deliberately offering a “big tent” to the community and associated vendors, with an expansive definition of what should be included. This has resulted in most developers being associated with a corporate sponsor, and hence the decrease in the number of developers today as corporate interest wanes — OpenStack has never been great at attracting or retaining hobbyist contributors.

My personal involvement with OpenStack started in November 2011, so while I missed the very early days I was around for a lot and made many of the mistakes that I now see in OpenStack.

What do I see as mistakes in OpenStack in hindsight? Well, embracing vendors who later lose interest has been painful, and has increased the complexity of the code base significantly. Nova itself is now nearly 400,000 lines of code, and that’s after splitting off many of the original features of Nova such as block storage and networking. Additionally, a lot of our initial assumptions are no longer true — for example in many cases we had to write code to implement things, where there are now good libraries available from third parties.

That’s not to say that OpenStack is without value — I am a daily user of OpenStack to this day, and use at least three OpenStack public clouds at the moment. That said, OpenStack is a complicated beast with a lot of legacy that makes it hard to maintain and slow to change.

For at least six months I’ve felt the desire for a simpler cloud orchestration layer — both for my own personal uses, and also as a test bed for ideas for what a smaller, simpler cloud might look like. My personal use case involves a relatively small environment which echos what we now think of as edge compute — less than 10 RU of machines with a minimum of orchestration and management overhead.

At the time that I was thinking about these things, the Australian bushfires and COVID-19 came along, and presented me with a lot more spare time than I had expected to have. While I’m still blessed to be employed, all of my social activities have been cancelled, so I find myself at home at a loose end on weekends and evenings at lot more than before.

Thus Shaken Fist was born — named for a Simpson’s meme, Shaken Fist is a deliberately small and highly opinionated cloud implementation aimed at working well in small deployments such as homes, labs, edge compute locations, deployed systems, and so forth.

I’d taken a bit of trouble with each feature in Shaken Fist to think through what the simplest and highest value way of doing something is. For example, instances always get a config drive and there is no metadata server. There is also only one supported type of virtual networking, and one supported hypervisor. That said, this means Shaken Fist is less than 5,000 lines of code, and small enough that new things can be implemented very quickly by a single middle aged developer.

Shaken Fist definitely has feature gaps — API authentication and scheduling are the most obvious at the moment — but I have plans to fill those when the time comes.

I’m not sure if Shaken Fist is useful to others, but you never know. It’s Apache 2 licensed, and available on GitHub if you’re interested.


by mikal at May 28, 2020 05:05 AM

Stephen Finucane

Using AMI Images in OpenStack

I recently had to validate some interactions between the OpenStack Image service, glance, and the Compute service, nova. For this, I needed separate kernel and ramdisk images.

May 28, 2020 12:00 AM

Emulated Trusted Platform Module (vTPM) in OpenStack 🔐

Work is ongoing in nova to provide support for attaching virtual Trusted Platform Modules (vTPMs) to instances. The below guide demonstrates how you can go about testing this feature for yourself.

May 28, 2020 12:00 AM

May 27, 2020


How Governments Use OpenStack Around The World

Governments all over the world trust OpenStack for their cloud computing needs. With an OpenStack powered private cloud, governments can benefit from the very best in security, scalability, and agility. Choosing a cloud-based infrastructure allows governments to stay up to date on the latest technology trends, all while saving on costs typically associated with traditional IT. In a world where security threats are increasing, it has never been more important for government bodies to implement additional measures against risk. This includes a robust cloud solution, which enables a government to focus on its core competencies and work for the citizens of its country.

Today we are going to dive into how different governments use OpenStack around the world. Our global tour will begin in the United States of America, take a detour to Australia, and end with the impact of OpenStack on the French government. Let’s take a look at how governments use OpenStack around the world, no passport required!

How Governments Use OpenStack

Governments all over the world share important priorities: keeping their citizens safe, maintaining law and order within their jurisdictions, and solving issues as they arise day to day. Although they govern their countries differently, the following countries have one thing in common: OpenStack.

The United States Of America

In the United States, the National Security Agency (NSA) utilizes the power of OpenStack to keep abreast of the high-security needs of the country. The implementation of OpenStack at the governmental level was such a success that the NSA plans to roll out OpenStack to all 16 agencies that make up the intelligence community in the United States. Considering the emphasis that the United States places on security, it is evident that OpenStack delivers where it matters most.

Moreover, the role of OpenStack in government security for the United States is good for OpenStack users. Why? Because the NSA has such strong security requirements, it developed systems to accommodate these requirements. From securing APIs and guest OSes to developing code to suit their needs, the OpenStack community can experience the security benefits for themselves.


Australia

The government of Australia is also no stranger to OpenStack’s private cloud infrastructure. The Australian government leveraged the simplicity and agility of OpenStack to build a secure cloud platform. Major government departments such as the Digital Transformation Agency, the Department of Health, and the Defence and Australian Intelligence Community all worked to ensure that the implementation of OpenStack was a success. Thanks to OpenStack, Australia was able to deliver high-security digital services and solutions to the areas of government that needed them most.

France

Finally, the French government is no stranger to OpenStack either. France’s Interior Ministry has been harnessing the power of OpenStack private clouds to help govern the country. The Ministry’s IT engineers were taught OpenStack best practices and went on to help select which tools and deployment strategies would be right for France. Many of the applications that France is using right now are in place to push for bureaucratic reform. About 20 different projects are being considered for France’s OpenStack powered cloud, and the plan is to migrate 150 applications to the cloud in the next three to five years.

Whether you’re a government body, organization or business, no matter what industry or size, VEXXHOST is here to help you get started with a private cloud solution. We’ve been using and contributing to OpenStack since 2011. We know OpenStack inside and out. Contact our team of experts today to learn more.

Would you like to know more about Private Cloud and what it can do for you? Then download our white paper and get reading!

Fighting Off Certain Death with OpenStack Private Cloud

The post How Governments Use OpenStack Around The World appeared first on VEXXHOST.

by Samridhi Sharma at May 27, 2020 06:05 PM

OpenStack Superuser

OpenStack Ussuri is Python3-Only: Upgrade Impact

A brief history of Python2 -> Python3:

Python version 2.0 was officially released in 2000; OpenStack was founded in 2010 and used Python 2 as its base language from the start. The Python Foundation realized that, in order to prevent users from having to perform tasks in a backward or difficult way, big improvements needed to be made to the software.

We released Python 2.0 in 2000. We realized a few years later that we needed to make big changes to improve Python. So in 2006, we started Python 3.0. Many people did not upgrade, and we did not want to hurt them. So, for many years, we have kept improving and publishing both Python 2 and Python 3.

In 2015, The Python Foundation made a very clear announcement on multiple platforms to migrate to Python 3 and discontinue Python 2. This initial plan was later extended to 2020. 

We have decided that January 1, 2020, was the day that we sunset Python 2. That means that we will not improve it anymore after that day, even if someone finds a security problem in it. You should upgrade to Python 3 as soon as you can.

OpenStack Starting Support of Python 3:

With the announcement of the sunset of Python 2, it became very clear that OpenStack also could not support Python 2 for much longer. Because it would have been impossible to fix any security bugs on Python 2, it was better for OpenStack to drop its support completely and instead concentrate on Python 3.

OpenStack’s support of Python 3 started in 2013, and many developers contributed to the enormous task of transitioning the software. After much hard work from the community, running OpenStack under Python 3 by default became a community goal in the Stein cycle (September 2018). A community goal is a way to achieve common changes across OpenStack. Making OpenStack run under Python 3 by default was a great effort and included a lot of hard work by many developers. Doug Hellmann was one of the key developers, showing coordination and leadership with other developers and projects to finish this goal.

OpenStack Train (Oct 2019): Python3 by default:

In the OpenStack Train release (October 2019), OpenStack was tested on Python 3 by default. This meant that you could upgrade your cloud to a Python 3 environment with full confidence. OpenStack Train was released with well-tested Python 3 support, while still supporting Python 2.7. At the same time, we kept testing the latest Python 3 version, and the OpenStack Technical Committee (TC) started defining the testing runtime for each cycle. OpenStack is targeting Python 3.8 in the next development cycle, beginning soon.

OpenStack Ussuri (May 2020): Python3-Only: Dropped the support of Python2:

With the Ussuri cycle, OpenStack dropped all support for Python 2. All the projects have completed updating their CI jobs to work under Python 3. This achievement allows the software to remove all Python 2 testing, as well as the configuration that goes along with it.

As the very first thing in the Ussuri cycle, we started planning the drop of Python 2.7 support. Dropping Python 2.7 was not an easy task when many projects depend on each other and also integrate CI/CD. For example, if Nova drops Python 2.7 support and becomes Python 3 only, it can break Cinder and many other projects’ CI/CD. We prepared a schedule and divided the work into three phases, dropping support from services first, then libraries and testing tools:

  • Phase 1 (start of Ussuri -> Ussuri-1 milestone): OpenStack services start dropping Python 2.7 support.
  • Phase 2 (milestone 1 -> milestone 2): common libraries and testing tooling.
  • Phase 3 (at milestone 2): final audit.

Even still, a few things broke in the initial work, so we made DevStack Python 3 by default, which really helped move things forward. In phase 2, when I started making Tempest and the other testing tools python3-only, a lot of stable branch testing started breaking. That was expected, because Tempest and many other testing tools are branchless, meaning the master version is used for testing both the current and older releases of OpenStack; all Python 2.7 testing jobs were therefore using the Tempest master version. Finally, capping Tempest on older branches and fixing Tempest installed in a py3 venv made all stable branch and master testing green.
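The capping idea amounts to mapping old stable branches to a pinned Tempest release instead of master; a rough sketch, with placeholder version numbers rather than the real pins:

```python
# Sketch of the branchless-capping idea: stable branches that still run
# Python 2.7 jobs pin a known-good Tempest tag instead of tracking
# master. The version numbers below are placeholders, not the real pins.

TEMPEST_PINS = {
    "stable/stein": "23.0.0",   # placeholder tag
    "stable/train": "26.0.0",   # placeholder tag
}

def tempest_ref_for(branch):
    """Older branches get a pinned tag; everything else follows master."""
    return TEMPEST_PINS.get(branch, "master")
```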

Just a couple of weeks before the Ussuri release, we completed this work and made OpenStack as python3-only, with an updated wiki page. Two projects, Swift and Storlets, are going to keep supporting Python 2.7 for another one or two cycles.

What “OpenStack is Python3-Only” means for Users/Upgrades:

If your existing cloud is on a Python 3 environment, then you do not need to worry at all. If it is on Python 2.7 and you are upgrading to Ussuri, then you need to check that your environment has Python 3.6 or a higher version available. From the Ussuri release onwards, OpenStack works on Python 3.6 or higher only. For example, if you want to install the Nova Ussuri version, it will give an error if Python 3.6 or higher is not available. This is done via metadata (“python-requires = >=3.6”) in the setup configuration file. Below is how the setup config file looks from the Ussuri release onwards:

python-requires = >=3.6
classifier =
    Environment :: OpenStack
    Intended Audience :: Information Technology
    Intended Audience :: System Administrators
    License :: OSI Approved :: Apache Software License
    Operating System :: POSIX :: Linux
    Programming Language :: Python
    Programming Language :: Python :: 3
    Programming Language :: Python :: 3.6
    Programming Language :: Python :: 3.7
    Programming Language :: Python :: 3 :: Only
    Programming Language :: Python :: Implementation :: CPython
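As a rough illustration of what this metadata enforces (a sketch, not pip’s actual implementation), the installer effectively asks the running interpreter whether it satisfies the minimum version before proceeding:

```shell
# Illustrative only: pip performs the python-requires comparison
# internally; this asks the interpreter the same question directly.
if python3 -c 'import sys; sys.exit(0 if sys.version_info >= (3, 6) else 1)'; then
    echo "Python >= 3.6 available: Ussuri packages can be installed"
else
    echo "Python too old: pip will refuse with a python-requires error" >&2
fi
```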

If you are using a distribution that does not have Python 3.6 or higher available, then you need to upgrade your distro first. There is no workaround or compatible way to keep running OpenStack on Python 2.7. We have sunset Python 2.7 support from Ussuri onwards, so the only way forward is to upgrade your Python version as well. A few questions about the Python upgrade are covered in the FAQ section below.


Q1: Is the Python 2 to Python 3 upgrade being tested in upstream CI/CD?

Answer: Not directly, but it is tested indirectly. We did not set up grenade testing (upstream upgrade testing) from a py2 setup to a py3 setup. However, previous OpenStack releases like Stein and Train were tested on both Python versions, which means the OpenStack code was working and well tested on both versions before it became python3-only. This ensures that the py2->py3 upgrade path for OpenStack has been covered indirectly. If you are upgrading OpenStack from Stein or Train to Ussuri, there should not be any issues.

Q2: How are the backport changes from Ussuri onwards to old stable branches going to be python2.7 compatible?

Answer: We still run Python 2.7 jobs on stable branches up to Train, so any backport from Ussuri or later (which is tested on Python 3 only) will also be tested on Python 2.7 when backported to Train or older stable branches. If anything breaks on Python 2.7, it will be fixed before backporting. That way we keep Python 2.7 support working for all stable branches older than Ussuri.

Q3: Will testing frameworks like Tempest which are branchless (using the master version for older release testing) keep working for Python 2.7 as well?

Answer: No. We have released the last Python 2.7-compatible versions of Tempest and the other branchless deliverables. Branchless means the tool’s master version is used to test current and older OpenStack releases. For example, Tempest 23.0.0 is the last Python 2.7-supported version, while Tempest 24.0.0 and master are Python 3 only. But there is a way to keep testing an older Python 2.7 cloud as well (until you upgrade your cloud and want Tempest master to test it): run Tempest from a Python 3 node or virtual env and keep using the master version to test the Python 2.7 cloud. Tempest does not need to be installed on the same system as the other OpenStack services, as long as the APIs are reachable from the separate test node or virtual env where Tempest runs.
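A minimal sketch of that setup, assuming a Python 3 test node (the venv path is an arbitrary example; only the Tempest 23.0.0 pin comes from the text, and the install/run commands are shown commented since they need network and cloud access):

```shell
# Create a Python 3 virtual env for Tempest on a separate test node.
VENV="${TMPDIR:-/tmp}/tempest-py3"
python3 -m venv "$VENV"
echo "venv ready at $VENV"
# To test an old Python 2.7 cloud with the pinned release:
#   "$VENV/bin/pip" install "tempest==23.0.0"
# Or install master and point it at the cloud's API endpoints:
#   "$VENV/bin/pip" install tempest
#   "$VENV/bin/tempest" run --regex '^tempest\.api'
```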

For any other questions, feel free to ping on the #openstack-dev IRC channel.


The post OpenStack Ussuri is Python3-Only: Upgrade Impact appeared first on Superuser.

by Ghanshyam Mann at May 27, 2020 01:00 PM

May 26, 2020

Ed Leafe

Writing Again

Today marks 2 months since I was laid off from my job at DataRobot. It was part of a 25% reduction that was made in anticipation of the business slump from the COVID-19 pandemic, and having just been there for 6 months, I was one of the ones let go. I have spent the last … Continue reading "Writing Again"

by ed at May 26, 2020 03:13 PM

CERN Tech Blog

Scaling Ironic with Conductor Groups

CERN introduced OpenStack Ironic for bare metal provisioning as a production service in 2018. Since then, the service has grown to manage more than 5000 physical nodes and is currently used by all IT services still requiring physical machines. This includes storage or database services, but also the infrastructure for compute services. Even the “compute nodes” used by OpenStack Nova are instances deployed via Nova and Ironic (but that will be a different blog post!)

by CERN at May 26, 2020 07:00 AM

May 25, 2020

Galera Cluster by Codership

Galera Cluster 4 for MySQL 8 is Generally Available!

Codership is proud to announce the first Generally Available (GA) release of Galera Cluster 4 for MySQL 8, improving MySQL High Availability a great deal. The current release comes with MySQL 8.0.19 and includes the Galera Replication Library 4.5 with wsrep API version 26. You can download it now (and note that we have packages for various Linux distributions).

Galera 4 and MySQL 8.0.19 have many new features, but here are some of the highlights:

  • Streaming replication supports large transactions by splitting them into smaller fragments, then replicating and applying those fragments as the transaction progresses. You can use this feature to load data faster, as data is written to all nodes simultaneously (or not at all in case of a failure in any single node).
  • Improved foreign key support, as write set certification rules are optimised and there will be a reduction in the number of foreign key related false conflicts in certifications.
  • Group commit is supported and integrated with the native MySQL 8 binary log group commit code. Within the codebase, the commit time concurrency controls were reworked such that the commit monitor is released as soon as the commit has been queued for a group commit. This allows transactions to be committed in groups, while still respecting the sequential commit order.
  • There are new system tables for Galera Cluster that are added to the mysql database: wsrep_cluster, wsrep_cluster_members and wsrep_streaming_log. You can now view cluster membership via system tables.
  • New synchronization functions have been introduced to help applications implement read-your-writes and monotonic-reads consistency guarantees. These functions are: WSREP_LAST_SEEN_GTID(), WSREP_LAST_WRITTEN_GTID() and WSREP_SYNC_WAIT_UPTO_GTID().
  • The resiliency of Galera Cluster against poor network conditions has been improved. Handling of irrecoverable errors due to poor network conditions has also been improved, so that a node will always attempt to leave the cluster gracefully if it is not possible to recover from errors without sacrificing data consistency. This will help your geo-distributed multi-master MySQL clusters tremendously.
  • This release also deprecates the system variables: wsrep_preordered and wsrep_mysql_replication_bundle.

We are pleased to offer packages for CentOS 7, CentOS 8, Ubuntu 18.04, Ubuntu 20.04, Debian 10, OpenSUSE 15, and SUSE Linux Enterprise (SLES) 15 SP1. Installation instructions are similar to previous releases of Galera Cluster.

In addition to the release, we are going to run a webinar to introduce this new release to you. Join us for the Galera Cluster 4 for MySQL 8 Release Webinar happening Thursday, June 4 at 9-10 AM PDT or 2-3 PM EEST (yes, we are running two separate webinars for the Americas and European timezones).

EMEA webinar 4th of June, 2-3 PM EEST  (Eastern European Time)


USA webinar 4th of June, 9-10 AM PDT


by Sakari Keskitalo at May 25, 2020 05:34 AM

May 21, 2020

OpenStack Superuser

Zuul: A T-Systems Case Study

Headquartered in Frankfurt, Germany, T-Systems GmbH is a global IT company. The 20-year-old company is a subsidiary of Deutsche Telekom and operates Open Telekom Cloud, one of the largest OpenStack-powered public clouds in the world.

Artem Goncharov, Open Telekom cloud architect, shares why T-Systems chose Zuul, an open source CI tool, and how they use it with GitHub and OpenStack.

How did your organization get started with Zuul?

We started using Zuul for some internal development on OpenStack. After getting some required changes merged into Zuul, we were able to deploy it in production as continuous integration (CI) for the development of the open source tooling we offer to our clients. Moreover, Zuul is currently used to monitor the quality of our platform services by periodically executing a set of tests (including permanent monitoring of RefStack compliance).

Currently we are working on making an internal offering inside Deutsche Telekom to allow usage of Zuul in different projects. Zuul runs in our own public cloud (OpenTelekomCloud), so the VMs are also spawned in the cloud. For the moment we are all-in on OpenStack!

Describe how you’re using Zuul: 

Because Gerrit brings a more complex (though more powerful) workflow, we decided to stay with GitHub to allow more people in the community to participate in the development of our projects. So currently we have Zuul working on our own public tenants and interacting with GitHub. Nodepool is used with the OpenStack driver to spin up virtual machines (VMs) for Zuul.

What is your current scale?

Currently a five-node ZooKeeper cluster, one scheduler, one nodepool-builder, one nodepool-launcher, and one to two Zuul executors satisfy our needs. Only 10 projects are currently using Zuul effectively (this will soon increase to 30-50), with a daily average of 50 builds.

What benefits has your organization seen from using Zuul?

Implementing gating, while keeping control over where VMs are running and what they are doing. We can now easily implement testing workflows that address all our needs.

What have the challenges been (and how have you solved them)?

  • Functional testing against a real cloud (anyone can submit a pull request on GitHub, potentially even exposing real credentials), while we still want to ensure pull requests don’t break anything before manual code review. We solved this by dedicating separate domains to functional tests, which are available only to Zuul, so leaked credentials cannot be misused. Additionally, VMs only get a token, which is then immediately revoked, and a “project cleanup” is executed.
  • We have challenges with access controls on GitHub pull requests around controlling who can approve changes for merging. Currently we use a /merge comment.
  • Another challenge is the complexity of operating Zuul in real life. This is not yet solved properly beyond Ansible playbooks. Not having dedicated ops staff heavily limits future improvements.
  • Our internal security department is wary of publishing that much information in logs (published to Swift), so we have had to override parts of Zuul/Zuul-jobs.
  • Deploying everything in containers under Podman is a challenge, so some components are deployed natively. Running Podman containers on Fedora CoreOS is another challenge.
  • The absence of an out-of-the-box project admin user interface is blocking us from migrating further Jenkins jobs, which are scheduled ad hoc.

What are your future plans with Zuul?

  • Make a Deutsche Telekom internal offering for other subsidiaries and projects to start using Zuul
  • Move to Kubernetes/OpenShift for operating Zuul (where the challenge is “multi-cloud” for high availability)
  • Start using it more for continuous deployment (CD), which is in progress
  • Migrate all Jenkins jobs and drop Jenkins completely

Are there specific Zuul features that drew you to Zuul?

  • Gating 
  • Describing workflows with Ansible is so powerful that it can literally do anything you will ever need.
  • Additionally, Zuul’s architecture is so good that we repeat it in some internal projects (which by design cannot be handled by Zuul, but may soon see the light in the OpenStack world).


The post Zuul: A T-Systems Case Study appeared first on Superuser.

by Helena Spease at May 21, 2020 12:00 PM

Stephen Finucane

Why You Can't Schedule to Host NUMA Nodes in Nova?

If I had a euro for every time someone had asked me or someone else working on nova for the ability to schedule an instance to a specific host NUMA node, I might never have to leave the pub (back in halcyon days pre-COVID-19 when pubs were still a thing, that is).

May 21, 2020 12:00 AM

May 20, 2020

OpenStack Superuser

OpenInfra Labs: An Open Infrastructure Collaboration for Research Use Cases

In early March—at what turned out to be one of the last non-virtual technology events held before Coronavirus lockdowns ended in-person conferences—I was fortunate to be among more than 200 attendees who gathered for two days in Boston at the Open Cloud Workshop to discuss the intersection of academic research and cloud computing software. 

The workshop is hosted by Massachusetts Open Cloud (MOC), a name that will be familiar to those who have attended OpenStack and Open Infrastructure Summits over the past few years. MOC is a consortium of universities in the New England area that share computing resources, data sets and operational practices. MOC equips its members with virtual resource sharing and on-demand user provisioning through high-bandwidth connections, all built upon OpenStack and driven by OpenStack APIs. Collectively, the members of MOC are active contributors to the OpenStack community and have delivered several Summit presentations, including in Atlanta (MOC Overview and Lessons Learned), Boston and Berlin.

Another great outcome of MOC’s involvement in the OpenStack community is a new initiative called OpenInfra Labs. OpenInfra Labs is a community created by and for academic and research cloud operators who are testing open source code in production and publishing complete, reproducible stacks for existing and emerging research workloads. 

The primary objective of OpenInfra Labs is to deliver open source tools to run cloud, container, AI, machine learning and edge workloads repeatedly and predictably. 

OpenInfra Labs focuses on three core activities:

  • Integrated testing of all the components necessary to provide a complete use case
  • Documentation of operational and functional gaps required to run upstream projects in a production environment
  • Shared code repositories for operational tooling and the “glue” code that is often written independently by users

The OpenInfra Labs community was initiated by MOC, the OSF, and Red Hat. It has since welcomed a host of additional core industry partners and contributors who are interested not only in supporting academic research but also in knowledge transfer to help enterprises develop reliable and powerful federated computing resources. 

Learn More about OpenInfra Labs

To learn more, check out the April 28 meeting of the OpenStack Scientific SIG, which featured an introduction to OpenInfra Labs. 

If you are interested in building infrastructure for university or research purposes or represent an ecosystem vendor who would like to contribute to OpenInfra Labs, here are three ways to get involved:

Everyone is invited to engage with the OpenInfra Labs community and contribute your talents and expertise to current activities and community goals. 

The post OpenInfra Labs: An Open Infrastructure Collaboration for Research Use Cases appeared first on Superuser.

by Jeremy Stanley at May 20, 2020 01:00 PM

May 19, 2020

Fleio Blog

Fleio 2020.05: Reseller customization, security groups templates, new angular frontend, docker and more

Fleio 2020.05 is now available! The latest version was published today, 2020-05-19. New reseller customization With the latest version we have added more customization options to the reseller frontend. We have implemented themes support and custom logo support, so that your resellers can differentiate themselves from the cloud provider platform. You can […]

by Marian Chelmus at May 19, 2020 09:22 AM

May 17, 2020

CERN Tech Blog

A single cloud image for BIOS/UEFI boot modes on virtual and physical OpenStack instances

“Brace yourselves: upcoming hardware deliveries may come with UEFI-only support.” This announcement from our hardware procurement colleagues a few months ago triggered the OpenStack and Linux teams to look into how to add UEFI support to our cloud images. Up to now, CERN cloud users had been using the very same image for virtual and physical instances and we wanted to keep it that way. This blog post summarises some of the tweaks needed to arrive with an image that can be used to instantiate virtual and physical machines, can boot both of these in BIOS and UEFI mode, and works with Ironic managed software RAID nodes for both BIOS/UEFI boot modes as well.

by CERN at May 17, 2020 01:00 PM

May 16, 2020

Doug Hellmann

beagle 0.2.2

beagle is a command line tool for querying a hound code search service. What’s new in 0.2.2?

  • fix the reference to undefined function in link formatter
  • Fix issues (contributed by Hervé Beraud)
  • Refactor pipelines (contributed by Hervé Beraud)
  • [doc] refresh oslo examples (contributed by Hervé Beraud)

by doug at May 16, 2020 01:42 PM

May 15, 2020

Galera Cluster by Codership

Installing Galera on Amazon Linux 2 for Geo-distributed Multi-master MySQL

We recently covered Installing Galera Cluster 4 with MySQL 8 on Ubuntu 18.04 , the new Galera version for MySQL High Availability. We got a request to see if we would be able to install it on Amazon Linux 2, and the short answer is yes, we are able to deploy Galera Cluster on Amazon Linux 2.

We have even published an Installing a Galera Cluster on AWS guide for geo-distributed MySQL multi-master clustering, which covers how to install a 3-node Galera Cluster on CentOS 7 to achieve disaster recovery. It turns out that Amazon Linux 2 tends to be quite compatible with this article (documentation). Heed the notices about how to configure SELinux, the firewall, as well as the security settings on AWS.

Today we will focus on installing Galera Cluster with MySQL 5.7 on Amazon Linux 2 (yes, the same instructions apply to installing the beta of MySQL 8 & Galera 4).

uname -a
Linux ip-172-30-0-54.ec2.internal 4.14.173-137.229.amzn2.x86_64 #1 SMP Wed Apr 1 18:06:08 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux

[ec2-user@ip-172-30-0-54 ~]$ cat /etc/system-release
Amazon Linux release 2 (Karoo)

[ec2-user@ip-172-30-0-54 ~]$ cat /etc/os-release 
NAME="Amazon Linux"
ID_LIKE="centos rhel fedora"
PRETTY_NAME="Amazon Linux 2"

So beyond the article above, all you have to do is to ensure that there is a /etc/yum.repos.d/galera.repo file:

name = Galera
baseurl =
gpgkey =
gpgcheck = 1

name = MySQL-wsrep
baseurl =
gpgkey =
gpgcheck = 1

And then you install it via: sudo yum install galera-3 mysql-wsrep-5.7

Since this example used the smallest instance just for testing, a simple /etc/my.cnf was used to bootstrap a cluster:



wsrep_provider_options="gcache.size=128M; gcache.page_size=128M"

When you install MySQL 5.7, remember that you need to grab the temporary root password from the MySQL log, via grep password /var/log/mysqld.log. After that, log in and change the root password with something like alter user 'root'@'localhost' identified with mysql_native_password by 'rootyes123A!';. Go ahead and run mysqld_bootstrap on the first node, then start MySQL normally on the 2nd and 3rd nodes.
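The first-node steps above can be sketched as follows; the password is the example from the text, and the live mysql/mysqld_bootstrap invocations are commented out since they need a running server (the service name may vary by packaging):

```shell
# Find the temporary root password generated at first start:
#   sudo grep password /var/log/mysqld.log
NEW_ROOT_PW='rootyes123A!'   # example password from the text; use your own
SQL="ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY '${NEW_ROOT_PW}';"
echo "$SQL"
# Apply it, then bootstrap the cluster on the first node only:
#   mysql -u root -p -e "$SQL"
#   sudo mysqld_bootstrap
# On the 2nd and 3rd nodes, start MySQL normally so they join:
#   sudo systemctl start mysqld
```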

We ran the MySQL test suite, which includes Galera Cluster tests as well, and the tests passed. As a company, Codership considers the Amazon Linux 2 distribution compatible with our CentOS 7 binaries. Don’t forget we also release for Ubuntu, Debian, CentOS, openSUSE, Red Hat Enterprise Linux, and SUSE Linux Enterprise. This is of course in addition to FreeBSD. Expect a lot more distributions when MySQL 8 + Galera 4 goes Generally Available (GA).

by Sakari Keskitalo at May 15, 2020 07:27 AM

May 14, 2020

OpenStack Superuser

Inside open infrastructure: The latest from the OpenStack Foundation

Spotlight on: OpenStack Ussuri 

Ussuri, the 21st release of OpenStack, includes improvements in core functionality, automation, cross-cell cold migration, containerized applications, and support for new use cases at multiple levels in the stack.

Thank you to the more than 1,000 contributors from more than 50 countries and 188 organizations that contributed to the OpenStack Ussuri release. With these metrics, OpenStack continues to be one of the top three open source projects in the world in terms of active contributions, along with the Linux kernel and Chromium.

Among the many enhancements contributors delivered in Ussuri, three highlights are:

  1. Ongoing improvements to the reliability of the core infrastructure layer
  2. Enhancements to security and encryption capabilities
  3. Extended versatility to deliver support for new and emerging use cases

This year, we are celebrating 10 years of the OpenStack project. Since the software pioneered the concept of open infrastructure ten years ago, it has rapidly become the open infrastructure-as-a-service standard. Recently, new workload demands like AI, ML, edge, and IoT have given rise to the project’s support for new chip architectures, automation at scale down to the bare metal, and integration with myriad open source components. Intelligent open infrastructure—the integration of open source components that are evolving to meet these demands—creates an infrastructure that is self-monitoring, self-replicating, and delivering a versatile set of use cases.

The Ussuri release reinforces what OpenStack is well known for—namely, rock-solid virtual machine, container, and bare metal performance at massive scale. At the same time, Ussuri delivers security improvements via Octavia and Kolla. And, it supports new and emerging use cases with improvements to projects like Zun and Cyborg. 

Users can now use Nova to launch server instances with accelerators managed by Cyborg.

Learn more about Ussuri features, check out screenshots from different OpenStack projects, and find out who contributed to the 21st OpenStack release at

OpenStack Foundation news

Project Teams Gathering (PTG) June 1-5


  • Large-scale Usage of Open Infrastructure Software
    June 29 – July 1, 2020. Register now for free!
  • Hardware Automation
    July 20 – 22, 2020
  • Containers in Production
    August 10 – 12, 2020

Airship: Elevate your infrastructure

  • Airshipctl has completed its 2.0 alpha milestone and is now working towards beta.
  • Airship will be participating in the virtual PTG! View the draft agenda, and make any suggestions by May 23.
  • If you are evaluating or running Airship, share your feedback in the Airship User Survey! Take the chance and provide anonymous feedback back to the community. Take the user survey now.

Kata Containers: The speed of containers, the security of VMs

  • We are happy to announce the new stable release for Kata 1.11.x branch. This is the first official release for 1.11.x and includes many changes compared to 1.10.x. See more details on the 1.11.x release.
    • Take a look at the full release notes for the changes in this release here.
    • We have released a new version of 1.10.x branch: 1.10.4.
  • If you are running Kata Containers, the user survey is your opportunity to provide anonymous feedback to the upstream community, so the developers can better understand Kata Containers environments and software requirements. Take your Kata survey today.

OpenStack: Open source software for creating private and public clouds

  • Ghanshyam Mann announced the completion of the Python 3 transition goal. It’s been a long journey since we started to introduce Python 3 support in 2013! OpenStack components and libraries are now Python3-only (except Swift and Storlets which will continue to support Python 2 in Ussuri).
  • Following the recent elections, the OpenStack Technical Committee selected Mohammed Naser as its chair for the Victoria cycle. It also confirmed the removal of the Congress and Tricircle projects in the Victoria release (scheduled for October 2020), and the merge of the LOCI team into the OpenStack-Helm team, due to commonality of scope.
  • Are you ready to take your cloud skills to another level? The updated Certified OpenStack Administrator (COA) exam can help you with that. Check out the OpenStack COA exam and become a Certified OpenStack Administrator.

StarlingX: A fully featured cloud for the distributed edge

  • The nomination period for the upcoming TSC elections is starting next week! For details about the process please see the elections website
  • The StarlingX user survey is live. Take the StarlingX user survey and provide anonymous feedback to the upstream community.

Zuul: Stop merging broken code

  • Zuul’s Github driver now supports reporting results via the Github Checks API. Find out more in the Zuul Github driver docs.
  • Work has begun to support multi architecture docker image builds. Help us improve the system by setting the ‘arch’ parameter on your docker image build jobs.
  • Are you a Zuul user? Fill out the Zuul User Survey to provide feedback and information around your deployment. All information is confidential to the OpenStack Foundation unless you designate that it can be public. Take your Zuul survey today.

Check out these Open Infrastructure Community Events!

For more information about these events, please contact

Questions / feedback / contribute

This newsletter is written and edited by the OSF staff to highlight open infrastructure communities. We want to hear from you! If you have feedback, news or stories that you want to share, reach us through . To receive the newsletter, sign up here.

The post Inside open infrastructure: The latest from the OpenStack Foundation appeared first on Superuser.

by Sunny Cai at May 14, 2020 06:08 PM

May 06, 2020

OpenStack Superuser

Where are they now? Superuser Awards winner: CERN

If you’ve been around the OSF community for any amount of time, chances are you’ve heard the name CERN.

Famous for their Large Hadron Collider, Higgs boson, and antimatter studies, the Geneva-based laboratory has spent decades researching physics and the universe. So what does that have to do with OpenStack? All of that research produces massive amounts of data, thus requiring a substantial amount of infrastructure.

Keep reading to find out how CERN’s OpenStack environment has evolved since they won the first Superuser Award at the OpenStack Summit six years ago.

What has changed in your OpenStack environment since you won the Superuser Awards?

At the OpenStack Summit Paris in 2014, CERN received the first Superuser Award from Guillaume Aubuchon, CTO of Digitalfilm Tree.

Presentation of first Superuser Award at Paris OpenStack Summit.

At the time, CERN’s cloud had been in production for a year with 65,000 cores running Havana providing VMs, images and identity. After six years and 13 upgrades, the CERN cloud now covers 11 OpenStack projects adding containers, bare metal, block, share, workflows, networking and file system storage.

What is the current size of CERN’s OpenStack environment?

Snapshot of CERN’s infrastructure dashboard.

Currently, the CERN cloud is around 300,000 cores across 80 cells with big recent growth in OpenStack Magnum to manage Kubernetes clusters, OpenStack Ironic servers for all the computer center hardware, and Fileshares with CephFS.

What version of OpenStack is CERN running?

We are in the process of upgrading from Stein to Train with most components already running Train. We use the RDO distribution.

What open source technologies does your team integrate with OpenStack?

The list is very long! The aim for the CERN cloud environment was to build a toolchain based on a set of open source projects which could also be used by other labs collaborating with CERN. A few examples are:

Cloud and Containers


  • Puppet and Foreman for configuration management
  • Terraform for automated provisioning (including external clouds)





  • Gitlab for version control, continuous integration
  • Koji for builds
  • Rundeck for automation

What workloads are you running on OpenStack?

Over 90% of the infrastructure in the CERN computer center is managed and provisioned by OpenStack. This includes the physics processing and storage, databases along with the infrastructure for the laboratory administration. The remaining hardware in the computer center is now being enrolled into Ironic to ensure strong resource management, accounting and lifecycle tracking.

How big is your OpenStack team?

The production support team in the CERN IT Department is around seven engineers with further students and fellows contributing to various project enhancements.

How is your team currently contributing back to the OpenStack project? Is your team contributing to any other projects supported by the OpenStack Foundation (Airship, Kata Containers, StarlingX, Zuul)?

CERN has made over 1,000 commits to OpenStack since the implementation started in 2011. The three largest OpenStack projects CERN has contributed to are Magnum, Nova and Keystone. CERN’s experiences have been presented in more than 30 talks at OpenStack Summits as well as regional events such as the Open Infrastructure Days, which have provided an opportunity to share the experience of running OpenStack at scale and our current focus areas. This included an OpenStack day at CERN in 2019 covering experiences of OpenStack usage in science, and hosting the Ironic mid-cycle meetup in 2020.

The CERN blog is available at and local developments are shared at

CERN has also contributed to governance and project management including an elected OpenStack individual board member, two members of the User Committee and PTL/core roles in Magnum, Keystone and Ironic.

What kind of challenges has your team overcome using OpenStack?

Given the demands of the Large Hadron Collider and the CERN experiments, provisioning more computing capacity without increasing the number of engineers was a challenge to overcome. Working with other members of the open source community in areas such as Container Orchestration-as-a-Service, Nova Cells, Identity Federation and Spot Market functionality has allowed these new features to be developed, reviewed by community and further enhanced. OpenStack Special Interest Groups such as the Scientific SIG and Large Scale SIG have provided a useful framework for debate, information sharing and common contribution.

A single framework for tracking, authentication and accounting for bare metal, virtual machines, storage and containers has been a major benefit for the CERN IT department. Allowing users to have self-service resources in a few minutes while ensuring that these are clearly allocated (and expired if appropriate) allows the CERN cloud users to focus on the goals of the laboratory rather than how to get the infrastructure they need.

Stay tuned for more updates from previous Superuser Award winners!


Cover Image: CERN

The post Where are they now? Superuser Awards winner: CERN appeared first on Superuser.

by Ashlee Ferguson at May 06, 2020 05:40 PM

May 05, 2020


3 KPIs Your Business Needs For Successful Cloud Infrastructure

When it comes to ensuring successful cloud infrastructure there are certain KPIs that you need to pay close attention to. Key Performance Indicators such as cost and quality can make a large impact on the bottom line of your business. Tracking your cloud infrastructure’s KPIs can help your business measure its cloud performance all the while developing new strategies to improve your business overall.

Today we are going to review the KPIs your business needs to address to stay competitive. If your business is looking to make the most of its cloud infrastructure, it is time to consider these three KPIs, which indicate whether your cloud strategy is working. Keep reading to see which ones your business should be looking out for.

#1: Security

The level of security and compliance is a performance indicator that your business cannot afford to ignore. From evaluating the reliability of your access points to rigorous compliance, it’s important to keep a close eye on security measures. Your cloud infrastructure, whether public or private cloud solutions, needs to ensure that only the right people have access to confidential information. Take the time to note how compliant you are with each of your security requirements. Everything from physical access if you have an on-premise solution to GDPR needs to be reviewed regularly to ensure that everything is up to date.

#2: Cost

Cost and ROI are big factors when it comes to successful cloud infrastructure. Your business needs a solution that works to bring returns, not drain financial resources. When selecting a cloud provider, taking the time to calculate the infrastructure costs from the bottom up can help your business compare providers. After choosing a trusted cloud provider, it is worth measuring how much cloud spend your business is using and how much is being wasted. Track these measurements frequently to ensure that overall waste goes down, not up.


#3: User Experience

In any business strategy, the quality of user experience is a metric that needs to be tracked closely. It is important to keep the user experience at the core of your KPIs for successful cloud infrastructure. Does your cloud infrastructure have frequent downtime? Are you losing data in the disruptions of service? Is your cloud running as optimally as it should be? These are questions that need to be addressed in order to optimize your cloud infrastructure and improve overall user experience.

Getting Started: KPIs Your Business Needs For Successful Cloud Infrastructure

Whether your business is starting its cloud journey or looking to optimize the infrastructure it already has, the experts at VEXXHOST can help your business stay competitive. We are here to help your business refine and improve its key performance indicators and optimize your cloud infrastructure. We've been contributing to and using OpenStack since 2011, so it's safe to say we know OpenStack clouds inside and out. Want to learn more about how we can help? Contact our team of experts today.

Would you like to know more about OpenStack Cloud? Download our white paper and get reading!

Conquer the Competition with OpenStack Cloud

The post 3 KPIs Your Business Needs For Successful Cloud Infrastructure appeared first on VEXXHOST.

by Hind Naser at May 05, 2020 05:42 PM

Galera Cluster by Codership

Installing Galera Cluster 4 with MySQL 8 on Ubuntu 18.04

Since the beta of Galera Cluster 4 with MySQL 8 has been released, we’ve had people asking questions as to how to install it on Ubuntu 18.04. This blog post will cover just that.


Prerequisites:

  • All 3 nodes need to have Ubuntu 18.04 installed
  • Firewall (if setup) needs to accept connections on 3306, 4444, 4567, 4568 (a default setup has the firewall disabled)
  • AppArmor disabled (this is as simple as executing: systemctl stop apparmor and systemctl disable apparmor).
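The firewall and AppArmor steps above can be sketched as a short script. This assumes ufw is the firewall in use; the port roles are the standard Galera ones:

```shell
#!/bin/sh
# Ports Galera Cluster needs between nodes:
#   3306 - MySQL client connections
#   4444 - State Snapshot Transfer (SST, e.g. via rsync)
#   4567 - Galera replication traffic
#   4568 - Incremental State Transfer (IST)
GALERA_PORTS="3306 4444 4567 4568"

# Open the ports, assuming ufw is the firewall in use (a default Ubuntu
# install ships with the firewall disabled, in which case skip this).
if command -v ufw >/dev/null 2>&1; then
    for port in $GALERA_PORTS; do
        sudo ufw allow "${port}/tcp" || true
    done
fi

# Disable AppArmor, as noted in the prerequisites above.
if command -v systemctl >/dev/null 2>&1; then
    sudo systemctl stop apparmor || true
    sudo systemctl disable apparmor || true
fi
```

Run this on all three nodes before installing any packages.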

Installation and Configuration

We have good installation documentation, as well as a quick guide on getting this installed in AWS (though that one is CentOS-centric).

First, you will need to ensure that the Galera Cluster GPG key is installed:

apt-key adv --keyserver --recv BC19DDBA


This is followed by editing /etc/apt/sources.list.d/galera.list to have the following lines in the file:

deb bionic main

deb bionic main

You should now run an apt update and then install Galera 4 with MySQL 8:

apt install galera-4 mysql-wsrep-8.0

During installation, apt/dpkg will prompt you for a root password, as it supports interactivity during installations. Please enter a reasonably secure password. You are then asked whether to use the strong caching_sha2_password authentication plugin (you are encouraged to pick this).


Then you need to edit the /etc/mysql/mysql.conf.d/mysqld.cnf file to add the following lines:

wsrep_provider_options="gcache.size=128M; gcache.page_size=128M"

Remember that you will need to change wsrep_node_name and wsrep_node_address. The above is a very basic configuration.
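The configuration snippet above appears truncated. As a sketch, the block added to /etc/mysql/mysql.conf.d/mysqld.cnf usually looks something like the following; the cluster name, addresses and node identity are placeholders that you must adapt, and the provider path may differ depending on where your galera-4 package installed the library:

```ini
[mysqld]
binlog_format=ROW
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2

# Galera provider and cluster membership (addresses are placeholders)
wsrep_on=ON
wsrep_provider=/usr/lib/galera/libgalera_smm.so
wsrep_cluster_name=my_galera_cluster
wsrep_cluster_address="gcomm://192.168.1.11,192.168.1.12,192.168.1.13"

# Unique per node
wsrep_node_name=node1
wsrep_node_address=192.168.1.11

wsrep_sst_method=rsync
wsrep_provider_options="gcache.size=128M; gcache.page_size=128M"
```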

Ensure that you have stopped MySQL (systemctl stop mysql). On the first node, execute:
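The exact bootstrap command did not survive above. As a sketch, Codership's mysql-wsrep packages typically ship a mysqld_bootstrap wrapper for this step; verify the command name against your installed package before relying on it:

```shell
#!/bin/sh
# The wrapper that forms a new one-node primary component on the first node.
# mysqld_bootstrap is assumed to be what the mysql-wsrep-8.0 package ships;
# check your installation's documentation to confirm.
BOOTSTRAP_CMD="mysqld_bootstrap"

# Only run it where it actually exists (i.e. on the first cluster node).
if command -v "$BOOTSTRAP_CMD" >/dev/null 2>&1; then
    sudo "$BOOTSTRAP_CMD" || true
fi
```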


You can execute: mysql -u root -p -e "show status like 'wsrep_cluster_size'" and see:

+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| wsrep_cluster_size | 1     |
+--------------------+-------+

Now, bring up the second node simply with systemctl start mysql; execute the same command above and you will see that wsrep_cluster_size has increased to 2. Repeat this for the third node. You can also test replication by creating a database and table on one node and seeing the change appear on the other nodes in real time.
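That replication check can be sketched as a small script. The database name here is arbitrary, and the root credentials are an assumption; adjust -u/-p for your setup:

```shell
#!/bin/sh
# A replication smoke test: write on one node, read on another.
TEST_DB="galera_smoke_test"

if command -v mysql >/dev/null 2>&1; then
    # On node 1: create a database, a table, and a row.
    mysql -u root -p -e "CREATE DATABASE IF NOT EXISTS ${TEST_DB};
        CREATE TABLE IF NOT EXISTS ${TEST_DB}.t (id INT PRIMARY KEY);
        INSERT INTO ${TEST_DB}.t VALUES (1);" || true

    # On node 2 or 3 (run there, not on node 1) the row should appear instantly:
    mysql -u root -p -e "SELECT * FROM ${TEST_DB}.t;" || true

    # With all three nodes joined, the cluster size should now read 3:
    mysql -u root -p -e "SHOW STATUS LIKE 'wsrep_cluster_size';" || true
fi
```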

To find out more, start MySQL and execute show status like 'wsrep%';.

We hope this helps you get started, and we are definitely looking at providing packages for Ubuntu 20.04 which just got released. Look forward to more guides on getting started on other types of Linux distributions.


by Sakari Keskitalo at May 05, 2020 02:20 PM

StackHPC Team Blog

Flatten the Learning Curve with OpenStack HIIT

With the current Coronavirus lockdown affecting many countries (including all the countries in which we work), remote working and videoconferencing have become the only way to be productive.

At StackHPC our flexible and distributed team is already used to working this way with clients. We have gone further, and developed online training for workshops we would normally deliver in person.

OpenStack HIIT: OpenStack in Six Sessions

With a nod to the intensity of OpenStack's infamous learning curve, we've called our new workshop format OpenStack HIIT.

OpenStack HIIT is a remote workshop, delivered by video conference. The workshop is organised into six sessions. Session topics include:

  1. Step-by-step deployment of an OpenStack control plane into a virtualised lab environment.
  2. A deep dive into the control plane to understand how it fits together and how it works.
  3. Operations and Site Reliability Engineering (SRE) principles. Best practices for operating cloud infrastructure.
  4. Monitoring and logging for OpenStack infrastructure and workloads.
  5. Deploying platforms and applications to OpenStack infrastructure.
  6. OpenStack software-defined networking deep dive.
  7. Ceph storage and OpenStack.
  8. Contributing to a self-sustaining open source community.
  9. Deploying Kubernetes using OpenStack Magnum.

Each session is led by a Senior Tech Lead from StackHPC's team. The workshop is designed to be interactive and up to six attendees can be supported.

Because it is remotely delivered, the sessions can be spread out, enabling attendees to read around the subject, practice content learned and prepare ahead for the next session.

The interactive sessions use lab infrastructure provided as part of the workshop. In some circumstances a client's own infrastructure can be used, which gives a client the opportunity to retain the lab environment and to use it between sessions. Additional provision for qualification of a client environment is required in this case.

OpenStack HIIT

Get in touch

If you would like to get in touch we would love to hear from you. Reach out to us via Twitter or directly via our contact page.

by Stig Telfer at May 05, 2020 01:07 PM

April 30, 2020

OpenStack Superuser

Inside open infrastructure: The latest from the OpenStack Foundation

Spotlight on: Upcoming OSF Virtual Events  

We’re Going Virtual!
Last month, based on the input from the community, board, and the latest information available from the health experts, we announced our decision not to host the OpenDev + PTG in Vancouver this June. Instead, we’d like to invite you to join us for the upcoming, first of its kind virtual OSF event series! 


Join us for OpenDev, an ongoing collaborative event series focused on advancing open source software and communities. Participants can expect discussion-oriented, interactive sessions exploring challenges, sharing common architectures, and collaborating around potential solutions. Previous OpenDev events include Edge Computing in 2017 and CI/CD in 2018.

The virtual OpenDev event series will consist of three separate events hosted in the upcoming months, each focused on a different open infrastructure topic:

  • Large-scale Usage of Open Infrastructure Software
    June 29 – July 1, 2020. Register now!
  • Hardware Automation
    July 20 – 22, 2020
  • Containers in Production
    August 10 – 12, 2020

If you are interested in the Hardware Automation or Containers events, we'd love your input on the best time block to host the sessions before registration goes live. Please share your preference here: Hardware Automation, Containers in Production.


The Project Teams Gathering (PTG) is an event where open source upstream contributors (user working groups, development teams, operators, SIGs) gather to collaborate on software requirements and roadmaps. Registration is now open for the virtual PTG, taking place June 1-5.

The event is open to all OSF projects, and teams are currently signing up for their time slots. Find participating teams here; the schedule will be posted on the PTG website in the upcoming weeks. Join us!

Sponsor Shout Out

We also want to thank all of the OpenStack Foundation Platinum, Gold, and Corporate sponsors for their ongoing support that make these virtual events possible and free to attend. We couldn’t do it without you!  

Airship: Elevate your infrastructure

  • Join Airship at the virtual PTG! Stay up to date on meeting plans via the mailing list.
  • Check out the April update on the blog for the latest Airship 2.0 progress, virtual March meeting notes, and more.
  • Interested in learning how to set up a Cluster API development environment? Find step-by-step directions and documentation in this tutorial. This development environment will allow you to deploy virtual nodes as Docker containers in Kind, test out changes to the Cluster API codebase, and gain a better understanding of how Airship works at the component level to deploy Kubernetes clusters.
    • Read more about how Airship 2.0 plans to use Cluster API in this blog post.

Kata Containers: The speed of containers, the security of VMs

  • The community has set up an etherpad page for Kata virtual PTG. Please register your name and time slots if you plan to attend it. Also if you have anything to discuss during PTG, feel free to add it there as well.
  • Kata Containers Demo: A Container Experience with VM Security
    • Eric Ernst, principal systems software engineer for Ampere, and Bharat Kunwar, software engineer for StackHPC, explain how Kata Containers works, as well as its performance and security advantages. They also describe a use-case scenario and new research.

OpenStack: Open source software for creating private and public clouds

  • The OpenStack community is in the final preparation stage for the ‘Ussuri’ release, scheduled for May 13. Discussion on how to properly celebrate virtually is under way on the openstack-discuss mailing list.
  • The election cycle to designate the stewards for our next development cycle, Victoria, just concluded. The Technical Committee (now a group of 11 people) welcomes two new members: Belmiro Moreira and Kristi Nikolla. They, along with three returning members (Graham Hayes, Mohammed Naser, and Rico Lin), make up the newly elected members of the TC. Also, a huge thank you to Alexandra Settle, Jim Rollenhagen, Thierry Carrez, and Zane Bitter for their past service. PTLs for project teams were also renewed, with 12 new people stepping up.
  • The 2019 User Survey results have been analyzed by the OpenStack Technical Committee. Read the full report for more information.
  • The Kolla team set up a new way to engage with users, and improve communication between Kolla operators and Kolla developers: the Kolla Klub. Interested? Read more information on how to join the next meeting.

StarlingX: A fully featured cloud for the distributed edge

  • The community is preparing for the upcoming virtual PTG to discuss topics like planning for the 5.0 release cycle, testing and cross-project collaboration. Stay tuned for updates as the event is getting closer!
  • The next StarlingX TSC elections are happening in less than a month! Check out the details on the elections web page in case you are interested in running for one of the 4 seats.

Check out these Open Infrastructure Community Events!

For more information about these events, please contact

Questions / feedback / contribute

This newsletter is written and edited by the OSF staff to highlight open infrastructure communities. We want to hear from you! If you have feedback, news or stories that you want to share, reach us through . To receive the newsletter, sign up here.

The post Inside open infrastructure: The latest from the OpenStack Foundation appeared first on Superuser.

by Sunny Cai at April 30, 2020 06:06 PM

StackHPC Team Blog

StackHPC Under African Skies: Kayobe in Cape Town

StackHPC are pleased to announce along with our partner, Linomtha ICT, a new OpenStack system at the Centre for High Performance Computing to support researchers and academics across South Africa. StackHPC worked with Linomtha, Supermicro and Mellanox to jointly engineer the system and support project management. The system deploys OpenStack Kayobe together with a billing system engineered around CloudKitty & Monasca.

The text below can also be found on Linomtha's blog.

LinomthaID logo

The Centre for High Performance Computing (CHPC) is proud to announce a new on-premise cloud infrastructure that has been delivered recently under exceptional circumstances. The delivery of the system is testament to the close collaboration CHPC has with Linomtha ICT (SA) and their strategic technology partners StackHPC Ltd (UK), Supermicro (SA), and Mellanox (IL), and will ensure that the CHPC has a stable environment to continue to deliver on its mandate. The OpenStack Production Cloud Services cater for CHPC scientific users executing, for example, custom workflows, embarrassingly parallel workloads and webhosting. The OpenStack services will also be a road-header for such HPC configurations in the future. It is envisaged that this platform will build both the skills and operational experience for CHPC to develop, provision and operate a national federated OpenStack platform, which will be linked with other countries that are involved in the Square Kilometre Array (SKA) project.

The cloud infrastructure has been designed so that data can be moved between the CHPC and the external institutions that are connected to the NICIS network, or those that want to utilise the DIRISA long-term storage.

Linomtha, a majority black-owned company comprising an energetic mix of business people, entrepreneurs and engineers with experience and skills from various fields, together with CHPC, successfully completed the installation of the OpenStack Production Cloud Service project.

Linomtha recognises the important role that ICT can play in terms of economic growth, social inclusion and government efficiency. The key individuals driving Linomtha all have extensive practical experience in the field of ICT, working on large scale government and private sector projects across the country and are recognized as experts, both locally and internationally. Linomtha is a value-added reseller of StackHPC as well as Supermicro, the key technology partners in responding to CHPC's RFP. LinomthaICT's sister company, LinomthaID, provided the Billing/Invoicing portal for the solution through its VOIS platform.

The CHPC has been running a VMware virtual environment or cluster (IT-Shop) previously, as an alternative to support scientific projects or applications which were not best suited for High Performance Computing Platform. Projects were mostly hosted on the IT-Shop Cluster as web portals to support these special scientific groups to share data-knowledge or compute their specific scientific workflows.

The IT-Shop cluster is currently over-provisioned, especially for memory resources, due to the large demand of numerous projects requiring high-spec virtual machines and has become an unreliable environment, no longer able to adequately serve the users, as the performance and available capacity has deteriorated over time.

The CHPC OpenStack Production Cloud will provide a sufficient and efficient environment to continue to support these kinds of projects from the IT-Shop. In addition, the CHPC Cloud Solution will offer the following benefits and functionalities which were not met on the current IT-Shop:

  • Self-Service Portal. CHPC Cloud users will now have the ability to deploy application on-demand with limited technical support to promote rapid and efficient IT Service.
  • Metered Service and Resource Monitoring. CHPC will now be able to monitor resource utilization from individual users or projects to prepare billing statement as per our cost-recovery model.
  • Avoid Vendor Lock-In. The OpenStack solution is open source, so CHPC will reduce costs related to proprietary software such as the VMware vSphere Solution.
  • Enable Rapid Innovation (DevOps). The CHPC staff can significantly reduce development and testing periods and have more freedom to experiment with new technology, or even do customisation to expand the capabilities of the OpenStack Cloud.

The CentOS-based OpenStack Cloud is a self-service Virtual Machine (VM) provisioning portal for CHPC Administrators, where common administrative tasks like VM creation, recouping unused resources, and infrastructure maintenance are automated, and capacity analysis, utilization, and end-user costing reports can be generated.

Through this project, CHPC administrators have been exposed to the initial implementation of the OpenStack system and have hands on experience of performing the various required tasks.

Linomtha, together with Supermicro, Mellanox, StackHPC and LinomthaID, have jointly engineered the CSIR OpenStack Cloud Solution. The solution is built on Supermicro server and storage systems that deliver first-to-market innovation, optimized for value, performance and efficiency. Supermicro TwinPro servers provide 320 cores/640 threads (2.50 - 3.90GHz) and over 3TB of DDR4 2933 memory (some 9GB of RAM per core) in just 4U of rack space, connected through Mellanox 100Gb Ethernet networking to Supermicro Ultra and Supermicro Simply Double servers providing a Ceph storage cluster with over 1.5PB (1500TB) of mechanical disk storage and more than 220TB of flash storage.

OpenStack was deployed with OpenStack Kayobe, a tool largely developed and maintained by StackHPC within the OpenStack Foundation. Kayobe provides for easy management of the deployment process across all compute, storage and networking infrastructure using a high degree of automation through infrastructure as code. Kayobe invokes a containerised Kolla control plane providing for easier upgrades and maintainability. In addition to the infrastructure element, Kayobe also deploys rating, monitoring and logging services providing insight on resources and their use.

The integration of the invoicing engine and portal, VOIS, was undertaken by LinomthaID, who extracted the billing information on OpenStack usage provided by CloudKitty and localised and customised the invoicing to CHPC requirements.

Ensuring there was constant and clear communication during the project, the Linomtha project team ensured daily stand-up calls, weekly progress meetings and utilised tools such as Slack and Google Meet - which allowed for quick turnaround times for addressing queries.

We were impressed with the Slack communication and the shared Google drive provided for documentation between team members, it made the sharing of thoughts much easier resulting in solving problems quickly and collaboratively.

A single point of contact was identified from each stakeholder involved in the project, allowing for communication to flow to the right people and ensuring action items were accomplished and ultimately, meeting the challenging deadline.

One component of the project was training, which initially was to take place on-site; due to the restrictions of COVID-19, the team improvised and the training was successfully delivered remotely over a five-day period. The training was deemed a great success! It has ensured that the CHPC Administrators have sufficient knowledge and confidence to efficiently manage the environment.

The training was one of the best we've attended, the setup was great, the trainer's expertise and their quick thinking or rather well-considered answers in providing solutions to our questions was impressive. The information gathered and shared is helping us with our OpenStack operations and we can only grow strong from here with our OpenStack expertise as well.

No project is without challenges and this one was no exception. One of the lessons learnt was that the time between the initial workshop and implementation was too compressed. It did not allow for all team members, including technical resources, to fully understand the finer technical detail of the project and allow them to all contribute.

Despite the challenges encountered during the project, through the professional Linomtha Project Management deployment, milestones were met, the deadline accomplished, quality documentation drafted, successful training delivered and the handover to operations completed within the required deadline and budget.

Get in touch

If you would like to get in touch we would love to hear from you. Reach out to us via Twitter or directly via our contact page.

by John Taylor at April 30, 2020 03:00 PM

April 29, 2020

StackHPC Team Blog

Kata Containers on The New Stack

Our team draws on a broad base of expertise in the technologies used to build the high-performance cloud. Occasionally our research breaks new ground, and we are always thrilled with the opportunity to talk about it.

The New Stack recently approached Bharat from our team to participate in a webinar on Kata containers. Often Kata containers are pitched with the soundbite "the speed of containers, the security of VMs". Bharat's previous research on IO performance suggested the real picture was more nuanced.

The end result is a great article and webinar (with Eric Ernst from Ampere), which can be read here. Bharat's presentation can be downloaded here (as PDF).

Get in touch

If you would like to get in touch we would love to hear from you. Reach out to us via Twitter or directly via our contact page.

by Bharat Kunwar at April 29, 2020 01:40 PM

April 21, 2020


6 Reasons Why You Should Run Your Containers On OpenStack

Let’s talk about running containers on OpenStack.

In a fiercely competitive market, it’s crucial that businesses keep up to date with trends and innovations within the IT space. The power of an OpenStack powered cloud allows users to deploy and update applications faster than ever before to keep up with ever-increasing demand.

Container technology in OpenStack can offer strategic flexibility and agility when it matters most. Moreover, with the power of containerization, your business will be able to manage applications in a consistent fashion all the while increasing overall efficiency.

Today we are going to review six reasons why containers and OpenStack work well together and why you should be running your containers on OpenStack. Keep reading to learn more.

Reason 1: Provides Measurable Standards

Thanks to the support of its growing community of users and developers, OpenStack provides a solid platform for building scalable clouds. By providing measurable standards for cloud platforms, OpenStack offers flexibility, efficiency, innovation, and savings for all users of its infrastructure.

Reason 2: Improves Overall Security

Some businesses are hesitant to adopt containers due to security concerns. Thankfully, OpenStack can help limit some of these concerns and risks. Through the integration of tools for scanning and certification, OpenStack allows for the verification of container content, ensuring that containers and their contents are safe. OpenStack clouds support both single and multi-tenant options, for private and public clouds respectively, so your business is able to select whichever cloud best suits its unique security needs. At VEXXHOST, you’ll find virtual machines, bare metal and containers available all in one environment.

Reason 3: Allows Teams To Develop Apps Faster

If your business or enterprise is looking to develop better quality applications with speed, then containers may be able to help. Containers can increase the portability of applications while reducing the overall time it takes to develop them. In addition, highly distributed applications can take advantage of microservice architectures, and containers help deploy these microservices with speed. Containers plus OpenStack is a great way to add speed to your cloud infrastructure.

Reason 4: The OpenStack Community

The OpenStack community has created several projects that support containers. These projects work to support containers and the third-party ecosystems around them, within an OpenStack powered cloud. In more recent developments, OpenStack offers different container-centric management solutions, such as monitoring, multi-tenant security, and isolation.

Reason 5: Software-Defined Infrastructure Services

OpenStack compute, network, storage, tenancy security, and service management are just some of the software-defined infrastructure services on offer. This ecosystem provides a plethora of capabilities and choices for developers and users alike. Moreover, containers are able to run within virtual machine sets, aggregating OpenStack compute and other infrastructure resources.

Reason 6: Continuous Standardization

Lastly, OpenStack embraces advanced open standards for container technology. The OpenStack Containers team was created to work with and build upon container standards such as the runC runtime from the Open Container Initiative (OCI). From there, OpenStack continues to develop simpler ways for organizations to adopt container technology within their OpenStack powered cloud.

Run Containers On OpenStack

Is your business considering getting started with OpenStack? Trust the experts at VEXXHOST to help guide you through the process. Contact us today to learn more about our OpenStack powered private cloud services.

Would you like to know more about OpenStack Cloud? Download our white paper and get reading!

Conquer the Competition with OpenStack Cloud

The post 6 Reasons Why You Should Run Your Containers On OpenStack appeared first on VEXXHOST.

by Hind Naser at April 21, 2020 07:47 PM

April 20, 2020


3 Tips For Easy Cloud Application Development

Application development is not a one-size-fits-all model. There are significant differences between traditional IT department systems and cloud programming tools, and for traditional IT these differences can mean slower processing times, complex integration, and issues that consume time and resources. The best way to take on the challenges surrounding old IT infrastructure and application development is for businesses to be open to adopting new technologies and facing any issues head-on.

Good news: 76% of businesses have some form of their data center infrastructure on the cloud. Better news: In 2015, 52% of enterprises with 1,000 employees or more planned on increasing their cloud spend. This number is only growing with each passing year as cloud computing takes a larger prominence in the IT-sphere.

As the cloud grows, its impacts on application development are evident. From changes in outlining design specifications to writing code, the cloud is here to help deliver applications more efficiently. We’re here today to go over some tips and tricks for easy cloud application development. Ready to get learning? Let’s dive straight in.

Tip #1: Address Performance Issues Early On In Cloud Application Development

If you don’t address performance issues early on it can have a devastating impact on your system development. Prepare your team to work around potential network bottlenecks or latency issues. Applications need to be architected to ensure that network resources are always available. Before, applications ran on a handful of computers. Now cloud computing allows applications to run on multiple servers and even larger data centers. Create your application design with the potential server load or bandwidth in mind to make sure that everything runs smoothly from the start.

Tip #2: Understand Your Impact

The impact of cloud implementation in application development goes much further than your IT department. Your application systems reach everyone from internal departments such as sales and human resources to external parties such as partners and customers. Moreover, with the cloud, your business is able to extend its systems and share data. You need to ensure that your data and all applications are secure, especially when opening up data to users outside your organization. Examine all connected components to ensure that information reaches those who need it and stays inaccessible to those who don’t.

Tip #3: Keep A Close Eye On System Resources

In order to ensure the smoothest experience in cloud application development, users need to be wary of their usage of system resources. There is a dynamic aspect to application development: system configurations are always in flux, and a virtual machine spun up for a test one day can still be running a few days later. With traditional IT systems, such an oversight isn’t the end of the world. With cloud infrastructure, however, forgotten resources keep accruing charges, and thanks to its abstract nature these costs can quietly erode the productivity of application development. Moreover, cloud computing gives businesses the benefits of additional flexibility, better agility, and lower costs; keep a close eye on your system resources to make sure that your development is as efficient as it can be.

Cloud application development is here to help streamline your business processes. Curious to learn more? Contact our team of experts to learn how a public cloud solution can help get your business started with the cloud.

Would you like to know more about OpenStack Cloud? Download our white paper and get reading!

Conquer the Competition with OpenStack Cloud

The post 3 Tips For Easy Cloud Application Development appeared first on VEXXHOST.

by Hind Naser at April 20, 2020 08:15 PM

OpenStack Superuser

Women of Open Infrastructure: Meet Melanie from the OpenStack Nova Project

This post is part of the Women of Open Infrastructure series spotlighting the women in various roles in the community who have helped make Open Infrastructure successful. With each post, we learn more about each woman’s involvement in the community and how they see the future of Open Infrastructure taking shape. If you are interested in being featured or would like to nominate someone to tell their story, please email

This time we’re talking to Melanie Witt from the OpenStack Nova project. She tells Superuser about how she became an active contributor in the community and her best practices on stay on top of things in the fast-moving open source industry.

What’s your role (roles) in the Open Infrastructure community?

I am a core reviewer in the OpenStack Nova project. I also served as Nova Project Team Lead (PTL) for the Rocky and Stein release cycles.

What obstacles do you think women face when getting involved in the Open Infrastructure community?   

For me, I think the primary obstacle I faced when I was first getting involved in the community was being different than most everyone else. Questions like, “will they accept me?” or “do I belong here?” came to my mind. I think I started off too shy because of this.

The community documentation made it easy for me to get started and before I knew it, I was proposing patches, filing and triaging bugs, and chatting on IRC with members of the community. Everyone was (and still is, eight years later) so welcoming and willing to help me. I love this community and am still so happy I joined. The only thing I would change is I would have started off less shy.

Why do you think it’s important for women to get involved with open source?

I think it’s important for everyone to get involved with open source. Open source software is such a unique model where a large community of contributors works together on software we all share. Each contribution is multiplied not only to a single company’s product or customers but to everyone in the world who uses that same open source software. It’s like “distributed software.” We have so many improvements constantly flowing into the software from different people and organizations that it can get difficult to keep track of all of them (in a good way!).

Open source is so important and impactful that I think everyone who is interested in getting involved should absolutely get involved. You might hesitate and wonder whether it’s for you if you are different, but I encourage everyone to give it a try. It can be very rewarding.

Efforts have been made to get women involved in open source, what are some initiatives that have worked and why?

I think the most important thing is general community encouragement. When someone asks a question in an IRC channel or when they propose a patch, file a bug, post to the mailing list, having a community that responds with friendliness, helpfulness, and actionable guidance makes all the difference, in my opinion. When someone reaches out to make a contribution, they’ve put themselves out there to a new community. It’s important to help them learn the ropes and by doing that you also let them know you appreciate their contribution. When people know their contributions are appreciated, they are more likely to return and make more contributions. Keep it going and eventually, they will hold positions in the community like core reviewer, PTL, Technical Committee (TC) member, etc.

Community documentation is the second most important thing, in my opinion. This will be the first thing that prospective contributors see and interact with, so it’s important that it be clear, concise, and easy to consume as a layperson. All of us had to start somewhere and the easier it is to understand the documentation as a new person who doesn’t know anything yet, the more likely we are to obtain new contributors. It’s hard to make that first step to get involved when you have no clue what anything is or how to use it.

Open source moves very quickly. How do you stay on top of things and what resources have been important for you during this process?

I do a lot of things to stay up to speed. First, I have a separate email address for open source work and I set up email filters for Gerrit notifications, Launchpad bugs, and mailing lists. This helps me quickly find the highlights in each area: code reviews, bugs, and community mailing list discussions. Next, I try to attend community meetings on IRC and if I can’t attend, I read the recorded meeting log created by the channel meetbot. I have my IRC client set up with a ZNC bouncer and receive notifications when my IRC nick is mentioned. I review these at the start of each workday and respond to items related to me.
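The filtering workflow described above boils down to routing mail by sender and subject patterns. As a rough illustration, here is a minimal Python sketch of that kind of classification; the patterns and folder names are assumptions for the example, not the author’s actual filter rules.

```python
# Hypothetical sketch of subject/sender-based mail filtering for open source
# work, as described above. Patterns and folder names are illustrative only.

def classify(subject: str, sender: str) -> str:
    """Route an open source work email into a folder by simple heuristics."""
    s = subject.lower()
    f = sender.lower()
    # Gerrit review notifications typically have subjects like
    # "Change in openstack/nova: ..."
    if "gerrit" in f or s.startswith("change in "):
        return "code-reviews"
    # Launchpad bug mail carries a "[Bug NNNNN]" tag in the subject.
    if "launchpad" in f or "[bug " in s:
        return "bugs"
    # Mailing list posts are tagged with the list name.
    if s.startswith("[openstack-discuss]"):
        return "mailing-list"
    return "inbox"

print(classify("Change in openstack/nova: Fix resize", "review@openstack.org"))
# → code-reviews
```

A real setup would express the same rules in a mail client’s filter language (e.g. Sieve or procmail) rather than Python, but the routing logic is the same.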

By doing these things, I’m able to have at least a high-level idea of what’s going on even when I’m more occupied with downstream work. During the busiest of downstream times, I spend at least 15 minutes a day reviewing the code review, bug, and community mailing list email folders just so I have an idea of what’s been going on.

The last thing I’ll mention might sound obvious, but another thing to do is to let people know when you’re interested in something. If you’re interested in a feature or a bug fix, chime in on the review, the bug, or IRC. When people know others are interested, they’re more mindful about communicating updates and will likely keep you in the loop directly.

What advice would you give to a woman considering a career in open source? What do you wish you had known?

I would say, give it a try! Each open source community is different and it’s important to know that you can find the right community that is a good match for you. Not everyone finds the same fit in the same communities. I would advise not to give up on a career in open source if the first community you tried was not a good match. Communities are mostly about people, not only code, so there is an element of match-making personalities and styles involved.

I think the number one thing I wish I had known a long time ago is how rewarding it is to take risks. I’m actually thinking more about small risks, like asking questions, jumping into a code review, bug report, or community mailing list post that you weren’t previously involved with, asking for advice, and proposing an idea or patch and potentially being wrong or having people not like your idea. Lots of these things can feel like embarrassments or failures but I’ve learned over the years that these are things that make you part of a community and team. You might feel embarrassed but what others see is that you are engaged and motivated to solve problems. That you are someone they might want to ask to weigh in on something later or you might be someone to go to for questions on that topic.

Push yourself to ask questions and share thoughts on code reviews, even if they are not perfect. Oftentimes, an imperfect question or comment will build a little bridge for someone else to catch a problem in a patch or see a way to improve a patch. This helps build your relationship with the community as people get to know your contributions and get more chances to appreciate them.

Taking risks is hard but I think it’s really worth it. I’d say it’s even essential. Otherwise, you stay “distant” in the community, to some degree. So, please take risks. I have been in the community for many years and I am still pushing myself to take risks.

The post Women of Open Infrastructure: Meet Melanie from the OpenStack Nova Project appeared first on Superuser.

by Superuser at April 20, 2020 05:00 PM

April 19, 2020


The ultimate guide to Kubernetes

Here at Mirantis we're committed to making things easy for you to get your work done, so we've decided to put together this guide to Kubernetes.

by Nick Chase at April 19, 2020 09:47 PM


Community Blog Round Up 19 April 2020

Photo by Florian Krumm on Unsplash

Three incredible articles by Lars Kellogg-Stedman aka oddbit – mostly about adjustments and such made due to COVID-19. I hope you’re keeping safe at home, RDO Stackers! Wash your hands and enjoy these three fascinating articles about keyboards, arduino and machines that go ping…

Some thoughts on Mechanical Keyboards by oddbit

Since we’re all stuck in the house and working from home these days, I’ve had to make some changes to my home office. One change in particular was requested by my wife, who now shares our rather small home office space with me: after a week or so of calls with me clattering away on my old Das Keyboard 3 Professional in the background, she asked if I could get something that was maybe a little bit quieter.


Grove Beginner Kit for Arduino (part 1) by oddbit

The folks at Seeed Studio have just released the Grove Beginner Kit for Arduino, and they asked if I would be willing to take a look at it in exchange for a free kit. At first glance it reminds me of the Radio Shack (remember when they were cool?) electronics kit I had when I was a kid – but somewhat more advanced. I’m excited to take a closer look, but given shipping these days means it’s probably a month away at least.


I see you have the machine that goes ping… by oddbit

We’re all looking for ways to keep ourselves occupied these days, and for me that means leaping at the chance to turn a small problem into a slightly ridiculous electronics project. For reasons that I won’t go into here I wanted to generate an alert when a certain WiFi BSSID becomes visible. A simple solution to this problem would have been a few lines of shell script to send me an email…but this article isn’t about simple solutions!


by Rain Leander at April 19, 2020 09:45 AM

April 17, 2020


Why Decision Makers Need To Build Cloud Culture

If you’re a decision-maker in your business or organization then you’ve probably already considered implementing some form of cloud solution. Or maybe you’ve already deployed a public or private cloud for your business. Moreover, decision-makers are aware that moving to the cloud is a transition that requires both time and resources. Despite this, the overall positive impact of a winning cloud strategy is evident.

It goes without saying that implementing cloud infrastructure goes far beyond the scope of your IT department. Therefore, cloud technology has a notable impact on all layers of a company, from sales to human resources and beyond. In order to create a successful cloud, it’s integral that decision-makers build cloud culture throughout their business. Furthermore, your staff need to understand the power of cloud infrastructure, how it benefits them, and receive education on cloud best practices. By encouraging shared knowledge amongst your team, you are helping to build a cloud-educated workforce that is ready to approach the cloud.

We’re here today to dive into why decision-makers need to build cloud culture in the workplace and how building cloud culture pays off. Keep reading to learn more.

The Cloud Culture Difference

When a business moves away from traditional IT infrastructure, it opens a new world of possibilities. From being able to opt for hosted solutions in a data center to building an on-premise solution right on-site, there is a clear incentive to make the move to the cloud. Certainly, any business that is moving towards a modern infrastructure is looking towards cloud computing. Moreover, implementing either a private or public cloud will have a ripple effect on the IT department and beyond. Any aspect of your business that touches technology can see the benefits of cloud infrastructure. The best way to get the entire team on board with implementing a cloud solution is to get them to invest early in understanding the opportunities and benefits of the cloud. Everyone on your team should have a role in adopting, adapting and maintaining the cloud.

Cloud-Powered Digital Transformation

Once upon a time, cloud computing was usually allocated purely to the IT department, which was given a set of resources and responsibilities for cloud deployment. Today, things have very much changed. The IT department must collaborate with decision-makers to ask how a digital transformation can benefit the business or organization overall: which departments will benefit most from the cloud, and what gaps in knowledge will your team need to address to ensure success? If you store confidential data from human resources in the cloud, then it is essential that your HR department is aware of security best practices. Certainly, the same goes for your sales team if they utilize a cloud-based CRM.

Of course, there is an obvious learning curve, but ensuring that you and your employees are fully invested in cloud adoption and cloud culture is the best way to begin your cloud journey. Actively working on cloud culture is the first and one of the most important steps that you can take to optimize your cloud for success. Thinking of implementing a private or public cloud to modernize your IT infrastructure? Contact us to learn more about how we can help you get there.

Would you like to know more about OpenStack Cloud? Download our white paper and get reading!

Conquer the Competition with OpenStack Cloud

The post Why Decision Makers Need To Build Cloud Culture appeared first on VEXXHOST.

by Hind Naser at April 17, 2020 05:33 PM

April 16, 2020


Edge Computing Challenges

The key factor that is missing in many discussions about edge computing is a clear view of how we will be expected to deploy, manage and gain a clear picture of these edge resources.

by Nick Chase at April 16, 2020 04:37 PM

OpenStack Superuser

What’s Next for Open Infrastructure and the OpenStack Foundation

2020 has already shown us that the future can be unpredictable. We’re just three short months into the year, and already our global community has experienced the unexpected in myriad ways. In the open infrastructure community—and the OpenStack Foundation (OSF) specifically—we certainly don’t have a “crystal ball” for what 2020 has in store, but we are confident that the future holds progress, because we have a shared vision and three critical assets:

  1. an operational model for building open source software and communities that truly works;
  2. strong open source software projects that are making steady progress; and
  3. a global community that continues to make great strides in engagement and collaboration.

The Vision

The open infrastructure community remains committed to the OSF mission of helping people build and operate open infrastructure. Our vision, articulated in “big picture” terms, is our community’s four-way method:

  1. identify use cases
  2. collaborate across communities
  3. build the required new technology
  4. test everything end-to-end

You can read more about this in the OSF 2019 Annual Report, which highlights our recent achievements across the community and expresses some of our goals for the year ahead.

A Model that Works

A couple of years ago, our community recognized that we had something special at OSF: a successful approach to helping software communities thrive and grow, an approach that was worth replicating. At the same time, we recognized that the best way to support open infrastructure was to expand our contributions to a broader ecosystem of open source projects.

We developed a formal model for project support that includes (1) accepting pilot projects to nurture and (2) confirming for long-term support those projects that demonstrate progress and our community’s core values (Four Opens). Using this model, we have confirmed three new open infrastructure projects to complement OpenStack in powering the world’s open infrastructure: Kata Containers, Airship and Zuul. Those are in addition to active growth in the StarlingX community. In 2020, our goal is to create more documentation around this model.

Progress with Open Infrastructure Software Projects

In 2019, the OSF community had a productive year, merging 58,000 code changes to produce open source infrastructure software like Airship, Kata Containers, StarlingX, and Zuul, along with the third most active open source project in the world, OpenStack.

Here’s a quick run-down of what each of these projects aspires to accomplish in 2020:

  • Airship: The Airship community plans a complete rebuild of Airship core code with a beta version planned for June and a full 2.0 release later in 2020. In addition, Airship 2.0 will penetrate into more industry domains such as Common NFVi Telco (CNTT) and 5G testbeds. Other goals include:
    • Supporting smaller deployments
    • Making all workflows fully declarative
    • Adopting upstream entrenched projects
    • Enabling simpler document creation and management (Airship YAML was hard)
    • Providing an improved flow for executing updates (changing the tires while the car is moving is hard)
    • Penetrating into the NFVi domain, enabling the reference implementation of Common Telco NFVi (CNTT) and supporting its VNF certification.
    • Capitalizing on hardware donations by Ericsson and Dell for an Airship community lab to leverage as 3rd-party CI
    • Empowering the 5G testbed by Ericsson
  • Kata Containers: Looking ahead to 2020, the Kata community will focus on supporting its growing user community, driving innovation with the Kata 2.0 roadmap, and continuing open collaboration with the rapidly expanding container ecosystem.
  • OpenStack: As usual, the OpenStack technical committee members will continue their work to expose special interest groups (SIGs) broadly, to ensure all the different profiles and interests in OpenStack are efficiently represented, working and collaborating together.
  • StarlingX: Edge computing use cases are emerging among organizations running StarlingX in production. At the Shanghai Summit in November 2019, China UnionPay presented how its contactless payment system leverages StarlingX. In 2020, the community is focusing on project upgrades and functional testing as the contributors work towards the 4.0 release.
  • Zuul: Zuul maintainers have begun collaborating with the Gerrit project to add Gerrit Checks API support to Zuul. The goal is to have a Zuul running to help gate the Gerrit project once this feature is added. Looking ahead, the Gerrit Checks API is only one of many features they would like to add to Zuul. From an integration standpoint Gitlab and Bitbucket support is under active development, changes have begun to merge Google Compute Engine support to Nodepool, and Microsoft Azure driver work has begun. Developers still have plans to remove the current single point of failure for the scheduler process and manage job and queue state with the distributed database. This will make it easier to run Zuul reliably without downtime.

In 2020, we are continuing to highlight emerging technologies and ask our community to proactively address the demands of intelligent open infrastructure. For example, workloads like AI and ML require support for new chip architectures, automation at scale down to the bare metal, and integration with many other open source components, all while stretching “cloud” to the edge for 5G and IoT. That’s why “Intelligent Open Infrastructure” will be the theme of our first virtual events of 2020—OpenDev + PTG—where we will focus on the integration of open source components to create an infrastructure that is monitoring itself, replicating itself, and delivering a versatile set of use cases.

A Productive, Engaged Community

With over 100,000 members and millions more visiting OSF websites in 2019 to get involved, the community made huge strides in addressing the needs of what 451 Research predicts will soon be a $7.7B market for OpenStack and a $12B+ combined market for OpenStack & containers.

Some of the world’s largest brands—AT&T, Baidu, Blizzard Entertainment, BMW, China UnionPay, Walmart, and Volvo among others—shared their open source infrastructure use cases and learnings last year. We had the opportunity to engage directly with all of our Gold and Platinum sponsors as well as numerous organizations that have not historically been involved in OpenStack directly. New contributors were on-boarded through multiple internship and mentoring programs as well as through the OpenStack Upstream Institute, which was held in seven countries last year.

We’re looking to make 2020 just as productive.

So even though the future is unknown, we can confidently predict great progress in the OSF community in 2020 as we pour our efforts into growing software projects, the open infrastructure ecosystem, and the open source movement as a whole.

The post What’s Next for Open Infrastructure and the OpenStack Foundation appeared first on Superuser.

by Jonathan Bryce at April 16, 2020 01:06 PM

April 15, 2020

OpenStack Superuser

OpenStack Ironic Bare Metal Program case study: Red Hat

The OpenStack Foundation announced in April 2019 that its Ironic software is powering millions of cores of compute all over the world, turning bare metal into automated infrastructure ready for today’s mix of virtualized and containerized workloads.

Over 30 organizations joined for the initial launch of the OpenStack Ironic Bare Metal Program, and Superuser is running a series of case studies to explore how people are using it.

Today, Red Hat is sharing how they integrate OpenStack Ironic into its product offerings and what benefit this provides its customers.

Why did you select OpenStack Ironic for your bare metal provisioning in your product?

As part of Red Hat’s decision to use TripleO, Ironic offers an abstraction layer for bare metal nodes that provides a similar experience to managing virtual instances with OpenStack. This common experience allows a predictable way of working with bare metal nodes. This includes using bare metal in automation workflows such as CI/CD, testing automation of hardware components, or any kind of application deployment on bare metal. Additionally, Ironic provides a vendor-agnostic community where the vendors work together with the community.

These and other use cases that we keep learning from our customers made incorporating and supporting Ironic an easy decision. We have been supporting Ironic with Red Hat OpenStack Platform starting with our OSP 10 release.

What was your solution before implementing OpenStack Ironic?

As Red Hat was evaluating solutions to facilitate bare metal machine deployment around the time frame of OSP 7, the TripleO project, and ultimately Ironic, was chosen over Foreman. Based on feedback from customers using Ironic to deploy OpenStack, it made sense to use an OpenStack project (Ironic) to deploy OpenStack. That decision helped make Ironic better over the years by introducing new enterprise capabilities such as lifecycle management as part of “day 2 operations” for bare metal nodes, remote management via API and scaling out of bare-metal resources.

What benefits does OpenStack Ironic provide your users?

Ironic provides a more efficient way to consume managed bare metal nodes for our customers, who were asking for a simpler and more cost-effective way to deploy and manage large numbers of bare metal nodes with OpenStack. Ironic allows them to accomplish this for a variety of use cases, including:

  • API driven Installation of physical machines to support cloud infrastructure or tenant workloads.
  • A repeatable and reliable way to test the performance of every new revision of a hardware component for servers; Ironic is well-suited to including physical servers in a CI environment.
  • Performing 3D rendering in a large number of nodes and with the ability to automate the tests.
  • GPU PCI passthrough with Red Hat OpenStack Platform director – Ironic and its family of tools enable retrieval of hardware information prior to installation, which allows OpenStack installers to offer advanced features such as GPU passthrough in their environments – supporting AI/ML workloads such as TensorFlow
  • Multi-tenant access to bare metal with network isolation, allowing multiple teams to work with the same pool of bare metal nodes in a safe way, thanks to integrated support for managing switches via networking-ansible and Neutron

Ultimately, we are bringing lessons learned using APIs to manage virtual machines to bare metal, enabling our customers to leverage automation instead of issuing help tickets to request physical machine resources. Ironic is the toolkit leveraged to facilitate the installation of Red Hat OpenStack Platform and OpenShift’s automated Installer Provisioned Infrastructure (IPI) on bare metal using the Metal3 project.

What feedback do you have to provide to the upstream OpenStack Ironic team?

Our feedback would be to focus on stability and integration with hardware vendors; a good example is the Ironic integration with the Redfish standard over the years. Redfish is the modern, vendor-agnostic bare metal machine management protocol developed by the DMTF (Distributed Management Task Force) and already supported by several major vendors (as well as by Ironic since the OpenStack Pike release). It is an important opportunity to standardize across vendors, as IPMI has done so far, but with more advanced features. DHCP-less types of deployment will also help operators, and Red Hat is working on this via virtual-media provisioning. Integration via Metal3 is also strategic for both Kubernetes and Ironic, so maintaining focus on expanding Ironic with Kubernetes is important to ensure both projects will be more successful and ready for consumption by a large number of users.
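The vendor-agnostic quality of Redfish mentioned above comes from every BMC exposing the same JSON resource tree over HTTPS (e.g. a `Systems` collection under `/redfish/v1`). The following Python sketch illustrates the shape of that data; the payload is a hand-written sample loosely modeled on the Redfish schema, not output from real hardware or from Ironic itself.

```python
import json

# Hand-written sample of a Redfish Systems collection, illustrating the
# vendor-agnostic resource tree a tool like Ironic can walk. In practice
# this JSON would come from an HTTPS GET against the machine's BMC.
sample_systems_collection = json.loads("""
{
  "@odata.id": "/redfish/v1/Systems",
  "Members@odata.count": 2,
  "Members": [
    {"@odata.id": "/redfish/v1/Systems/1"},
    {"@odata.id": "/redfish/v1/Systems/2"}
  ]
}
""")

def system_paths(collection: dict) -> list:
    """Extract the per-node resource paths from a Systems collection."""
    return [member["@odata.id"] for member in collection["Members"]]

print(system_paths(sample_systems_collection))
# → ['/redfish/v1/Systems/1', '/redfish/v1/Systems/2']
```

Because every vendor exposes this same structure, management tooling can follow the `@odata.id` links to power state, boot settings, and inventory without vendor-specific drivers.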

The post OpenStack Ironic Bare Metal Program case study: Red Hat appeared first on Superuser.

by Allison Price at April 15, 2020 01:00 PM

April 14, 2020


Who Is The OpenStack Community?

The OpenStack community is truly one of a kind. Of course, we’d say that as active members since 2011, but it really is true. Since its humble beginnings in 2010, the OpenStack community has only continued to flourish, evolve, and grow. This is a testament to the vast diversity of this community that is driven by transparency, agility, and collaboration.

Its strength lies in its diversity, its members’ willingness to share their knowledge, and their commitment to providing an open-source cloud computing platform that is accessible to everyone. So if you’re new to OpenStack or want to learn more about its unique community of developers and users, we’re here to introduce you to the OpenStack community.

Strength In Diversity

The OpenStack community strives to create an inclusive and safe space for everyone. Moreover, they are committed to cultivating a space for developers and users to drive innovation in open source no matter their race, color, religion, gender identity, or sexual orientation. By doing so, all voices are heard and anyone can rise to a leadership position with the right skills and work ethic. It makes for better open-source software and a stronger community overall.

One of the diversity initiatives at OpenStack is Women of OpenStack (WOO). They work relentlessly to increase diversity within the community by providing professional networking, mentorship, and educational resources to women in the OpenStack community. All women and allies are welcome. Not only does this provide much-needed inclusion within the OpenStack community but it also opens doors to many brilliant minds who otherwise would have difficulty accessing and participating within open-source.

The Open Infrastructure Summit

The Open Infrastructure Summit, formerly known as the OpenStack Summit, is a great opportunity for the OpenStack community and those who share common interests to come together to learn, collaborate, and talk about all things open source. Certainly, it’s open to anyone who has an interest in IT infrastructure and open source. It welcomes everyone, no matter their background or level, to come, learn, and become part of a wider community. Thousands of people coming together to learn and collaborate? That’s something we are definitely on board with.

Our Role In The OpenStack Community

As we mentioned before, the team at VEXXHOST has been part of the OpenStack community since 2011. We understand the significance and importance of being community-driven. The team at VEXXHOST aspires to not only be active contributors and users of OpenStack and members of their vast community, but also sponsors, attendees, and event organizers of open-source events for the community. We’ve hosted OpenStack Canada day and have helped organize the Montreal OpenStack meetup too. What can we say? We love to be right in the action, interacting with users and developers from all walks of life. We donate infrastructure to OpenStack because we believe in their mission and are passionate about open-source. As the OpenStack community continues to grow we aspire to grow with them and support them any way we can.

Do you have OpenStack on your mind? We run exclusively OpenStack-based managed services across our entire infrastructure. Our OpenStack-powered cloud solutions are validated through rigorous testing to provide API compatibility for the OpenStack core services. Moreover, we are currently running the latest release, OpenStack Train, for private clouds and have been since its launch date. Also, did we mention we are Certified OpenStack? So you can rest easy knowing you’ve trusted your infrastructure to the experts. Contact us today to learn how we can help make your private or public cloud a reality.

Would you like to know more about OpenStack Cloud? Download our white paper and get reading!

Conquer the Competition with OpenStack Cloud

The post Who Is The OpenStack Community? appeared first on VEXXHOST.

by Hind Naser at April 14, 2020 08:00 PM

OpenStack Superuser

Women of Open Infrastructure: Meet Amy Wheelus at AT&T

This post is part of the Women of Open Infrastructure series, spotlighting the women in various roles in the community who have helped make Open Infrastructure successful. With each post, we learn more about each woman’s involvement in the community and how they see the future of Open Infrastructure taking shape. If you are interested in being featured or would like to nominate someone to tell their story, please email

This time we’re talking to Amy Wheelus, VP – Broadband and Video Systems at AT&T. She tells Superuser about how she became an active member in the community, and why it’s important for women to get involved with open source.

What’s your role (roles) in the Open Infrastructure community?

In my previous role within AT&T, I led our Network Cloud team which is responsible for AT&T’s internal private network cloud. This team is very active in the Open Infrastructure community, and I had the privilege to support my team in their efforts in Open Infrastructure. I served in the capacity of evangelist for the use of open source software within AT&T and within the telecommunications industry. One specific area that I was involved with in the last year was the creation of the CNTT Common NFVi Telco Task Team whose mission was to create a small number of common reference architectures and implementations. It was a really exciting endeavor and the first reference architecture was based on OpenStack.

What obstacles do you think women face when getting involved in the Open Infrastructure community?   

Just like most of the technology industry, the Open Infrastructure community is male dominated, but in the few years that I have been an active participant, I’ve seen an increase in the number of women presenting talks and taking leadership roles in the community. I think that we have to continue to spotlight women who are changing the game and highlight contributions from women across the field from developers to code reviewers to leaders in their companies.

Why do you think it’s important for women to get involved with open source?

Open Source is about working together to solve common problems and the best way to do that is with diverse thought.  Women think differently than men – not better and not worse – just differently and it is this difference of viewpoint that is important to creating solutions as a team. This is what Open Source is all about – bringing a diverse group of people together to work for the good of everyone.  We need the diversity of thought that women bring to the table.

Efforts have been made to get women involved in open source, what are some initiatives that have worked and why?

I think the best initiatives are when others are personally invested in getting women involved.  We need to each personally commit to encouraging and supporting a woman to become more involved in open source.  If each person (male and female) were to encourage one woman to get more involved – think of the impact that could have on the open source community.

Open source moves very quickly. How do you stay on top of things and what resources have been important for you during this process?

You aren’t joking when you say that open source moves very fast – it is difficult to stay on top of everything going on which is why it is important to focus your energy on a few areas where you can make a difference.  The OpenStack website is a great resource on infrastructure. Linux Foundation’s Open Source Networking site is another resource that I use to stay on top of the trends in the telecommunications and networking areas.

What advice would you give to a woman considering a career in open source? What do you wish you had known?

Open source gives the opportunity to work with hundreds of people around the world to solve common problems. It gives people the opportunity to explore many different areas. If I had understood how working upstream in open source could eliminate costs for my company many years ago, I would have been a believer sooner. I think women can flourish in open source careers whether it is in the technical expert areas or in the management areas. There are no limits to what can be achieved if we bring the best minds of the world together to solve common problems.


The post Women of Open Infrastructure: Meet Amy Wheelus at AT&T appeared first on Superuser.

by Superuser at April 14, 2020 01:00 PM

Galera Cluster by Codership

Announcing the Release Candidate of MySQL 8.0 + Galera 4

A time for new beginnings is upon us, and Codership is pleased to announce the much-anticipated Release Candidate of MySQL 8.0 with Galera 4. It is based on MySQL 8.0.19 and includes the Galera Replication Library 4.5 Release Candidate and wsrep API version 26.


Galera 4 and MySQL 8.0.19 have many new features, but here are some of the highlights:

  • Streaming replication, which supports large transactions by splitting them into smaller fragments that are replicated and applied as the transaction progresses.
  • Improved foreign key support: write set certification rules are optimised, reducing the number of foreign-key-related false conflicts during certification.
  • Group commit is supported and integrated with the native MySQL 8 binary log group commit code. Within the codebase, the commit time concurrency controls were reworked such that the commit monitor is released as soon as the commit has been queued for a group commit. This allows transactions to be committed in groups, while still respecting the sequential commit order.
  • There are new system tables for Galera Cluster that are added to the mysql database: wsrep_cluster, wsrep_cluster_members and wsrep_streaming_log. You can now view cluster membership via system tables.
  • New synchronization functions have been introduced to help applications implement read-your-writes and monotonic-reads consistency guarantees. These functions are: WSREP_LAST_SEEN_GTID(), WSREP_LAST_WRITTEN_GTID() and WSREP_SYNC_WAIT_UPTO_GTID().
  • Poor network conditions can lead to errors in a Galera Cluster, and handling of this is now improved, such that a node will attempt to leave the cluster gracefully if there is ever a possibility of data inconsistency.
  • A more robust wsrep codebase, with better state handling and error handling. 
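To illustrate how an application might use the new synchronization functions for a read-your-writes pattern, here is a minimal shell sketch wrapping the mysql client. The host names are hypothetical; the flags are the standard -N (suppress column headers) and -e (execute a statement).

```shell
# Read-your-writes across nodes: capture the GTID of the last write
# made through one node, then block on another node until that GTID
# has been applied there before reading.
# (Sketch only -- host names are hypothetical.)

last_written_gtid() {
  # -N suppresses column headers, -e runs a single statement
  mysql -h "$1" -N -e "SELECT WSREP_LAST_WRITTEN_GTID();"
}

sync_wait_upto() {
  # Blocks until the node has applied transactions up to the given GTID
  mysql -h "$1" -N -e "SELECT WSREP_SYNC_WAIT_UPTO_GTID('$2');"
}
```

After writing through one node, an application could run `gtid=$(last_written_gtid node-a)` and then `sync_wait_upto node-b "$gtid"` before issuing its read on the second node.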

We are pleased to offer packages for CentOS 7, CentOS 8, and Ubuntu 18.04. Installation instructions are similar to previous releases of Galera Cluster.

Please evaluate Galera 4, available from our downloads page.


Known issues with this release candidate are:

  • This is still a release candidate and is not meant for production use.
  • Upgrading from an earlier version of Galera Cluster is not supported yet.

Please try this release candidate, and provide feedback via our Google Group or via email.



by Sakari Keskitalo at April 14, 2020 11:56 AM

Michael Still

Exporting volumes from Cinder and re-creating COW layers


Today I wandered into a bit of a rat hole discovering how to export data from OpenStack Cinder volumes when you don’t have admin permissions, and I thought it was worth documenting here so I remember it for next time.

Let’s assume that you have a Cinder volume named “child1”, which is a 64gb volume originally cloned from “parent1”. parent1 is a 7.9gb VMDK, but the only way I can find to extract child1 is to convert it to a glance image and then download the entire volume as a raw image. Something like this:

$ cinder upload-to-image $child1 "extract:$child1"

Where $child1 is the UUID of the Cinder volume. You then need to find the UUID of the image in Glance, which the Cinder upload-to-image command will have told you, but you can also find by searching Glance for your image named “extract:$child1”:

$ glance image-list | grep "extract:$child1"

You now need to watch that Glance image until its status is “active”. It will go through a series of steps with names like “queued” and “uploading” first.
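That wait can be scripted. A small sketch that polls glance image-show until the status row reads “active” (parsing the client’s default table output is an assumption; adjust if your client formats differently):

```shell
# Poll a Glance image until it becomes "active". Parsing the
# "| status | ... |" row of the default table output is an
# assumption about the client's formatting.
wait_for_active() {
  local uuid="$1"
  while true; do
    status=$(glance image-show "$uuid" | awk '/\| *status *\|/ {print $4}')
    [ "$status" = "active" ] && break
    sleep 10
  done
}
```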

Now you can download the image from Glance:

$ glance image-download --file images/$child1.raw --progress $glance_uuid

And then delete the intermediate glance image:

$ glance image-delete $glance_uuid

I have a bad sample script which does this in my junk code repository if that is helpful.

What you have at the end of this, in my example, is a 64gb raw disk file. You can convert that file to qcow2 like this:

$ qemu-img convert -O qcow2 $child1.raw $child1.qcow2

But you’re left with a 64gb qcow2 file for your troubles. I experimented with virt-sparsify to reduce the size of this image, but it doesn’t work in my case (no space is saved), I suspect because the disk image has multiple partitions because it originally came from a VMWare environment.

Luckily qemu-img can also re-create the COW layer that existed on the admin-only side of the public cloud barrier. You do this by rebasing the converted qcow2 file onto the original VMDK file like this:

$ qemu-img create -f qcow2 -b $parent1.qcow2 $
$ qemu-img rebase -b $parent1.vmdk $

In my case I ended up with a 289mb $ file, which isn’t too shabby. It took about five minutes to produce that delta on my Google Cloud instance from a 7.9gb backing file and a 64gb upper layer.


by mikal at April 14, 2020 08:27 AM

April 13, 2020


Swarm to Kubernetes workload migration

Swarm support continues, but if you're embracing Kubernetes, we've developed a suite of services and tooling to make things as painless as possible.

by Rick. Pugh at April 13, 2020 05:35 PM

April 10, 2020


Why Open-Source Tools For Cloud Management Matter

Today we’ll be covering why open-source tools for cloud management matter. When it comes to technology in the 21st century, it’s impossible to describe modern IT infrastructure without mentioning the cloud. When deciding which tools are essential for the best open-source results, it’s important to do your research. As you probably already know, the power of cloud computing isn’t going anywhere anytime soon.

Is your business thinking about which ways it can utilize open-source cloud management to better its overall strategy? Good thing the experts at VEXXHOST are here to break down precisely why open-source cloud management matters and how you can use essential open-source tools to ensure that your cloud is running at its full potential.

Why Open-Source Tools For Cloud Management

In a world where everything feels complicated enough, it’s important to embrace simplicity in your cloud management solution. Complexity is the enemy. In the spirit of keeping things simple, open-source tools for cloud management are here to help you.

From anticipating challenges and changes down the road to embracing control to mitigate risks, there are many reasons why open-source tools can be a game-changer for cloud management.

Community Equals Less Risk

One of the most noteworthy features of open source is the vibrant community of contributors that helps build the software. This means that open-source software is not the property of a single company, individual or organization; its destiny is decided by a group of developers with the best interests of open source at the heart of everything they do. When it comes to cloud management, this means a vast community stands behind your cloud deployment and helps you keep some level of control over its direction.

The Option Of Forking

Your business may not want to develop products, and that is completely understandable. That said, the power of open source gives you the option to fork the code tree so that features or functionality can be steered to support your individual needs. Be it for your entire system or simply a subsystem, there is value for your business in having that option.

Anticipate What Changes Lie Ahead

It’s safe to say that open-source management tools will continue to change and innovate alongside open-source software, especially as cloud computing and management continue to grow within businesses and enterprises all over the world. Anticipating the changes that lie ahead helps a business navigate an increasingly complex market and stay relevant in a fiercely competitive landscape.

Trust OpenStack For Cloud Management

It is impossible to discuss open source without getting into OpenStack. OpenStack powers infrastructure-as-a-service private and public clouds. It is a powerhouse that provides compute, storage and networking resources in your own data center or with a cloud provider of your choice. It features a vast ecosystem of projects and is supported by a large community of users and developers. OpenStack is a strong choice for cloud management, as it can run cloud deployments with ease.

In short, if you’re looking to take control of your cloud management then consult the experts at VEXXHOST. Our team of experts is here to help you better understand which tools are best for you and grow your business with an OpenStack powered cloud. Curious to learn more? Contact us today!

Would you like to know more about Zuul? Then download our white paper and get reading!

How to up your DevOps game with Project Gating

How to Up Your DevOps Game with Project Gating:
Zuul – A CI/CD Gating Tool

The post Why Open-Source Tools For Cloud Management Matter appeared first on VEXXHOST.

by Hind Naser at April 10, 2020 03:26 PM

April 08, 2020


A Dive Into Fully Managed Services

When it comes to finding a cloud solution that fits your business, the process can feel overwhelming. There are a variety of managed cloud services and solutions available to businesses big and small. Choosing the right cloud provider is an important decision. From finding the right level of support for your unique business to finding a provider who truly listens to your individual requirements, there are many steps you need to take in order for your business to reap the full benefits of managed services.

Curious to learn more? Let’s take a dive into fully managed services: what they mean, who they are best suited for and the best ways to make the most of them. Keep reading to learn some key information that could benefit your business.

What Do Fully Managed Services Mean?

A fully-managed cloud solution means that your business entrusts a cloud provider to help maintain your private cloud. This means that your business is able to focus on what matters most. Meanwhile, a cloud hosting provider of your choosing will maintain your cloud. Typically businesses experience increased return on investment, improved flexibility and better use of resources when they adopt a fully managed cloud solution. The benefits of finding the right cloud provider are enormous.

When your business uses a fully managed service for your cloud needs, you are taking advantage of the scalability and flexibility of the cloud to power your business. Moreover, it doesn’t matter where your business is in its cloud journey; whether you’re migrating to the cloud or looking to adopt new releases, a good provider will work with your business to find the right plan of action.


A fully managed solution means that your business will never have to worry about infrastructure. Fully managed means fully supported. Your cloud provider of choice is responsible for all the heavy lifting so your business can focus on what it does best.

Upgrades and Security Updates

Worried about falling behind on upgrades and security updates? With a fully managed solution, your cloud provider will ensure that your cloud is running the latest version of all components. They will also make sure that all security compliances are met at all times.


No matter if your business is a start-up or a large international corporation, there is a fully managed solution available to suit your individual needs. Across all industries, the right cloud provider can help determine which infrastructure layout is best suited to your use case.

The Right Provider

When your business trusts the right provider to fully manage your cloud services, you’re optimizing your cloud. The right provider is able to offer a fully managed cloud solution based on experience and expertise. Moreover, they will take over the maintenance and monitoring of your cloud computing components, from compute and storage to networking and beyond. The right provider can make all the difference in your cloud strategy.

The VEXXHOST Difference

If your business is looking for a trusted cloud provider who can help it transition into fully managed cloud services, VEXXHOST is here to help. From network architecture, design, best practices, OpenStack bug fixes, upgrades and more, we go beyond just deploying your infrastructure. Contact our team of experts to learn how we can help optimize your cloud strategy. We’re here to give your business the freedom of a fully managed private cloud solution.

Would you like to know about Private Cloud and what it can do for you? Then download our white paper and get reading!

Fighting Off Certain Death with OpenStack Private Cloud

Fighting Off Certain Death with OpenStack Private Cloud

The post A Dive Into Fully Managed Services appeared first on VEXXHOST.

by Hind Naser at April 08, 2020 07:35 PM

April 07, 2020

OpenStack Superuser

Women of Open Infrastructure – Growing with the Open Source Community


In the 1990s, when I was a child, Bill Gates, the co-founder and CEO of Microsoft, published a book called The Road Ahead. The book summarized the implications of the personal computing revolution and described a future profoundly changed by the arrival of a global information superhighway. Things have indeed changed over the past two decades. Beyond the information superhighway Gates predicted, technology has extended everywhere in our society: e-commerce, social networking applications, network conferencing and cloud computing massively impact our lives every day, far beyond his estimation.

The Road Ahead

Computer science, a technology and field still novel to human beings since the last century, is an unexplored ocean that keeps attracting more and more navigators. Because of the fast evolution of information technology, people can’t see what lies ahead of the boat or forecast which land they are about to discover. I was one of them: when I was admitted to university, I resolutely chose computer science as my major. And now I have discovered my new land, cloud computing.

After I became a postgraduate student, I got totally immersed in that ocean and set out to find a facility for the deployment, orchestration, operation and management of virtual and physical machines. OpenStack became my beacon light, and I got started with the project at that time. It is a magic box powerful enough to cover almost every need, but also a complex system, since it combines so many components, each with different functionality. OpenStack gave me my first impression of how the cloud actually works behind virtualization and how cloud service providers (CSPs) offer their services. These tricks and features made me even more interested as I dug into the land of clouds.

Contributions from company employees and individual volunteers

Things got much more exciting when I learned there is something called the open source community, where tens of thousands of people work together to build one super project, coming from different countries and companies and with different genders, ages and races. The OpenStack community is definitely a typical example, and also one of the powerful leaders in this space.

According to an empirical study on OpenStack conducted by Prof. Minghui Zhou and her team at Peking University in 2018, companies are taking the lead in open source software development, making far more contributions than volunteers. In turn, company engagement inspires individual volunteers, including myself, to participate in the community. I then realized that security and privacy are the top concerns of companies using the cloud, since it shares resources over networks. After settling on cloud security as my research area in school, I used OpenStack to perform penetration tests on clouds facing security threats.


It’s a fait accompli that the field of computer science is heavily skewed toward men, and the gender situation in the open source arena is even more lopsided. The OpenStack community has been making huge efforts to improve diversity and inclusion, spanning leadership, governance, event representation, and code- and noncode-related contributions.

The percentage of women (blue) in governance and leadership positions. The numbers in parentheses are the total members of each group.

The percentages of code- and noncode-related artifacts contributed by women (blue), men (red), and individuals whose gender could not be identified (green). The percentage of women is 10–12%, depending on the data source and the analysis. Participation at the governance and leadership level increased remarkably, and participation in all of the code- and noncode-related contributions also increased.

I dived into cloud computing and OpenStack after finishing my studies, and became a company contributor in January 2018 at Intel, which is a Platinum Member of the OpenStack Foundation and was one of the top five contributing companies at the time. Like the community, Intel has always been committed to developing a culture of equality, diversity and inclusion. At Intel, my team is a big family, and the female members hold up half the sky. In this family, our primary responsibility is to take advantage of technology to change the world and make it better, from the lower-level perspective. We have been enabling server capabilities to enrich the functionality of cloud computing for new use scenarios, e.g. Enhanced Platform Awareness (EPA).

The OpenStack community doesn’t exclude anyone, even elementary school students, and doesn’t hesitate to offer a helping hand to anyone in need. I learned from others and became familiar with many other projects and people inside the community. In May 2019, I was invited to present my analysis of edge computing projects as a speaker at the Open Infrastructure Summit in Denver. The warm encouragement and greetings from the audience encouraged me a lot and made me believe I am part of it.


The next generation is the future of our open source community. Intel continuously supports joint programs with universities and research institutes. In 2015 Intel sponsored the eighth Intel-Cup National Collegiate Software Innovation Contest based on OpenStack and performed a study on OpenStack’s ease of use. Only one out of the 20 teams succeeded in deploying OpenStack independently within 36 hours. The final survey showed that those undergraduate students found it difficult to master OpenStack’s numerous operations, and that the deployment process was complicated, with most issues in the networking part. Nowadays we believe its ease of use has improved dramatically, but we still admit that ease of use is the biggest barrier for newcomers entering this area and participating in the community. Therefore, mentorship always matters.

In the summer of 2019, my team and I got the chance to mentor some brilliant undergraduates from the Joint Institute of the University of Michigan and Shanghai Jiao Tong University (UM-SJTU) in the art of cloud computing and to introduce them to the OpenStack community. The students were drawn in by the charm of cloud and edge computing and put a lot of effort into studying the projects inside the OpenStack community. Although some of the projects looked complex and difficult, they did their best to construct a cloud gaming infrastructure with StarlingX, an edge computing project with low latency and high bandwidth that incorporates OpenStack components and is one of the open infrastructure projects supported by the OSF. In the end, those four students won the Gold Prize at the university’s demo design summit and earned the chance to present their project at the Open Infrastructure Summit Shanghai in November 2019.

The Gold Prize winners of UM-SJTU

In addition to this new blood, the open source community is showing more and more vitality. Joy Liu, an 18-year-old still in high school but already knowledgeable about computer science and cloud computing, showed her enthusiasm for facial recognition on top of edge infrastructure, which motivated me to become her mentor in this area. Based on the Integrated Cloud Native (ICN) blueprint in Akraino, they constructed a reference architecture and successfully presented a session about it at the Open Infrastructure Summit Shanghai.

The deeper a person integrates into the community, the more impressed they become. Things look quite different after my transition from mentee to junior mentor: there are more areas to explore and more places to which I can devote myself. The OpenStack community embraces everyone with the capability and willingness to join, and we will see more chances for the new generation to explore and participate in the future.


Inside the open source community, we have witnessed so many intelligent and skillful people (both men and women) lending their power and expertise to drive the growth of the community and the development of the technology. More and more people are growing with the open source community, moving from learning to offering mentorship, just as I did. To these participants, open source culture is never just reusing free code on GitHub to enhance and promote their products. The culture is an ethos that values sharing. It embraces an approach to technology innovation, invention and development that emphasizes internal and external collaboration across different genders, ages, races, countries and companies.

We are living in the best of eras. As technology evolves rapidly, everything changes every day with new innovations. It might be impossible to reproduce the miracle of The Road Ahead, as it is quite difficult to predict what will happen to the world in the next two decades, whether through 5G, edge computing, artificial intelligence or the Internet of Things (IoT). However, we contributors will join hands and embrace the bright future together with the community.

The post Women of Open Infrastructure – Growing with the Open Source Community appeared first on Superuser.

by Ruoyu Ying at April 07, 2020 01:00 PM

April 06, 2020


Why Your Enterprise Needs OpenStack’s Cloud Infrastructure

It’s no secret that cloud infrastructure is only continuing to grow. An OpenStack powered cloud is quickly becoming the first choice for many enterprises. Unsurprisingly, the total revenue from public cloud IT services in 2019 is expected to grow by nearly 20% into a $330 billion USD industry by 2022. With 69% of enterprises already operating cloud infrastructure for their business workloads, utilizing a cloud solution is the norm.

Is your enterprise still on the fence when it comes to OpenStack’s cloud infrastructure? It’s time to get off the fence and adopt a modern solution for your cloud infrastructure. We’re here to argue that your enterprise needs OpenStack’s cloud infrastructure. Keep reading to learn precisely why you need to get off the fence and fast.

What Is So Appealing About An OpenStack Powered Cloud?

Many enterprises ask: what is so appealing about an OpenStack powered cloud anyway? From rapid innovation and better agility to boosted scalability and easier compliance, there are many reasons why enterprises trust cloud technology. When innovation becomes an enterprise’s most competitive asset, it’s important to stay relevant.

Moreover, when it comes to any enterprise, the ability to work with agility in all work environments can drive successful initiatives where they matter most within an organization. With an OpenStack powered cloud, it suddenly becomes possible to utilize the power and flexibility of the cloud no matter where you are. Whether you’re in the office or working from home, it’s possible to connect as long as you have a strong internet connection and proper credentials to access your cloud. This flexibility not only boosts productivity but also safeguards your team in case anyone needs to work from outside the office, an occurrence that is increasingly common.

The idea of scale is also a major benefit for enterprises. Whether your enterprise is working towards rapid growth or something unexpected arises, the opportunity to scale intelligently is abundant with OpenStack’s cloud infrastructure. Enterprises can easily scale up or down depending on their individual business needs. Having the flexibility of scale reduces cloud waste and saves on overall costs. These are two benefits of OpenStack’s cloud infrastructure that are difficult to ignore.

Compliance and data protection are always priorities for any enterprise. OpenStack’s cloud infrastructure is secure and works to protect confidential data. When you’re looking for a cloud solution that was built with security in mind, OpenStack is the best option. As threats are increasing in scale and severity, it’s important to keep security and compliance in mind.

What Are You Waiting For?

Is your enterprise ready to adopt OpenStack’s cloud infrastructure? Our team of experts at VEXXHOST is here to help you get off the fence and get started with a bespoke cloud solution. We have been using, contributing to and breathing OpenStack since 2011. We’re active members of the OpenStack community and can help your enterprise adopt OpenStack easily and without friction. Contact us today to learn more about how VEXXHOST can help facilitate your migration to an OpenStack powered cloud solution. It’s time to get off the fence and get started.

Would you like to know about Cloud Pricing? Download our white paper and get reading!

Cloud Economics White Paper

Your Guide to Cloud Economics: Public Cloud Vs. Private Cloud

The post Why Your Enterprise Needs OpenStack’s Cloud Infrastructure appeared first on VEXXHOST.

by Hind Naser at April 06, 2020 02:50 PM

April 03, 2020


How OpenStack Can Cut Costs Without Impacting Quality

The notion that it’s possible to cut costs without impacting quality may seem unlikely, but with an OpenStack powered cloud it’s more than possible. When evaluating the total wealth of a business, it’s important to factor in return on investment. For any business, there are various ways to minimize overall costs, and in the age of cloud computing it is important for many companies to limit the costs of their IT infrastructure. Naturally, saving on costs while maintaining the same level of quality is attractive for businesses.

For today’s blog, we’ve compiled some ways your business can use the power of an OpenStack powered cloud to reduce costs without impacting any quality. Curious to cut costs and improve your bottom line with the power of OpenStack? Keep reading to learn how.

Measure Cloud Waste Without Impacting OpenStack’s Quality

The first way to decrease costs without affecting quality is to find any gaps in your cloud strategy. Remember, it’s impossible to measure or change what you’re not aware of. Through OpenStack’s dashboard, Horizon, it’s possible to create and manage volumes within your cloud. Take time to review how your business is utilizing your current cloud and get a clear view of any inefficiencies. Then, as a decision-maker, you’ll have a better idea of what needs to be improved. Idle resources and infrastructure that is too big for the needs of your business can be serious financial drains. With OpenStack, you’re able to attach or detach a volume from an instance as needed, which can reduce your cloud waste. Once you’re able to better monitor your cloud and how it powers your workload, you’ll have a firm idea of how to move forward.
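The review that Horizon enables can also be scripted against the same APIs. A minimal sketch using the unified openstack CLI (assuming python-openstackclient is installed and credentials are loaded) that surfaces unattached volumes as candidates for reclamation:

```shell
# List volumes in "available" status, i.e. not attached to any
# instance -- prime candidates for review as idle cloud waste.
# (Sketch only; confirm a volume is truly unneeded before deleting.)
idle_volumes() {
  openstack volume list --status available -f value -c ID -c Name
}
```

Each reported volume can then be inspected with `openstack volume show <id>` and detached or deleted through Horizon or the CLI once confirmed unused.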

Make A Plan

After you’ve taken the time to identify where your cloud waste is coming from and adjust it through Horizon, the next step is to create a plan to reduce these inefficiencies. Through OpenStack Horizon, users are able to create and manage roles, all while managing projects and users. Create clear goals, processes and deadlines for both decision-makers and your IT department. One of the best ways to create a strong plan is to make small changes and work your way up to more monumental tasks. Removing unneeded processes or redundant code can free up significant costs within an organization or business.

Understand The Importance Of Long Term Investment

Using your OpenStack dashboard and other projects to their full potential is a surefire way to cut costs without impacting the overall quality of your cloud solution. Although it may feel like a serious undertaking to review the inner workings of your cloud, you’ll be able to drive up your overall profit margins by doing so. OpenStack technology gives you an in-depth view of your cloud solution through a simplified dashboard, thus giving you the insights that you need to manage your business to the fullest.

Did you know that 72% of OpenStack users cite cost savings as their number one business driver? If you’re thinking about implementing an OpenStack powered cloud or looking to make the most of your current OpenStack cloud solution, contact the experts at VEXXHOST. We have been using and contributing to OpenStack since 2011, so we have the experience to help you improve your overall cloud infrastructure, reduce waste and increase profits.

Would you like to know more about Zuul? Then download our white paper and get reading!

How to up your DevOps game with Project Gating

How to Up Your DevOps Game with Project Gating:
Zuul – A CI/CD Gating Tool

The post How OpenStack Can Cut Costs Without Impacting Quality appeared first on VEXXHOST.

by Hind Naser at April 03, 2020 07:31 PM

April 01, 2020


Why You Need An OpenStack Powered Private Cloud To Save The Day

Can an OpenStack powered private cloud save the day? In an uncertain world, it’s important to have some form of certainty. If you’re looking for a secure, reliable and cost-effective way to utilize open source technology, then an OpenStack powered private cloud may be exactly what you’re looking for. Whether you want a hosted, fully managed private cloud so you can focus on other needs within your business, or you’re ready to go the extra mile and invest in an on-premise private cloud for ultimate control, a private cloud is here to change the way you do business.

We’re here to argue that OpenStack is the hero that you need in a private cloud-driven world. Don’t believe us? We’ve compiled 4 reasons why an OpenStack private cloud is here to save the day. No superman required.

Reason #1: Cost

Firstly, whether you’re a large business or a small enterprise, at the end of the day cost plays a factor in all IT decisions. A public cloud environment may be suitable for companies with smaller workloads, but if your business works with copious amounts of sensitive data, then a private cloud solution is the better choice. Often, businesses that opt for a public cloud find themselves paying heavily for high-traffic workloads. By opting for a private cloud instead, you’re able to run intensive workloads cost-effectively and ultimately save money for your business or enterprise. A better return on investment always saves the budget, and the day.

Reason #2: Availability

Secondly, with a private cloud, it doesn’t matter where you are in the world. As long as you have an internet connection, you have the ability to reach your open-source cloud. Operational tools and processes for private clouds support high availability, no matter where you are. Moreover, when it comes time for maintenance or upgrades, an OpenStack powered private cloud lets you benefit from new features while experiencing little to no downtime.

Reason #3: Compliance

It should go without saying that if your business is in a highly regulated industry with strict compliance requirements, then an OpenStack private cloud is your best option. Take the financial industry, for example: it’s crucial that confidential banking information remains private, and a data breach could be catastrophic for a financial institution. Furthermore, certain areas of the world, such as Canada or Europe, have stricter data compliance regulations, so it’s important that your private cloud adheres to those guidelines.

Reason #4: Unique Business Requirements

Lastly, your business may have unique requirements that simply aren’t available in a public cloud; a private cloud solution is here to save the day. A cloud provider can offer consulting services, or even full management of your private cloud, if your requirements are truly one of a kind. With the right cloud vendor on your side, it’s possible to build a strong OpenStack powered cloud solution that benefits your business in the short and long term. At VEXXHOST, we have deployments in Canada, Europe and Asia, and our data privacy is compliant with even the most rigorous requirements.

Why an OpenStack Powered Private Cloud?

The idea of upgrading to an OpenStack powered private cloud shouldn’t feel like kryptonite. Let the experts at VEXXHOST consult and guide you through the process. We offer consulting services and fully managed private cloud solutions to suit any size business in any industry. Contact us today to learn more.

Would you like to know about OpenStack Cloud? Download our white paper and get reading!

Conquer the Competition with OpenStack Cloud

The post Why You Need An OpenStack Powered Private Cloud To Save The Day appeared first on VEXXHOST.

by Hind Naser at April 01, 2020 08:14 PM

March 31, 2020


Why Containers Plus OpenStack Is The Best Way To Manage Applications

Let’s talk containers plus OpenStack. It goes without saying that users are looking for applications that offer agility, flexibility and the opportunity to implement automation wherever possible. Here, OpenStack has positioned itself as the go-to deployment environment for containerized applications, meaning that cloud providers are now able to innovate and enable businesses and enterprises to build, deliver and thrive through high-quality applications. With 57% of enterprises stating that they are already using or planning to implement containers on OpenStack, more than half of enterprises will soon be taking advantage of the benefits of containerized applications.

Whether your enterprise is leaning towards a public or private cloud model for application development, today we are going to break down why containers plus OpenStack is the best way to manage applications. Curious to learn more? Keep reading.

Why Containers Plus OpenStack?

When you use containers plus OpenStack to manage applications, you have the opportunity to leverage the best of both worlds: users can develop and deliver better applications in less time. It’s important to keep in mind, though, that containers are not a technology that can stand on its own; a container needs additional technological infrastructure to build, deploy, manage and maintain applications and infrastructure services. This is where OpenStack comes in to make a powerful impact on containers and cloud computing as a whole.

Public Versus Private Cloud: The Big Debate

In some cases, businesses can benefit from using an OpenStack powered private cloud. With an on-premise private cloud solution, it becomes possible to optimize both hardware and software-based environments. Moreover, improved performance is expected thanks to the ability to keep resources on-premise. Businesses can find greater flexibility thanks to the ability to grow their cloud-based on their own schedule.

An on-premise private cloud solution may not be the most practical choice for every business, though. A public cloud solution may work better for projects with a shorter lifespan, since an on-premise or hosted private cloud requires a larger upfront investment. An OpenStack powered public cloud also works for projects that need to implement their cloud solution quickly and efficiently while remaining cost-effective.

This brings us to why containers plus OpenStack is the best way to manage your applications. Kubernetes is an application tool, while OpenStack is an infrastructure tool; at the same time, OpenStack is itself an application. Kubernetes helps make OpenStack easier to run and manage, covering key operations such as availability and upgrades. In turn, OpenStack can launch and run self-service Kubernetes clusters for both end users and their applications. Therefore, OpenStack is simplified with containers, no matter what cloud model you have in place.

Get Ready For Containers Plus OpenStack

It’s evident that containers plus OpenStack are able to provide businesses with a sustainable way of managing their applications within a private or public cloud model. Moreover, thanks to the role of containers within OpenStack, future releases will only continue to enrich the open-source communities.

Thinking about adopting a private or public solution alongside a container orchestration engine? Trust the experts at VEXXHOST to guide you through the implementation of your cloud solution. We’ve been using and contributing to OpenStack since 2011, therefore it’s safe to say we know OpenStack inside and out. Contact us today to learn more about how we can help.

Would you like to know about OpenStack Cloud? Download our white paper and get reading!

Conquer the Competition with OpenStack Cloud

The post Why Containers Plus OpenStack Is The Best Way To Manage Applications appeared first on VEXXHOST.

by Hind Naser at March 31, 2020 06:42 PM

March 30, 2020


The Impact Of Cloud Computing In Fintech

The impact of cloud computing in fintech is evident. While the use of cloud technology within fintech services is still catching on, the opportunity for growth is massive. Even though cloud adoption is still in its early stages, cloud computing in fintech is growing at a steady pace: a total of 22% of all applications within fintech currently run on the cloud. That said, this leaves substantial room for growth and innovation.

Moving forward, banks are now able to partner with fintech startups with ease. Most noteworthy, startups are now developing as cloud-native from the very start. The global fintech market is expected to grow to $124.3 billion USD by the end of 2025, at a compound annual growth rate (CAGR) of 23.84%. As an increasing number of businesses adopt digital payment systems, the demand for fintech solutions is only expected to grow and drive the market forward.
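To make that growth figure concrete, here is a small compound-growth sketch. The $124.3 billion target and the 23.84% CAGR come from the article; the choice of 2020 as the base year (five years of growth) is an assumption for illustration only.

```python
# Compound-growth sketch for the market figure above.
# The USD 124.3B-by-2025 target and 23.84% CAGR come from the text;
# treating 2020 as the base year is an illustrative assumption.
def project(base_value: float, cagr: float, years: int) -> float:
    """Project a value forward at a constant annual growth rate."""
    return base_value * (1 + cagr) ** years

TARGET_2025 = 124.3  # USD billions, per the article
CAGR = 0.2384

# Implied 2020 market size if the target is reached after 5 years of growth:
implied_2020 = TARGET_2025 / (1 + CAGR) ** 5
print(f"Implied 2020 size: ~${implied_2020:.1f}B")

# Sanity check: growing that base forward recovers the target.
print(f"Projected 2025 size: ~${project(implied_2020, CAGR, 5):.1f}B")
```

Under these assumptions, the implied present-day market is a little over $40 billion, which is consistent with the article’s claim of substantial room for growth.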

Curious to learn more about the benefits of cloud computing in fintech and some critical trends that are shaping fintech as we know it? Keep reading to find out.

Critical Trends and Benefits Of Cloud Computing In Fintech

Some of the major benefits of adopting cloud computing within the fintech industry are increasing flexibility, better security, driven innovation and a rise in scalability. These benefits are currently shaping critical trends that are driving growth within fintech.

1. Data Aggregation

Securely storing findata, such as account balance information, spending habits, budgets and cash flow, is a must. For instance, compiling information from banking databases allows for proper processing. The availability, as well as the confidentiality, of this findata is extremely valuable not only for financial institutions but for users as well.

2. Self Service Application

From the surge of self-service kiosks to being able to control a bank account from a simple application on your handheld device, self-service is giving increased autonomy and flexibility to users. When users can access financial information, send money and even create a budget from their phone, they have more opportunities to take control of their finances. In other words, thanks to these developments in software, users can complete a transaction without the help of any human representative.

3. Security

When it comes to any financial information, security is an obvious priority. Thanks to the power and security of cloud computing, fintech leaders can rest assured that their data is safe. Traditional IT setups run the risk of cyberattacks, from phishing emails onwards, whereas cloud computing provides high resilience through its security architecture.

The True Impact Of The Cloud

In conclusion, it’s safe to say that the fintech community is dynamic and is driving the industry shift towards cloud computing, a shift that is only expected to keep growing. Certainly, no matter what industry you’re in, the experts at VEXXHOST can help you build a private cloud infrastructure. No matter the scale of your business, you can benefit from critical developments within the cloud. Contact us today to learn more.

Would you like to know about OpenStack Cloud? Download our white paper and get reading!

Conquer the Competition with OpenStack Cloud

The post The Impact Of Cloud Computing In Fintech appeared first on VEXXHOST.

by Hind Naser at March 30, 2020 03:13 PM

March 27, 2020


A Brief Comparison of Containers and Virtual Machines

Although containers and virtual machines may be perceived to be the same, they are fundamentally quite different technologies. The most significant difference is that containers virtualize the operating system so that multiple workloads can run on a single OS instance. In contrast, virtual machines virtualize the hardware to run many operating system instances.

Today we are going to give a brief overview of some of the differences between containers and virtual machines. Keep reading to learn more about the differences between the two technologies.

Virtual Machines

Virtual machines came from the necessity to get more power and capacity out of bare metal servers. They are created by running software on top of physical servers to reproduce a particular hardware system. The software layer that makes this possible is called a hypervisor. A hypervisor, also known as a virtual machine monitor, creates and runs virtual machines, and is situated between the hardware and the virtual machines. In other words, its main purpose is to virtualize the server.

Virtual machines have the capacity to run different operating systems on the same physical server and they can be quite large in size – up to several gigabytes. Moreover, each virtual machine has a separate operating system image, which continues to increase the need for memory and storage. This can be an added challenge in everything from testing and development, to production and even disaster recovery. Certainly, it can limit the portability of applications and a cloud solution.

The hypervisor is quite the workhorse. For instance, it is responsible for mediating access to the NICs, and likewise to the storage, for every virtual machine. Because the hypervisor sits in the middle, a significant amount of the underlying hardware is masked from the operating systems above it.

Containers


Containers are a useful way to run isolated systems on a single server or host operating system. Since the growth in popularity of operating system virtualization, software can now run predictably from one server environment to another. The containers themselves sit on top of a physical server and its host operating system. Each container shares the host operating system kernel, binaries, and libraries, and these shared components are available only as read-only.

One of the major highlights of containers is that they are extremely lightweight, only megabytes in size, meaning they have the potential to start in seconds instead of the minutes a virtual machine may need. Thanks to a common operating system, containers reduce management overhead for tasks such as bug fixes and other maintenance. To sum up, the big difference between containers and virtual machines is that containers are significantly lighter and more portable.
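The memory argument above can be made concrete with a back-of-envelope calculation: every VM carries its own guest OS image, while containers share the host kernel and pay for a runtime once. All the sizes below are illustrative assumptions, not measurements.

```python
# Back-of-envelope comparison of running N isolated workloads as VMs
# (each with its own guest OS) versus containers (sharing the host kernel).
# The GB figures are illustrative assumptions, not benchmarks.
def vm_memory_gb(workloads: int, app_gb: float, guest_os_gb: float) -> float:
    # Every VM pays for its own operating system image.
    return workloads * (app_gb + guest_os_gb)

def container_memory_gb(workloads: int, app_gb: float, shared_runtime_gb: float) -> float:
    # Containers share the kernel; the runtime cost is paid once.
    return workloads * app_gb + shared_runtime_gb

N = 20
vms = vm_memory_gb(N, app_gb=0.5, guest_os_gb=1.5)
containers = container_memory_gb(N, app_gb=0.5, shared_runtime_gb=1.5)
print(f"VMs: {vms} GB, containers: {containers} GB")
```

With these example numbers, twenty VM-based workloads need several times the memory of the same workloads as containers, which is the intuition behind the "lighter and more portable" claim.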

Concluding Containers and Virtual Machines

In conclusion, when it comes to containers compared to virtual machines, there are many differences. With virtual machines, the hardware runs multiple operating system instances. In contrast, containers offer the benefits of portability and speed, which help streamline software and its development.

In short, are you curious to learn more about how virtual machines and containers can work within your cloud strategy? Contact us today to speak to one of the experts at VEXXHOST.

Would you like to know about OpenStack Cloud? Download our white paper and get reading!

Conquer the Competition with OpenStack Cloud

The post A Brief Comparison of Containers and Virtual Machines appeared first on VEXXHOST.

by Hind Naser at March 27, 2020 03:42 PM

March 26, 2020


Object Storage With OpenStack Swift

OpenStack Swift is an OpenStack project that offers cloud storage software that enables easy storage and retrieval of data. This becomes possible through a simple Application Program Interface (API). If you’re looking to take advantage of software built for scale then Swift is an excellent choice. It’s optimized for availability, as well as longevity, to benefit the data set in its entirety. Think of Swift as the best option for storing unstructured data that you’d like to grow without limits.

Today we will explore OpenStack Swift alongside its key features and how it can be of use within your OpenStack powered cloud. We will dive into how Swift is scalable and available, reliable and secure and how it can integrate seamlessly through OpenStack APIs.

Object Storage With OpenStack Swift

OpenStack Object Storage, otherwise known as OpenStack Swift, manages the long-term storage of large amounts of data across clusters. It is a cost-effective storage solution for your OpenStack powered cloud. Swift was one of the original OpenStack projects and continues to be very relevant today. It is possible to use Swift for the storage, backup and archiving of unstructured data: anything from documents, static web content, video files, image files and emails to virtual machine images. Each object stored has associated metadata as part of the extended attributes of the file.
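To illustrate the data model described above, here is a toy in-memory sketch: containers hold named objects, and every object carries its own metadata alongside its bytes. This is purely an illustration of the concepts; it is not the Swift API, and all class and method names here are invented for the example.

```python
# A toy in-memory model of Swift's data model: containers hold objects,
# and each object carries its own metadata. Illustrative only; this is
# not the real Swift API.
class ToyObjectStore:
    def __init__(self):
        # container name -> {object name -> (data, metadata)}
        self._containers: dict[str, dict[str, tuple[bytes, dict]]] = {}

    def put_container(self, name: str) -> None:
        self._containers.setdefault(name, {})

    def put_object(self, container: str, name: str, data: bytes, **metadata) -> None:
        self._containers[container][name] = (data, metadata)

    def get_object(self, container: str, name: str) -> bytes:
        return self._containers[container][name][0]

    def get_metadata(self, container: str, name: str) -> dict:
        return self._containers[container][name][1]

store = ToyObjectStore()
store.put_container("backups")
store.put_object("backups", "db.dump", b"...",
                 content_type="application/octet-stream", retention="90d")
print(store.get_metadata("backups", "db.dump"))
```

In real Swift the same shape appears over HTTP: accounts contain containers, containers contain objects, and metadata rides along as headers on each object.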

Let’s Talk About Features

Scalable and Available

Swift offers a scalable infrastructure with high availability to store as much data as needed without having to worry about your overall capacity in the long term. This cloud object storage offers the best in terms of service availability supported by strong durability as well as reliability. With Swift, you’ll never have to struggle with storage limitations or inaccessibility.

Reliable and Secure

One of the benefits of OpenStack Swift is that it is reliable and secure. With Swift, your systems can store multiple copies of data across your infrastructure. Data is encrypted in transit over SSL, meaning you can always access your data with the highest security possible, and a trusted cloud provider can help ensure that your data is SSL ready. With enterprise-grade security, you can rest easy knowing that your data is only accessible to those who need it. Users also benefit from the seamless integration of other OpenStack services through APIs and an advanced dashboard control panel, so your business can make the most of those APIs.

How To Get Started

If you’re looking to start with OpenStack Swift to reap its many storage benefits then contact the experts at VEXXHOST. We’re here to help you get your OpenStack powered cloud off the ground and utilize the relevant OpenStack projects to create a unique cloud that suits all your needs. We can support you through every step and make sure that you’re getting the most out of your OpenStack cloud. Whether you’re a small business or larger enterprise, there is a custom cloud solution for you. Contact us today to learn more about OpenStack Swift and what it can do for you in your new cloud ecosystem.

Would you like to know more about Zuul? Download our white paper and get reading!

How to Up Your DevOps Game with Project Gating:
Zuul – A CI/CD Gating Tool

The post Object Storage With OpenStack Swift appeared first on VEXXHOST.

by Hind Naser at March 26, 2020 04:43 PM

OpenStack Superuser

Inside open infrastructure: The latest from the OpenStack Foundation

Spotlight on: 10 Years Of OpenStack

Many amazing tech milestones happened in 2010. Steve Jobs launched the first iPad. Sprint announced its first 4G phone. Facebook reached 500 million users. OpenStack was born.

In real time, the pace of change in the tech industry often feels glacial, but looking at things over a ten-year span, a lot of stark differences have emerged since 2010. So before you plug back in your AirPods, fire up Fortnite and watch a new show on Disney+, let’s take a look at how OpenStack has transformed the open source industry in the past 10 years.

The Decade Challenge – OpenStack Edition

What began as an endeavor to bring greater choice in cloud solutions to users, combining Nova for compute from NASA with Swift for object storage from Rackspace, has since grown into a strong foundation for open infrastructure. None of it would be possible without the consistent growth of the OpenStack community. In the 10 years since it was established, the community has grown into one of the largest global open source communities: over 105,000 members in 187 countries from over 700 organizations, backed by over 100 member companies! Developers from around the world work together daily on a six-month release cycle with developmental milestones.

Looking back to OpenStack in 2010, we were ecstatic to celebrate our first year of growth from a couple dozen developers to nearly 250 unique contributors in the Cactus release (the third OpenStack release). Fast forward to 2019: a total of 1,518 unique change authors approved more than 47,500 changes and published two major releases (Stein and Train). In between, the community successfully delivered 16 software releases on time. Today, we are not only celebrating our community’s achievements over the past 10 years, but also looking forward to its continued prosperity in the next 10.

Your Top 10 Favorite Moments With OpenStack Are…

As you can see, there are so many milestones to celebrate in the past 10 years of OpenStack with the community. We want to hear from you about what your top 10 favorite things related to OpenStack are. Go into this survey and choose a question to answer. The topics range from your top 10 most memorable moments of OpenStack, your top 10 most used features in OpenStack to your top 10 favorite cities you visited for OpenStack. We are looking forward to hearing your favorites, and we invite you all to join us and celebrate 10 awesome years of OpenStack.

OpenStack Foundation news

  • Based on the input from the community, board, and the latest information available from the health experts, we’ve made the decision not to hold the OpenDev + PTG in Vancouver this June. Instead, we’re exploring ways to turn it into a virtual event and would love the help of everyone in the community. Learn more in this mailing list post by Mark Collier.
  • There will be two community meetings next week to discuss the OpenStack 10th anniversary planning, current community projects, and an update on OSF events. Learn more in this mailing list.

Airship: Elevate your infrastructure

  • The Airship community will be holding a virtual meet-up on March 31 from 1400-2200 UTC that will serve much the same purpose as the originally planned KubeCon face-to-face team meeting. Goals of the meetup include aligning on Airship use cases and high-level design, finalizing actionable low-level design for the upcoming release, and reviewing work in progress.
  • Catch up on the latest news in the March update, live on the Airship blog now.
  • Connect with the Airship community on Slack! We’re mirroring to #airshipit on IRC so you can use your preferred platform. Join at

Kata Containers: The speed of containers, the security of VMs

  • We have just released the latest stable releases, 1.9.6 and 1.10.2, and cut the 1.11.0-alpha1 release. The 1.9.6 and 1.10.2 stable releases include the latest bug fixes, while 1.11.0-alpha1 notably prepares for the upcoming 1.11.0 release. See the message here. We look forward to stabilizing it in the next few weeks. Thank you to the users and contributors!

OpenStack: Open source software for creating private and public clouds

  • If you’re running OpenStack, please share your feedback and deployment information in the 2020 OpenStack User Survey. It only takes 20 minutes and anonymous feedback is shared directly with developers!
    • Why is it important for you to take the user survey? Find out here!
  • We are entering the final stages of the Ussuri development cycle, with feature freeze happening on April 6, in preparation for the final release on May 13. The schedule for the next cycle (Victoria) was published, with a final release planned for October 14. The ‘W’ release (planned for Q2, 2021) will be called ‘Wallaby’.
  • In the coming weeks the OpenStack community will renew its leadership, with 5 TC seats up for election, as well as all PTL positions. Nominations are open until March 31!
  • A framework for proposing crazy ideas for OpenStack has been created, with the first idea being posted there: project Teapot.

StarlingX: A fully featured cloud for the distributed edge

  • The StarlingX community recently held their Community Meetup in Chandler, AZ. Check out the updates on the current development activities and plans for future releases on the StarlingX blog.
  • If you’re currently testing StarlingX, running PoC implementations or running the software in production take a few minutes and fill out a short survey to provide feedback to the community. All information is confidential to the OpenStack Foundation unless you designate that it can be public.

Zuul: Stop merging broken code

  • Are you a Zuul user? Please take a few moments to fill out the Zuul User Survey to provide feedback and information around your deployment. All information is confidential to the OpenStack Foundation unless you designate that it can be public.
  • Zuul versions 3.17.0 and 3.18.0 have been released. Both releases address security issues and you should refer to the release notes for more details. Additionally, socat and kubectl must now be installed on the executors.
  • Nodepool 3.12.0 has been released. This adds support for Google Cloud instances. Refer to the release notes for more information.

Upcoming Open Infrastructure and Community Events

For more information about these events, please contact

Questions / feedback / contribute

This newsletter is written and edited by the OSF staff to highlight open infrastructure communities. We want to hear from you! If you have feedback, news or stories that you want to share, reach us through . To receive the newsletter, sign up here.

The post Inside open infrastructure: The latest from the OpenStack Foundation appeared first on Superuser.

by Sunny Cai at March 26, 2020 01:00 PM

From Tutorials to Case Studies, Share your Open Infrastructure Wisdom with Superuser

Superuser is an online publication for the community, by the community. We’re publishing our editorial process to actively solicit submissions from the open infrastructure community, and we want to hear from you!

When Superuser launched in 2014, the goal was to serve as a conduit for our community of developers and users to share their experiences building and operating open infrastructure around the world. Summits, PTGs, Open Infrastructure Days, and other community gatherings are great ways to connect offline, and Superuser is an online way to keep those connections going. Content has historically been developed by a team of writers working with the foundation, as well as a handful of community members, and we want to invite the entire community to share their lessons learned, use cases, tutorials, and overall open infrastructure thoughts here as well.

We want to hear your ideas for articles you’d like to contribute about open source infrastructure and open source projects. Share your ideas with the Superuser editorial team. You might want to read over the Superuser Editorial Guidelines first, before reaching out.

So, what topics are fair game? Most anything relating to building and operating open infrastructure is a candidate:

  • AI/machine learning
  • Bare metal
  • Container infrastructure
  • CI/CD
  • Edge computing
  • Telecom + NFV
  • Public cloud
  • Private & hybrid cloud
  • Security
  • General open infrastructure thoughts

We’ll be particularly interested in content that’s relevant to projects in the open infrastructure community, like Airship, Ansible, Ceph, Docker, Kata Containers, Kubernetes, ONAP, OpenStack, Open vSwitch, OPNFV, StarlingX, Zuul, and others.

User case studies are always a big hit, but so are how-to tutorials, best practices, lessons learned, event recaps, project roadmap updates, and new release information. We’ll even consider product version updates and thought leadership content, provided it’s vendor neutral and focused on the community rather than competitors. Superuser is not a trade publication: we’re here to amplify the values of openness, collaboration, and solving shared problems.

So, what are some examples of content we’ll decline? Anything that reads like an advertisement or sales solicitation won’t make the cut. Opinion submissions must be supported by verifiable facts and voiced in a collaborative tone that invites community participation in finding solutions everyone can use equally.

Examples of submissions that fit the editorial mission of Superuser include:

  • The latest Kata Containers release features
  • A personal take on the latest Open Infrastructure Summit
  • How Verizon Media is using OpenStack at scale
  • How to run a packaged function with OpenStack Qinling
  • Running StarlingX at the edge for telcos
  • How to run project gating with Zuul

What ideas do you have? Share them with us using this short form to engage with our editorial team.

The post From Tutorials to Case Studies, Share your Open Infrastructure Wisdom with Superuser appeared first on Superuser.

by Allison Price at March 26, 2020 08:00 AM

March 25, 2020


Block Storage With OpenStack Cinder

If you’re looking for scalability within your cloud storage then OpenStack Cinder is worth highlighting. Cinder users are able to dramatically decrease and increase their storage capacity without having to worry about expensive physical storage systems or servers. Meaning that businesses and organizations alike can benefit from the best flexibility at lower costs.

Today we are going to go over the basics of OpenStack Cinder. Keep reading to learn more about how Cinder block storage is a fundamental part of an OpenStack starter kit and how its block storage capabilities create a more versatile and secure cloud solution.

Let’s Talk OpenStack Cinder

When it comes to volume storage for bare metal and virtual machines, it’s important to have high integration compatibility. Cinder can provision volume storage for virtual machines through Nova, and for bare metal through Ironic, giving it the flexibility to work with both projects.

The compatibility of Cinder doesn’t stop there, though. Cinder is also Ceph compatible, meaning users can work with Ceph storage without any added complications to their cloud. Through its snapshot management functionality, Cinder can back up data stored on block storage volumes; restoring storage volumes or creating new block storage volumes is also possible. Moreover, Cinder simplifies code management by giving every service a single interface to talk to, handling the provisioning and deletion needs of users with efficiency and ease of use.

Simplified Management and Secure Communication

Users can simplify the management of their storage devices thanks to Cinder’s simple management API. The implementation of one code path for all backends makes this possible: instead of maintaining different code for each backend, a single interface facilitates the process. Cinder is the gatekeeper that creates volumes on the different backends. With this simple integration, it’s no longer necessary to build integrations between other services and each individual backend; Cinder becomes the channel through which they communicate securely.

Another way in which Cinder enables secure communication is through seamless encryption, giving users the best of OpenStack’s block storage and key management technologies. Cinder integrates with key management so that the associated key can be used to decrypt a volume’s contents when the server starts. Encrypted data is not accessible to anyone without the appropriate key, which means your content remains secure even in the rare event that someone takes the server.
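The core idea, that data at rest is opaque without the key held by the key manager, can be sketched in a few lines. The toy keystream cipher below is for illustration only; real Cinder volume encryption uses standard disk-encryption formats and a proper key manager, not this scheme, and the variable names here are invented for the example.

```python
# Minimal sketch of encryption-at-rest: volume data is unreadable
# without the key held externally. Toy cipher for illustration only;
# this is NOT how Cinder actually encrypts volumes.
import hashlib
from itertools import count

def keystream(key: bytes, length: int) -> bytes:
    """Derive a pseudo-random keystream from the key (illustrative only)."""
    out = b""
    for block in count():
        if len(out) >= length:
            break
        out += hashlib.sha256(key + block.to_bytes(8, "big")).digest()
    return out[:length]

def xor_cipher(key: bytes, data: bytes) -> bytes:
    # XOR is its own inverse, so the same call encrypts and decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

key = b"held-by-the-key-manager"       # hypothetical key, never stored on the volume
plaintext = b"confidential volume contents"
ciphertext = xor_cipher(key, plaintext)

# Without the key the bytes are opaque; with it, the data round-trips.
assert ciphertext != plaintext
assert xor_cipher(key, ciphertext) == plaintext
```

The point of the sketch is the separation of concerns: whoever holds the disk but not the key sees only ciphertext, which is exactly the guarantee described above.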

Get Started

Thinking it’s about time your business or organization upgrade to an OpenStack powered cloud? Trust the experts at VEXXHOST to help make your cloud aspirations a reality. We are OpenStack certified and have been using and contributing to OpenStack since 2011. Certainly, with nearly a decade of experience, we are there to help you through every step of the way. Contact us today to get started with an OpenStack powered cloud that’s unique to your business needs.

Would you like to know about Private Cloud and what it can do for you? Download our white paper and get reading!

Fighting Off Certain Death with OpenStack Private Cloud

The post Block Storage With OpenStack Cinder appeared first on VEXXHOST.

by Hind Naser at March 25, 2020 04:29 PM

OpenStack Superuser

Establishing Trusted Network Interconnection of OpenStack Clouds


Applications and the network have become distributed. Applications are fragmented into micro-services, so the network is composed of different clouds from different regions. With this, the need to control all aspects of these resources has increased, due to growing security concerns. SD-WAN is available for the enterprise, but what if the data centers or endpoints are few and spread across multiple regions? This article focuses on the interconnection of OpenStack clouds using Neutron APIs.

The Neutron to Neutron Communication

There may be a situation where you need to interconnect two or more separate data centers or NFV PoPs powered by OpenStack, located in different regions. These data centers may want an on-demand interconnection, and the interconnection may require private addressing and isolation to share data end-to-end over a dedicated communication channel. A combination of on-demand setup with private addressing and isolation is possible with Neutron VPN as a Service (VPNaaS). However, this solution involves IPsec, which has a performance overhead, and for a proper interconnection you want a solution that avoids that per-packet overhead.

One possible architecture for interconnecting OpenStack clouds is to add an orchestrator between the clouds, which then interconnects the resources in each participating cloud. However, this has several demerits:

The orchestrator may need admin rights to establish networking between the data centers’ resources, which is difficult when different organizations are involved. Adding an orchestrator also exposes its APIs to attack, and the orchestrator itself becomes a complex system to operate and secure.

The remaining recommended option is to extend the Neutron APIs to interconnect resources, such as the virtual routers of OpenStack powered data centers. This involves two facets: a user-facing API and a Neutron-to-Neutron API.

With the user-facing API, a symmetrical call is made by a centrally located admin to the Neutron modules in each data center. A link is established once both data centers approve.

With the Neutron-to-Neutron API, each Neutron component can check whether the symmetrical interconnection has been defined on the other side. In this way, Neutron components in different regions coordinate to set up these private, isolated interconnections without orchestration or network device configuration.

The solution was discussed at the OpenStack Summit Berlin back in 2018. It is applicable to use cases where:

  • OpenStack is involved in the data center
  • Multiple regions are involved with one OpenStack cloud
  • Multiple OpenStack clouds have coordinated trust entities
  • Different OpenStack cloud instances use different SDN solutions

You can download the presentation from here and watch a demo.

The post Establishing Trusted Network Interconnection of OpenStack Clouds appeared first on Superuser.

by Sagar Nangare at March 25, 2020 01:00 PM

Christopher Smart

Updating OpenStack TripleO Ceph nodes safely one at a time

Part of the process when updating Red Hat’s TripleO based OpenStack is to apply the package and container updates, via the update run step, to the nodes in each Role (like Controller, CephStorage and Compute, etc). This is done in-place, before the ceph-upgrade (ceph-ansible) step, the converge step and reboots.

openstack overcloud update run --nodes CephStorage

Rather than do an entire Role straight up however, I always update one node of that type first. This lets me make sure there were no problems (and fix them if there were), before moving onto the whole Role.

I noticed recently when performing the update step on CephStorage role nodes that OSDs and OSD nodes were going down in the cluster. This was then causing my Ceph cluster to go into backfilling and recovering (norebalance was set).

We want all of these nodes to be done one at a time, as taking more than one node out at a time can potentially make the Ceph cluster stop serving data (all VMs will freeze) until it finishes and gets the minimum number of copies in the cluster. If all three copies of data go offline at the same time, it’s not going to be able to recover.

My concern was that the update step does not check the status of the cluster, it just goes ahead and updates each node one by one (the separate ceph update run step does check the state). If the Ceph nodes are updated faster than the cluster can fix itself, we might end up with multiple nodes going offline and hitting the issues mentioned above.

So to work around this I just ran this simple bash loop. It gets a list of all the Ceph Storage nodes and, before updating each one in turn, checks that the status of the cluster is HEALTH_OK before proceeding. This would not be possible if we updated by Role instead.

source ~/stackrc
for node in $(openstack server list -f value -c Name | grep ceph-storage | sort -V); do
  # Wait until the cluster reports HEALTH_OK before touching the next node
  while [[ ! "$(ssh -q controller-0 'sudo ceph -s | grep health:')" =~ "HEALTH_OK" ]]; do
    echo "cluster not healthy, sleeping before updating ${node}"
    sleep 5
  done
  echo "cluster healthy, updating ${node}"
  openstack overcloud update run --nodes "${node}" || { echo "failed to update ${node}, exiting"; exit 1 ;}
  echo "updated ${node} successfully"
done

I’m not sure if the cluster going down like that is expected behaviour, but I opened a bugzilla for it.

by Chris at March 25, 2020 07:50 AM

March 24, 2020


Looking At OpenStack Glance

OpenStack Glance is an image service that provides an agile and convenient way to copy and launch instances. With Glance, users are able to upload, discover, register and retrieve virtual machine images with speed and ease. That is to say that you’ll be able to spend less time working with images and metadata definitions and more time working on your application.

Today we are going to take a look at Glance, OpenStack’s powerful yet agile image service. From giving users the power to upload OpenStack compatible images, to managing server images for your cloud, Glance is worth the double-take. Keep reading to see for yourself.

OpenStack Glance: An Image Speaks A Thousand Words

When it comes to OpenStack Glance, there are many features worth highlighting. Starting with the central image repository: users can update images through OpenStack’s centralized image storage service, and replicate or snapshot images and store them within their OpenStack powered cloud. This also addresses the issue of configuration drift, as the centralized image repository keeps images consistent across all infrastructure.

Furthermore, when you need your servers to boot up quickly and efficiently, copy-on-write is there to work with agility. Not only that, but copy-on-write has the potential to save your business or enterprise money by reducing total disk usage. Moreover, it increases efficiency by using stored images as templates to get new servers up and running consistently. It’s more efficient to provision multiple servers this way than to manually install a server operating system and then configure each additional service by hand. This means that Glance’s copy-on-write saves users both time and money, two very valuable resources for any business or enterprise.
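As a rough illustration of that template-style provisioning (the image, flavor, and network names below are hypothetical placeholders), a single stored image can seed several identical servers in one CLI call:

```shell
# Sketch only: assumes a cloud with an "ubuntu-20.04" image,
# an "m1.small" flavor, and a "private" network already defined.
provision_web_tier() {
  # --min/--max boot three identical instances from the same stored image,
  # instead of installing and configuring three operating systems by hand
  openstack server create \
      --image ubuntu-20.04 \
      --flavor m1.small \
      --network private \
      --min 3 --max 3 \
      web
}
```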

Uploads, Downloads and Compatibility

Glance enables secure uploads and downloads through signed image validation, meaning data can be validated by Glance before being stored within your cloud. If validation fails, the upload fails and the image is deleted. The same goes for image downloads: if the data cannot pass verification on download, it will not be used. Secure sharing of multiple image types across tenants is also possible with Glance; images can be shared securely with specific users or with all users.
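A minimal sketch of cross-tenant image sharing with the openstack CLI (the image and project names are hypothetical placeholders):

```shell
# Sketch only: "golden-image" and "consumer-project" are placeholders.
share_image() {
  # Mark the image shareable, then add the consuming project as a member
  openstack image set --shared golden-image
  openstack image add project golden-image consumer-project
}

accept_image() {
  # Run as the consuming project: accept the pending membership so the
  # image shows up in that project's image list
  openstack image set --accept golden-image
}
```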

In terms of compatibility, Glance isn’t restricted to specific servers, as it can boot virtual machines alongside Cinder and Ironic. Thanks to Glance’s RESTful API, querying virtual machine image metadata, as well as retrieving the actual image, is possible. Finally, one of the benefits of OpenStack’s advanced technologies is Glance’s simple integration with Cinder block storage under your regular infrastructure, allowing for robust storage and easy-to-use virtualized block storage management.
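Glance’s RESTful API can also be exercised directly. A minimal sketch with curl, where the endpoint URL and the token (normally obtained from Keystone) are placeholders you would supply from your own cloud:

```shell
# Sketch only: GLANCE_URL and OS_TOKEN must come from your own cloud.
GLANCE_URL="https://cloud.example.com:9292"
OS_TOKEN="replace-with-a-keystone-token"

list_images() {
  # Query image metadata for all images visible to the token's project
  curl -s -H "X-Auth-Token: ${OS_TOKEN}" "${GLANCE_URL}/v2/images"
}

download_image() {
  # Retrieve the actual image bits for a given image ID
  curl -s -H "X-Auth-Token: ${OS_TOKEN}" \
      "${GLANCE_URL}/v2/images/$1/file" -o image.qcow2
}
```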

Getting Started With Glance

In conclusion, we’ve gone over how Glance provides simple OpenStack based image storage in your cloud solution. Glance is an easy way to copy and launch instances, lets you quickly and securely upload and download images, and even features block storage integration. Thinking of upgrading your cloud solution? Every OpenStack powered cloud includes these image storage capabilities.

We at VEXXHOST have been working with OpenStack since 2011 and are OpenStack Certified. Moreover, this means that no one knows an OpenStack powered cloud as we do. Our cloud services contain OpenStack software that validates through testing to provide API compatibility for OpenStack core services.

Curious to learn more about Glance and other OpenStack core services? Contact our team of experts today to learn how Glance can help elevate your cloud strategy.

Would you like to know about OpenStack Cloud? Then download our white paper and get reading!

Conquer the Competition with OpenStack Cloud

The post Looking At OpenStack Glance appeared first on VEXXHOST.

by Hind Naser at March 24, 2020 02:34 PM

March 23, 2020


Let’s Get Networking: OpenStack Neutron

OpenStack Neutron is a networking component of OpenStack. Although it’s considered to be one of the more complicated projects in your OpenStack Compute set up, it’s also extremely powerful. This powerhouse is able to create virtual networks, routers, firewalls and beyond. Moreover, through Neutron, OpenStack is able to offer “network connectivity as a service”. Through the implementation of Neutron API, other OpenStack services manage interface devices.

Neutron is an OpenStack powered, flexible and secure software-defined network. Today we are going to break down the ins and outs of Neutron, like how it allows you to build single-tenant networks while still giving you complete control over your network architecture. Keep reading to see precisely why Neutron is a powerhouse in your OpenStack powered cloud solution.

OpenStack Neutron: The Building Block Of An OpenStack Cloud

As we mentioned earlier, Neutron is a networking component of OpenStack. It is a standalone service that interacts with other projects such as Keystone, Horizon, Nova, and Glance. Similarly to the projects that it runs alongside, deploying Neutron involves deploying several processes on each host. Neutron, like other services, relies on Keystone for the authentication and authorization of all API requests. Horizon integrates with the Neutron API so that tenants can create networks. Nova, in turn, interacts with Neutron through API calls, plugging each virtual NIC on the instance through the use of Open vSwitch.
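That interaction is easiest to see from the CLI. In this sketch (the image, flavor, and network names are hypothetical placeholders), booting a server on a Neutron network makes Nova call the Neutron API to create and plug the port:

```shell
# Sketch only: "cirros", "m1.tiny", and "private" are placeholders.
boot_on_network() {
  # Nova asks Neutron for a port on the named network, then plugs the
  # resulting virtual NIC into the instance
  openstack server create \
      --image cirros \
      --flavor m1.tiny \
      --network private \
      demo-vm
  # The port Neutron created for the instance is visible afterwards
  openstack port list --server demo-vm
}
```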

With OpenStack Neutron you’re able to reap the benefits of total peace of mind thanks to network segmentation. By splitting computer networks into subnetworks, Neutron boosts performance and improves security. With network connections segmented across systems, each virtual machine can sit in its own isolated network.

One of the core requirements of OpenStack Neutron is to provide connectivity to and from instances. This happens through one of two network categories: provider networks and tenant networks. Your OpenStack administrator creates provider networks, which map directly onto an existing physical network in your chosen data center; they can be shared amongst tenants as part of the network creation process. In contrast, tenant networks are created by users within groups of users, or tenants, and cannot be shared with other tenants. Furthermore, without a Neutron router, these networks are isolated from each other and from everything else.
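The two categories can be sketched with the openstack CLI. The physical network name, VLAN segment, and CIDR below are hypothetical placeholders that depend entirely on your deployment:

```shell
# Sketch only: physnet1, the VLAN segment, and the CIDR are placeholders.
create_provider_network() {
  # Admin-created, mapped onto an existing physical network, shared
  openstack network create --share --external \
      --provider-network-type vlan \
      --provider-physical-network physnet1 \
      --provider-segment 100 \
      provider-net
}

create_tenant_network() {
  # User-created and isolated until routed; not shared with other tenants
  openstack network create tenant-net
  openstack subnet create --network tenant-net \
      --subnet-range 192.168.10.0/24 tenant-subnet
  # A Neutron router connects the tenant network to the outside world
  openstack router create tenant-router
  openstack router add subnet tenant-router tenant-subnet
  openstack router set --external-gateway provider-net tenant-router
}
```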

How To Get Started

In conclusion, there’s so much to OpenStack Neutron that we couldn’t cover it all in a single blog post. We’ve laid some foundation on understanding the basics of Neutron and how it builds and uses simple networks for instance connectivity. Moreover, if you’re looking to learn more about Neutron, its role within OpenStack and what it can do for your business get in touch with our team of experts. Certainly, we’ll be happy to listen to your cloud computing requirements and help create a cloud strategy that is right for your business or enterprise. Contact us to start with an OpenStack powered cloud solution today.

Would you like to know more about Zuul? Then download our white paper and get reading!

How to up your DevOps game with Project Gating

How to Up Your DevOps Game with Project Gating:
Zuul – A CI/CD Gating Tool

The post Let’s Get Networking: OpenStack Neutron appeared first on VEXXHOST.

by Hind Naser at March 23, 2020 07:01 PM


Running Remote Workshops

In the current climate, where we are either unable to travel to collaborate or because we just want to reduce our impact on the environment, the ability to effectively collaborate remotely is critical.

by Shaun OMeara at March 23, 2020 05:40 PM


Tips, Tricks, and Best Practices for Distributed RDO Teams

While a lot of RDO contributors are remote, there are many more who are not and now find themselves in lock down or working from home due to the coronavirus. A few members of the RDO community requested tips, tricks, and best practices for working on and managing a distributed team.


I mean, obviously, there needs to be enough bandwidth, which might normally be just fine, but if you have a partner and kids also using the internet, video calls might become impossible.

Communicate with the family to work out a schedule or join the call without video so you can still participate.

Manage Expectations

Even if you’re used to being remote AND don’t have a partner / family invading your space, there is added stress in the new reality.

Be sure to manage expectations with your boss about priorities, focus, goals, project tracking, and mental health.

This will be an ongoing conversation that evolves as projects and situations evolve.

Know Thyself

Some people NEED to get ready in the morning, dress in business clothes, and work in a specific space. Some people can wake up, grab their laptop and work from the bed.

Some people NEED to get up once an hour to walk around the block. Some people are content to take a break once every other hour or more.

Some people NEED to physically be in the office around other people. Some will be totally content to work from home.

Sure, some things aren’t optional, but work with what you can.

Figure out what works for you.

Embrace #PhysicalDistance Not #SocialDistance

Remember to stay connected socially with your colleagues. Schedule a meeting without an agenda where you chat about whatever.

Come find the RDO Technical Community Liaison, leanderthal, and your other favorite collaborators on Freenode IRC on channels #rdo and #tripleo.

For that matter, don’t forget to reach out to your friends and family.

Even introverts need to maintain a certain level of connection.

Further Reading

There’s a ton of information about working remotely / distributed productivity and this is, by no means, an exhaustive list, but to get you started:

Now let’s hear from you!

What tips, tricks, and resources do you recommend to work from home, especially in this time of stress? Please add your advice in the comments below.

And, as always, thank you for being a part of the RDO community!

by Rain Leander at March 23, 2020 03:14 PM


Planet OpenStack is a collection of thoughts from the developers and other key players of the OpenStack projects. If you are working on OpenStack technology you should add your OpenStack blog.


Last updated:
May 30, 2020 02:38 AM
All times are UTC.
