April 08, 2020


A Dive Into Fully Managed Services

When it comes to finding a cloud solution that fits your business, the process can feel overwhelming. There is a variety of managed cloud services and solutions available to businesses big and small, and choosing the right cloud provider is an important decision. From finding the right level of support for your unique business to finding a provider who truly listens to your individual requirements, there are many steps you need to take in order for your business to reap the full benefits of managed services.

Curious to learn more? Let’s take a dive into fully managed services: what they mean, who they are best suited for, and the best ways to make the most of them. Keep reading to learn some key information that could benefit your business.

What Do Fully Managed Services Mean?

A fully managed cloud solution means that your business entrusts a cloud provider to help maintain your private cloud. Your business is able to focus on what matters most while a cloud hosting provider of your choosing maintains your cloud. Typically, businesses experience increased return on investment, improved flexibility and better use of resources when they adopt a fully managed cloud solution. The benefits of finding the right cloud provider are enormous.

When your business uses a fully managed service for its cloud needs, you are taking advantage of the scalability and flexibility of the cloud to power your business. Moreover, it doesn’t matter where your business is in its cloud journey; whether you’re migrating to the cloud or looking to adopt new releases, a good provider will work with you to find the right plan of action.


A fully managed solution means that your business never has to worry about infrastructure. Fully managed means fully supported. Your cloud provider of choice is responsible for all the heavy lifting so your business can focus on what it does best.

Upgrades and Security Updates

Worried about falling behind on upgrades and security updates? With a fully managed solution, your cloud provider will ensure that your cloud is running the latest version of all components. They will also make sure that all security compliances are met at all times.


No matter if your business is a start-up or a large international corporation, there is a fully managed solution available to suit your individual needs. Across all industries, the right cloud provider can help determine which infrastructure layout is best suited to your use case.

The Right Provider

When your business trusts the right provider to fully manage your cloud services, you’re optimizing your cloud. The right provider is able to offer a fully managed cloud solution based on their experience and expertise. Moreover, they will take over the maintenance and monitoring of your cloud computing components, from compute and storage to networking and beyond. The right provider can make all the difference in your cloud strategy.

The VEXXHOST Difference

If your business is looking for a trusted cloud provider who can help you transition into fully managed cloud services, VEXXHOST is here to help. From network architecture, design and best practices to OpenStack bug fixes, upgrades and more, we go beyond just deploying your infrastructure. Contact our team of experts to learn how we can help optimize your cloud strategy. We’re here to give your business the freedom of a fully managed private cloud solution.

Would you like to know about Private Cloud and what it can do for you? Download our white paper and get reading!

Fighting Off Certain Death with OpenStack Private Cloud


The post A Dive Into Fully Managed Services appeared first on VEXXHOST.

by Angela Bruni at April 08, 2020 07:35 PM

April 07, 2020

OpenStack Superuser

Women of Open Infrastructure – Growing with the Open Source Community


In the 1990s, when I was a child, Bill Gates, co-founder and CEO of Microsoft, published a book called The Road Ahead. The book summarized the implications of the personal computing revolution and described a future profoundly changed by the arrival of a global information superhighway. Things have indeed changed over the past two decades. Beyond the information superhighway he predicted, technology has extended to every corner of our society: e-commerce, social networking applications, network conferencing and cloud computing now massively impact our lives every day, far beyond his estimation.

The Road Ahead

Computer science, a technology still novel to humanity since the last century, is an unexplored ocean that has been attracting more and more navigators. Due to the fast evolution of information technology, people can’t imagine what lies ahead of the boat, nor forecast which land they are about to discover. I was one of them: when I was admitted to university, I resolutely chose computer science as my major. And now I have discovered my new land – cloud computing.

After I became a postgraduate student, I got totally immersed in the ocean and attempted to find a facility for the deployment, orchestration, operation and management of virtual and physical machines. OpenStack became the beacon light for me, and I got started with the project at that time. It is such a magic box, powerful enough to cover almost every need, but also a complex system, since it combines so many components, each with different functionality. OpenStack gave me my first impression of how the cloud actually works behind virtualization and how cloud service providers (CSPs) offer their services. These tricks and features made me far more interested in digging into the land of clouds.

Contributions from company employees and individual volunteers

Things looked much more exciting when I learned there was something called the open source community, where tens of thousands of people work together to build one super project, all coming from different countries and companies and with different genders, ages and races. The OpenStack community is definitely a typical example, and one of the powerful leaders of that movement.

According to an empirical study on OpenStack conducted by Prof. Minghui Zhou and her team at Peking University in 2018, companies are taking the lead in open source software development by making far more contributions than volunteers. Company engagement, in turn, inspires individual volunteers to participate in the community, including myself. I then realized that security and privacy are the top concerns of companies using the cloud, since it shares resources over networks. After settling on cloud security as my research area in school, I used OpenStack to perform penetration tests on clouds against the security threats they face.


It’s a fait accompli that the field of computer science is heavily skewed toward men, and the gender situation in the open source arena is even more lopsided. The OpenStack community has been making huge efforts to improve diversity and inclusion, spanning leadership, governance, event representation, and code- and noncode-related contributions.

The percentage of women (blue) in governance and leadership positions. The numbers in parentheses are the total members of each group.

The percentages of code- and noncode-related artifacts contributed by women (blue), men (red), and individuals whose gender could not be identified (green). The percentage of women is 10–12%, depending on the data source and the analysis. Participation at the governance and leadership level increased remarkably, and participation in all of the code- and noncode-related contributions also increased.

I dived into cloud computing and OpenStack after finishing my studies, and became a company contributor in January 2018 at Intel, a Platinum Member of the OpenStack Foundation and one of the top five contributing companies at the time. Like the community, Intel has always been committed to developing a culture of equality, diversity, and inclusion. At Intel, my team is a big family, and the female members hold up half the sky. In this family, our primary responsibility is to take advantage of technology to change the world and make it better from the lower-level, infrastructure perspective. We have been enabling server capabilities to enrich the functionality of cloud computing for new usage scenarios, e.g. Enhanced Platform Awareness (EPA).

The OpenStack community doesn’t exclude anyone, not even elementary school students, and doesn’t hesitate to offer a helping hand to anyone in need. I learned from others and became familiar with many other projects and people inside the community. In May 2019, I was invited to present my analysis of edge computing projects as a speaker at the Open Infrastructure Summit in Denver. Warm encouragement and greetings from the audience encouraged me a lot and made me believe I am part of it.


The next generation is the future of our open source community. Intel continuously supports joint programs with universities and research institutes. In 2015, Intel sponsored the eighth Intel-Cup National Collegiate Software Innovation Contest based on OpenStack and performed a study on OpenStack’s ease of use. Only one of the 20 teams succeeded in deploying OpenStack independently within 36 hours. The final survey showed that the undergraduate students found it difficult to master the numerous OpenStack operations, and that the deployment process was complicated, with most issues in the networking part. Nowadays we believe its ease of use has improved dramatically, but we still admit ease of use is the biggest barrier for newcomers entering this area and participating in the community. Therefore, mentorship always matters.

In the summer of 2019, my team and I got a chance to mentor some brilliant undergraduates from the Joint Institute of the University of Michigan and Shanghai Jiaotong University (UM-SJTU) in the art of cloud computing and introduce them to the OpenStack community. The students were drawn to the charm of cloud and edge computing and put a lot of effort into studying the projects inside the OpenStack community. Although some of the projects looked a bit complex and difficult, they still tried their best and constructed a cloud gaming infrastructure with StarlingX, an edge computing project for low latency and high bandwidth that incorporates OpenStack components and is one of the open infrastructure projects supported by the OSF. In the end, those four students managed to win the Gold Prize at the university’s demo design summit and received a chance to present a session about their project at the Open Infrastructure Summit Shanghai in November 2019.

The Gold Prize winners of UM-SJTU

In addition to this new blood, the open source community is showing more and more vitality. Joy Liu, an 18-year-old still in high school but already knowledgeable about computer science and cloud computing, showed her enthusiasm for facial recognition on top of edge infrastructure, which motivated me to become her mentor in this area. Based on the Integrated Cloud Native (ICN) blueprint in Akraino, they constructed a reference architecture and successfully presented a session about it at the Open Infrastructure Summit Shanghai.

The deeper a person integrates into the community, the more impressed they feel. Things look quite different after my transition from mentee to junior mentor. There seem to be more areas to explore and more places where I can devote myself. The OpenStack community embraces everyone who has the capability and willingness to join, and we will see more chances for the new generation to explore and participate in the future.


Inside the open source community, we have witnessed so many intelligent and skillful people (both men and women) delivering their power and expertise to drive the growth of the community, as well as the development of the technology. More and more people are growing with the open source community, from learning to offering mentorship, just as I did. To these participants, open source culture is never just reusing free code on GitHub to enhance and promote their products. The culture is an ethos that values sharing. It embraces an approach to technology innovation, invention and development that emphasizes internal and external collaboration across different genders, ages, races, countries and companies.

We now live in the best of eras. As technology evolves rapidly, everything changes every day with innovation. It might be impossible to reproduce the miracle of The Road Ahead, as it is quite difficult to predict what will happen in the world over the next two decades, be it through 5G, edge computing, artificial intelligence, or the Internet of Things (IoT). However, we contributors will join hands and embrace the bright future together with the community.

The post Women of Open Infrastructure – Growing with the Open Source Community appeared first on Superuser.

by Ruoyu Ying at April 07, 2020 01:00 PM

April 06, 2020


The ultimate guide to Kubernetes

Here at Mirantis we're committed to making things easy for you to get your work done, so we've decided to put together this guide to Kubernetes.

by Nick Chase at April 06, 2020 09:47 PM


Why Your Enterprise Needs OpenStack’s Cloud Infrastructure

It’s no secret that cloud infrastructure is only continuing to grow. An OpenStack powered cloud is quickly becoming the first choice for many enterprises. Unsurprisingly, total revenue from public cloud IT services, which grew by nearly 20% in 2019, is expected to reach $330 billion USD by 2022. With 69% of enterprises already operating cloud infrastructure for their business workloads, utilizing a cloud solution is the norm.

Is your enterprise still on the fence when it comes to OpenStack’s cloud infrastructure? It’s time to get off the fence and adopt a modern solution for your cloud infrastructure. We’re here to argue that your enterprise needs OpenStack’s cloud infrastructure. Keep reading to learn precisely why you need to get off the fence and fast.

What Is So Appealing About An OpenStack Powered Cloud?

Many enterprises ask: what is so appealing about an OpenStack powered cloud anyway? From rapid innovation and better agility to boosted scalability and easier compliance, there are many reasons why enterprises are trusting cloud technology. When innovation becomes an enterprise’s most competitive asset, it’s important to stay relevant.

Moreover, when it comes to any enterprise, the ability to work with agility in all work environments can drive successful initiatives where they matter most within an organization. With an OpenStack powered cloud, it suddenly becomes possible to utilize the power and flexibility of the cloud no matter where you are. Whether you’re in the office or working from home, you can connect as long as you have a strong internet connection and the proper credentials to access your cloud. This flexibility not only boosts productivity but also safeguards your team in case anyone needs to work from outside the office, an occurrence that is increasingly common.

The idea of scale is also a major benefit for enterprises. Whether your enterprise is working towards rapid growth or something unexpected arises, the opportunity to scale intelligently is abundant with OpenStack’s cloud infrastructure. Enterprises can easily scale up or down depending on their individual business needs. Having the flexibility of scale reduces cloud waste and saves on overall costs. These are two benefits of OpenStack’s cloud infrastructure that are difficult to ignore.
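As a sketch of what scaling up can look like in practice, the commands below use the standard OpenStack CLI to resize a running server to a larger flavor. The server and flavor names are hypothetical examples, and the confirm syntax may vary slightly with your client version:

```shell
# Resize a running instance to a larger flavor (names are examples)
openstack server resize --flavor m1.large my-server

# Wait for the status to reach VERIFY_RESIZE...
openstack server show my-server -f value -c status

# ...then confirm the resize to make it permanent
openstack server resize confirm my-server
```

Scaling back down to a smaller flavor during quiet periods follows the same pattern, which is one way the pay-for-what-you-need model described above is realized.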

Compliance and data protection are always priorities for any enterprise. OpenStack’s cloud infrastructure is secure and works to protect confidential data. When you’re looking for a cloud solution that was built with security in mind, OpenStack is the best option. As threats are increasing in scale and severity, it’s important to keep security and compliance in mind.

What Are You Waiting For?

Is your enterprise ready to adopt OpenStack’s cloud infrastructure? Our team of experts at VEXXHOST is here to help you get off the fence and get started with a bespoke cloud solution. We use, contribute to and breathe OpenStack, and have been doing so since 2011. We’re active members of the OpenStack community and can help your enterprise adopt OpenStack easily and without friction. Contact us today to learn more about how VEXXHOST can help facilitate your migration to an OpenStack powered cloud solution. It’s time to get off the fence and get started.

Would you like to know about Cloud Pricing? Download our white paper and get reading!

Cloud Economics White Paper

Your Guide to Cloud Economics: Public Cloud Vs. Private Cloud

The post Why Your Enterprise Needs OpenStack’s Cloud Infrastructure appeared first on VEXXHOST.

by Angela Bruni at April 06, 2020 02:50 PM

April 03, 2020


How OpenStack Can Cut Costs Without Impacting Quality

The notion that it’s possible to cut costs without impacting quality may seem unlikely, but with an OpenStack powered cloud it’s more than possible. When evaluating the total wealth of a business, it’s important to factor in return on investment. For any business, there are various ways to minimize overall costs, and in the age of cloud computing it is important for many companies to limit the costs of their IT infrastructure. As a result, saving on costs while maintaining the same level of quality can be attractive for businesses.

For today’s blog, we’ve compiled some ways your business can use the power of an OpenStack powered cloud to reduce costs without impacting any quality. Curious to cut costs and improve your bottom line with the power of OpenStack? Keep reading to learn how.

Measure Cloud Waste Without Impacting OpenStack’s Quality

The first way to decrease costs without affecting quality is to find any gaps in your cloud strategy. Remember, it’s impossible to measure or change what you’re not aware of. Through OpenStack’s dashboard, Horizon, it’s possible to create and manage volumes within your cloud. Take time to review how your business is utilizing your current cloud and get a clear view of any inefficiencies. Then, as a decision-maker, you’ll have a better idea of what needs to be improved. Idle resources and infrastructure that is oversized for the needs of your business can be serious financial drains. With OpenStack, you’re able to attach or detach volumes from an instance as needed, which can reduce your cloud waste. Once you’re able to better monitor your cloud and how it powers your workload, you’ll have a firm idea of how to move forward.
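Everything Horizon does for volumes is also exposed through the standard OpenStack CLI, which is handy for auditing idle storage in scripts. A minimal sketch, assuming the block storage service (Cinder) is deployed and using hypothetical resource names:

```shell
# List existing volumes to spot idle, unattached storage
openstack volume list

# Create a 10 GB volume and attach it to a running instance
openstack volume create --size 10 app-data
openstack server add volume web-server app-data

# Detach and delete a volume that is no longer needed
openstack server remove volume web-server app-data
openstack volume delete app-data
```

Running the list command on a schedule and flagging volumes with no attachments is a simple, low-effort way to surface the cloud waste discussed above.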

Make A Plan

After you’ve taken the time to identify where your cloud waste is coming from and adjust it through Horizon, the next step is to create a plan to reduce these inefficiencies. Through Horizon, users are able to create and manage roles, projects, and users. Create clear goals, processes, and deadlines for both decision-makers and your IT department. One of the best ways to create a strong plan is to make small changes and work your way up to more monumental tasks. Removing unneeded processes or redundant code can free up significant costs within an organization or business.
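The same project, user and role management shown in Horizon can be scripted with the OpenStack CLI, backed by the Keystone identity service. A sketch with hypothetical names; note that the default role name (`member` vs. the older `_member_`) varies by deployment:

```shell
# Create a project and a user scoped to it
openstack project create --description "Cost-review team" cost-review
openstack user create --project cost-review --password s3cret reviewer

# Grant the user a role on the project
openstack role add --project cost-review --user reviewer member

# Verify the assignment
openstack role assignment list --project cost-review --names
```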

Understand The Importance Of Long Term Investment

Using your OpenStack dashboard and other projects to their full potential is a surefire way to cut costs without impacting the overall quality of your cloud solution. Although it may feel like a serious undertaking to review the inner workings of your cloud, doing so lets you drive up your overall profit margins. OpenStack gives you an in-depth view of your cloud solution through a simplified dashboard, providing the insights you need to manage your business to the fullest.

Did you know that 72% of OpenStack users cite cost savings as their number one business driver? If you’re thinking about implementing an OpenStack powered cloud or looking to make the most of your current OpenStack cloud solution, contact the experts at VEXXHOST. We have been using and contributing to OpenStack since 2011, so we have the experience to help you improve your overall cloud infrastructure, reduce waste and increase profits.

Would you like to know more about Zuul? Download our white paper and get reading!

How to up your DevOps game with Project Gating

How to Up Your DevOps Game with Project Gating:
Zuul – A CI/CD Gating Tool

The post How OpenStack Can Cut Costs Without Impacting Quality appeared first on VEXXHOST.

by Angela Bruni at April 03, 2020 07:31 PM

April 01, 2020


Why You Need An OpenStack Powered Private Cloud To Save The Day

Can an OpenStack powered private cloud save the day? In an uncertain world, it’s important to have some form of certainty. If you’re looking for a secure, reliable and cost-effective way to utilize open source technology, then an OpenStack powered private cloud may be exactly what you’re looking for. Whether you want a hosted, fully managed private cloud so you can focus on other needs within your business, or you want to go the extra mile and invest in an on-premise private cloud for ultimate control, a private cloud is here to change the way you do business.

We’re here to argue that OpenStack is the hero you need in a private cloud-driven world. Don’t believe us? We’ve compiled four reasons why an OpenStack private cloud is here to save the day. No Superman required.

Reason #1: Cost

Firstly, whether you’re a large business or a small enterprise, at the end of the day cost plays a factor in all IT decisions. A public cloud environment may be suitable for companies with smaller workloads, but if your business works with copious amounts of sensitive data, then a private cloud solution is the better choice. Often, businesses that opt for a public cloud find themselves paying dearly for high-traffic workloads. By opting for a private cloud instead, you’re able to run intensive workloads and ultimately save money for your business or enterprise. A better return on investment always saves the budget, and the day.

Reason #2: Availability

Secondly, with a private cloud, it doesn’t matter where you are in the world: as long as you have an internet connection, you have the ability to build your open source cloud. Operational tools and processes for private clouds support high availability, no matter where you are. Moreover, when it comes time for maintenance or upgrades, an OpenStack powered private cloud lets you benefit from new features while experiencing little to no downtime.

Reason #3: Compliance

It should go without saying that if your business is in a highly regulated industry with strict compliance requirements, then an OpenStack private cloud is your best option. Take the financial industry, for example: it’s crucial that confidential banking information remains private, and a data breach could be catastrophic for a financial institution. Furthermore, certain areas of the world, such as Canada or Europe, have stricter data compliance procedures. It’s important that your private cloud adheres to these strict guidelines.

Reason #4: Unique Business Requirements

Lastly, your business may have unique requirements that simply aren’t available in a public cloud. A private cloud solution is here to save the day. A cloud provider can offer consulting services or even full management of your private cloud if your requirements are truly one of a kind. With the right cloud vendor on your side, it’s possible to build a strong OpenStack powered cloud solution that benefits your business in the short and long term. Our data privacy practices are compliant even under the most rigorous requirements, and we at VEXXHOST have deployments in Canada, Europe and Asia.

Why OpenStack Powered Private Cloud

The idea of upgrading to an OpenStack powered private cloud shouldn’t feel like kryptonite. Let the experts at VEXXHOST consult and guide you through the process. We offer consulting services and fully managed private cloud solutions to suit businesses of any size in any industry. Contact us today to learn more.

Would you like to know more about OpenStack Cloud? Download our white paper and get reading!

Conquer the Competition with OpenStack Cloud

The post Why You Need An OpenStack Powered Private Cloud To Save The Day appeared first on VEXXHOST.

by Angela Bruni at April 01, 2020 08:14 PM

March 31, 2020


Why Containers Plus OpenStack Is The Best Way To Manage Applications

Let’s talk containers plus OpenStack. It goes without saying that users are looking for applications that offer agility, flexibility and the opportunity to implement automation wherever possible. Here, OpenStack has established itself as the go-to deployment environment for containerized applications, meaning that cloud providers are now able to innovate and enable businesses and enterprises to build, deliver and thrive through the use of high-quality applications. With 57% of enterprises stating that they are already using or planning to implement containers on OpenStack, more than half of enterprises will be taking advantage of the benefits of containerized applications.

Whether your enterprise is leaning towards a public or private cloud model for application development, today we are going to break down why containers plus OpenStack is the best way to manage applications. Curious to learn more? Keep reading.

Why Containers Plus OpenStack?

When you use containers plus OpenStack to manage applications, you have the opportunity to leverage the best of both worlds. Users are able to develop and deliver better applications in less time. It’s important to keep in mind, though, that containers are not a technology that can stand on its own: containers need additional technological infrastructure to build, deploy, manage and maintain applications and infrastructure services. This is where OpenStack comes in to make a powerful impact on containers and cloud computing as a whole.

Public Versus Private Cloud: The Big Debate

In some cases, businesses can benefit from using an OpenStack powered private cloud. With an on-premise private cloud solution, it becomes possible to optimize both hardware and software-based environments. Moreover, improved performance can be expected thanks to the ability to keep resources on-premise. Businesses gain greater flexibility thanks to the ability to grow their cloud on their own schedule.

An on-premise private cloud solution may not be the most practical choice for every business, though. A public cloud solution may work better for projects that have a shorter lifespan, since an on-premise or hosted private cloud does require a larger upfront investment. An OpenStack powered public cloud also works for projects that need to implement their cloud solution quickly and efficiently while remaining cost-effective.

Now this brings us to why containers plus OpenStack is the best way to manage your applications. Kubernetes is an application management tool, while OpenStack is an infrastructure tool; at the same time, OpenStack is itself an application. Kubernetes helps make OpenStack easier to run and manage, covering key operations such as high availability, upgrades and more. OpenStack, in turn, is able to launch and run self-service Kubernetes clusters for both end users and their applications. Therefore, OpenStack is simplified with containers no matter what cloud model you have in place.
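The self-service Kubernetes clusters described above are provided by OpenStack’s container infrastructure service, Magnum. A sketch using the standard CLI, with hypothetical image, keypair, flavor and network names that would need to match your deployment:

```shell
# Define a reusable Kubernetes cluster template
openstack coe cluster template create k8s-template \
  --coe kubernetes \
  --image fedora-coreos-latest \
  --keypair mykey \
  --external-network public \
  --flavor m1.medium \
  --master-flavor m1.medium

# Launch a self-service cluster from the template
openstack coe cluster create demo-cluster \
  --cluster-template k8s-template \
  --master-count 1 \
  --node-count 2

# Fetch a kubeconfig so kubectl can talk to the new cluster
openstack coe cluster config demo-cluster
```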

Get Ready For Containers Plus OpenStack

It’s evident that containers plus OpenStack provide businesses with a sustainable way of managing their applications within a private or public cloud model. Moreover, thanks to the role of containers within OpenStack, future releases will only continue to enrich the open source community.

Thinking about adopting a private or public cloud solution alongside a container orchestration engine? Trust the experts at VEXXHOST to guide you through the implementation of your cloud solution. We’ve been using and contributing to OpenStack since 2011, so it’s safe to say we know OpenStack inside and out. Contact us today to learn more about how we can help.

Would you like to know more about OpenStack Cloud? Download our white paper and get reading!

Conquer the Competition with OpenStack Cloud

The post Why Containers Plus OpenStack Is The Best Way To Manage Applications appeared first on VEXXHOST.

by Angela Bruni at March 31, 2020 06:42 PM

March 30, 2020


The Impact Of Cloud Computing In Fintech


The impact of cloud computing in fintech is evident. While the use of cloud technology within fintech services is still catching on, the opportunity for growth is massive. Even though cloud adoption is still in its early stages, cloud computing in fintech is growing at a steady pace; a total of 22% of all applications within fintech currently run on the cloud. That leaves substantial room for growth and innovation.

Moving forward, banks are now able to partner with fintech startups with ease. Most noteworthy, startups are developing as cloud-native from the very start. The global fintech market is expected to grow to $124.3 billion USD by the end of 2025, at a compound annual growth rate (CAGR) of 23.84%. As an increasing number of businesses adopt digital payment systems, the demand for fintech solutions is only expected to grow and drive market growth.
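To make the CAGR figure concrete, the small helper below shows how a compound annual growth rate projects a market size forward. The 2019 baseline value in the example is a hypothetical illustration chosen to land near the article’s 2025 figure, not a number from the article:

```python
def future_value(start: float, rate: float, years: int) -> float:
    """Project a value forward at a compound annual growth rate."""
    return start * (1 + rate) ** years

def cagr(start: float, end: float, years: int) -> float:
    """Recover the compound annual growth rate between two values."""
    return (end / start) ** (1 / years) - 1

# Hypothetical: a market worth $34.5B in 2019 growing at 23.84% per year
size_2025 = future_value(34.5, 0.2384, 6)
print(f"Projected 2025 size: ${size_2025:.1f}B")  # roughly $124B
```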

Curious to learn more about the benefits of cloud computing in fintech and some critical trends that are shaping fintech as we know it? Keep reading to find out.

Critical Trends and Benefits Of Cloud Computing In Fintech

Some of the major benefits of adopting cloud computing within the fintech industry are increasing flexibility, better security, driven innovation and a rise in scalability. These benefits are currently shaping critical trends that are driving growth within fintech.

1. Data Aggregation

Securely storing findata such as account balance information, spending habits, budgeting, and cash flow is a must. Compiling information from banking databases allows for proper processing, and the availability, as well as the confidentiality, of this findata is extremely valuable not only for financial institutions but for users as well.

2. Self Service Application

From the surge of self-service kiosks to being able to control a bank account from a simple application on your handheld device, self-service is giving users increased autonomy and flexibility. When users are able to access financial information, send money and even create a budget via their phone, they have more opportunities to take control of their finances. In other words, thanks to these developments in software, users can complete a transaction without the help of any human representative.

3. Security

When it comes to any financial information, security is an obvious priority. Thanks to the power and security of cloud computing, fintech leaders can rest assured that their data is safe. Traditional IT setups run the risk of cyberattacks such as phishing emails, but cloud computing offers high resilience through its security architecture.

The True Impact Of The Cloud

In conclusion, it’s safe to say that the fintech community is dynamic and driving the industry shift towards cloud computing, and this is only expected to keep growing. No matter what industry you’re in, the experts at VEXXHOST can help you build a private cloud infrastructure, and no matter the scale of your business, you can benefit from critical developments within the cloud. Contact us today to learn more.

Would you like to know more about OpenStack Cloud? Download our white paper and get reading!

Conquer the Competition with OpenStack Cloud

The post The Impact Of Cloud Computing In Fintech appeared first on VEXXHOST.

by Angela Bruni at March 30, 2020 03:13 PM

March 27, 2020


A Brief Comparison of Containers and Virtual Machines

Although containers and virtual machines may be perceived to be the same, they are fundamentally quite different technologies. The most significant difference is that containers virtualize the operating system so that multiple workloads can run on a single operating system instance. In contrast, virtual machines virtualize the hardware in order to run many operating system instances.

Today we are going to give a brief overview of some of the differences between containers and virtual machines. Keep reading to learn more about the differences between the two technologies.

Virtual Machines

Virtual machines came from the necessity to get more power and capacity out of bare metal servers. They are created by running software on top of physical servers in order to emulate a particular hardware system. The software that makes this possible is the hypervisor. A hypervisor, also known as a virtual machine monitor, is a software layer that creates and runs virtual machines. It is situated between the hardware and the virtual machines. In other words, its main purpose is to virtualize the server.

Virtual machines have the capacity to run different operating systems on the same physical server and they can be quite large in size – up to several gigabytes. Moreover, each virtual machine has a separate operating system image, which continues to increase the need for memory and storage. This can be an added challenge in everything from testing and development, to production and even disaster recovery. Certainly, it can limit the portability of applications and a cloud solution.

The hypervisor is quite the workhorse. It is responsible for interacting with every NIC card in the hardware, and the same goes for the storage within your virtual machine. The hypervisor stays busy, and a significant amount of the underlying hardware is masked from the operating system above it.

Containers


Containers are a useful way to run isolated systems on a single server or host operating system. Since the growth in popularity of operating system virtualization, software is now able to run predictably from one server environment to another. The containers themselves sit on top of a physical server and its host operating system. Each container shares the host operating system kernel, binaries, and libraries, and these shared components are available as read-only.

One of the major highlights of containers is that they are extremely light, weighing in at only megabytes, which means they can start in seconds rather than the minutes a virtual machine may take. Thanks to a common operating system, containers also reduce management overhead for tasks such as fixing bugs and other maintenance. To sum up, the big difference between containers and virtual machines is that containers are significantly lighter and more portable.

Concluding Containers and Virtual Machines

In conclusion, when it comes to containers compared to virtual machines there are many differences. With virtual machines, a single piece of hardware is able to run multiple operating system instances. In contrast, containers have the benefit of portability and speed to help them streamline software and its development.

Are you curious to learn more about how virtual machines and containers can work within your cloud strategy? Contact us today to speak to one of the experts at VEXXHOST.

Would you like to know more about OpenStack Cloud? Download our white paper and get reading!

Conquer the Competition with OpenStack Cloud

The post A Brief Comparison of Containers and Virtual Machines appeared first on VEXXHOST.

by Angela Bruni at March 27, 2020 03:42 PM

March 26, 2020


Object Storage With OpenStack Swift

OpenStack Swift is an OpenStack project that offers cloud storage software for easy storage and retrieval of data through a simple Application Program Interface (API). If you’re looking to take advantage of software built for scale, then Swift is an excellent choice. It’s optimized for availability, as well as durability, across the data set in its entirety. Think of Swift as the best option for storing unstructured data that you’d like to grow without limits.

Today we will explore OpenStack Swift alongside its key features and how it can be of use within your OpenStack powered cloud. We will dive into how Swift is scalable and available, reliable and secure and how it can integrate seamlessly through OpenStack APIs.

Object Storage With OpenStack Swift

OpenStack Object Storage, otherwise known as OpenStack Swift, manages the storage of large amounts of data across clusters over the long term. It is a cost-effective storage solution for your OpenStack powered cloud. Swift was one of the original OpenStack projects and remains very relevant today. It is possible to use Swift for the storage, backup and archiving of unstructured data: anything from documents, static web content, video files, image files and emails to virtual machine images. Each object stored has associated metadata as part of the extended attributes of the file.
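The object-plus-metadata model described above can be sketched in a few lines of Python. This is purely an illustration of the concept, not Swift’s actual API; the `ObjectStore` class and its method names are invented for the example.

```python
# Minimal in-memory sketch of object storage: each object is addressed by
# container and name, and carries arbitrary metadata alongside its data.
class ObjectStore:
    def __init__(self):
        self._objects = {}  # (container, name) -> (data bytes, metadata dict)

    def put(self, container, name, data, **metadata):
        self._objects[(container, name)] = (data, dict(metadata))

    def get(self, container, name):
        return self._objects[(container, name)]

store = ObjectStore()
store.put("backups", "db-2020-03.tar.gz", b"...archive bytes...",
          content_type="application/gzip", retention_days="90")
data, meta = store.get("backups", "db-2020-03.tar.gz")
print(meta["content_type"])  # application/gzip
```

In real Swift, the same idea appears as `X-Object-Meta-*` headers attached to objects through the HTTP API.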

Let’s Talk About Features

Scalable and Available

Swift offers a scalable infrastructure with high availability to store as much data as needed without having to worry about your overall capacity in the long term. This cloud object storage offers the best in terms of service availability supported by strong durability as well as reliability. With Swift, you’ll never have to struggle with storage limitations or inaccessibility.

Reliable and Secure

One of the benefits of OpenStack Swift is that it is reliable and secure. With Swift, your systems are able to store multiple copies of data across your infrastructure. Data in transit is encrypted over SSL, meaning that you can always access your data securely, and a trusted cloud provider can help ensure that your deployment is SSL ready. With enterprise-grade security, you can rest easy knowing that your data is only accessible to those who need it. Users also benefit from the seamless integration of other OpenStack services through APIs and the use of an advanced dashboard control panel, so your business is able to make the most out of those APIs.

How To Get Started

If you’re looking to start with OpenStack Swift to reap its many storage benefits then contact the experts at VEXXHOST. We’re here to help you get your OpenStack powered cloud off the ground and utilize the relevant OpenStack projects to create a unique cloud that suits all your needs. We can support you through every step and make sure that you’re getting the most out of your OpenStack cloud. Whether you’re a small business or larger enterprise, there is a custom cloud solution for you. Contact us today to learn more about OpenStack Swift and what it can do for you in your new cloud ecosystem.

Would you like to know more about Zuul? Download our white paper and get reading!

How to up your DevOps game with Project Gating


The post Object Storage With OpenStack Swift appeared first on VEXXHOST.

by Angela Bruni at March 26, 2020 04:43 PM

OpenStack Superuser

Inside open infrastructure: The latest from the OpenStack Foundation

Spotlight on: 10 Years Of OpenStack

Many amazing tech milestones happened in 2010. Steve Jobs launched the first iPad. Sprint announced its first 4G phone. Facebook reached 500 million users. OpenStack was born.

In real time, the pace of change in the tech industry often feels glacial, but looking at things over a ten-year span, a lot of stark differences have emerged since 2010. So before you plug your AirPods back in, fire up Fortnite and watch a new show on Disney+, let’s take a look at how OpenStack has transformed the open source industry in the past 10 years.

The Decade Challenge – OpenStack Edition

What began as an endeavor to bring greater choice in cloud solutions to users, combining Nova for compute from NASA with Swift for object storage from Rackspace, has since grown into a strong foundation for open infrastructure. None of it would be possible without the consistent growth of the OpenStack community. In the 10 years since the community was established, OpenStack has come to be supported by one of the largest global open source communities: over 105,000 members in 187 countries from over 700 organizations, backed by over 100 member companies! Developers from around the world work together daily on a six-month release cycle with developmental milestones.

Looking back to OpenStack in 2010, we were ecstatic to celebrate our first year of growth from a couple dozen developers to nearly 250 unique contributors in the Cactus release (the third OpenStack release). Fast forward to 2019: 1,518 unique change authors approved more than 47,500 changes and published two major releases (Stein and Train). In between, the community successfully delivered 16 software releases on time. Today, we are not only celebrating our community’s achievements of the past 10 years, but also looking forward to the continued prosperity of the community in the next 10.

Your Top 10 Favorite Moments With OpenStack Are…

As you can see, there are so many milestones to celebrate in the past 10 years of OpenStack with the community. We want to hear from you about what your top 10 favorite things related to OpenStack are. Go into this survey and choose a question to answer. The topics range from your top 10 most memorable moments of OpenStack, your top 10 most used features in OpenStack to your top 10 favorite cities you visited for OpenStack. We are looking forward to hearing your favorites, and we invite you all to join us and celebrate 10 awesome years of OpenStack.

OpenStack Foundation news

  • Based on the input from the community, board, and the latest information available from the health experts, we’ve made the decision not to hold the OpenDev + PTG in Vancouver this June. Instead, we’re exploring ways to turn it into a virtual event and would love the help of everyone in the community. Learn more in this mailing list post by Mark Collier.
  • There will be two community meetings next week to discuss the OpenStack 10th anniversary planning, current community projects, and an update on OSF events. Learn more in this mailing list.

Airship: Elevate your infrastructure

  • The Airship community will be holding a virtual meet-up on March 31 from 1400-2200 UTC that will serve much the same purpose as the originally planned KubeCon face-to-face team meeting. Goals of the meetup include aligning on Airship use cases and high-level design, finalizing actionable low-level design for the upcoming release, and reviewing work in progress.
  • Catch up on the latest news in the March update, live on the Airship blog now.
  • Connect with the Airship community on Slack! We’re mirroring to #airshipit on IRC so you can use your preferred platform. Join at airshipit.org/slack.

Kata Containers: The speed of containers, the security of VMs

  • We have just released the latest stable versions, 1.9.6 and 1.10.2, and cut the 1.11.0-alpha1 release. The 1.9.6 and 1.10.2 stable releases include the latest bug fixes, and the 1.11.0-alpha1 release lays the groundwork for the upcoming 1.11.0 release. See the announcement message here. We look forward to stabilizing it in the next few weeks. Thank you to the users and contributors!

OpenStack: Open source software for creating private and public clouds

  • If you’re running OpenStack, please share your feedback and deployment information in the 2020 OpenStack User Survey. It only takes 20 minutes and anonymous feedback is shared directly with developers!
    • Why is it important for you to take the user survey? Find out here!
  • We are entering the final stages of the Ussuri development cycle, with feature freeze happening on April 6, in preparation for the final release on May 13. The schedule for the next cycle (Victoria) was published, with a final release planned for October 14. The ‘W’ release (planned for Q2, 2021) will be called ‘Wallaby’.
  • In the coming weeks the OpenStack community will renew its leadership, with 5 TC seats up for election, as well as all PTL positions. Nominations are open until March 31!
  • A framework for proposing crazy ideas for OpenStack has been created, with the first idea being posted there: project Teapot.

StarlingX: A fully featured cloud for the distributed edge

  • The StarlingX community recently held their Community Meetup in Chandler, AZ. Check out the updates on the current development activities and plans for future releases on the StarlingX blog.
  • If you’re currently testing StarlingX, running PoC implementations or running the software in production take a few minutes and fill out a short survey to provide feedback to the community. All information is confidential to the OpenStack Foundation unless you designate that it can be public.

Zuul: Stop merging broken code

  • Are you a Zuul user? Please take a few moments to fill out the Zuul User Survey to provide feedback and information around your deployment. All information is confidential to the OpenStack Foundation unless you designate that it can be public.
  • Zuul versions 3.17.0 and 3.18.0 have been released. Both releases address security issues and you should refer to the release notes for more details. Additionally, socat and kubectl must now be installed on the executors.
  • Nodepool 3.12.0 has been released. This adds support for Google Cloud instances. Refer to the release notes for more information.

Upcoming Open Infrastructure and Community Events

For more information about these events, please contact denise@openstack.org

Questions / feedback / contribute

This newsletter is written and edited by the OSF staff to highlight open infrastructure communities. We want to hear from you! If you have feedback, news or stories that you want to share, reach us through community@openstack.org . To receive the newsletter, sign up here.

The post Inside open infrastructure: The latest from the OpenStack Foundation appeared first on Superuser.

by Sunny Cai at March 26, 2020 01:00 PM

From Tutorials to Case Studies, Share your Open Infrastructure Wisdom with Superuser

Superuser is an online publication for the community, by the community. We’re publishing our editorial process to actively solicit submissions from the open infrastructure community, and we want to hear from you!

When Superuser launched in 2014, the goal was to serve as a conduit for our community of developers and users to share their experiences building and operating open infrastructure around the world. Summits, PTGs, Open Infrastructure Days, and other community gatherings are great ways to connect offline, and Superuser is an online way to keep those connections going. Content has historically been developed by a team of writers working with the foundation, as well as a handful of community members, and we want to invite the entire community to share their lessons learned, use cases, tutorials, and overall open infrastructure thoughts here as well.

We want to hear your ideas for articles you’d like to contribute about open source infrastructure and open source projects. Share your ideas with the Superuser editorial team. You might want to read over the Superuser Editorial Guidelines first, before reaching out.

So, what topics are fair game? Most anything relating to building and operating open infrastructure is a candidate:

  • AI/machine learning
  • Bare metal
  • Container infrastructure
  • CI/CD
  • Edge computing
  • Telecom + NFV
  • Public cloud
  • Private & hybrid cloud
  • Security
  • General open infrastructure thoughts

We’ll be particularly interested in content that’s relevant to projects in the open infrastructure community, like Airship, Ansible, Ceph, Docker, Kata Containers, Kubernetes, ONAP, OpenStack, Open vSwitch, OPNFV, StarlingX, Zuul, and others.

User case studies are always a big hit, but so are how-to tutorials, best practices, lessons learned, event recaps, project roadmap updates, and new release information. We’ll even consider product version updates and thought leadership content, provided it’s vendor neutral and focused on the community rather than competitors. Superuser is not a trade publication: we’re here to amplify the values of openness, collaboration, and solving shared problems.

So, what are some examples of content we’ll decline? Anything that reads like an advertisement or sales solicitation won’t make the cut. Opinion submissions must be supported by verifiable facts and voiced in a collaborative tone that invites community participation in finding solutions everyone can use equally.

Examples of submissions that fit the editorial mission of Superuser include:

  • The latest Kata Containers release features
  • A personal take on the latest Open Infrastructure Summit
  • How Verizon Media is using OpenStack at scale
  • How to run a packaged function with OpenStack Qinling
  • Running StarlingX at the edge for telcos
  • How to run project gating with Zuul

What ideas do you have? Share them with us using this short form to engage with our editorial team.

The post From Tutorials to Case Studies, Share your Open Infrastructure Wisdom with Superuser appeared first on Superuser.

by Allison Price at March 26, 2020 08:00 AM

March 25, 2020


Block Storage With OpenStack Cinder

If you’re looking for scalability within your cloud storage then OpenStack Cinder is worth highlighting. Cinder users are able to dramatically decrease and increase their storage capacity without having to worry about expensive physical storage systems or servers, meaning that businesses and organizations alike can benefit from greater flexibility at lower cost.

Today we are going to go over the basics of OpenStack Cinder. Keep reading to learn more about how Cinder block storage is a fundamental part of an OpenStack starter kit and how its block storage capabilities create a more versatile and secure cloud solution.

Let’s Talk OpenStack Cinder

When it comes to volume storage for bare metal and virtual machines, it’s important to have high integration compatibility. Cinder is able to provision volume storage for virtual machines through Nova, and it can provision volume storage for bare metal through Ironic, giving it the flexibility to work with both projects.

The compatibility of Cinder doesn’t stop there though. Cinder is also Ceph compatible, meaning that Cinder makes it possible for users to work with Ceph storage without any added complications to your cloud. Through its snapshot management functionality, Cinder is able to back up data stored on block storage volumes, and snapshots can be restored or used to create new block storage volumes. Moreover, Cinder simplifies management by presenting one consistent interface in front of every storage backend, and it handles all of the users’ provisioning and deletion needs with efficiency and ease of use.

Simplified Management and Secure Communication

Users are able to simplify the management of their storage devices thanks to Cinder’s simple API. The implementation of a single code path for all backends makes this possible: instead of needing to maintain different code for each backend, a single interface facilitates the process. Cinder acts as the gatekeeper, which allows it to create volumes on different backends. With this simple integration, it’s no longer necessary to build separate integrations between other services and each backend; Cinder becomes the channel through which they can communicate securely.
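This "one API, many backends" idea can be sketched with a small driver abstraction. The class and driver names below are hypothetical, chosen only to illustrate the pattern; Cinder’s real driver interface is considerably richer.

```python
# One user-facing call, many backend implementations: the caller never
# touches backend-specific code.
class BackendDriver:
    def create_volume(self, name, size_gb):
        raise NotImplementedError

class CephDriver(BackendDriver):
    def create_volume(self, name, size_gb):
        # A real driver would create an RBD image in a Ceph pool here.
        return f"rbd:volumes/{name} ({size_gb} GB)"

class LVMDriver(BackendDriver):
    def create_volume(self, name, size_gb):
        # A real driver would create a logical volume here.
        return f"lvm:cinder-volumes/{name} ({size_gb} GB)"

class VolumeAPI:
    """Single entry point that dispatches to the configured backend."""
    def __init__(self, drivers):
        self._drivers = drivers

    def create_volume(self, backend, name, size_gb):
        return self._drivers[backend].create_volume(name, size_gb)

api = VolumeAPI({"ceph": CephDriver(), "lvm": LVMDriver()})
print(api.create_volume("ceph", "vol1", 10))  # rbd:volumes/vol1 (10 GB)
```

Callers only ever see `create_volume`; swapping the backend changes where the volume lands, not the calling code.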

Another way in which Cinder enables secure communication is through seamless encryption, giving users the best of OpenStack’s block storage and key management technologies. Cinder integrates with key management so that the associated key can be used to decrypt a volume’s content when the server starts. Encrypted data is not accessible to anyone without the appropriate key. This means that your content remains secure even in the rare event that someone physically takes the server.
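The key-management flow can be illustrated with a toy sketch: the volume record stores only a key ID, and the key itself lives in a key manager (Barbican, in OpenStack) that is consulted when the server starts. The XOR "cipher" below is a stand-in for real block-level encryption, and all names are invented for the example.

```python
import secrets

key_manager = {}  # key_id -> key bytes; stands in for a real key manager

def create_encrypted_volume(plaintext: bytes):
    key = secrets.token_bytes(len(plaintext))
    key_id = f"key-{len(key_manager)}"
    key_manager[key_id] = key
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return key_id, ciphertext  # volume metadata records key_id, never the key

def attach_volume(key_id: str, ciphertext: bytes):
    key = key_manager[key_id]  # fails without access to the key manager
    return bytes(c ^ k for c, k in zip(ciphertext, key))

key_id, blob = create_encrypted_volume(b"customer records")
assert blob != b"customer records"            # data at rest is unreadable
assert attach_volume(key_id, blob) == b"customer records"
```

Stealing the disk alone yields only `blob`; without the key manager entry, the content cannot be recovered.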

Get Started

Thinking it’s about time your business or organization upgraded to an OpenStack powered cloud? Trust the experts at VEXXHOST to help make your cloud aspirations a reality. We are OpenStack certified and have been using and contributing to OpenStack since 2011. With nearly a decade of experience, we are here to help you through every step of the way. Contact us today to get started with an OpenStack powered cloud that’s unique to your business needs.

Would you like to know about Private Cloud and what it can do for you? Download our white paper and get reading!

Fighting Off Certain Death with OpenStack Private Cloud


The post Block Storage With OpenStack Cinder appeared first on VEXXHOST.

by Angela Bruni at March 25, 2020 04:29 PM

OpenStack Superuser

Establishing Trusted Network Interconnection of OpenStack Clouds


Applications and networks have become distributed. Applications are fragmented into microservices, and the networks behind them are composed of different clouds in different regions. With this shift, the need to control every aspect of these resources has increased, driven by growing security concerns. SD-WAN is available for the enterprise, but what if the data centers or endpoints are few and spread across multiple regions? This article focuses on the interconnection of OpenStack clouds using Neutron APIs.

The Neutron to Neutron Communication

There may be a situation where you need to interconnect two or more separate data centers or NFV PoPs powered by OpenStack, located in different regions. These data centers may initially want an on-demand interconnection, and the interconnection may require private addressing and isolation in order to share data end-to-end over a dedicated communication channel. The combination of on-demand setup with private addressing and isolation is possible with Neutron VPN as a Service (VPNaaS). However, this solution involves IPsec, which has a performance overhead, and for a proper interconnection you want a solution that avoids that per-packet overhead.

One possible architecture for the interconnection of OpenStack clouds is to add an orchestrator between the clouds, which then interconnects resources in the participating clouds. This approach has several demerits, however.

The orchestrator may need admin rights to establish networking in the data centers’ resources, which is difficult when different organizations are involved. Adding an orchestrator also exposes its APIs to various attacks, making the overall system more complex.

The recommended option is to extend the Neutron APIs to interconnect resources, such as the virtual routers of OpenStack powered data centers. It involves two facets: a user-facing API and a Neutron-to-Neutron API.

With the user-facing API, a symmetrical call is made by a centrally located admin to the Neutron modules in each data center. A link is established once both data centers approve.

With the Neutron-to-Neutron API, each Neutron component can check whether the symmetrical interconnection has been defined on the other side. In this way, Neutron components in different regions coordinate to set up these private, isolated interconnections without orchestration or network device configuration.

The solution was discussed at the OpenStack summit Berlin back in 2018. This solution is applicable to use cases where:

  • OpenStack is involved in the data center
  • If there are multiple regions involved with one OpenStack cloud
  • Between multiple OpenStack clouds where trust entities are co-ordinated
  • And, where different OpenStack cloud instances use different SDN solutions

You can download the presentation from here and watch a demo.

The post Establishing Trusted Network Interconnection of OpenStack Clouds appeared first on Superuser.

by Sagar Nangare at March 25, 2020 01:00 PM

Christopher Smart

Updating OpenStack TripleO Ceph nodes safely one at a time

Part of the process when updating Red Hat’s TripleO based OpenStack is to apply the package and container updates, via the update run step, to the nodes in each Role (like Controller, CephStorage and Compute, etc). This is done in-place, before the ceph-upgrade (ceph-ansible) step, converge step and reboots.

openstack overcloud update run --nodes CephStorage

Rather than do an entire Role straight up however, I always update one node of that type first. This lets me make sure there were no problems (and fix them if there were), before moving onto the whole Role.

I noticed recently when performing the update step on CephStorage role nodes that OSDs and OSD nodes were going down in the cluster. This was then causing my Ceph cluster to go into backfilling and recovering (norebalance was set).

We want all of these nodes to be done one at a time, as taking more than one node out at a time can potentially make the Ceph cluster stop serving data (all VMs will freeze) until it finishes and gets the minimum number of copies in the cluster. If all three copies of data go offline at the same time, it’s not going to be able to recover.

My concern was that the update step does not check the status of the cluster, it just goes ahead and updates each node one by one (the separate ceph update run step does check the state). If the Ceph nodes are updated faster than the cluster can fix itself, we might end up with multiple nodes going offline and hitting the issues mentioned above.

So to work around this I just ran this simple bash loop. It gets a list of all the Ceph Storage nodes and, before updating each one in turn, checks that the status of the cluster is HEALTH_OK before proceeding. This would not be possible if we updated by Role instead.

source ~/stackrc
for node in $(openstack server list -f value -c Name |grep ceph-storage |sort -V); do
  while [[ ! "$(ssh -q controller-0 'sudo ceph -s |grep health:')" =~ "HEALTH_OK" ]] ; do
    echo "cluster not healthy, sleeping before updating ${node}"
    sleep 5
  done
  echo "cluster healthy, updating ${node}"
  openstack overcloud update run --nodes "${node}" || { echo "failed to update ${node}, exiting"; exit 1 ;}
  echo "updated ${node} successfully"
done

I’m not sure if the cluster going down like that is expected behaviour, but I opened a bugzilla for it.

by Chris at March 25, 2020 07:50 AM

March 24, 2020


Looking At OpenStack Glance

OpenStack Glance is an image service that provides an agile and convenient way to copy and launch instances. With Glance, users are able to upload, discover, register and retrieve virtual machine images with speed and ease. That is to say that you’ll be able to spend less time working with images and metadata definitions and more time working on your application.

Today we are going to take a look at Glance, OpenStack’s powerful yet agile image service. From giving users the power to upload OpenStack compatible images, to managing server images for your cloud, Glance is worth the double-take. Keep reading to see for yourself.

OpenStack Glance: An Image Speaks A Thousand Words

When it comes to OpenStack Glance, there are many features worth highlighting. Starting with the central image repository, users are able to push updates through OpenStack’s centralized image storage service. Users are able to replicate or snapshot images and store them accordingly within their OpenStack powered cloud. This also solves the issue of configuration drift, as the centralized image repository behind all of the infrastructure receives consistent updates.

Furthermore, when you need your servers to boot up quickly and efficiently, copy-on-write is there to work with agility. Not only that, but copy-on-write has the potential to save your business or enterprise money by reducing total disk usage. It increases efficiency by using stored images as templates to get new servers up and running consistently, which is far more efficient than manually installing a server operating system and then configuring each additional service by hand. This means that Glance’s copy-on-write saves users both time and money, two very valuable resources for any business or enterprise.
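Copy-on-write can be sketched in miniature: cloned disks share the base image’s blocks and copy a block only when it is first written. The `CowDisk` class below is a hypothetical illustration of the principle, not Glance’s implementation.

```python
class CowDisk:
    """A disk backed by a shared, read-only base image."""
    def __init__(self, base_blocks):
        self.base = base_blocks   # shared among all clones, never copied
        self.delta = {}           # block index -> this disk's private copy

    def read(self, i):
        return self.delta.get(i, self.base[i])

    def write(self, i, block):
        self.delta[i] = block     # the copy happens only on write

base_image = ["bootloader", "kernel", "rootfs"]
vm1, vm2 = CowDisk(base_image), CowDisk(base_image)

vm1.write(2, "rootfs+app")        # vm1 modifies one block
assert vm1.read(2) == "rootfs+app"
assert vm2.read(2) == "rootfs"    # vm2 still sees the untouched base
assert len(vm1.delta) == 1        # only the changed block consumed space
```

Booting ten servers from one template therefore costs little more disk than one server, until each instance starts diverging.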

Uploads, Downloads and Compatibility

Glance enables secure uploads and downloads through signed image validation, meaning that data is validated by Glance prior to being stored within your cloud. If validation is unsuccessful, the upload fails and the image is deleted. The same goes for all image downloads: if the appropriate data verification cannot be performed upon download, the image will not be served from your cloud. Secure sharing of multiple image types across tenants is also possible with Glance; it’s possible to share images securely with specific users or with all users.
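The validation flow can be sketched with a plain checksum. Glance’s actual signed image validation uses cryptographic signatures and certificates; a SHA-256 hash keeps this sketch self-contained, and the function and variable names are invented for the example.

```python
import hashlib

def upload_image(store, name, data, expected_sha256):
    """Store the image only if its checksum matches; otherwise discard it."""
    if hashlib.sha256(data).hexdigest() != expected_sha256:
        raise ValueError("validation failed, image discarded")
    store[name] = data

store = {}
image = b"...image bytes..."
good_digest = hashlib.sha256(image).hexdigest()

upload_image(store, "base-image", image, good_digest)
assert "base-image" in store            # valid upload is stored

try:
    upload_image(store, "tampered", b"...corrupted bytes...", good_digest)
except ValueError:
    pass
assert "tampered" not in store          # failed validation is rejected
```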

In terms of compatibility, Glance isn’t restricted to specific servers, as it can help boot up virtual machines alongside Cinder and Ironic. Thanks to Glance’s RESTful API, querying virtual machine image metadata as well as retrieving the actual image is possible. Finally, one of the benefits of OpenStack’s advanced technologies is Glance’s simple integration with Cinder block storage within your regular infrastructure. This allows for expert storage and easy to use virtualization of block storage management.

Getting Started With Glance

In conclusion, we’ve gone over how Glance provides simple OpenStack based image storage in your cloud solution. Glance is an easy way to copy and launch instances while allowing you to quickly and securely download and upload images, and it even features a block storage integration. Thinking of upgrading your cloud solution? Every OpenStack powered cloud features these image storage capabilities.

We at VEXXHOST have been working with OpenStack since 2011 and are OpenStack Certified. This means that no one knows an OpenStack powered cloud as well as we do. Our cloud services contain OpenStack software that is validated through testing to provide API compatibility for OpenStack core services.

Curious to learn more about Glance and other OpenStack core services? Contact our team of experts today to learn how Glance can help elevate your cloud strategy.

Would you like to know more about OpenStack Cloud? Download our white paper and get reading!

Conquer the Competition with OpenStack Cloud

The post Looking At OpenStack Glance appeared first on VEXXHOST.

by Angela Bruni at March 24, 2020 02:34 PM

March 23, 2020


Let’s Get Networking: OpenStack Neutron

OpenStack Neutron is the networking component of OpenStack. Although it’s considered to be one of the more complicated projects in an OpenStack setup, it’s also extremely powerful. This powerhouse is able to create virtual networks, routers, firewalls and beyond. Through Neutron, OpenStack is able to offer “network connectivity as a service”, and through the implementation of the Neutron API, other OpenStack services can manage interface devices.

Neutron is an OpenStack powered, flexible and secure software-defined network. Today we are going to break down the ins and outs of Neutron, like how it allows you to build single-tenant networks while still giving you complete control over your network architecture. Keep reading to see precisely why Neutron is a powerhouse in your OpenStack powered cloud solution.

OpenStack Neutron: The Building Block Of An OpenStack Cloud

As we mentioned earlier, Neutron is a networking component of OpenStack. It is a standalone service that interacts with other projects such as Keystone, Horizon, Nova, and Glance. Like the projects it runs alongside, deploying Neutron involves running several processes on each host. Neutron, like other services, relies on Keystone for the authentication and authorization of all API requests. Horizon offers basic integration with the Neutron API to allow tenants to create networks. Nova, on the other hand, interacts with Neutron through API calls, communicating with the Neutron API to plug each virtual NIC on an instance into the network through Open vSwitch.

With OpenStack Neutron you’re able to reap the benefits of total peace of mind thanks to network segmentation. By splitting computer networks into subnetworks, Neutron is able to boost performance and improve security. Because network connections are segmented across systems, each virtual machine on a hypervisor can be placed in its own isolated network.

One of the core requirements of OpenStack Neutron is to provide connectivity to and from instances. This is possible thanks to one of two categories: provider networks and tenant networks. Your OpenStack administrator creates provider networks. These networks map directly onto an existing physical network inside your chosen data center. It’s possible to share provider networks amongst tenants as part of the network creation process. In contrast, tenant networks are networks created by users within groups of users, or tenants. These networks cannot be shared amongst other tenants. Furthermore, without a Neutron router, these networks are isolated from each other and from everything else as well.
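To make the tenant network workflow concrete, here is a rough sketch of how a user might create a tenant network and route it to the outside world using the OpenStack CLI. All names and the CIDR below are hypothetical, and “public” stands in for whatever your cloud’s external provider network is called:

# Create a tenant network with its own subnet
openstack network create my-net
openstack subnet create --network my-net \
--subnet-range 192.168.10.0/24 my-subnet

# Connect it to the external provider network via a Neutron router
openstack router create my-router
openstack router add subnet my-router my-subnet
openstack router set --external-gateway public my-router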

How To Get Started

In conclusion, there’s so much to OpenStack Neutron that we couldn’t cover it all in a single blog post. We’ve laid some foundation on understanding the basics of Neutron and how it builds and uses simple networks for instance connectivity. Moreover, if you’re looking to learn more about Neutron, its role within OpenStack and what it can do for your business get in touch with our team of experts. Certainly, we’ll be happy to listen to your cloud computing requirements and help create a cloud strategy that is right for your business or enterprise. Contact us to start with an OpenStack powered cloud solution today.

Would you like to know more about Zuul? Download our white paper and get reading!

How to up your DevOps game with Project Gating

How to Up Your DevOps Game with Project Gating:
Zuul – A CI/CD Gating Tool

The post Let’s Get Networking: OpenStack Neutron appeared first on VEXXHOST.

by Angela Bruni at March 23, 2020 07:01 PM


Running Remote Workshops

In the current climate, where we are either unable to travel to collaborate or because we just want to reduce our impact on the environment, the ability to effectively collaborate remotely is critical.

by Shaun OMeara at March 23, 2020 05:40 PM


Tips, Tricks, and Best Practices for Distributed RDO Teams

While a lot of RDO contributors are remote, there are many more who are not and now find themselves in lock down or working from home due to the coronavirus. A few members of the RDO community requested tips, tricks, and best practices for working on and managing a distributed team.


I mean, obviously, there needs to be enough bandwidth, which might normally be just fine, but if you have a partner and kids also using the internet, video calls might become impossible.

Communicate with the family to work out a schedule or join the call without video so you can still participate.

Manage Expectations

Even if you’re used to being remote AND don’t have a partner / family invading your space, there is added stress in the new reality.

Be sure to manage expectations with your boss about priorities, focus, goals, project tracking, and mental health.

This will be an ongoing conversation that evolves as projects and situations evolve.

Know Thyself

Some people NEED to get ready in the morning, dress in business clothes, and work in a specific space. Some people can wake up, grab their laptop and work from the bed.

Some people NEED to get up once an hour to walk around the block. Some people are content to take a break once every other hour or more.

Some people NEED to physically be in the office around other people. Some will be totally content to work from home.

Sure, some things aren’t optional, but work with what you can.

Figure out what works for you.

Embrace #PhysicalDistance Not #SocialDistance

Remember to stay connected socially with your colleagues. Schedule a meeting without an agenda where you chat about whatever.

Come find the RDO Technical Community Liaison, leanderthal, and your other favorite collaborators on Freenode IRC on channels #rdo and #tripleo.

For that matter, don’t forget to reach out to your friends and family.

Even introverts need to maintain a certain level of connection.

Further Reading

There’s a ton of information about working remotely / distributed productivity and this is, by no means, an exhaustive list, but to get you started:

Now let’s hear from you!

What tips, tricks, and resources do you recommend to work from home, especially in this time of stress? Please add your advice in the comments below.

And, as always, thank you for being a part of the RDO community!

by Rain Leander at March 23, 2020 03:14 PM

March 21, 2020

Christopher Smart

Using Ansible and dynamic inventory to manage OpenStack TripleO nodes

TripleO based OpenStack deployments use an OpenStack all-in-one node (undercloud) to automate the build and management of the actual cloud (overcloud) using native services such as Heat and Ironic. Roles are used to define services and configuration, which are then applied to specific nodes, for example, Service, Compute and CephStorage, etc.

Although the install is automated, sometimes you need to run ad hoc tasks outside of the official update process. For example, you might want to make sure that all hosts are contactable, have a valid subscription (for Red Hat OpenStack Platform), restart containers, or maybe even apply custom changes or patches before an update. Also, during the update process when nodes are being rebooted, it can be useful to use an Ansible script to know when they’ve all come back, all services are running and all containers are healthy, before re-enabling them.

Inventory script

To make this easy, we can use the TripleO Ansible inventory script, which queries the undercloud to get a dynamic inventory of the overcloud nodes. When using the script as an inventory source with the ansible command however, you cannot pass arguments to it. If you’re managing a single cluster and using the standard stack name of overcloud, then this is not a problem; you can just call the script directly.

However, as I manage multiple clouds and each has a different Heat stack name, I create a little executable wrapper script to pass the stack name to the inventory script. Then I just call the relevant shell script instead. If you use the undercloud host to manage multiple stacks, then create multiple scripts and modify as required.

cat >> inventory-overcloud.sh << EOF
#!/usr/bin/env bash
source ~/stackrc
exec /usr/bin/tripleo-ansible-inventory --stack stack-name --list
EOF

Make it executable and run it. It should return JSON with your overcloud node details.

chmod u+x inventory-overcloud.sh
./inventory-overcloud.sh

Run simple tasks

The purpose of using the dynamic inventory is to run some Ansible! We can now use it to do simple things easily, like ping nodes to make sure they are online.

ansible \
--inventory inventory-overcloud.sh \
all \
--module-name ping

And of course one of the great things with Ansible is the ability to limit which hosts you’re running against. So for example, to make sure all compute nodes of role type Compute are back, simply replace all with Compute.

ansible \
--inventory inventory-overcloud.sh \
Compute \
--module-name ping

You can also specify nodes individually.

ansible \
--inventory inventory-overcloud.sh \
service-0,telemetry-2,compute-0,compute-1 \
--module-name ping

You can use the shell module to do simple ad hoc things, like restart containers or maybe check their health.

ansible \
--inventory inventory-overcloud.sh \
all \
--module-name shell \
--become \
--args "docker ps |egrep "CONTAINER|unhealthy"'

And the same command using short arguments.

ansible \
-i inventory-overcloud.sh \
all \
-m shell \
-ba "docker ps |egrep "CONTAINER|unhealthy"'

Create some Ansible plays

You can see simple tasks are easy, for more complicated tasks you might want to write some plays.

Pre-fetch downloads before update

Your needs will probably vary, but here is a simple example to pre-download updates on my RHEL hosts to save time (updates are actually installed separately via overcloud update process). Note that the download_only option was added in Ansible 2.7 and thus I don’t use the yum module as RHEL uses Ansible 2.6.

cat >> fetch-updates.yaml << EOF
- hosts: all
  tasks:
    - name: Fetch package updates
      command: yum update --downloadonly
      args:
        warn: no
      register: result_fetch_updates
      retries: 30
      delay: 10
      until: result_fetch_updates is succeeded
      changed_when: '"Total size:" not in result_fetch_updates.stdout'
EOF

Now we can run this command against the next set of nodes we’re going to update, Compute and Telemetry in this example.

ansible-playbook \
--inventory inventory-overcloud.sh \
--limit Compute,Telemetry \
fetch-updates.yaml

And again, you could specify nodes individually.

ansible-playbook \
--inventory inventory-overcloud.sh \
--limit telemetry-0,service-0,compute-2,compute-3 \
fetch-updates.yaml

There you go. Using a dynamic inventory can be really useful for running ad hoc commands against your OpenStack nodes.

by Chris at March 21, 2020 11:06 AM

March 20, 2020


Let’s Talk OpenStack Nova

OpenStack Nova, also known as OpenStack Compute, is massively scalable and provides self-service access to compute resources like virtual machines, containers, and bare metal servers. Aside from being absolutely fundamental to your OpenStack cloud, Nova creates the computing resources of your cloud. Certainly, we don’t need to tell you that this is a significant role, and there’s a reason why Nova is one of the most important services in your cloud.

Our aim today is to talk about OpenStack Nova and its role within your OpenStack powered cloud. Keep reading to learn more about this integral OpenStack project and how it relies upon and interacts with other projects to power your cloud.

What Is OpenStack Nova?

The OpenStack project Nova is a powerhouse that is responsible for everything from instance sizing to instance creation and management. It is the project that enables virtual servers and supports the creation of virtual machines. Nova is able to provide on-demand access to compute resources thanks to its capacity to provision and manage large networks of virtual machines. By virtue of OpenStack Ironic, Nova is also able to support bare metal services. Moreover, Nova offers some support for system containers through Linux servers that are able to run daemons. This makes it a powerful and versatile aspect of your OpenStack cloud.
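To make that concrete, once an image, flavor and network are in place, booting a virtual machine through Nova is a single CLI call, and Nova schedules the instance onto a hypervisor for you. All of the names below are hypothetical:

# Boot an instance; Nova handles scheduling and provisioning
openstack server create \
--image my-ubuntu-image \
--flavor m1.small \
--network my-net \
my-first-server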

Basic OpenStack Services That Interact With Nova

There are certain OpenStack services needed to ensure that Nova functions at its most basic level. These are OpenStack Keystone, Glance, Neutron, and Placement. Even though Nova is able to integrate with other projects to provide other services such as bare metal compute instances, block storage and more, these are the base projects that allow Nova to work.


Firstly, Keystone authenticates all of OpenStack’s services and provides them with an identity. Keystone is the first element installed on OpenStack and it is in charge of all projects, including Nova.


Glance, on the other hand, works to manage server images for your cloud. Therefore, it has the power to upload OpenStack compatible images through the compute image repository.


Neutron is extremely powerful, as it provisions the virtual or physical networks that compute instances use inside your OpenStack cloud. Everything from creating virtual networks, firewalls and more begins with Neutron.


Finally, when it comes to tracking the inventory of resources inside your OpenStack cloud, Nova needs Placement to help choose which resource provider is the best fit when creating a virtual machine.

These additional OpenStack services heavily interact with Nova to ensure optimal functionality and performance. For end users, Nova is a powerful way to create and manage servers, either with tools or with the API directly. This task is handled by the OpenStack Client, the command-line interface that covers most of the projects in OpenStack. Moreover, if you need more advanced features, the Nova Client is another option, but it’s recommended that users opt for the OpenStack Client or Horizon as tools for Nova.

Get Started With OpenStack Nova

Curious to learn more about how Nova powers your OpenStack cloud? The team of experts at VEXXHOST is here to guide you through every step of your OpenStack journey. We’re here to ensure that you utilize each project to their fullest potential. Contact us today to get set up with the very best OpenStack cloud environment to suit your unique needs.

Would you like to know more about OpenStack Cloud? Download our white paper and get reading!

Conquer the Competition with OpenStack Cloud

The post Let’s Talk OpenStack Nova appeared first on VEXXHOST.

by Angela Bruni at March 20, 2020 05:36 PM

March 19, 2020


It Starts With OpenStack Keystone

OpenStack Keystone is a service that is in charge of authentication across all OpenStack projects. It’s the first element that should be installed on your OpenStack powered cloud, and every single OpenStack cloud has Keystone built into it. Ultimately, Keystone is an authentication and authorization component. Through OpenStack’s Identity API, it provides your cloud with API client authentication, service discovery, and distributed multi-tenant authorization.

We’re here today to dive into OpenStack Keystone and how it works within your OpenStack powered cloud. We will review Keystone’s identity services, security and access management capabilities. Keep reading to learn how the fundamentals of Keystone power your OpenStack cloud.

OpenStack Keystone Identity Service

When it comes to your cloud authentication services, you want to be confident in the level of security and privacy. OpenStack Keystone is an identity service. It’s user-friendly and an ideal candidate for authentication, policy management, and cataloging services. Keystone organizes a group of internal services that are exposed on one or several endpoints. For example, an authentication call through Keystone validates user credentials with its identity service and, once validation succeeds, creates and returns a token with the token service.
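For example, once credentials are loaded into your environment, requesting a token from Keystone’s token service is a single command; the exact output fields vary by deployment:

# Authenticate against Keystone and print the resulting token,
# its expiry, and the project it is scoped to
openstack token issue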

Keystone also has the ability to integrate with LDAP and SSO. When you integrate your LDAP directory with Keystone, you’re able to enjoy the benefits of its authentication security, which means a safer cloud solution for your business or enterprise. Keystone also allows users to take full advantage of their SSO to further streamline single-step authentication. Allowing you to make these kinds of adjustments to your cloud is just one of the ways that OpenStack offers flexible and agile solutions.

Secure Means Secure

Vendor-agnostic authentication for your cloud services means that, thanks to Keystone, you’re able to streamline your login process for each service and application. It’s also able to work with your existing applications to stop vendor-related limitations in their tracks. Obviously, you need to trust anything that has control over authentication services. Good thing Keystone features advanced security that minimizes any exposure of user credentials. Applications authenticate through Keystone and, in turn, can be delegated a subset of role assignments. Moreover, rather than keeping user credentials in system config files, applications can use Keystone application credentials, which consist of only an ID and a secret string. Meaning that your data is secure.
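As a sketch, creating an application credential is one command; the ID and secret returned in the output are what the application then uses to authenticate, so user credentials never have to live in its config files. The credential name below is hypothetical:

# Create an application credential scoped to the current project;
# the output includes its ID and a secret shown only once
openstack application credential create my-app-cred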

Take Advantage Of Our Expertise

When you have authenticated your cloud services and existing applications with Keystone you know you’re well on your way to a secure and streamlined OpenStack powered cloud. It all starts with Keystone for a reason.

In conclusion, whether you’re looking to get started with an OpenStack powered cloud or looking to upgrade your current OpenStack solution, the team at VEXXHOST is here to help. Let us support you through every step of your cloud journey and ensure you’re getting the most out of your cloud solution. No matter how big or small your business or enterprise, we work with a wide range of industries to bring you the power of cloud computing. Contact us today to learn more about how VEXXHOST can make a difference in your cloud strategy.

Would you like to know about Private Cloud and what it can do for you? Download our white paper and get reading!

Fighting Off Certain Death with OpenStack Private Cloud

The post It Starts With OpenStack Keystone appeared first on VEXXHOST.

by Angela Bruni at March 19, 2020 05:16 PM

OpenStack Superuser

Operate OpenStack? Take the Survey. Here’s why it matters

OpenStack is big. It’s one of the three most active open source projects in history. It is running on more than 10 million cores in production. Hundreds of users around the world count on it for mission-critical workloads. And, there’s an upstream developer community that’s eager for feedback from users so they can set priorities on how to improve the software.

Seven years ago, the OpenStack Foundation launched the OpenStack User Survey to address this need. Since then, it’s allowed the community to gain valuable insight into use cases and the global scale of implementation, then use that feedback to address feature requests and user requirements.

What is the OpenStack User Survey?

The User Survey is one of the best ways that the community can learn from OpenStack operators how to better understand what operators need to support their use cases. The results tell us more about:

  • user organizations, including what industries and geographies are running OpenStack in production,
  • key motivators for selecting OpenStack and opinions on what works best and what could benefit from improvement in both the software and the community, and
  • specific deployment information including size, OpenStack version, workloads, and vendor plugins and products being used. 

Each deployment is logged separately in the survey, and this information is only presented in aggregate—no deployment information is shared that includes an organization name unless participants provide explicit permission.

Who should take the User Survey?

Every organization that operates an OpenStack deployment should complete a survey. There only needs to be one survey completed for each deployment, so we recommend coordinating with your team to prevent duplicate submissions.

Please complete as much information as you can. You are also able to save the survey and return later if you do not have all of the information at one time. 

Why is taking the User Survey important? 

Direct feedback from the individuals and organizations operating OpenStack helps the upstream development community know what features to prioritize and which bugs to fix first, among other important learnings on how the software is being used. Each of the official project teams has the opportunity to add a question to the survey as well as review anonymized data and trends to further influence their roadmaps.

If you are operating OpenStack and you have feedback you would like implemented, the User Survey is one of your opportunities. 

The 2020 OpenStack User Survey is currently open and will close on Friday, August 21. 

The post Operate OpenStack? Take the Survey. Here’s why it matters appeared first on Superuser.

by Allison Price at March 19, 2020 01:00 PM

March 18, 2020


The Right Way To Upgrade OpenStack

Are you running on a prehistoric OpenStack release? Have you been thinking that it’s time to upgrade your OpenStack but you’re intimidated by the idea? We understand that OpenStack upgrades can be difficult. When you have an existing OpenStack private cloud you are already one step closer to benefiting from the latest updates. From stronger security, bug fixes and updates to various projects, there are several reasons why upgrading will benefit your business.

Even though OpenStack upgrades have a reputation for being difficult, they don’t have to be. We at VEXXHOST believe that upgrades should be a frictionless experience with minimal to no downtime. Keep reading to learn why painless upgrades matter. Learn how to overhaul your approach and how to get the OpenStack expertise you need to succeed.

Why Do Painless Upgrades Matter?

When it comes to upgrading OpenStack for your business or organization you cannot afford to be making mistakes. Ensuring high-availability and zero downtime are essential in order to experience a seamless upgrade. Painless upgrades mean you have more time and other resources to focus on other aspects of your business.

When you upgrade to the latest and greatest OpenStack release you’re maximizing your private cloud benefits. You’ll be taking advantage of the very best in continuously evolving open-source software. Plus you’ll be receiving all the benefits from important project updates. Not to mention, you’ll be strengthening your cloud security and ensuring that all features are up to date.

Whatever the motive behind upgrading, any user can agree that they are looking for upgrades that are simple and do not disrupt their current IT infrastructure.

Get The OpenStack Expertise You Need

There are several things that can cause friction when upgrading OpenStack. Whether your team is lacking the relevant technical experience or if you’re worried about the amount of time an upgrade will take, sometimes reaching out for help can make all the difference.

Our CEO Mohammed Naser covered why OpenStack upgrades are so difficult on our blog in the past, and there are several main reasons why it can feel so hard. Ultimately, it’s not OpenStack itself that’s difficult; it’s the operational and infrastructure issues that can cause frustration. Someone who takes a do-it-yourself approach to upgrades can face many challenges, ranging from forking OpenStack and maintaining local patches to the sheer time it takes to complete an upgrade. When in doubt, always reach out to an expert you can trust.

When you trust the experts at VEXXHOST to upgrade OpenStack for your business or organization, you’re trusting nearly a decade of experience using and contributing upstream to OpenStack. We have experience managing the largest OpenStack powered public cloud in Canada and many more cloud solutions worldwide. With a fully managed solution from VEXXHOST, you’re able to avoid prolonged downtime, ensure your security is always up to date, and take advantage of the extra time to focus on what matters most to your business.

Leverage our expertise in OpenStack to ensure that you upgrade OpenStack the right way. We’re OpenStack certified and are currently running the latest OpenStack release, Train, and have been since the day of its release. Visit our OpenStack Upgrades page to learn more about how we can make your OpenStack upgrade seamless and friction-free.

Would you like to know about Cloud Pricing? Download our white paper and get reading!

Cloud Economics White Paper

Your Guide to Cloud Economics: Public Cloud Vs. Private Cloud

The post The Right Way To Upgrade OpenStack appeared first on VEXXHOST.

by Angela Bruni at March 18, 2020 04:39 PM

March 17, 2020

StackHPC Team Blog

StackHPC and COVID-19

As COVID-19 continues to spread, I would like to update you on the steps StackHPC is taking to ensure business continuity, in a secure, responsible and reliable manner, to the benefit of us, our business customers and contacts as well as to the wider community.

StackHPC has decided to have all employees work remotely for the benefit of their safety and well-being. As a consultancy company, we have a business continuity plan in place, and are confident that our teams have the resources required to ensure that our activities will not be compromised, despite the obvious challenges we are all experiencing.

Rest assured that you can rely on StackHPC to support all of our customers, partners and business contacts during these uncertain times.

Get in touch

If you would like to get in touch we would love to hear from you. Reach out to us via Twitter or directly via our contact page.

by John Taylor at March 17, 2020 07:00 PM


3 Reasons Why You Should Build Your Cloud On OpenStack

Has your business been thinking about building your cloud using OpenStack? When it comes time to start your open-source journey, your best solution is to trust an open-source cloud operating system. Especially one that can fulfill all the needs of your business or organization. The bigger and better the ecosystem, the more opportunities and choices you’ll have on your platform. Whether you’re looking to scale up with simplicity, utilize automation, take advantage of a unique community or partner with a cloud provider to reach your goals, there’s a lot to consider when you’re building a cloud environment.

Keep reading to learn precisely why OpenStack is the world’s most widely deployed open-source cloud infrastructure software and why you should consider building your cloud environment on OpenStack.

Scale With Simplicity

When you can adjust your computing capacity depending on the needs of your business, you have the right amount of scale and flexibility to succeed. Building your cloud with OpenStack allows you to scale with simplicity.

Perhaps you know you’re about to have an influx of data, you need to accommodate more customers, or you need to run several demanding calculations at one time. No matter the situation, you can ensure that you’re prepared. A virtual server can help you keep up with high demand and heavy workloads, and OpenStack helps with the process since it’s created for ultimate agility and flexibility. Your business can run ten instances or ten thousand instances. Thanks to OpenStack, it’s all possible.

Make Your Life Easier With Automation

With OpenStack, administration management is a breeze. OpenStack’s powerful tools allow for many tedious tasks to become automated, thus freeing up more time to be allotted to more pressing areas of the business.

APIs allow you to have complete control over your cloud through other programs. You can develop and deliver better applications faster while taking advantage of the software-defined infrastructure platform. Think of it as simplified development of specific apps alongside faster development overall, making it not only faster but cheaper as well.

Benefit From A Vibrant OpenStack Cloud Community

One of the biggest benefits of OpenStack is the unique community of users and developers that are a fundamental part of it. The OpenStack community goes so much further than just standard IT; it provides integral help to academic research, telecommunications, governments, and even the entertainment industry. Solutions and documentation are easily accessible through an OpenStack portal, and developers are constantly working to improve OpenStack and fix bugs whenever they occur.

From global summits to diverse groups within the community, the OpenStack community only continues to grow and improve the open-source cloud operating system.

Build Your Cloud Using OpenStack

OpenStack was created for everyone, but it’s not as simple as merely running OpenStack on your hardware. To optimize your cloud infrastructure, you need an expert on your side. VEXXHOST is here to help create an OpenStack cloud solution that works best for the needs of your business. A fully managed OpenStack cloud means that you’re leveraging the expertise of your vendor and ensuring the highest level of availability possible.

Did you know that the experts at VEXXHOST are OpenStack certified? For nearly the past decade we have been using and contributing upstream to OpenStack. Certainly, it’s safe to say we know OpenStack inside and out. We’re currently running the latest OpenStack release, Train, and have been since the day of its release.

We offer a fully managed solution so you can focus on the core competencies of your business while we take care of the rest. Convinced you should build your cloud on OpenStack? We’re here to help. Contact us today to learn more about our fully managed solution.

Would you like to know more about OpenStack Cloud? Download our white paper and get reading!

Conquer the Competition with OpenStack Cloud

The post 3 Reasons Why You Should Build Your Cloud On OpenStack appeared first on VEXXHOST.

by Angela Bruni at March 17, 2020 05:34 PM


Community Blog Round Up 17 March 2020

Oddbit writes two incredible articles – one about configuring passwordless consoles for raspberry pi and another about configuring open vswitch with nmcli while Carlos Camacho publishes Emilien Macchi’s deep dive demo on containerized deployment sans Paunch.

A passwordless serial console for your Raspberry Pi by oddbit

legendre on #raspbian asked:

How can i config rasp lite to open a shell on the serial uart on boot? Params are 1200-8-N-1 Dont want login running, just straight to sh

In this article, we’ll walk through one way of implementing this configuration.

Read more at https://blog.oddbit.com/post/2020-02-24-a-passwordless-serial-console/

TripleO deep dive session #14 (Containerized deployments without paunch) by Carlos Camacho

This is the 14th release of the TripleO “Deep Dive” sessions. Thanks to Emilien Macchi for this deep dive session about the status of the containerized deployment without Paunch.

Read more at https://www.anstack.com/blog/2020/02/18/tripleo-deep-dive-session-14.html

Configuring Open vSwitch with nmcli by oddbit

I recently acquired a managed switch for my home office in order to segment a few devices off onto their own isolated vlan. As part of this, I want to expose these vlans on my desktop using Open vSwitch (OVS), and I wanted to implement the configuration using NetworkManager rather than either relying on the legacy /etc/sysconfig/network-scripts scripts or rolling my own set of services. These are my notes in case I ever have to do this again.

Read more at https://blog.oddbit.com/post/2020-02-15-configuring-open-vswitch-with/

by Rain Leander at March 17, 2020 03:23 PM

March 16, 2020


Cloud Computing In The Finance Industry

The relationship between cloud computing and the financial industry benefits both immensely. Firstly, a substantial 60% of banks and financial institutions are using the power of the cloud as a tool to improve their current systems and operating models; in other words, 60% of financial institutions are investing in some form of cloud strategy. Banking is among the top three industries spending the most on public cloud services, and public cloud is set to become a dominant infrastructure model in the finance world after 2020. Moreover, this spending has been predicted to grow at a CAGR of 23% through 2020, measured over the preceding five years.

Most noteworthy, the majority of asset management CEOs believe that cloud computing will be strategically important to their organization, which is why many financial organizations are comfortable adopting it. We’ve compiled a few ways in which cloud computing is having a significant impact on the financial industry. From increasing resilience in security to improving scale, the financial industry can only profit from innovations in the cloud. Curious to learn more? Let’s dive straight in.

Resilient Security

When it comes to the financial industry, security is a huge priority. Confidential customer data, financial figures, and other classified information need to be protected, and ensuring that data protection is never compromised is essential for any financial institution. That is why many industries, including finance, are making the move to cloud computing in order to strengthen their security infrastructure. Traditional IT setups are vulnerable to simple cyber attacks, phishing emails, and the like. To fight the potential for data breaches, cloud computing offers a highly resilient security architecture, with critical security checks at regular intervals ensuring that all confidential data remains safe.

Flexibility, Scaling And Mobility

Regardless of which sector of the financial industry your business belongs to, be it retail banking, investment banking, insurance, or any other – cloud computing can be elastic enough to provide flexibility, scale and mobility to suit any of them.

Cloud computing can act as a base for several applications, meaning that the financial industry now has access to valuable tools that it otherwise wouldn’t. Banks are able to scale up or down depending on the needs of the business, so a financial company can quickly scale up during peak season without worrying about investing in hardware in advance. Even smaller banking institutions can increase efficiency while lowering costs through the public cloud’s pay-as-you-go options.

In terms of mobility, the power of the cloud provides secure banking for employees that may have to work remotely for a variety of reasons. Similarly, their smartphones, laptops and tablets become valuable tools for real-time monitoring and analysis. This is in addition to the benefit of access to company emails, business applications and CRM tools that are valuable assets to their daily duties.

Finally, the role of cloud computing in the financial industry is only going to grow in the upcoming years. Thinking about implementing a private cloud strategy in your business? Trust the experts at VEXXHOST to guide you through every step of the way. We have been using and contributing to OpenStack since 2011 and are OpenStack certified. Contact us today to learn more about how we can help get your cloud off the ground.

Would you like to know more about OpenStack Cloud? Download our white paper and get reading!

Conquer the Competition with OpenStack Cloud

The post Cloud Computing In The Finance Industry appeared first on VEXXHOST.

by Angela Bruni at March 16, 2020 07:23 PM


How to upload an OpenStack disk image to Glance

Glance is an image service that allows you to discover, register, retrieve, and even delete disk and server images. It is a fundamental part of managing images on OpenStack and TripleO (which stands for "OpenStack-On-OpenStack").
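As a quick preview, with the unified openstack client an upload boils down to something like the following (the image name and file are placeholders; the full article covers image formats and the TripleO specifics):

```shell
# Register a local QCOW2 disk image with Glance; names are illustrative,
# and valid cloud credentials must already be loaded in the environment.
openstack image create \
    --disk-format qcow2 \
    --container-format bare \
    --file ./my-image.qcow2 \
    my-image
```

The `--disk-format` and `--container-format` flags must match how the image file was built; qcow2/bare is the common case for KVM-based clouds.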

by jpatete at March 16, 2020 07:00 AM

March 15, 2020

Christopher Smart

Using network namespaces with veth to NAT guests with overlapping IPs

Sets of virtual machines are connected to virtual bridges (e.g. virbr0 and virbr1) and, as they are isolated, can use the same subnet range and set of IPs. However, NAT becomes a problem because the host won’t know which VM to return the traffic to.

To solve this problem, we can use network namespaces and some veth (virtual Ethernet) devices for each private network we want to NAT. The veth devices come as an interconnected pair of interfaces (think patch cable) and we will use two pairs for each network namespace: one which patches into the provider network and another which patches into the virtual machine’s private network.

By providing each private network with its own unique upstream routable IP and applying NAT rules inside each namespace separately, we can avoid any conflict.

Configuration for multiple namespace NAT

Create a provider bridge

You’ll need a bridge to a physical network, which will act as your upstream route (like a “provider” network).

ip link add name br0 type bridge
ip link set br0 up
ip link set eth0 up
ip link set eth0 master br0

Create namespace

We create our namespace to patch in the veth devices and hold the router and isolated NAT rules. As this is for the purpose of NATing multiple private networks, I’m making it sequential and calling this nat1 (for our first one, then I’ll call the next one nat2).

ip netns add nat1

First veth pair

Our first veth pair will be used to connect the namespace to the upstream bridge (br0). Give the interfaces names that make sense to you; here I’m making it sequential again and specifying the purpose. Thus, peer1-br0 will connect to the upstream br0 and peer1-gw1 will hold our routable IP in the namespace.

ip link add peer1-br0 type veth peer name peer1-gw1

Adding the veth to provider bridge

Now we need to add the peer1-br0 interface to the upstream provider bridge and bring it up. Note that we do not set an IP on this, it’s a patch lead. The IP will be on the other end in the namespace.

brctl addif br0 peer1-br0
ip link set peer1-br0 up

First gateway interface in namespace

Next we want to add the peer1-gw1 device to the namespace, give it an IP on the routable network, set the default gateway and bring the device up. Note that you could use DHCP here; I’m just setting the IP address and default gateway statically (substitute the addresses for your network).

ip link set peer1-gw1 netns nat1
ip netns exec nat1 ip addr add <routable-ip>/<prefix> dev peer1-gw1
ip netns exec nat1 ip link set peer1-gw1 up
ip netns exec nat1 ip route add default via <gateway-ip>

Second veth pair

Now we create the second veth pair to connect the namespace into the private network. For this example we’ll be connecting to virbr0 network, where our first set of VMs are running. Again, give them useful names.

ip link add peer1-virbr0 type veth peer name peer1-gw2

Adding the veth to private bridge

Now we need to add the peer1-virbr0 interface to the virbr0 private network bridge. Note that we do not set an IP on this, it’s a patch lead. The IP will be on the other end in the namespace.

brctl addif virbr0 peer1-virbr0
ip link set peer1-virbr0 up

Second gateway interface in namespace

Next we want to add the peer1-gw2 device to the namespace, give it an IP on the private network and bring the device up. I’m going to give it the IP that the VMs on the private network use as their default gateway.

ip link set peer1-gw2 netns nat1
ip netns exec nat1 ip addr add <private-gateway-ip>/<prefix> dev peer1-gw2
ip netns exec nat1 ip link set peer1-gw2 up

Enable NAT in the namespace

So now we have our namespace with patches into each bridge and IPs on each network. The final step is to enable network address translation.

ip netns exec nat1 iptables -t nat -A POSTROUTING -o peer1-gw1 -j MASQUERADE
ip netns exec nat1 iptables -A FORWARD -i peer1-gw1 -o peer1-gw2 -m state --state RELATED,ESTABLISHED -j ACCEPT
ip netns exec nat1 iptables -A FORWARD -i peer1-gw2 -o peer1-gw1 -j ACCEPT
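Depending on your distribution’s sysctl defaults, you may also need to enable packet forwarding inside the namespace so that traffic can pass between the two gateway interfaces (an extra step not shown in the list above):

```shell
# New network namespaces often start with forwarding disabled.
ip netns exec nat1 sysctl -w net.ipv4.ip_forward=1
```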

You can see the rules with standard iptables commands in the namespace.

ip netns exec nat1 iptables -t nat -L -n

Test it

OK, so logging onto the VMs, they should have a local IP, a default route, and upstream DNS set. Test that they can ping the gateway, that they can ping the DNS server, and that they can resolve and ping a DNS name on the Internet.

Rinse and repeat

This can be applied to other virtual machine networks as required. There is no longer any need for the VMs there to have unique IPs; they can overlap each other.

What you do need to do is create a new network namespace, create two new veth pairs (with useful names) and pick another IP on the routable network. The virtual machine gateway IP will be the same in each namespace.
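Since each additional network repeats the same steps with only the names changing, a small helper can generate the command list. A sketch (the function name is mine; it deliberately echoes rather than executes, so it needs no root and the output can be reviewed, and the site-specific IP, route, and FORWARD-rule commands still need to be added by hand):

```shell
# Print the setup commands for namespace "natN" patched into the given
# private bridge. Echoes only. IP assignment, routes, and the FORWARD
# rules are omitted because the addresses are site-specific.
gen_nat_ns() {
    n="$1"        # namespace index, e.g. 2
    privbr="$2"   # private bridge name, e.g. virbr1
    echo "ip netns add nat$n"
    echo "ip link add peer$n-br0 type veth peer name peer$n-gw1"
    echo "brctl addif br0 peer$n-br0"
    echo "ip link set peer$n-br0 up"
    echo "ip link set peer$n-gw1 netns nat$n"
    echo "ip link add peer$n-$privbr type veth peer name peer$n-gw2"
    echo "brctl addif $privbr peer$n-$privbr"
    echo "ip link set peer$n-$privbr up"
    echo "ip link set peer$n-gw2 netns nat$n"
    echo "ip netns exec nat$n iptables -t nat -A POSTROUTING -o peer$n-gw1 -j MASQUERADE"
}

gen_nat_ns 2 virbr1
```

Once the printed commands have been checked and the addresses filled in, the output can be piped to sh as root.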

by Chris at March 15, 2020 06:41 AM

March 13, 2020


4 (More) Reasons Why You Need OpenStack Consulting

In the last blog, we covered 5 reasons why you need OpenStack consulting. From increased productivity to simpler integration of new projects, there are many reasons why good guidance makes all the difference. Today we’re back again with another 4 reasons why you should seriously be considering OpenStack consulting services. Keep reading to learn about how our open-source consulting services can elevate your organization or enterprise.

Troubleshoot With Agility With OpenStack Consulting Services

If an incident arises you want to be sure that you have someone on your side. No one wants to find themselves scrambling for help when things seem off. Whenever a problem strikes you can alert the VEXXHOST team through our ticketing service. One of our OpenStack engineers will respond quickly and efficiently to provide you with guidance on your issue. This means that you’ll be able to restore your service quickly, which we know can make all the difference.

Get The Right Advice On Tools

When you work with an OpenStack consultant you trust, you benefit from good recommendations for the best software solutions for your business or organization. We take the time to guide our clients towards the right releases for their goals and discuss honestly which should be avoided. We also advise on the right compatibility between projects and tools so that your business can meet its strategic goals.

Learn Valuable Best Practices

Learning and keeping up with best practices is a simple way to ensure that your private cloud is secure and running as it should be. The experts at VEXXHOST are here to guide you to ensure that you are getting the most out of your OpenStack powered cloud. With our consulting services, we provide you with a Best Practice Review, which is an overview of your infrastructure. This means we carefully review your hardware and networking so that our team can answer any questions and guide you in the right direction.

Get The Right Advice For Updates

With the right OpenStack consulting you’ll know the best time to upgrade your OpenStack cloud to a newer release. There is a new OpenStack release every 6 months, and your OpenStack consultant will keep you up to date on what your cloud needs and the best way to implement the relevant updates. Ultimately, with the right consulting, you’ll never miss the opportunity to update again. Need a hand with updates? Our team of experts offers OpenStack upgrade solutions as well as OpenStack consulting services.

OpenStack consulting is about finding a solution that’s just for you, and OpenStack upgrades are about ensuring that your cloud is working at its best. Your enterprise or organization is one of a kind, and so your cloud solution should be too. Trust the experts at VEXXHOST, because we know OpenStack inside and out. We’ve been using and contributing upstream to OpenStack since 2011, so it’s safe to say open-source is a part of who we are. Have you had OpenStack consulting on your mind? Contact us today to learn more about our services and how our OpenStack consulting services can make a difference for your enterprise.

Would you like to know more about OpenStack Cloud? Download our white paper and get reading!

Conquer the Competition with OpenStack Cloud

The post 4 (More) Reasons Why You Need OpenStack Consulting appeared first on VEXXHOST.

by Angela Bruni at March 13, 2020 02:52 PM

March 12, 2020


5 Reasons Why You Need OpenStack Consulting

We at VEXXHOST know OpenStack inside and out. This is because we’ve been using and contributing upstream to OpenStack since 2011. Open-source technologies are a part of who we are. Trust us when we say that we can help you create and optimize your OpenStack powered cloud strategy. From upgrading to the latest release, Train, to starting Kubernetes, we’re here to advise you every step of the way.

Thinking about OpenStack consulting? We’ve compiled 5 reasons why your enterprise or organization needs OpenStack consulting and how the right guidance can elevate and optimize your business. Ready to get started? Let’s dive in!

Improve Your Infrastructure

Architecting and strategizing the best plan for your infrastructure is one of the many ways that OpenStack consulting can help make a noticeable difference in your bottom line. Improving your infrastructure is a recipe for enhanced efficiency where it matters most. Moreover, utilizing OpenStack’s projects and services with the right guidance can give your enterprise or organization better insights and help you implement the right plan for migration and storage solutions.

See Increased Productivity

If you’re looking for a seamless workflow you’ll need to spend some time focusing on your design and applications. The right OpenStack consultant will help you optimize your applications and infrastructure for maximum productivity and better performance. We are there for our clients throughout the consultation process: after reviewing your current architecture and the suitability of your applications, we help you determine which open-source cloud solution will help you optimize for growth.

Simpler Implementation Of New Projects

With the right OpenStack consulting, painful implementations are a thing of the past. If you have an OpenStack powered private cloud our experts can help you decide which OpenStack projects are best suited to your individual requirements. Whatever your needs, there is an OpenStack project to help get you there. We’re here as OpenStack consultants to help you achieve your goals with agility and flexibility.

No More Heavy Lifting

Having the right consultancy services means that you can experience increased efficiency and flexibility faster. It’s easy when you trust us to work out your individual requirements, then build, test and deploy your OpenStack powered cloud. It doesn’t end there, though. VEXXHOST keeps watch over the management and maintenance of your cloud environment. We also proactively monitor your cloud and generate reports at short, regular intervals, meaning that you can focus on what’s important to your brand or business while we take care of the rest. Ultimately, we want to revolutionize the delivery and operation of your cloud, making the experience as enjoyable as it is painless.

Getting Started With OpenStack Consulting

When you trust VEXXHOST with your OpenStack consulting needs you’ll never need to manage your cloud solution on your own. We will be actively by your side to guide you through the management and flow of your services and cloud environment. From creating strategies for success to implementing the right tools and services, we will review your needs and provide the best guidance possible.

No two clouds in the sky are the same, and the same can be said about your OpenStack powered cloud. With the right advice, you can elevate your cloud while reducing both time and resources.

Finally, whether you’re looking to upgrade to the latest OpenStack release, want to get started with an OpenStack powered cloud, or need some advice on your strategy, we’re here to help. Contact us today to learn more about how our OpenStack consulting services can make all the difference.

Would you like to know more about OpenStack Cloud? Download our white paper and get reading!

Conquer the Competition with OpenStack Cloud

The post 5 Reasons Why You Need OpenStack Consulting appeared first on VEXXHOST.

by Angela Bruni at March 12, 2020 03:53 PM

March 11, 2020


Which Industries Are Using OpenStack?

Industries such as financial services, education, and healthcare are all turning to OpenStack powered cloud solutions for a reason. Public and private cloud environments allow all industries to benefit from the dependability, agility, and scalability of OpenStack. Whether you’re thinking of starting your own FinTech enterprise or simply want to learn more, OpenStack has you covered.

Today we’re going to review some of the industries that are using OpenStack. Then take a look at how OpenStack is making an impact on their daily operations.

Financial Services

Financial services are looking towards OpenStack to stay abreast of current trends. Banks and other financial institutions are constantly working on new upgrades to their IT infrastructure for things such as online banking and mobile payments. Open-source is a lucrative opportunity for financial enterprises: OpenStack allows their IT infrastructure to be as agile as it is scalable, and it gives financial institutions access to a better platform to optimize their services and provide a better-quality banking experience.

Specialty financial services such as automated 24-hour support have never been easier thanks to OpenStack powered private clouds. Banks are able to modernize digitally, with a positive impact across the financial industry.

Educational Services and Education Management

More and more educational institutions are utilizing cloud technologies to improve their classrooms and overall management. From virtual classrooms and greater cultural exposure to expanded virtual training and lower software and hardware costs, there is a definite place for an OpenStack powered cloud solution in the field of education.

An OpenStack powered cloud enables a modern approach to learning. Cloud infrastructure allows students to use better course management tools, such as Moodle, an open-source software package created to help educators deliver effective online learning. Moreover, this gives students access to a variety of resources such as grades, discussion forums, and more. As a result, using OpenStack in the classroom does away with old-fashioned filing systems and brings in new ways of collaborative learning.

Healthcare Services

The healthcare industry is using OpenStack to help make a difference in the quality of life of patients around the world. Innovations in OpenStack powered clouds are having transformative effects on healthcare. From better control over the scalability of data to faster access to crucial medical records, the role of the cloud in healthcare is estimated to grow at a CAGR of 15% by 2025. This growth touches several facets of the healthcare system, from health services and clinics to pharmacies.

Being able to utilize the power of OpenStack during emergencies allows medical professionals to be better prepared. Doctors in training can receive real-time guidance through technology and can receive life-saving information when it matters most. Certainly, the healthcare industry has many reasons to embrace OpenStack. Open-source allows for better flexibility, access when it matters most and secure storage for confidential records.

OpenStack For Your Industry

Have you been thinking about adopting OpenStack in your industry? We’re here to help. No matter what industry your organization or enterprise is a part of, we can help you find a unique OpenStack powered cloud that works for your needs. Contact us today to learn more about why your industry should be using OpenStack and how we can get you there.

Would you like to know about Private Cloud and what it can do for you? Download our white paper and get reading!

Fighting Off Certain Death with OpenStack Private Cloud


The post Which Industries Are Using OpenStack? appeared first on VEXXHOST.

by Angela Bruni at March 11, 2020 07:27 PM

OpenStack Superuser

Women of Open Infrastructure – Reflections on Women’s History Month

I love data, but I also have to admit that data can feel cold and impersonal when we talk about societal issues and our personal experiences. Don’t get me wrong: data is straightforward and directs us to look at situations at a macro level, but what resonates with me the most are the stories and anecdotes that people tell and experience themselves. In honor of Women’s History Month, I want to share with you some of the personal stories that mean a lot to me and how they greatly influenced the woman I have become and the journey in tech I have pursued.

I was hired by the OpenStack Foundation as a marketing intern during my senior year in college. I remember that the OpenStack Summit Vancouver 2018 happened during my second week on the job. At the Vancouver Summit, my first technical conference, I packed my schedule back-to-back with numerous technical sessions, hoping to absorb as much information as possible despite coming from a “non-technical” world.

When I was rushing from one Summit session to the next, I passed by the screening of the Chasing Grace Project, a documentary series about women in tech. In one of her 22-minute episodes, Jennifer Cloer, the executive producer and director of the Chasing Grace Project, explores topics from the pay gap and online harassment to the decision to leave or stay in tech and the role of male allies, along with the economic and emotional toll the pay gap takes on women in tech and their desire to stay. My plan was only to stop by for a glimpse and take a short five-minute break before rushing to my next Summit session. Little did I know that this short “tech session” at the Vancouver Summit would inspire me to dive deeper into the topic of women in tech and to write my graduation thesis on how tech companies work to close gender disparity. Although I missed the next session I had planned to attend, Jennifer’s passion heavily influenced me as a college student who wanted to pursue a career in tech.

Learning should not stop after college graduation, and neither should my drive to get to know more successful women in the tech industry who empower others in many ways. At my last Open Infrastructure Summit with the OpenStack Foundation, in Shanghai in November 2019, Jonathan Bryce, the Executive Director of the OpenStack Foundation, introduced me to an Intel intern, Joy Liu, and her mentor, Ruoyu Ying, an Intel engineer working on OpenStack. Joy is 17 years old and a computer science enthusiast at her high school. She not only received travel support to come to the Open Infrastructure Summit Shanghai but also delivered a session with her mentor on how an edge framework can help with object recognition.

Although we only had a short talk before their session at the marketplace theater, I was impressed by her technical knowledge at such a young age. Her unstoppable passion for tech and the open source community is contagious. It was such an honor to be able to connect with these young aspiring women at the Summit. Even though I’m not a technical person, Joy’s enthusiasm for edge computing, AI, and computer programming inspired me to explore more areas that I’m not familiar with. It has been almost five months since I met Joy and Ruoyu, but that strong female energy has resonated with me ever since.

One of the best parts about my job is working with strong and successful women who don’t hold back their opinions and strive to be the best version of themselves in the industry. It’s difficult to imagine yourself achieving something if you haven’t seen someone else do it. I’m very grateful to be involved in this inclusive and diverse open source industry in which everyone is welcome to participate. Just like Ruoyu said, “although this is a male-dominated industry, this is still an open space for everyone to join in. People are going to respect and listen to your ideas no matter what gender you are, as long as you are good in a certain field.” This Women’s History Month, I encourage you to keep striving for macro-level change for women in tech while also making a small impact every day in addressing all forms of inequality that we witness and experience. This Women’s History Month is for you.

Let us celebrate with you!

We are collecting stories from women about their experiences in tech and highlighting their impressive community contributions. Please reach out if you are interested in participating in a short Q+A or would like to nominate someone to tell their story. Email sunny@openstack.org; we look forward to hearing from you and from the fearless women you know!

The post Women of Open Infrastructure – Reflections on Women’s History Month appeared first on Superuser.

by Sunny Cai at March 11, 2020 03:00 PM

March 10, 2020


How OpenStack Is Optimizing For Growth In 2020

OpenStack is one of the fastest-growing open-source communities in the world. It’s forecast to have a global market revenue of $5.63 billion USD in 2020 and $6.73 billion USD in 2021. These are no small figures. Moreover, OpenStack is one of the three most active open-source projects in the world; alongside the Linux kernel and Chromium, it continues to be among the most influential open-source cloud software available today. It’s evident that OpenStack has a mindset for agile yet steady growth.

We’re here to break down some of the ways that OpenStack is optimizing for growth in 2020. From strengthening already tight community bonds to trailblazing with leadership, we’re excited to see what innovations come from OpenStack in 2020.

Communication & Community

Communication is crucial, and OpenStack is experiencing major growth, not just in terms of the projected global market revenue increase but also in terms of new developers and users joining OpenStack. The community is getting bigger, and with that come more voices to keep in consideration. Between OpenStack itself, the OpenStack Foundation, board members, the wider community and more, there are a lot of opinions to consider.

One way in which OpenStack stays ahead of the game is to be transparent as an organization. By keeping in mind the diverse backgrounds and opinions of their vibrant community they are working towards being scalable but also inclusive.


That being said, the leadership of OpenStack is definitely a factor in growth optimization for the open-source project in 2020. They know that the project is growing and that with this growth come new challenges for them, the growing user base and the community. The inclusivity of OpenStack and its emphasis on diversity mean that anyone can rise to a leadership role. They are currently working on creating clear paths for new leaders and on streamlining their processes. Keeping in mind obstacles such as time zones can help open OpenStack to new leaders in countries like China, which has so much to offer the community. Above all, it doesn’t matter whether you’re in Canada, China, or Brazil; there’s a place for you to grow in OpenStack.

The efforts made by the leadership at OpenStack in terms of diversity and inclusivity are good signs and will help nurture growth in 2020.


Finally, with every new release, OpenStack continues to improve priorities for users and developers. When OpenStack released Train, the twentieth version of the world’s most deployed open-source infrastructure software, there were some major enhancements. The release put ease of use, reliability, and security front and center, finding advancements in Artificial Intelligence (AI) and Machine Learning (ML) use cases while enhancing security and improving resource management. This magnum opus took more than 1,100 contributors from more than 50 countries and 160 organizations. Collaboration and innovation are at the very core of OpenStack.

Certainly, it’s evident that OpenStack is going places. We at VEXXHOST are proud to offer OpenStack powered public and private clouds to suit the individual needs of your organization or enterprise. We also offer enterprise-grade, fully managed solutions to our clients based on their strategic goals. Think we should work together to make your OpenStack goals a reality? Contact us today to start optimizing your own business for growth with OpenStack.

Would you like to know more about OpenStack Cloud? Download our white paper and get reading!

Conquer the Competition with OpenStack Cloud

The post How OpenStack Is Optimizing For Growth In 2020 appeared first on VEXXHOST.

by Angela Bruni at March 10, 2020 06:00 PM

Fleio Blog

Fleio 2020.03: Timed Openstack pricing rules, automatically set VAT exempt, process clients cron information in dashboard and more

Fleio version 2020.03 is now available! The latest version was published today, 2020-03-10. Timed Openstack pricing rules With 2020.03 we have implemented start and end date for Openstack pricing rules. This will allow you to configure certain prices for different resources, within a desired time span. Mostly, this is helpful when you want to update […]

by Marian Chelmus at March 10, 2020 12:07 PM

March 06, 2020


Why No Business Is Too Small For OpenStack

If you think your business is too small for OpenStack then it’s time to think again. There are many benefits for small businesses or organizations that adopt an OpenStack cloud. OpenStack allows you to create a scalable cloud that can enable you to think big, start small, and grow fast. It can feel intimidating to get started with OpenStack but we are here to help you through every step of the way.

In this blog, we are going to explain why no business is too small for OpenStack. No matter what size your enterprise is, you can utilize OpenStack as your cloud provider. You can rest assured that OpenStack is a solid foundation for your cloud solutions.

The Value Of The OpenStack Powered Cloud

Did you know that OpenStack is the leading open-source option for building public cloud environments? Not only that, but VEXXHOST runs the largest OpenStack public cloud in Canada. When it comes to a scalable solution, it’s important to see the value of the cloud.

Some start-ups opt for a public cloud that can be rented by the hour with a higher utilization rate. Starting out with a public cloud is an economical option for small enterprises, as large instances are available at lower costs, and there’s room to scale up or down as funding and the needs of the business change. In contrast, enterprises with more stable workloads that are scaling up operations rapidly might reach a break-even point beyond which a private cloud solution is more cost-effective. The agility and customizability of the private cloud prevent data accumulation and cloud sprawl, which often has significant cost-saving impacts for large-scale businesses.

At the end of the day, small businesses have very different requirements than large enterprises. Despite some limitations, they also need room to grow. OpenStack is appealing because it offers both public and private cloud solutions for organizations starting out to well-established businesses. If you’re unsure about what open-source cloud might be right for you, contact our team of experts to help determine your requirements.

Think Big, Start Small, Grow Fast

Think big, start small, and grow fast is a worthy mantra for small businesses that want to adopt an OpenStack cloud. In today’s quick-moving business climate, everyone is looking to their IT department for growth against a highly competitive market. In order to survive, businesses need to be able to adapt and transform to become more agile and innovative. This is where OpenStack comes in.

As we mentioned before, OpenStack is dedicated to creating software that is scalable and usable. It’s up to businesses to determine their requirements and work from there. Having a clear idea of workloads is a good place to start. It’s important to give some thought to the use case and plan accordingly for them. This way your business will be prepared for deployment, management, and the short and long term benefits of working with OpenStack.

Another bonus of OpenStack for small businesses is that there is no vendor lock-in, meaning that you can migrate between a public and private cloud, or expand into a multi-cloud ecosystem, with ease. Avoiding proprietary clouds protects users from getting locked into a single platform. When you're looking for growth, that's the last thing that you need.

We at VEXXHOST have been using and contributing to upstream open source technologies since 2011. We are proud to say that we know OpenStack inside out. With that experience, we can help architect and optimize your cloud strategy with OpenStack powered infrastructure, running the latest release, Train, and Kubernetes. Need some help with requirements for your small, medium, or large business? We have you covered. Contact one of our experts today to see how we can make your OpenStack cloud a reality.

Would you like to know more about private clouds and what they can do for you? Download our white paper and get reading!

Fighting Off Certain Death with OpenStack Private Cloud

The post Why No Business Is Too Small For OpenStack appeared first on VEXXHOST.

by Angela Bruni at March 06, 2020 07:24 PM

Rackspace Developer Blog

Announcing the launch of OpenStack Train v20

Rackspace is launching OpenStack® Train v20, the latest open-source software release from the OpenStack-Ansible project.

This release allows you to deploy or upgrade the Rackspace Private Cloud powered by OpenStack portfolio to a community-supported version of OpenStack. It addresses backports, bug fixes, and security vulnerabilities.

As a co-founder of OpenStack, Rackspace remains integral to the overall trajectory and consistency of this open-source technology through our community involvement. Support for OpenStack Train v20 is part of our effort to provide the most current platforms and technologies for our customers.

OpenStack Train v20 provides enhanced capabilities for core services, including enhanced security and data protection, new artificial intelligence (AI) and machine learning support, and improved resource management and tracking. These enhancements enable you to modernize your experience, infrastructure, and applications.


OpenStack Train v20 includes enhancements to the following OpenStack services:

  • Identity (Keystone)

    -  Improves integration with WebSSO
    -  Deprecates Keystone policies
  • Image (Glance)

    -  Improves integration with Cinder and Barbican
  • Block Storage (Cinder):

    - Adds option to control Glance image compression and improve
      hardware-based compression
    - Improves Pure Storage, Dell EMC, and NetApp drivers
  • Compute (Nova):

    - Adds live migration support for hosts with NUMA topologies, pinned
      CPUs, huge pages, and SR-IOV
    - Adds framework to support AMD Secure Encrypted Virtualization (SEV)
    - Adds support for VPMEM to provide data persistence for guest RAM
      across reboots
  • Networking (Neutron):

    - Adds support for SmartNIC in ML2/OVS driver
    - Enables ability to update segmentation ID on OVS-bound ports
    - Adds support for L3 conntrack helper
    - Improves notification with Baremetal service (Ironic)
    - Adds support for OVS DPDK representor ports
  • Orchestration (Heat):

    - Improves support for CoreOS Ignition config
    - Improves support for QoS rules with the direction property
  • Dashboard (Horizon):

    - Allows users to change a password upon first use
    - Enables Cinder backup of snapshots
    - Adds experimental support of Django 2.2
    - Adds support for multi-attachment of volumes
  • Baremetal Service (Ironic):

    - Adds support for building software RAID
    - Adds support of Dell iDRAC WS-MAN RAID interface
    - Adds support of BIOS interface of ilo, ilo5, redfish hardware types
    - Improves iPXE integration
    - Improves support for HTTP Proxy Headers
  • DNS as Service (Designate):

    - Improves PowerDNS pool management during AXFR requests

Learn more about Rackspace OpenStack Private Cloud

Visit www.rackspace.com and click Sales Chat to get started.

Use the Feedback tab to make any comments or ask questions.

March 06, 2020 12:01 AM

March 05, 2020


What Are The Four Opens Of OpenStack?

The Four Opens, otherwise known as ‘The OpenStack Way’ are a set of fundamental guidelines that were created by the OpenStack community for the OpenStack community. They are in place to ensure that users receive all the benefits associated with open source software, all the while engaging with the cloud computing community to contribute to innovations in the future of OpenStack.

Since the inception of these four principles, OpenStack has grown into one of the three most active open source projects in the world. Alongside the Linux kernel and Chromium, OpenStack continues to be one of the most influential open-source cloud platforms available today.

The vibrant OpenStack community has taken on The Four Opens as guiding principles. Today we're going to break down what The Four Opens are and what they mean for the global community of OpenStack.

Open Source

The OpenStack Foundation commits to creating open-source software that is both usable and scalable. What does this mean? OpenStack does not produce "open core" software, but rather fully functional software whose capabilities are not artificially limited.

It's crucial that any software developed under The Four Opens is released under an open-source license. This ensures that users can study a program, make changes to improve it and, once those changes are approved, redistribute the original or modified version. This is important because it allows other users in the community to benefit from the work. A community that works together creates a stronger body of work.

Open Community

A community means a feeling of connection with others as the result of sharing common attitudes, interests, and passions. The OpenStack community is no different. One of OpenStack's core goals is maintaining a healthy and vibrant user and developer community. They work to ensure that the community is as inclusive as it is diverse. It's important that anyone can rise to a leadership position with the right skills and work ethic.

It's crucial to establish common goals in the Open Community so that this principle creates strong connections between users and developers. It makes for better open-source software and harmony amongst the varied individuals within the community. It's as complex as it is dynamic. The community is what has made OpenStack what it is today.

Open Development

OpenStack does public code reviews and has public roadmaps. This makes participation transparent as well as simpler, allowing users to follow the development process and participate from the earliest stage possible. Transparent and inclusive development enables everyone to participate on equal footing. Since the development services are publicly accessible, everyone is able to review development activities without needing to sign up for a service. This keeps the community accountable and open to new developments. These high standards in Open Development mean that the process of evaluating contributions is egalitarian and fair.

Open Design

OpenStack is deeply committed to an open design process. Open Design enables a transparent and open process for planning and designing software. It is not about one individual designing software and its feature road-map; it's about accepting that OpenStack is community-driven, and so should its software design be.

When something as important as design comes to the table, it's difficult not to have an opinion. Despite the difficulties, a slower, longer-term process is better, especially when working with a larger group. The community works together to agree on a design, which leads to a better product that benefits all users of OpenStack.

We at VEXXHOST are enthusiastic members of the OpenStack Foundation. We are on a mission to help you better understand OpenStack. Want to learn more about a private cloud solution for your business or organization? Contact us today to learn more about how an OpenStack powered cloud can elevate your business. The sky is the limit!

The post What Are The Four Opens Of OpenStack? appeared first on VEXXHOST.

by Angela Bruni at March 05, 2020 09:31 PM

StackHPC Team Blog

Scaling up: Monasca Performance Improvements

Monasca project mascot

At StackHPC, we use Kolla-Ansible to deploy Monasca, a multi-tenant monitoring-as-a-service solution that integrates with OpenStack and lets users deploy InfluxDB as a time-series database. As this database fills up over time under an unbounded retention period, it is not surprising that its response time differs from when it was initially deployed. Long-term operation of Monasca by our clients in production has required a proactive approach to keep the monitoring and logging services running optimally. In particular, the problems we have seen are related to query performance, which has been directly affecting our customers and other Monasca users. In this article, we tell the story of how we overcame these issues and introduce an opt-in database-per-tenant capability we pushed upstream into Monasca, for the benefit of our customers and the wider OpenStack community who may be dealing with similar challenges of monitoring at scale.

The Challenges of Monitoring at Scale

Our journey starts at a point where the following disparate issues (but related at the same time in the sense that they are all symptoms of a growing OpenStack deployment) were brought to our attention:

  • When a user views collected metrics on a Monasca Grafana dashboard (which uses Monasca as the data source), it first aims to dynamically obtain a list of host names. This query was not respecting the time boundary that can be selected on the dashboard and instead was scanning results from the entire database. Naturally, this went unnoticed while the database was small, but as the cardinality of the collected metrics grew over time (345 million at its peak on one site - that is, 345 million unique time series), this query was taking up to an hour before eventually timing out. In the meantime, it would block resources needed by additional queries.
  • A user from a new OpenStack project would experience the same delay in query time against the Monasca API as a user from another project with a much larger metrics repository. This is because Monasca currently implements a single InfluxDB database by default and project scoped metrics are filtered using a WHERE statement. This was a clear bottleneck.
  • Last but not least, all metrics being gathered were subject to the same retention policy. InfluxDB has support for multiple retention policies per database. To keep things further isolated, it is also possible to have a database per tenant, each with its own default retention policy. Not only does this increase the portability of projects, it also removes the overhead of filtering results by project each time a query is executed, naturally improving performance.
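The single-database bottleneck in the second point can be pictured with a toy model; all names below are our own illustrations, not Monasca's actual schema or API:

```python
# Toy model of the two layouts: one shared series list filtered per
# query versus one list per tenant.
shared = [("proj_a", "cpu.idle"), ("proj_b", "cpu.idle"), ("proj_b", "mem.free")]

def query_shared(project):
    # Single database: every query scans all series and filters with a
    # WHERE-style predicate, so cost grows with total cardinality.
    return [metric for proj, metric in shared if proj == project]

per_tenant = {"proj_a": ["cpu.idle"], "proj_b": ["cpu.idle", "mem.free"]}

def query_per_tenant(project):
    # Database per tenant: one lookup, independent of other projects' data.
    return per_tenant[project]
```

Both lookups return the same series, but only the per-tenant one stays cheap as other projects' data grows.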

To address these issues, we implemented the following quick fixes, and while they alleviate the symptoms in the short term, we would not consider either of them sustainable or scalable solutions as they will soon require further manual intervention:

  • Disabling dynamic host name lookup by providing a static inventory of host names (which could be automated at deploy time for static projects). However, for dynamic inventories, this approach relies on manual update of the inventory.
  • Deleting metrics with highly variable dimensions, which contribute disproportionately to increasing the database cardinality (larger cardinality leads to increased query time for InfluxDB, although other time-series databases, e.g. TimescaleDB, claim not to be affected in a similar way). Many metric sources expose metrics with highly variable dimensions, and avoiding this is an intrinsically hard problem, not one confined to Monasca. For example, sources like cAdvisor expose a lot of such metrics by default, and one has to be judicious about which metrics to scrape. In our Kolla-Ansible based deployment, the low-hanging fruit was mostly metrics matching the regex pattern log.*, originating from the OpenStack control plane, which are useful for triggering alarms and, over a finite time horizon, for auditing. However, since all data is currently stored under the same database and retention policy (the Monasca API currently has no way of setting per-project retention policies), it is not possible to define a project-specific data expiry date. For example, we were able to reduce 345 million unique time series down to a mere 227 thousand, 0.07% of the original, by deleting these log metrics (deleting at a rate of 7 million series per hour for a total of 49 hours). Similarly, at another site, we were able to cut down from 2 million series to 186 thousand, 9% of the original (deleting at a rate of 29 thousand series per hour for 77 hours). In both cases, we managed to cut the query time significantly, from a state where queries were timing out down to a few seconds. However, employing database per tenancy with fine control over the retention period remained the holy grail for delivering sustained performance.
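As a rough illustration of the pruning step (the helper and the series-key format are our assumptions, not Monasca's API), selecting the log.* series to drop amounts to a regex filter over the series keys:

```python
import re

# Hypothetical helper: given series keys (as listed by InfluxDB's
# SHOW SERIES), select those whose metric name matches log.* so they
# can be dropped in rate-limited batches.
LOG_SERIES = re.compile(r"^log\..+")

def series_to_drop(series_keys):
    return [key for key in series_keys if LOG_SERIES.match(key)]

keys = [
    "log.error,hostname=ctl0,service=monasca-api",
    "log.warning,hostname=ctl1,service=monasca-api",
    "cpu.user_perc,hostname=cmp0",
]
print(series_to_drop(keys))  # the two log.* keys; cpu.user_perc survives
```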

Towards Greater Performance

Our multi-pronged approach to make the monitoring stack more performant and resilient can be summarised in the following ways:

  • The first part of our effort to improve the situation is the introduction of a database-per-tenancy feature to Monasca. The enabling patches affecting the monasca-{api,persister} projects have now merged upstream and are available from the OpenStack Train release. This paves the way for using an instance of InfluxDB per tenant to further decouple the database back-end between tenants. In summary, these changes enable end users to:
    • Enable a database per tenant within a single InfluxDB instance on an opt-in basis by setting db_per_tenant to True in the monasca-{api,persister} configuration files.
    • Set a default retention policy by defining default_retention_hours in the monasca-persister configuration file. Further development of this thread would involve giving project owners the ability to set the retention policy of their tenancy via the API.
    • Migrate an existing monolithic database to a database per tenant model using an efficient migration tool we proudly upstreamed.
  • We also introduced experimental changes to limit the search results to the query time window selected on the Grafana dashboard. The required changes, spanning several projects (monasca-{api,grafana-datasource,tempest-plugin}), have all merged upstream and are also available from the OpenStack Train release. Since the only option previously was to search the entire database, queries targeting large databases were timing out, which can now be avoided. The only caveat with this approach is that the results are approximate, i.e., the accuracy of the returned result is determined by the length of the shardGroupDuration, which resolves to 1 week by default when the retention policy is infinite, and to 1 day when the retention policy is 2 weeks. Considering that the earlier behaviour was to scan the entire database, this approach yields a considerable improvement, despite a minor loss in precision.
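The approximation caveat can be sketched as follows, using InfluxDB 1.x's documented shard-group defaults as we understand them (the function names are ours, purely illustrative):

```python
from datetime import timedelta

def default_shard_group_duration(retention):
    # InfluxDB 1.x defaults; None here models an infinite retention policy.
    if retention is None or retention >= timedelta(days=180):
        return timedelta(weeks=1)
    if retention >= timedelta(days=2):
        return timedelta(days=1)
    return timedelta(hours=1)

def effective_window_start(start_offset, shard_duration):
    # A query boundary is only resolved at shard-group granularity, so it
    # is effectively rounded down to a shard boundary - hence the
    # approximate results the text describes.
    return shard_duration * (start_offset // shard_duration)
```

For an infinite retention policy the shard group spans a week, so a time boundary 10 days back effectively resolves to the 7-day shard boundary.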

These additional features have allowed us to further reduce the query time to less than a second in a large, 100+ node deployment with a 1-year retention policy, a dramatic improvement compared to queries without any time boundary, where our users were frequently hitting query timeouts. Additionally, we have facilitated a more sustainable way to manage the life-cycle of data being generated and consumed by different tenants. For example, this allows the tenancy for the control plane logs to have a short retention duration.

A Well-Rehearsed Migration Strategy

Existing production environments hoping to reap the benefits of the capabilities we have discussed so far may also wish to migrate their existing monolithic database to a database-per-tenant model. A good migration tool requires a great migration strategy. In order to ensure minimal disruption for our customers, we rehearsed the following migration strategy in a pre-production environment before applying the changes in production.

First of all, carry out a migration of the current snapshot of the database up to a desired --migrate-end-time-offset, e.g. 52 weeks into the past. Much like a virtual machine migration, we start by syncing the majority of the data across, which requires free disk space at least equivalent to the current size of the database. The following example is relevant to Kolla-Ansible based deployments:

# Inside the persister container, upgrade monasca-persister to obtain the tool
docker exec -it -u root monasca_persister bash
source /var/lib/kolla/venv/bin/activate
pip install -U monasca-persister
# From the host, run the migration with per-project retention (in weeks)
docker exec -it -u root monasca_persister python /var/lib/kolla/venv/lib/python2.7/site-packages/monasca_persister/tools/influxdb/db-per-tenant/migrate-to-db-per-tenant.py \
--config-file /etc/monasca/persister.conf \
--migrate-retention-policy project_1:2,project_2:12,project_3:52 \
--migrate-skip-regex ^log\\..+ \
--migrate-time-unit w \
--migrate-start-time-offset 0 \
--migrate-end-time-offset 52

The initial migration is likely to take some time depending on the amount of data being migrated and the type of disk under the hood. While this is happening, the monasca_persister container is still inserting new metrics into the original database, which will need re-syncing after the initial migration is complete. Take note of the length of time this phase of the migration takes, as this will determine the portion of the database that will need to be re-migrated. You will be able to see that a new database with a project-specific retention policy of 2w has been created for project_1:

docker exec -it influxdb influx -host -database monasca_project_1 -execute "SHOW RETENTION POLICIES"

name    duration shardGroupDuration replicaN default
----    -------- ------------------ -------- -------
2w      336h0m0s 24h0m0s            1        true

Once the initial migration is complete, stop the monasca_persister container and confirm that it has stopped. For deployments with multiple controllers, you will need to ensure this is the case on all nodes.

docker stop monasca_persister
docker ps | grep monasca_persister

Once the persister has stopped, nothing new is written to the original database, while any new entries are buffered on Kafka topics. It is a good idea to back up this database at this point, for which InfluxDB provides a handy command-line interface:

docker exec -it influxdb influxd backup -portable /var/lib/influxdb/backup

Upgrade the Monasca containers to the OpenStack Train release with the database-per-tenancy features. For example, Kayobe/Kolla-Ansible users can run the following Kayobe CLI command, which also ensures that the new versions of the monasca_persister containers are back up and running on all the controllers, writing entries to a database per tenant:

kayobe overcloud service reconfigure -kt monasca

Populate the new databases with the missing database entries (the minimum is 1 unit of time). InfluxDB automatically prevents duplicate entries, therefore it is not a problem if there is an overlap in the migration window. In the following command, we assume that the original migration took less than a week to complete and therefore set --migrate-end-time-offset to 1:

docker exec -it -u root monasca_persister python /var/lib/kolla/venv/lib/python2.7/site-packages/monasca_persister/tools/influxdb/db-per-tenant/migrate-to-db-per-tenant.py \
--config-file /etc/monasca/persister.conf \
--migrate-retention-policy project_1:2,project_2:12,project_3:52 \
--migrate-skip-regex ^log\\..+ \
--migrate-time-unit w \
--migrate-start-time-offset 0 \
--migrate-end-time-offset 1


This development work was generously funded by Verne Global who are already using the optimised capabilities to provide enhanced services for hpcDIRECT users.

Contact Us

If you would like to get in touch we would love to hear from you. Reach out to us on Twitter or directly via our contact page.

by Bharat Kunwar at March 05, 2020 06:00 PM


How to build a simple edge cloud: Q&A

Last week we held a webinar explaining the basics behind creating edge clouds, including a live demo, but we didn't have enough time for all of the questions. So as is our tradition, here are the Q&As, including those we didn't get to on the call.

by Nick Chase at March 05, 2020 02:46 PM

March 03, 2020


What Should You Look For In An OpenStack Provider?

It’s time to talk about the importance of the right OpenStack provider. We don’t need to tell you that implementing OpenStack is a big deal. Running applications and storing data in a cloud model is a great way to increase flexibility within your business or organization. This, in turn, can help your business thrive all the while saving on IT and operational costs.

OpenStack is an open-source technology that enables a set of tools to build and manage a cloud computing platform. Taking advantage of OpenStack’s architecture can help deploy your organization’s cloud quickly and efficiently.

When making the decision to implement OpenStack you want to be sure that you’re making the right decisions. You want a vendor that is OpenStack certified and that will support your unique needs during deployment and beyond. You also want to be sure that your OpenStack solution is as flexible as you need, from upgrades to minor changes, you want a vendor that is there for you.

At the end of the day, the advantages of OpenStack are evident. When your business or organization is ready to deploy OpenStack you have to make sure that you have an OpenStack provider that you can trust. We’re here today to break down what you should look for in an OpenStack vendor and give you a few tips and tricks along the way. Let’s dive in.

Is Your OpenStack Provider Certified?

When your OpenStack provider is certified, it means that their cloud service contains OpenStack software that has been validated through testing to provide API compatibility for OpenStack core services. The provider's product must pass specific tests to ensure that they are providing an up-to-date and complete version of the software. This secures software compatibility.

We at VEXXHOST are OpenStack certified. Not only that, but we have contributed to and used OpenStack since its second release, Bexar, in 2011.

How Will They Help Support You Through OpenStack Deployment?

Before you start with the deployment of OpenStack your organization or business needs to ask themselves some questions. What sort of support model are you looking for? Is your IT department able to continue to run the cloud after deployment? What about upgrades? Should you train an in-house team? It’s easy to get carried away. Take the time to discuss with your potential provider what support models they offer during and after deployment. This will ensure that you have everything that you need to run and maintain OpenStack.

Thinking of deploying an OpenStack powered private cloud? Our private cloud offerings have better delivery and operations processes, with the best efficiency possible, meaning that your company's infrastructure performs better and learns faster. From gathering requirements, architecting, testing, deploying and even the management and maintenance of your OpenStack private cloud, we have you covered. Leverage our experience and expertise to help your business grow with OpenStack.

How Much Flexibility Can Be Built Into Your OpenStack Solution?

You need to make sure your OpenStack provider is flexible. When you choose the right vendor, it's important that they are willing to work with multiple OpenStack distributions. They need to be able to accommodate the hardware, strategy and OpenStack projects that will benefit your cloud solution best. You're not a one-size-fits-all kind of enterprise, so why should your infrastructure be? The bottom line is, whatever it is you need, you need to make sure your vendor is flexible enough to work with you to provide it.

The team at VEXXHOST is here to help you create a one of a kind OpenStack solution. From helping you determine your requirements, to choosing the right OpenStack projects, we are there for you every step of the way.

Thinking of taking the leap in implementing OpenStack? You don’t have to do it alone. Trust the experts at VEXXHOST to help guide you. Whether you’re looking for an OpenStack provider or to upgrade your current infrastructure, contact us today to learn more about how we can get you there.

Would you like to know more about OpenStack clouds? Download our white paper and get reading!

Conquer the Competition with OpenStack Cloud

The post What Should You Look For In An OpenStack Provider? appeared first on VEXXHOST.

by Angela Bruni at March 03, 2020 08:51 PM

Christopher Smart

Using Ansible to define and manage KVM guests and networks with YAML inventories

I wanted a way to quickly spin different VMs up and down on my KVM dev box, to help with testing things like OpenStack, Swift, Ceph and Kubernetes. Some of my requirements were as follows:

  • Define everything in a markup language, like YAML
  • Manage VMs (define, stop, start, destroy and undefine) and apply settings as a group or individually
  • Support different settings for each VM, like disks, memory, CPU, etc
  • Support multiple drives and types, including Virtio, SCSI, SATA and NVMe
  • Create users and set root passwords
  • Manage networks (create, delete) and which VMs go on them
  • Mix and match Linux distros and releases
  • Use existing cloud images from distros
  • Manage access to the VMs including DNS/hosts resolution and SSH keys
  • Have a good set of defaults so it would work out of the box
  • Potentially support other architectures (like ppc64le or arm)

So I hacked together an Ansible role and example playbook. Setting guest states to running, shutdown, destroyed or undefined (to delete and clean up) is supported. It will also manage multiple libvirt networks, and guests can have different specs as well as multiple disks of different types (SCSI, SATA, Virtio, NVMe). With Ansible's --limit option, any individual guest, a hostgroup of guests, or even a mix can be managed.

Managing KVM guests with Ansible

Although Terraform with libvirt support is potentially a good solution, by using Ansible I can use that same inventory to further manage the guests, and I've also been able to configure the KVM host itself. All that's really needed is a Linux host capable of running KVM, some guest images and a basic inventory. Ansible will do the rest (on supported distros).

The README is quite detailed, so I won’t repeat all of that here. The sample playbook comes with some example inventories, such as this simple one for spinning up three CentOS hosts (and using defaults).

      ansible_python_interpreter: /usr/bin/python
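A minimal inventory in that spirit might look like the following sketch. The group name, guest names, and the virt_infra_* variable are our illustrative assumptions about the role's interface, not copied from its actual sample:

```yaml
# Illustrative only: three CentOS guests in a "simple" group, relying on
# the role's defaults; the image matches the one downloaded below.
simple:
  hosts:
    centos-7-simple-1:
    centos-7-simple-2:
    centos-7-simple-3:
  vars:
    virt_infra_distro_image: CentOS-7-x86_64-GenericCloud.qcow2
    ansible_python_interpreter: /usr/bin/python
```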

This can be executed like so.

curl -O https://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2
sudo mv -iv CentOS-7-x86_64-GenericCloud.qcow2 /var/lib/libvirt/images/

git clone --recursive https://github.com/csmart/virt-infra-ansible.git
cd virt-infra-ansible

ansible-playbook --limit kvmhost,simple ./virt-infra.yml

There is also a more detailed example inventory that uses multiple distros and custom settings for the guests.

So far this has been very handy!

by Chris at March 03, 2020 10:42 AM

OpenStack Superuser

StarlingX community interview: how StarlingX shines in the starry sky of open-source projects in China

StarlingX is a pilot project supported by the OpenStack Foundation that was announced in May 2018. The project integrates well-known open source projects including OpenStack, Ceph, Kubernetes, and more to create a cloud platform that is fine-tuned for edge and IoT use cases.

The StarlingX community recently participated in the 10th China Open Source Hackathon and won the China Excellent Open Source Project Award at the 9th China Cloud Computing Standards and Application Conference in 2019. Last year, the project saw 3,359 changes committed by 147 authors from more than 10 organizations.

Superuser got a chance to interview a StarlingX community member in China, Austin Sun from Intel. He talked about how the StarlingX community is doing in China, the obstacles they are facing, the best practices they use to overcome the challenges, and what they are looking forward to in the StarlingX 4.0 release in 2020. Without further ado, let’s dive into it. 

How have you participated in or contributed to the open infrastructure community?

I am a developer on StarlingX projects, so my major work is developing new features and fixing bugs. In the meantime, I am also taking the role of project lead on the StarlingX Distro project team, mainly coordinating work like project meetings, task prioritization, and issue tracking. To promote StarlingX, I’ve participated in various events sponsored by the OpenStack Foundation, such as PTG, meet-up, hackathon, Summits, and so on.

How did you get started working with StarlingX?

In the past years of my career, I had mainly worked on closed-source products or projects. However, in May 2018, I got a chance to join an open-source project when the Intel StarlingX team was recruiting internal developers. It opened a door for me to join this amazing world where everybody can make contributions in many ways, and at the same time, everybody can be supported by each other.

Why do you think organizations use StarlingX?

From my point of view, StarlingX is not just an open-source project, it's more like a close-to-product-ready solution. If users need a software platform to manage their edge sites, they can take advantage of StarlingX from day-0 installation and provisioning, through day-1 operation, to day-N administration and maintenance. On the other hand, StarlingX is open to everybody, so users can participate in the community, bring their requirements and feedback, and partner with other members to drive StarlingX forward.

How can interested contributors get started/get involved in StarlingX?

As we always say, StarlingX welcomes everyone and many kinds of contributions: from code to documentation, from bug reports to user/dev experience feedback, from testing to sharing case stories. There are many ways, and here I've just named a few. StarlingX provides many communication channels, including regular project meetings, a mailing list, and IRC.

How do you help others to connect the Chinese community with the global community? What challenges have you overcome?

Our team is in China, so one of our missions is to help the Chinese community develop the software and contribute code, documentation, and more. Most of the StarlingX project meetings are held late at night in China, so presence and participation for the Chinese community members are quite challenging. To overcome these obstacles, together with other community members in China (like our friends at 99cloud), we made some initiatives, such as engaging with other Chinese community members at meet-ups, holding hands-on workshops and ad-hoc tech meetings in Chinese, translating some documents to Chinese, and continuously interacting in WeChat groups (just like a 24/7 on-call service for and by everyone).

What are some areas that the community can improve on?

StarlingX is one big project, and there are many dependencies among its sub-projects. We definitely need to make the modules less coupled and make sure the community is moving in this direction.

In addition, it would be great to have a sandbox environment available for developers; the community is looking into this and welcomes anyone who can help build it.

What are some highlights of the StarlingX 4.0 release? Things you are looking forward to?

Based on my observation from the community discussions, enhancements to the Distributed Cloud feature will be definitely the number one highlight. Other than that, we will also have some exciting features like Kata Containers support and containerized Ceph in 4.0.

All this cool stuff will make StarlingX a better solution and satisfy more of our partners and customers in the ecosystem.

Learn about the project, how to contribute and support the community at starlingx.io. Join these channels to get involved and contribute:

The post StarlingX community interview: how StarlingX shines in the starry sky of open-source projects in China appeared first on Superuser.

by Sunny Cai at March 03, 2020 07:00 AM

March 02, 2020


How To Explain OpenStack To A 10 Year Old

At its base, OpenStack is open-source software for creating private and public clouds. The OpenStack community consists of thousands of egalitarian developers in tight collaboration with users. We are not exaggerating when we say hundreds of the world’s biggest and brightest brands rely on open source to keep their businesses running the way they should. OpenStack also helps to reduce costs and increase efficiency. With the right support, it doesn’t matter if you’re an avid do-it-yourselfer or prefer to have your cloud solution managed by experts, anyone can take advantage of OpenStack powered products.

The aim of this blog is to explain OpenStack to you in simple terms. In other words, so simple that you could theoretically take your learnings and explain them to a 10-year-old. Or you can take your learnings and feel inspired to upgrade to OpenStack. If you are, we’re here to help.


The origin story is well known. Two small groups of engineers, one from Rackspace and the other from NASA, collaborated to create what we know today as OpenStack. The project began in 2010 with the goal of collaborating on open-source software that could run large computer systems. They wanted this open-source software to be transparent and to encourage active participation throughout the community.

However, the more people heard about the project, the more people wanted to help out. Since then, the community has continued to grow. Developers from around the world work together on a six-month release cycle with developmental milestones. We at VEXXHOST joined the OpenStack community in its second release, Bexar, in 2011. Above all, it’s been exciting to watch the open source community continue to come together and grow.

The OpenStack Ecosystem

In order to familiarize yourself with the platform, it’s best to understand the basic individual projects and services that make up the cloud ecosystem. There are many more optional services, but we are going to look at six of the core services to start.

Keystone is an authentication and authorization component. It is in charge of projects and users, and it is the first element that should be installed. Every single OpenStack powered cloud has Keystone built into it.

Nova is in charge of everything from instance sizing and creation to management and location. Nova is fundamental to your cloud, and it’s considered to be one of the most important aspects of the cloud for a reason: it makes up the computing resources of your cloud. We don’t need to tell you that it’s an enormous role.

Neutron is complicated yet extremely powerful. Its role is to create virtual networks inside of an OpenStack cloud. Think everything from virtual networks and routers to firewalls and beyond. It’s a powerhouse for a reason.

Glance has the power to upload OpenStack compatible images. These images can either be stored locally or on object storage. Overall, Glance works to manage server images for your cloud.

Cinder is a Block Storage provider for your OpenStack cloud. Thanks to Cinder, end-users receive a self-service API to request and use resources without needing knowledge of where storage is being deployed.

Swift is highly available and distributed. What does this mean? It means that Swift does the work of providing Object Storage as a service to your OpenStack powered cloud.

These are just the start of OpenStack and you can expand your OpenStack environment as needed. Adding more advanced levels of functions to your cloud through the numerous OpenStack projects is just one of many possibilities for your OpenStack powered cloud.

How To Start

With our OpenStack consulting services you’ll always have an expert by your side. Whether you’re an avid do-it-yourselfer or an IT expert, to ensure success in any transition to an OpenStack powered cloud you need to make sure you find an expert you can trust. Moreover, we’re here to guide you through every step of your journey and help you get set up with the very best OpenStack cloud environment to suit your unique needs.

Contact us today to learn about how we can help get you started with OpenStack.

Would you like to know more about OpenStack Cloud? Then download our white paper and get reading!

Conquer the Competition with OpenStack Cloud

The post How To Explain OpenStack To A 10 Year Old appeared first on VEXXHOST.

by Angela Bruni at March 02, 2020 07:32 PM

OpenStack Superuser

Collaborations cross industries: OpenStack Neutron and Discovery Open Science Initiative

Last year an informal collaboration was started between OpenStack Neutron project members dealing with performance (aka the neutron-perf team, merged into the core team activities since the last gathering in China) and members of the Discovery Open Science Initiative, a project aiming at a fully decentralized IaaS.

The main goal of this collaboration is to identify potential bottlenecks in the Neutron API and plan performance testing. Since 2016, the Discovery initiative has acquired expertise in testing using OpenStack as reference middleware.

To this aim, they developed EnOS, a tool to deploy, customise, and benchmark OpenStack targeting reproducible experiments. Leveraging the OpenStack Kolla Ansible project, EnOS enables the execution of performance stress workloads on OpenStack for postmortem analysis. It supports large-scale platforms such as Grid’5000, the French dedicated testbed for research, as well as Vagrant configurations for local testing performed during development. Results of some experiment campaigns have been shared with the OpenStack community at several summits. During the Berlin Summit, Miguel Lavalle (former Neutron PTL) and Discovery members exchanged ideas on the ongoing work related to the performance and scalability challenges of the Neutron project.

Since the initial conversation, Neutron members have defined example Rally scenarios to isolate their potential concerns. They also implemented a way to get access to detailed internal information, focusing on message exchanges and database accesses. At the same time, Discovery members have released new versions of EnOS taking the Neutron feedback into account.

After some preliminary tests, it looks like Neutron and Discovery members are ready to take the next step and combine their knowledge and resources. This may enable a new way to benchmark Neutron. So far, testing in the OpenStack projects has been performed mainly to validate the specifications and avoid regressions. Now, together, a test plan at large scale may be defined with hundreds of nodes, including different scenarios such as network partitions, fault aggregation, and stress loads. The main idea is to identify the major limits of the whole infrastructure and evaluate the modifications required to ensure the evolution of OpenStack.

These activities show a bit of the history, the intention to reuse the available features of different OpenStack components, and the relevance of the OpenStack gatherings. Collaborations like this one let diverse actors meet and work together across the traditional boundaries of the academic and industrial communities.

The post Collaborations cross industries: OpenStack Neutron and Discovery Open Science Initiative appeared first on Superuser.

by Miguel Lavalle and Javier Rojas Balderrama at March 02, 2020 02:00 PM

February 28, 2020

OpenStack Superuser

Introducing Zuul for improved CI/CD

Jenkins is a marvelous piece of software. As an execution and automation engine, it’s one of the best you’re going to find. Jenkins serves as a key component in countless continuous integration (CI) systems, and this is a testament to the value of what its community has built over the years. But that’s what it is­­—a component. Jenkins is not a CI system itself; it just runs things for you. It does that really well and has a variety of built-ins and a vibrant ecosystem of plugins to help you tell it what to run, when, and where.

CI is, at the most fundamental level, about integrating the work of multiple software development streams into a coherent whole with as much frequency and as little friction as possible. Jenkins, on its own, doesn’t know about your source code or how to merge it together, nor does it know how to give constructive feedback to you and your colleagues. You can, of course, glue it together with other software that can perform these activities, and this is how many CI systems incorporate Jenkins.

It’s what we did for OpenStack, too, at least at first.

If it’s not tested, it’s broken

In 2010, an open source community of projects called OpenStack was forming. Some of the developers brought in to assist with the collaboration infrastructure also worked on a free database project called Drizzle, and a key philosophy within that community was the idea “if it’s not tested, it’s broken.” So OpenStack, on day one, required all proposed changes of its software to be reviewed and tested for regressions before they could be approved to merge into the trunk of any source code repositories. To do this, Hudson (which later forked to form the Jenkins project) was configured to run tests exercising every change.

A plugin was installed to interface with the Gerrit code review system, automatically triggering jobs when new changes were proposed and reporting back with review comments indicating whether they succeeded or failed. This may sound rudimentary by today’s standards, but at the time, it was a revolutionary advancement for an open source collaboration. No developer on OpenStack was special in the eyes of CI, and everyone’s changes had to pass this growing battery of tests before they could merge—a concept the project called “project gating.”

There was, however, an emerging flaw with this gating idea: To guarantee two unrelated changes didn’t alter a piece of software in functionally incompatible ways, they had to be tested one at a time in sequence before they could merge. OpenStack was complicated to install and test, even back then, and quickly grew in popularity. The rising volume of developer contributions coupled with increasing test coverage meant that, during busy periods, there was simply not enough time to test every change that passed review. Some longer-running jobs took nearly an hour to complete, so the upper bound for what could get through the gate was roughly two dozen changes in a day. The resulting merge backlog showed a new solution was required.

Enter Zuul

During an OpenStack CI meeting in May 2012, one of the CI team members, James Blair, announced that he’d “been working on speculative execution of Jenkins jobs.” Speculative execution is an optimization most commonly found in the pipelines of modern microprocessors. Much like the analogy with processor hardware, the theory was that by optimistically predicting positive gating results for changes recently approved but that had not yet completed their tests, subsequently approved changes could be tested concurrently and then conditionally merged as long as their predecessors also passed tests and merged. James said he had a name for this intelligent scheduler: Zuul.
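As a rough illustration of the idea, here is a toy model of speculative gating in Python. This is not Zuul's actual scheduler; `passes` stands in for the CI jobs, and the eviction logic is simplified to a single retest loop:

```python
# Toy model of speculative gating (not Zuul's real implementation).
# Each queued change is tested on top of the assumed-good state of every
# change ahead of it; a failing change is evicted and the changes behind
# it are retested without it.

def speculative_gate(changes, passes):
    """changes: ordered change ids; passes(stack) -> True if the combined
    stack of changes tests successfully. Returns the changes that merge."""
    queue = list(changes)
    merged = []
    while queue:
        # Test every queued change concurrently, each one on top of the
        # speculative state of all changes ahead of it in the queue.
        results = [passes(merged + queue[:i + 1]) for i in range(len(queue))]
        if all(results):
            merged.extend(queue)
            break
        first_bad = results.index(False)
        merged.extend(queue[:first_bad])  # changes ahead of the failure merge
        queue = queue[first_bad + 1:]     # evict the failure; retest the rest
    return merged
```

For example, approving four changes where the third is broken merges the first, second, and fourth, without ever having to test them one at a time in sequence.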

Within this time frame, challenges from trying to perform better revision control for Jenkins’ XML job configuration led to the creation of the human-readable YAML-based Jenkins Job Builder templating engine. Limited success with the JClouds plugin for Jenkins and cumbersome attempts to use jobs for refreshing cloud images of single-use Jenkins slaves ended with the creation of the Nodepool service. Limited log-storage capabilities resulted in the team adding separate external solutions for organizing, serving, and indexing job logs and assuming maintainership of an abandoned secure copy protocol (SCP) plugin replacing the less-secure FTP option that Jenkins provided out of the box. The OpenStack infrastructure team was slowly building a fleet of services and utilities around Jenkins but began to bump up against a performance limitation.

Multiplying Jenkins

By mid-2013, Nodepool was constantly recycling as many as 100 virtual machines registered with Jenkins as slaves, but this was no longer enough to keep up with the growing workload. Thread contention for global locks in Jenkins thwarted all attempts to push past this threshold, no matter how much processor power and memory was thrown at the master server. The project had offers to donate additional capacity for Jenkins slaves to help relieve the frequent job backlog, but this would require an additional Jenkins master. The efficient division of work between multiple masters needed a new channel of communication for dispatch and coordination of jobs. Zuul’s maintainers identified the Gearman job server protocol as an ideal fit, so they outfitted Zuul with a new geard service and extended Jenkins with a custom Gearman client plugin.

Now that jobs were spread across a growing assembly of Jenkins masters, there was no longer any single dashboard with a complete view of job activity and results. In order to facilitate this new multi-master world, Zuul grew its own status API and WebUI, as well as a feature to emit metrics through the StatsD protocol. Over the next few years, Zuul steadily subsumed more of the CI features its users relied on, while Jenkins’ place in the system waned accordingly, and it was becoming a liability. OpenStack made an early choice to standardize on the Python programming language; this was reflected in Zuul’s development, yet Jenkins and its plugins were implemented in Java. Zuul’s configuration was maintained in the same YAML serialization format that OpenStack used to template its own Jenkins jobs, while Jenkins kept everything in baroque XML. These differences complicated ongoing maintenance and led to an unnecessarily steep learning curve for new administrators from related communities that had started trying to run Zuuls.

The time was right for another revolution.

The rise of Ansible

In early 2016, Zuul’s maintainers embarked on an ambitious year-long overhaul of their growing fleet of services with the goal of eliminating Jenkins from the overall system design. By this time, Jenkins was serving only as a conduit for running jobs consisting mostly of shell scripts on slave nodes over SSH, providing real-time streaming of job output and copying resulting artifacts to longer-term storage. Ansible was found to be a great fit for that first need; purpose-built to run commands remotely over SSH, it was written in Python, just like Zuul, and also used YAML to define its tasks. It even had built-in modules for features the team had previously implemented as bespoke Jenkins plugins. Ansible provided true multi-node support right out of the box, so the same playbooks could be used for both simulating and performing complex production deployments. An ever-expanding ecosystem of third-party modules filled in any gaps, in much the same way as the Jenkins community’s plugins had before.

A new Zuul executor service filled the prior role of the Jenkins master: it acted on pending requests in the scheduler’s geard, dispatched them via Ansible to ephemeral servers managed by Nodepool, then collected results and artifacts for publication. It also exposed in-progress build output over the classic RFC 742 Name/Finger protocol, streamed in real time from an extension of Ansible’s command output module. Once it was no longer necessary to limit jobs to what Jenkins’ parser could comprehend, Zuul was free to grow new features like distributed in-repository job definitions, shareable between projects with inheritance and secure handling of secrets, as well as the ability to test-drive proposed changes for the jobs themselves. Jenkins served its purpose admirably, but at least for Zuul, its usefulness was finally at an end.

Testing the future

Zuul’s community likes to say that it “tests the future” through its novel application of speculative execution. Gone are the harrowing days of wondering whether the improvement you want to make to an existing job will render it non-functional once it’s applied in production. Overloaded review teams for a massive central job repository are a thing of the past. Jobs are treated as a part of the software and shipped right alongside the rest of the source code, taking advantage of Zuul’s other features like cross-repository dependencies so that your change to part of a job in one project can be exercised with a proposed job change in another project. It will even comment on your job changes, highlighting specific lines with syntax problems as if it were another code reviewer giving you advice.

These were features Zuul only dreamed of before, but which required freedom from Jenkins so that it could take job parsing into its own hands. This is the future of CI, and Zuul’s users are living it.

As of early 2019, the OpenStack Foundation recognized Zuul as an independent, openly governed project with its own identity and flourishing community. If you’re into open source CI, consider taking a look. Development on the next evolution of Zuul is always underway, and you’re welcome to help. Find out more on Zuul’s website.

Are you a Zuul user? Please take a few moments to fill out the Zuul User Survey to provide feedback and information around your deployment. All information is confidential to the OpenStack Foundation unless you designate that it can be public.

The post Introducing Zuul for improved CI/CD appeared first on Superuser.

by Jeremy Stanley at February 28, 2020 02:30 PM

February 27, 2020

OpenStack Superuser

Inside open infrastructure: The latest from the OpenStack Foundation

The updated OpenStack Certified OpenStack Administrator (COA) exam became available in October 2019, and we have seen a resurgence in interest by both the global community and our training partners. Thousands of Stackers have demonstrated their skills and proven their expertise, helping organizations identify top talent in the industry. The COA exam remains a critical and respected certification for anyone working on OpenStack.

New to OpenStack? Has your certification expired? Do you want to prove your OpenStack skills? Take a look at the OpenStack Docs, then head over to the OpenStack Training Marketplace and find one of our qualified Training Partners. They will help guide you through all of the critical pieces you need to succeed in taking the COA. The current exam is based on OpenStack Rocky and this is an opportunity to not only get a certification, but also get familiar with one of the more recent OpenStack releases.

The exam itself covers all of the core compute, storage, image, and networking services necessary to run OpenStack. To kickstart your preparation for the COA exam, we are offering a 20% discount when you purchase through the OpenStack website. Use code COARECERT2020 through February 29, 2020.

Once you’ve completed the exam, it’s time to get involved with the OpenStack Community:

  • Go to openstack.org/community to learn how you can be a part of the community through code contribution, user groups, or even becoming a mentor for new users.
  • Meet other community members at one of our upcoming events. OpenDev+PTG in Vancouver and the Open Infrastructure Summit in Berlin provide an opportunity to share thoughts about the software, swap war stories, and generally get to know folks working on OpenStack.

The OpenStack Foundation would like to thank Mirantis for administering the exam and the dozens of global training partners who are supporting the next wave of OpenStack talent.

OpenStack Foundation (OSF)

Airship: Elevate your infrastructure

  • Congrats to the newest members of the Airship Technical and Working Committees, Andrew Leasck (TC) and Ian Pittwood (WC)!
  • Airship Blog Series 6 – Armada Growing Pains is out! Armada provides a way to synchronize a Helm (Tiller) target with an operator’s intended state, consisting of several charts, dependencies, tests, and overrides using a single file or directory with a collection of Armada declarations. This allows operators to define many charts, potentially with different namespaces for those releases, and their overrides in a central place. With a single command, they can deploy and/or upgrade all of them where applicable.
  • Catch up on the latest Airship news, including 2.0 plans on the blog.

Kata Containers: The speed of containers, the security of VMs

OpenStack: Open source software for creating private and public clouds

  • Brian Rosmaita reported an issue that, in rare conditions, may cause data loss in Cinder volumes for the OpenStack Train release. Train deployments are advised to deploy the described workaround to avoid any issue.
  • The project teams have just passed the ussuri-2 milestone, in preparation for a final Ussuri release on May 13. Please read this release countdown email from Sean McGinnis for information on upcoming release cycle deadlines!
  • Every cycle, our community sets common goals for the next OpenStack release. Ghanshyam Mann just started the process for the Victoria development cycle, which will start after Ussuri is released in May. We are interested in goals that have user-visible impact and make OpenStack easier to operate. If you are interested in proposing a goal, please write down your idea on the Victoria goals etherpad!
  • Interested in investing in OpenStack development, but don’t know where to maximize impact and returns? The OpenStack Technical Committee just refreshed its Investment opportunities list for 2020. Please have a look and don’t hesitate to reach out!

StarlingX: A fully featured cloud for the distributed edge

  • The next StarlingX Contributor Meetup is approaching fast! The event will be held on March 3-4 in Chandler, Arizona. Sign up on the etherpad if you are interested in joining or monitor the starlingx-discuss mailing list for updates if you cannot make it this time.

Zuul: Stop merging broken code

  • Learn about the history and evolution of Zuul in a recently featured article at opensource.com.
  • Zuul is working to become the official gating CI for the Gerrit project; you can follow the progress on their repo-discuss list, and help if you’re interested.

Find OSF at these Open Infrastructure Community Events


  • March 17: OpenInfra Day Turkey
    • Don’t miss the session that will be presented by Thierry Carrez, VP of Engineering at OpenStack Foundation
    • CFP will be closing on March 1st
  • March 27-28: HDC.Cloud – OSF Booth





For more information about these events, please contact denise@openstack.org

Questions / feedback / contribute

This newsletter is written and edited by the OSF staff to highlight open infrastructure communities. We want to hear from you! If you have feedback, news or stories that you want to share, reach us through community@openstack.org . To receive the newsletter, sign up here.

The post Inside open infrastructure: The latest from the OpenStack Foundation appeared first on Superuser.

by Sunny Cai at February 27, 2020 02:00 PM


Securing applications at the Edge with Trusted Docker Containers

Mirantis has partnered with Intel to secure the last mile in the Docker Enterprise Platform down to hardware primitives in the Trusted Platform Module (TPM), leveraging Intel Platform Trust Technology (Intel PTT).

by Marc Meunier at February 27, 2020 12:30 AM

February 24, 2020


Mirantis will continue to support and develop Docker Swarm

Here at Mirantis, we're excited to announce our continued support for Docker Swarm, while also investing in new features requested by customers.

by Rick Pugh at February 24, 2020 10:55 PM

February 18, 2020

Carlos Camacho

TripleO deep dive session #14 (Containerized deployments without paunch)

This is the 14th release of the TripleO “Deep Dive” sessions.

Thanks to Emilien Macchi for this deep dive session about the status of the containerized deployment without Paunch.

You can access the presentation.

So please, check the full session content on the TripleO YouTube channel.

Please check the sessions index to have access to all available content.

by Carlos Camacho at February 18, 2020 12:00 AM

February 17, 2020

CSC.fi Cloud Blog

CentOS8 images published in Pouta as a tech preview

There are now CentOS-8 images available in Pouta!

There are some minor issues with the upstream CentOS8 images, so, for now, they are considered to be in "tech preview".

We have solved the ones we have found so far by temporarily modifying the image to use "cloud-user" and removing the resolv.conf leftovers.

Basic information about our images can be found on docs.csc.fi

One issue is that /etc/resolv.conf sometimes has a nameserver defined from the build of the image. There is an open CentOS bug report about this: https://bugs.centos.org/view.php?id=16948

by Unknown (noreply@blogger.com) at February 17, 2020 05:48 AM

February 14, 2020

StackHPC Team Blog

SR-IOV Networking in Kayobe

A Brief Introduction to Single-Root I/O Virtualisation (SR-IOV)

In a virtualised environment, SR-IOV enables closer access to underlying hardware, trading operational flexibility for greater performance.

This involves the creation of virtual functions (VFs), which are presented as copies of the physical function (PF) of the hardware device. A VF is passed through to a VM, resulting in bypass of the hypervisor operating system for network activity. The principles of SR-IOV are presented in slightly greater depth in a short Intel white paper, and the OpenStack fundamentals are described in the Neutron online documentation.
A VF can be bound to a given VLAN, or (on some hardware, such as recent Mellanox NICs) it can be bound to a given VXLAN VNI. The result is direct access to a physical NIC attached to a tenant or provider network.

Note that there is no support for security groups or similar richer network functionality as the VM is directly connected to the physical network infrastructure, which provides no interface for injecting firewall rules or other externally managed packet handling.
Mellanox also offer a more advanced capability, known as ASAP2, which builds on SR-IOV to also offload Open vSwitch (OVS) functions from the hypervisor. This is more complex and not in scope for this investigation.

Setup for SR-IOV

Aside from OpenStack, deployment of SR-IOV involves configuration at many levels.

  • BIOS needs to be configured to enable both Virtualization Technology and SR-IOV.

  • Mellanox NIC firmware must be configured to enable the creation of SR-IOV VFs and define the maximum number of VFs to support. This requires the installation of the Mellanox Firmware Tools (MFT) package from Mellanox OFED.

  • Kernel boot parameters are required to support direct access to SR-IOV hardware:

    intel_iommu=on iommu=pt
  • A number of VFs can be created by writing the required number to a file under /sys, for example: /sys/class/net/eno6/device/sriov_numvfs

    NOTE: There are certain NIC models (e.g. Mellanox Connect-X 3) that do not support management via sysfs, those need to be configured using modprobe (see modprobe.d man page).

  • This is typically done as a udev trigger script on insertion of the PF device. The upper limit set for VFs is given by another (read-only) file in the same directory.
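As a concrete sketch, a udev rule along these lines can create the VFs whenever the PF device appears. The device name eno6 and the VF count of 8 are illustrative values; adjust them to match your hardware:

```
# /etc/udev/rules.d/70-sriov.rules (illustrative; device name and VF count are examples)
ACTION=="add", SUBSYSTEM=="net", KERNEL=="eno6", ATTR{device/sriov_numvfs}="8"
```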

As a framework for management using infrastructure-as-code principles and Ansible at every level, Kayobe provides support for running custom Ansible playbooks on the inventory and groups of the infrastructure deployment. Over time StackHPC has developed a number of roles to perform additional configuration as a custom site playbook. A recent addition is a Galaxy role for SR-IOV setup.

A simple custom site playbook could look like this:

- name: Configure SR-IOV
  hosts: compute_sriov
  tasks:
    - include_role:
        name: stackhpc.sriov
    - name: reboot
      include_tasks: tasks/reboot.yml
      tags: reboot

This playbook would then be invoked from the Kayobe CLI:

(kayobe) $ kayobe playbook run sriov.yml

Once the system is prepared for supporting SR-IOV, OpenStack configuration is required to enable VF resource management, scheduling according to VF availability, and pass-through of the VF to VMs that request it.


An additional complication might be that hypervisors use bonded NICs to provide network access for VMs. This provides greater fault tolerance. However, a VF is normally associated with only one PF (and the two PFs in a bond would lead to inconsistent connectivity).

Mellanox NICs have a feature, VF-LAG, which claims to enable SR-IOV to work in configurations where the ports of a 2-port NIC are bonded together.

Setup for VF-LAG requires additional steps and complexities, and we'll be covering it in greater detail in another blog post soon.

Nova Configuration

Scheduling with Hardware Resource Awareness

SR-IOV VFs are managed in the same way as PCI-passthrough hardware (e.g. GPUs). Each VF is managed as a hardware resource. The Nova scheduler must be configured not to schedule instances requesting SR-IOV resources onto hypervisors with none available. This is done using the PciPassthroughFilter scheduler filter.

In Kayobe config, the Nova scheduler filters are configured by defining non-default parameters in nova.conf. In the kayobe-config repo, add this to etc/kayobe/kolla/config/nova.conf:

[filter_scheduler]
available_filters = nova.scheduler.filters.all_filters
enabled_filters = other-filters,PciPassthroughFilter

(The other filters listed may vary according to other configuration applied to the system).

Hypervisor Hardware Resources for Passthrough

The nova-compute service on each hypervisor requires configuration to define which hardware/VF resources are to be made available for passthrough to VMs. In addition, for infrastructure with multiple physical networks, an association must be made to define which VFs connect to which physical network. This is done by defining a whitelist (pci_passthrough_whitelist) of available hardware resources on the compute hypervisors. This can be tricky to configure if the available resources differ in an environment with multiple variants of hypervisor hardware specification. One solution using Kayobe's inventory is to define whitelist hardware mappings either globally, in group variables, or even in individual host variables, as follows:

# Physnet to device mappings for SR-IOV, used for the pci
# passthrough whitelist and sriov-agent configs
sriov_physnet_mappings:
  p4p1: physnet2

This state can then be applied by adding a macro-expanded term to etc/kayobe/kolla/config/nova.conf:

{% raw %}
[pci]
passthrough_whitelist = [{% for dev, physnet in sriov_physnet_mappings.items() %}{{ (loop.index0 > 0)|ternary(',','') }}{ "devname": "{{ dev }}", "physical_network": "{{ physnet }}" }{% endfor %}]
{% endraw %}
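To show what this expansion produces, here is a small Python sketch that reproduces the same string-building logic for an example mapping (the device and physnet names below are hypothetical):

```python
# Reproduce the Jinja expansion above in plain Python to show the
# rendered nova.conf line (mapping values are hypothetical examples).
sriov_physnet_mappings = {"p4p1": "physnet2", "p4p2": "physnet3"}

entries = ",".join(
    '{{ "devname": "{0}", "physical_network": "{1}" }}'.format(dev, physnet)
    for dev, physnet in sriov_physnet_mappings.items()
)
print("passthrough_whitelist = [{0}]".format(entries))
```

Running this prints a single whitelist line with one JSON-style entry per device, comma-separated, matching what the template renders into nova.conf.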

We have used the network device name for designation here, but other options are available:

  • devname: network-device-name
    (as used above)
  • address: pci-bus-address
    Takes the form [[[[<domain>]:]<bus>]:][<slot>][.[<function>]].
    This is a good way of unambiguously selecting a single device in the hardware device tree.
  • address: mac-address
    Can be wild-carded.
    Useful if the vendor of the SR-IOV NIC is different from all other NICs in the configuration, so that selection can be made by OUI.
  • vendor_id: pci-vendor product_id: pci-device
    A good option for selecting a single hardware device model, wherever they are located.
    These values are 4-digit hexadecimal (but the conventional 0x prefix is not required).

The vendor ID and device ID are available from lspci -nn (or lspci -x for the hard-core). The IDs supplied should be those of the physical function (PF), not the virtual functions, whose IDs may be slightly different.
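As a hypothetical illustration (the IDs below are for a Mellanox ConnectX-4, chosen purely as an example, and physnet2 is the example physical network used earlier): if lspci -nn reports the PF with the ID pair [15b3:1013], the corresponding whitelist entry would be:

```ini
passthrough_whitelist = { "vendor_id": "15b3", "product_id": "1013", "physical_network": "physnet2" }
```

This selects every device of that model on the hypervisor, regardless of its position in the PCI tree.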

Neutron Configuration

Kolla-Ansible documents SR-IOV configuration well here: https://docs.openstack.org/kolla-ansible/latest/reference/networking/sriov.html.
See https://docs.openstack.org/neutron/train/admin/config-sriov.html for full details from Neutron's documentation.
For Kayobe configuration, we set a global flag kolla_enable_neutron_sriov in etc/kayobe/kolla.yml:
kolla_enable_neutron_sriov: true

Neutron Server

SR-IOV usually connects to VLANs; here we assume Neutron has already been configured to support this. The sriovnicswitch ML2 mechanism driver must be enabled. In Kayobe config, this is added to etc/kayobe/neutron.yml:

# List of Neutron ML2 mechanism drivers to use. If unset the kolla-ansible
# defaults will be used.
kolla_neutron_ml2_mechanism_drivers:
  - openvswitch
  - l2population
  - sriovnicswitch

Neutron SR-IOV NIC Agent

Neutron requires an additional agent to run on compute hypervisors with SR-IOV resources. The SR-IOV agent must be configured with mappings between physical network name and the interface name of the SR-IOV PF. In Kayobe config, this should be added in a file etc/kayobe/kolla/config/neutron/sriov_agent.ini. Again we can do an expansion using the variables drawn from Kayobe config's inventory and extra variables:

[sriov_nic]
physical_device_mappings = {% for dev, physnet in sriov_physnet_mappings.items() %}{{ (loop.index0 > 0)|ternary(',','') }}{{ physnet }}:{{ dev }}{% endfor %}
exclude_devices =
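With the example mapping used earlier (p4p1 on physnet2), the rendered agent configuration would contain:

```ini
physical_device_mappings = physnet2:p4p1
exclude_devices =
```

An empty exclude_devices means all VFs on the listed PFs are available to the agent.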

by Michal Nasiadka at February 14, 2020 11:00 AM

February 12, 2020

OpenStack Superuser

OpenStack Case Study: CloudVPS

CloudVPS is one of the largest independent Dutch OpenStack providers, delivering advanced cloud solutions. With a team of 15 people, CloudVPS was one of the first in Europe to get started with OpenStack, and it is a leader in the development of the scalable open-source platform.

At the Open Infrastructure Summit Shanghai in November 2019, Superuser got a chance to talk with OpenStack engineers from CloudVPS about why they chose OpenStack for their organization and how they use it.

What are some of the open source projects you are using?

Currently, we are using OpenStack, Oxwall, Salt, Tungsten Fabric, Gitlab and a few more. We have not yet started to use the open source projects that are hosted by the OpenStack Foundation, but we are planning on it. 

Why do you choose to use OpenStack?

We have used OpenStack for a long time. At the very beginning, we added Hyper-V hypervisors for Windows VMs before we built our own orchestration layer. About three to four years later, when OpenStack came out, we started our first OpenStack platform to do public cloud. The main reason we started to use OpenStack is the high growth potential that we see in it. OpenStack’s features and its community size are big parts of the reason as well. In addition, OpenStack’s stability and maturity are particularly important to us right now. Upgradability is also a key factor for our team; in terms of our partnership with Mirantis, upgradability is the biggest reason why we chose to partner with them instead of doing it ourselves.

What workloads are you running on OpenStack?

We don’t know the exact workloads, but basically all of it. What we do know is that we see web services on there and also platforms for large newspapers in the Netherlands, Belgium, Germany, and other countries around the world. It really varies, and we have all kinds of workloads. For the newspapers, we have conversion workloads for images. We also have an office automation environment like the Windows machine. There are some customers who run containers on top of it. Overall, there are definitely more workloads, but we don’t know all of it.

How large is your OpenStack deployment?

We have two deployments. In total, we have over 10,000 instances and 400-500 nodes.

Stay informed:

Interested in information about the OpenStack Foundation and its projects? Stay up to date on OpenStack and the Open Infrastructure community today!

The post OpenStack Case Study: CloudVPS appeared first on Superuser.

by Sunny Cai at February 12, 2020 06:11 PM

Stephen Finucane

VCPUs, PCPUs and Placement

In a previous blog post, I’d described how instance NUMA topologies and CPU pinning worked in the OpenStack Compute service (nova). Starting with the 20.

February 12, 2020 12:00 AM

February 10, 2020

SWITCH Cloud Blog

RadosGW/Keystone Integration Performance Issues—Finally Solved?

For several years we have been running OpenStack and Ceph clusters as part of SWITCHengines, an IaaS offering for the Swiss academic community. Initially, our main “job” for Ceph was to provide scalable block storage for OpenStack VMs—which it does quite well. But we also provided S3-based object storage via RadosGW from early on (as well as Swift, but that’s outside the scope of this post). This easy-to-use object storage turned out to be popular far beyond our initial expectations.

One valuable feature of RadosGW is that it integrates with Keystone, the Authentication and Authorization service in OpenStack. This meant that any user of our OpenStack offering can create, within her Project/tenant, EC2-compatible credentials to set up, and manage access to, S3 object store buckets. And they sure did! SWITCHengines users started to use our object store to store videos (and stream them directly from our object store to users’ browsers), research data for archival and dissemination, external copies from (parts of) their enterprise backup systems, and presumably many other interesting things; a “defining characteristic” of the cloud is that you don’t have to ask for permission (see “On-demand self-service” in the NIST Cloud definition)—though as a community cloud provider, we are happy to hear about, and help with, specific use cases.

Now this sounds pretty close to cloud nirvana, but… there was a problem: Each time a client made an authenticated (signed) S3 request on any bucket, RadosGW had to outsource the validation of the request signature to Keystone, which would return either the identity of the authenticated user (that RadosGW could then use for authorization purposes), or a negative reply in case the signature didn’t validate. Unfortunately, this outsourced signature validation process turns out to bring significant per-request overhead. In fact, for “easy” requests such as reading and writing small objects, this authentication overhead easily dominates total processing time. For a sense of the magnitude, small requests without Keystone validation often took <10ms to complete (according to the logs of our NGinx-based HTTPS server that acts as a front end to the RadosGW nodes), whereas any request involving Keystone took at least 600ms.

One undesirable effect is that our users probably wonder why simple requests have such a high baseline response time. Transfers of large objects don’t care much, because at some point the processing time is dominated by Rados/network transfer time of user data.

But an even worse effect is that S3 users could, by using client software that “aggressively” exploited parallelism, put very high load on our Keystone service, to the point that OpenStack operations sometimes ran into timeouts when they needed to use the authentication/authorization service.

In our struggle to cope with this recurring issue, we found a somewhat ugly workaround: when we identified an EC2 credential in Keystone whose use in S3/RadosGW contributed significant load, we extracted that credential (basically an ID/secret pair) from Keystone and provisioned it locally in all of our RadosGW instances. This always solved the individual performance problem for that client: response times dropped by 600ms immediately, and load on our Keystone system subsided.

While the workaround fixed our immediate troubles, it was deeply unsatisfying in several ways:

  • Need to identify “problematic” S3 uses that caused high Keystone load
  • Need to (more or less manually) re-provision Keystone credentials in RadosGW
  • Risk of “credential drift” in case the Keystone credentials changed (or disappeared) after their re-provisioning in RadosGW—the result would be that clients would still be able to access resources that they shouldn’t (anymore).

But the situation was bearable for us, and we basically resigned to having to fix performance emergencies every once in a while until maybe one day, someone would write a Python script or something that would synchronize EC2 credentials between Keystone and RadosGW…

PR #26095: A New Hope

But then out of the blue, James Weaver from the BBC contributed PR #26095, rgw: Added caching for S3 credentials retrieved from keystone. This changes the approach to signature validation when credentials are found in Keystone: The key material (including secret key) found in Keystone is cached by RadosGW, and RadosGW always performs signature validation locally.

James’s change was merged into master and will presumably come out with the “O” (Octopus) release of Ceph. We run Nautilus, and when we got wind of this change, we were excited to try it out. We had some discussions as to whether the patch might be backported to Nautilus; in the end we considered that unlikely at the current state, because the patch unconditionally changes the behavior in a way that could violate some security assumptions (e.g. that EC2 secrets would never leave Keystone).

We usually avoid carrying local patches, but in this case we were sufficiently motivated to go and cherry-pick the change on top of the version we were running (initially v14.2.5, later v14.2.6 and v14.2.7). We basically followed the instructions on how to build Ceph, but after cloning the Ceph repo, ran

git checkout v14.2.7
git cherry-pick affb7d396f76273e885cfdbcd363c1882496726c -m 1 -v

We then edited debian/changelog and prepended:

ceph (14.2.7-1bionic-switch1) stable; urgency=medium

  * Cherry-picked upstream pull #26095:

    rgw: Added caching for S3 credentials retrieved from keystone

 -- Simon Leinen <simon.leinen@switch.ch>  Thu, 01 Feb 2020 19:51:21 +0000

Finally, run dpkg-buildpackage and wait for a couple of hours…

First Results

We tested the resulting RadosGW package in our staging environment for a couple of days before trying it in our production clusters.

When we activated the patched RadosGW in production, the effects were immediately visible: The CPU load of our Keystone system went down by orders of magnitude.

(Graph: CPU load on our Keystone service before and after deploying the patched RadosGW.)

On 2020-01-27 at around 08:00, we upgraded our first production cluster’s RadosGWs. Twenty-four hours later, we upgraded the RadosGWs on the second cluster. The baseline load on our Keystone service dropped visibly on the first upgrade, but some high load peaks could still be seen. Since the second region was upgraded, no sharp peaks have appeared. There is a periodic load increase every night between 03:10 and 04:10, presumably due to some charging/accounting system doing its thing. Probably these peaks were “always” there, but they only became apparent once we started deploying the credential-caching code.

The 95th-percentile latency of “small” requests (defined as both $body_bytes_sent and $request_length being lower than 65536) was reduced from ~750ms to ~100ms.
Conclusion and Outlook

We owe the BBC a beer.

To make the patch perfect, maybe it would be cool to limit the lifetime of cached credentials to some reasonable value such as a few hours. This could limit the damage in case credentials should be invalidated. Though I guess you could just restart all RadosGW processes and lose any cached credentials immediately.
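The idea of bounding credential lifetime can be sketched as follows (a toy illustration in Python, not RadosGW's actual implementation; all names here are ours): credentials fetched from Keystone are kept only for a limited time, after which the next request forces a fresh round trip to Keystone.

```python
import time

class TTLCredentialCache:
    """Toy sketch of a TTL-bounded credential cache. Entries expire
    after ttl_seconds, forcing revalidation against Keystone."""

    def __init__(self, ttl_seconds=3600.0):
        self.ttl = ttl_seconds
        self._store = {}  # access_key -> (secret_key, expiry)

    def put(self, access_key, secret_key):
        # Record the secret together with its absolute expiry time.
        self._store[access_key] = (secret_key, time.monotonic() + self.ttl)

    def get(self, access_key):
        entry = self._store.get(access_key)
        if entry is None:
            return None
        secret_key, expiry = entry
        if time.monotonic() >= expiry:
            # Expired: evict it, so the caller falls back to Keystone.
            del self._store[access_key]
            return None
        return secret_key
```

With a TTL of a few hours, invalidated Keystone credentials would stop working within that window, without requiring a RadosGW restart.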

If you are interested in using our RadosGW packages made from cherry-picking PR #26095 on top of Nautilus, please contact us. Note that we only have x86_64 packages for Ubuntu 18.04 “Bionic” GNU/Linux.

by Simon Leinen at February 10, 2020 08:09 AM

February 05, 2020


Get Your Windows Apps Ready for Kubernetes

Historically, the Kubernetes orchestrator has been focused on Linux-based workloads, but Windows has started to play a larger part in the ecosystem.

by Steven Follis at February 05, 2020 04:45 PM


Migration Paths for RDO From CentOS 7 to 8

At the last CentOS Dojo, it was asked whether RDO would provide python 3 packages for OpenStack Ussuri on CentOS 7, and whether it would be “possible” in the context of helping with the upgrade path from Train to Ussuri. As “possible” is a vague term and I think the response deserves more explanation than a binary one, I’ve collected my thoughts on this topic as a way to start a discussion within the RDO community.

Yes, upgrades are hard

We all know that upgrading a production OpenStack cloud is complex and depends strongly on each specific layout, deployment tools (different deployment tools may or may not support OpenStack upgrades) and processes. In addition, upgrading from CentOS 7 to 8 requires an OS redeploy, which introduces operational complexity to the migration. We are committed to helping RDO community users migrate their clouds to new versions of OpenStack and/or operating systems in different ways:
  • Providing RDO Train packages on CentOS8. This allows users to choose between doing a one-step upgrade from CentOS7/Train -> CentOS8/Ussuri or split it in two steps CentOS7/Train -> CentOS8/Train -> CentOS8/Ussuri.
  • RDO maintains OpenStack packages during the whole upstream maintenance cycle for the Train release, this is until April 2021. Operators can take some time to plan and execute their migration paths.
Also, the Rolling Upgrades features provided in OpenStack allow one to keep agents on compute nodes running Train temporarily after the controllers have been updated to Ussuri, using Upgrade Levels in Nova or built-in backwards-compatibility features in Neutron and other services.
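For example, Nova's Upgrade Levels mechanism is driven by a standard nova.conf option (shown here as a minimal illustration, outside any particular deployment tool):

```ini
[upgrade_levels]
# "auto" pins compute RPC messages to the version supported by the
# oldest running service, until every node has been upgraded.
compute = auto
```

Once all services are upgraded, the pin resolves to the current version automatically.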

What “Supporting an OpenStack release in a CentOS version” means in RDO

Before discussing the limitations and challenges to support RDO Ussuri on CentOS 7.7 using python 3, I’ll describe what supporting a new RDO release means:


  • Before we can start building OpenStack packages we need to have all required dependencies used to build or run OpenStack services. We use the libraries from CentOS base repos as much as we can and avoid rebasing or forking CentOS base packages unless it’s strongly justified.
  • OpenStack packages are built using DLRN in RDO Trunk repos or CBS using jobs running in post pipeline in review.rdoproject.org.
  • RDO also consumes packages from other CentOS SIGs, such as Ceph from the Storage SIG, KVM from the Virtualization SIG, or collectd from OpsTools.


  • We run CI jobs periodically to validate the packages provided in the repos. These jobs are executed using the Zuul instance in SoftwareFactory project or Jenkins in CentOS CI infra and deploy different configurations of OpenStack using Packstack, puppet-openstack-integration and TripleO.
  • Also, some upstream projects include CI jobs on CentOS using the RDO packages to gate every change on it.


  • RDO Trunk packages are published in https://trunk.rdoproject.org and validated repositories are moved to promoted links.
  • RDO CloudSIG packages are published in official CentOS mirrors after they are validated by CI jobs.

Challenges to provide python 3 packages for RDO Ussuri in CentOS 7


  • While CentOS 7 includes a quite wide set of python 2 modules (150+) in addition to the interpreter, the python 3 stack included in CentOS 7.7 is just the python interpreter and ~5 python modules. All the missing ones would need to be bootstrapped for python 3.
  • Some python bindings are provided as part of other builds, i.e. python-rbd or python-rados is part of Ceph in StorageSIG, python-libguestfs is part of libguestfs in base repo, etc… RDO doesn’t own those packages so commitment from the owners would be needed or RDO would need to take ownership of them in this specific release (which means maintaining them until Train EOL).
  • Current specs in Ussuri tie python version to CentOS version. We’d need to figure out a way to switch python version in CentOS 7 via tooling configuration and macros.


  • In order to validate the python3 builds for Ussuri on CentOS 7, the deployment tools (puppet-openstack, packstack, kolla and TripleO) would need upstream fixes to install python3 packages instead of python2 for CentOS 7. Ideally, new CI jobs should be added with this configuration to gate changes in those repositories. This would require support from the upstream communities.


  • Alternatives exist to help operators in the migration path from Train on CentOS 7 to Ussuri on CentOS 8 and avoid a massive full cloud reboot.
  • Doing a full supported RDO release of Ussuri on CentOS 7 would require a big effort in RDO and other projects that can’t be done with existing resources:
    • It would require a full bootstrap of python 3 dependencies, which are currently pulled from CentOS base repositories as python 2.
    • Other SIGs would need to provide python3 packages or, alternatively, RDO would need to maintain them for this specific release.
    • In order to validate the release, upstream deployment projects would need to support this new python3 Train release.
  • There may be chances for intermediate solutions limited to a reduced set of packages that would help in the transition period. We’d need to hear details from the interested community members about what would be actually needed and what’s the desired migration workflow. We will be happy to onboard new community members with interest in contributing to this effort.
We are open to listening and discussing other options that may help users; come to us and let us know how we can help.

by amoralej at February 05, 2020 02:04 PM

February 04, 2020

OpenStack Superuser

Inside open infrastructure: The latest from the OpenStack Foundation

Welcome to the latest edition of the OpenStack Foundation Open Infrastructure newsletter, a digest of the latest developments and activities across open infrastructure projects, events and users. Sign up to receive the newsletter and email community@openstack.org to contribute.

Spotlight on 2019 OpenStack Foundation Annual Report

The OSF community had a productive year, merging 58,000 code changes to produce open source infrastructure software like Airship, Kata Containers, StarlingX, and Zuul, along with the third most active OSS project in the world, OpenStack. With 100,000 members and millions more visiting OSF websites in 2019 to get involved, the community made huge strides in addressing the $7.7 billion market for OpenStack and more than $12 billion combined OpenStack & containers markets in the future.

Each individual member, working group, SIG, and contributor was instrumental in continuing to support the OSF mission: helping people build and operate open infrastructure. The OSF 2019 Annual Report was published today highlighting the achievements across the community and the goals for the year ahead.

Let’s break down some of the highlights of last year:

  • The OSF confirmed three new open infrastructure projects to complement OpenStack in powering the world’s open infrastructure;
  • OpenStack is one of the top three most active open source projects in number of changes, and is projected to be a $7.7 billion USD market by 2023;
  • Some of the world’s largest brands—AT&T, Baidu, Blizzard Entertainment, BMW, China UnionPay, Walmart, and Volvo among others—shared their open source infrastructure use cases and learnings;
  • Upstream contributors continued to prioritize cross-project integration with open source projects including Ceph, Kubernetes, Ansible, and Tungsten Fabric.
  • New contributors were on-boarded through multiple internship and mentoring programs as well as OpenStack Upstream Institute, which was held in seven countries last year!

The OSF would like to extend a huge thanks to the global community for all of the work that went into 2019 and is continuing in 2020 to help people build and operate open source infrastructure. Check out the full OSF 2019 Annual Report on the OpenStack website!

OpenStack Foundation (OSF)

  • The results for the 2020 election of Individual Directors are in! Congratulations to all the elected 2020 OpenStack Foundation Board of Directors! Check out the results.
  • New Event! OpenDev+PTG
    • June 8-11 in Vancouver, BC
    • OpenDev + PTG is a collaborative event organized by the OpenStack Foundation gathering developers, system architects, and operators to address common open source infrastructure challenges.
    • Registration is now open!
    • Programming Committee information is available now. Sponsorship information will be coming soon.
  • The next Open Infrastructure Summit will be held this fall, October 19-23, at the bcc Berlin Congress Center in Germany. Registration and sponsorships will be available soon, stay tuned for details!

Airship: Elevate your infrastructure

  • Airship Blog Series 5 – Drydock and Its Relationship to Cluster API – As part of the evolution of Airship 1.0, an enduring goal has remained supporting multiple provisioning backends beyond just bare metal. This includes those that can provision to third-party clouds and to other use cases like OpenStack VMs as well as enable you to bring your own infrastructure. Read how Drydock is used to accomplish this.
  • Check out the Airship YouTube playlist and see the Airship content that you might have missed at the Shanghai Summit.
  • Interested in getting involved? Check out this page.

Kata Containers: The speed of containers, the security of VMs

  • Kata Containers 1.9.4 and 1.10.0 releases are available now! Highlights of the 1.10.0 release include initial support for Cloud Hypervisor, HybridVsock support for Cloud Hypervisor and Firecracker, an updated Firecracker version (v0.19.1), and better rootless support for Firecracker. This release also deprecates the bridged networking model.
  • 2019 was a breakthrough year with production deployments and many milestones of Kata Containers. Check out what the community had accomplished in the past year and Kata Containers’ project update in the 2019 OpenStack Foundation Annual Report!
  • Looking for the 2020 Architecture Committee meeting agenda? See this meeting notes etherpad.

OpenStack: Open source software for creating private and public clouds

  • The community goals for the Ussuri development cycle have been finalized: dropping Python 2.7 support (championed by Ghanshyam Mann), and project-specific PTL and contributor documentation (championed by Kendall Nelson). Those should be completed by the Ussuri release, which is scheduled to happen on May 13, 2020.
  • The name for the release after Ussuri has been selected. It will be called Victoria, after the capital of British Columbia, where our next PTG will happen.
  • Special Interest Groups bring together users and developers interested in supporting a specific use case for OpenStack. Two new SIGs were recently formed. The Large Scale SIG wants to push back scaling limits within a given cluster and document better configuration defaults for large scale deployments. The Multi-arch SIG wants to better support OpenStack on CPU architectures other than x86_64. If you’re interested in those topics, please join those SIGs!
  • Interested in getting involved in the OpenStack community, but you don’t know where to start? Want to jump into a project, but you don’t know anyone? The First Contact SIG can help! For more information, you can check out their wiki page. They have regular biweekly meetings and hang out in the #openstack-dev and #openstack-upstream-institute IRC channels ready to answer your questions!

StarlingX: A fully featured cloud for the distributed edge

  • StarlingX 3.0 is now available! It integrates the Train version of OpenStack, adds improvements to the areas of container and hardware acceleration support, and delivers a new functionality called Distributed Cloud architecture. Check out the release notes for further details or download the ISO image and start playing with the software!
  • The community has been focusing on increasing test coverage and running a remote hackathon. Check their etherpad for more details and keep an eye out for updates on the starlingx-discuss mailing list.
  • The next StarlingX Community Meetup is taking place on March 3-4 in Chandler, Arizona. If you would like to attend on site please register on the planning etherpad as soon as you can!

Zuul: Stop merging broken code

  • The Zuul Project Lead position has been renewed, and the maintainers have chosen James Blair to lead them through the 2020 term.
  • A significant overhaul of Zuul’s service documentation is underway, with the goal of making it easier for users to find the information they need.
  • December and January saw four minor releases of Zuul (3.12.0-3.15.0) and two for Nodepool (3.10.0 and 3.11.0). Among a slew of other improvements, these switched the default Ansible version from 2.7 to 2.8, added support for the latest version (2.9), deprecated the most recent EOL version (2.6) and removed support for its predecessor (2.5). This follows a more consistent Ansible support lifecycle plan, which is in the process of being formalized.

Find the OSF at these upcoming Open Infrastructure community events



  • May 4: OpenStack Day DR Congo
  • July 15: Cloud Operator Day Japan
  • October 19-23: Open Infrastructure Summit Berlin

For more information about these events, please contact denise@openstack.org

Questions / feedback / contribute

This newsletter is edited by the OpenStack Foundation staff to highlight open infrastructure communities. We want to hear from you! If you have any feedback, news or stories that you want to share, reach us through community@openstack.org.

The post Inside open infrastructure: The latest from the OpenStack Foundation appeared first on Superuser.

by Sunny Cai at February 04, 2020 04:29 PM

February 01, 2020

Stephen Finucane

Will Someone *Please* Tell Me Whats Going On? (Redux)

This was a talk I gave at FOSDEM 2020. I had previously given this talk at PyCon Limerick. The summary is repeated below. Software rarely stands still (unless it’s TeX).

February 01, 2020 12:00 AM


Planet OpenStack is a collection of thoughts from the developers and other key players of the OpenStack projects. If you are working on OpenStack technology you should add your OpenStack blog.


Last updated:
April 09, 2020 11:37 PM
All times are UTC.
