November 12, 2018

OpenStack Superuser

How to navigate the OpenStack Summit Berlin agenda

I love data. With the OpenStack Summit Berlin agenda going live this morning, I decided to take a look at some of the math behind November’s event. More than 100 sessions and workshops covering 35 open source projects over nine tracks—that’s a lot to cover in three days. That makes it even more challenging to build an onsite schedule while still leaving yourself a chance to navigate the hallway track and the collaborative Forum sessions, which will be added to the schedule in the upcoming weeks.

So who exactly can you collaborate with in Berlin? Judging by Summit speakers alone, there are 256 individuals from 193 companies and 45 countries whom you may run into during the hallway track.

Before I start, I want to say a big thank you to the programming committee members who worked very hard creating the Summit schedule. It’s not an easy task—taking over 750 submissions from over 500 companies and turning them into content that fits within 100 speaking slots.

Now, to take full advantage of the incredible talks that are planned for November, I wanted to share a few tips that I find helpful when putting my schedule together.

Start with the 101

Whether it’s your first Summit or you’re new to a project and want to get involved, there are a lot of sessions and workshops for you. You can either search for sessions that are tagged as 101 or you can filter the schedule for sessions marked as beginner. If there’s a particular project where you want to begin contributing, project on-boarding sessions will be added soon.

If this is your first Summit, I recommend attending some of the planned networking opportunities, including the opening night Open Infrastructure Marketplace Mixer.

Find the users

If there is anything I love more than data, it’s meeting new users and catching up with those I know. This makes the case study tag one of my most frequently used filters. If you are like me and enjoy learning how open infrastructure is being used in production, the Berlin Summit will not disappoint. From BMW sharing its CI/CD strategy with Zuul to Adobe Advertising Cloud sharing its OpenStack upgrade strategy, there are a lot of users sharing their open infrastructure use cases and strategies.

There are a few new case studies that have really caught my eye and have already landed on my personal schedule:

Filter by use case

Whether you’re interested in edge computing, CI/CD or artificial intelligence (AI), the Summit tracks provide a way to filter the sessions to find operators, developers and ecosystem companies pursuing that use case.

Sessions are allocated to each track based on the number of submissions received during the call for presentations (CFP) process. For the Berlin Summit, here is the track breakdown by number of sessions:

Search the relevant open source projects

It was not a typo earlier when I mentioned that there are over 35 open source projects covered by the sessions at the Summit. Whether you’re trying to find one of the 45 Kubernetes sessions or a TensorFlow session on AI, the project-specific tags enable you to meet the developers and operators behind these projects.

Here are the top 10 open source projects and the number of sessions you can explore for each project:

Now, it’s time to start building your schedule. The official Summit mobile app will be available in the upcoming weeks, but you can still build your personal schedule in the web browser. Stay tuned on Superuser as we will feature top sessions by use case in the upcoming weeks and a few content themes spread across all nine tracks.

Photo // CC BY NC

The post How to navigate the OpenStack Summit Berlin agenda appeared first on Superuser.

by Allison Price at November 12, 2018 09:46 AM

SUSE Conversations

Make Today’s Data Explosion Painless with SDS from SUSE

As an IT manager at an enterprise company, you’ve done the research and become convinced that software-defined storage (SDS) is the only storage approach that can handle today’s data explosion. And now you’re faced with helping decide which SDS solution is right for your company. It’s a good time to stop and ask, “What is […]

The post Make Today’s Data Explosion Painless with SDS from SUSE appeared first on SUSE Communities.

by Larry Morris at November 12, 2018 04:00 AM

November 09, 2018

OpenStack Superuser

OpenStack and Kubernetes: Competing or complementary?

Many people are trying to figure out how containers and Kubernetes fit in with OpenStack. Here’s the perspective of Sardina Systems’ Mihaela Constantinescu.

For some context: Sardina is an award-winning company headquartered in London that developed a technology to automate HPC operations in large-scale cloud data centers, such as collecting utilization metrics, driving scalable aggregation and consolidation of data plus optimizing resource demand to resource availability. Sardina offers FishOS, an OpenStack and Kubernetes cloud platform that aims for zero-downtime operations.

The why of containers

Container technology serves two key functions: software packaging and kernel privilege segmentation. Kubernetes extends these key functions further to enable programmable, flexible, rapidly deployable environments.
While some OpenStack distributions have chosen to deploy OpenStack in a containerized manner using Kubernetes, Sardina believes the benefits of this deployment approach can also be attained by coupling a smart deployer with well-engineered RPM packages.
For example, the FishOS Deployer provides a solution to easily migrate OpenStack management services from one node to another, or to flexibly upgrade or downgrade software packages. These capabilities enable a broad audience of operators to confidently deploy, operate and upgrade FishOS OpenStack platforms, without requiring an in-depth understanding of Kubernetes as a prerequisite.

Benefits for service consumers and operators

For service consumers like developers working in enterprise environments, Kubernetes’ support for programmable, agile and rapidly deployable environments with a self-service degree of control is very valuable. With OpenStack Magnum, FishOS enables operators in enterprises to easily provide multi-tenanted Kubernetes environments, with proven security assurances.
With FishOS, service consumers also gain persistent block storage, software-defined storage and software-defined networking. While FishOS supports a broad range of storage options, it provides Ceph as the default. With the integration between Ceph and Kubernetes in FishOS, users benefit from persistent storage without extra complexity.

Containers and Kubernetes vs. OpenStack? Or containers and Kubernetes and/with OpenStack?

At times, containers and Kubernetes have been positioned as replacements for OpenStack or seen as competing with it. While some use cases may overlap, one is not a replacement for the other. Rather, they can work together to deliver greater value to both service consumers and operators.
FishOS supports running Kubernetes clusters both within VMs and on bare-metal servers. Some have viewed VMs as unnecessary additional overhead when running Kubernetes clusters, in favor of running Kubernetes on bare metal instead. Typically, in organizations where the service consumer and operator are relatively loosely coupled, it makes sense to run Kubernetes clusters within VMs, to benefit from the strong security segregation of VMs as well as the reliability and resilience they afford. These greater security, reliability and resilience benefits come at the price of KVM overhead, typically around 4 percent of peak system performance. Is 4 percent too high a price to pay?
Conversely, in organizations with a tightly coupled relationship between the service consumer and operator, it is viable to run Kubernetes clusters on bare-metal servers to gain better performance, at the cost of greater exposure in the event of a security glitch, or downtime in the event of faults in the data center.

What’s next, and a challenge

To show Sardina’s support for OpenStack and its open-source model, we’re offering free access to the FishOS Deployer for a limited period of time. Please visit www.sardinasystems.com for more info or contact us at info@sardinasystems.com.

Here’s a challenge: If you find a use case that cannot be met without Kubernetes, get in touch with Sardina Systems. We’ll give you a free ticket to the next OpenStack Summit.

Mihaela Constantinescu will also be at the Summit in Berlin. Here’s how to contact her.


The post OpenStack and Kubernetes: Competing or complementary? appeared first on Superuser.

by Superuser at November 09, 2018 03:07 PM

RDO

Changing Default Blog Post Style to Left Instead of Justified

Bacon ipsum dolor amet ground round cow kevin tail buffalo tongue. Bacon biltong kevin, beef ribs shoulder capicola spare ribs meatloaf swine jerky sirloin. Beef ribs tenderloin porchetta short loin tri-tip pancetta pork chop pork strip steak pork loin flank corned beef turkey spare ribs meatball. Landjaeger picanha filet mignon, beef kielbasa spare ribs cow swine.

Brisket jerky beef ribs swine. Tenderloin pig cupim ham hock, short loin bresaola biltong pork loin shank sirloin andouille turkey. Sirloin ribeye biltong t-bone venison. Chuck tail strip steak, swine kevin meatball andouille leberkas frankfurter capicola cupim cow. Cow turkey capicola, pastrami short ribs fatback kevin. Shoulder shankle flank short ribs t-bone. Sirloin tongue biltong, shoulder boudin landjaeger salami leberkas picanha flank swine tenderloin ground round t-bone prosciutto. Tenderloin t-bone jerky jowl, swine beef kevin ribeye doner pig biltong leberkas chicken. Kevin t-bone frankfurter chicken, short loin pork ball tip meatloaf shank meatball swine short ribs ham hock.

Fatback shoulder chuck landjaeger doner shank pork loin biltong sirloin beef short loin cow kielbasa swine. Hamburger bresaola porchetta short loin, leberkas cupim buffalo prosciutto meatloaf filet mignon ham turkey t-bone andouille. Leberkas meatloaf filet mignon rump. Venison t-bone leberkas, landjaeger beef ribs shank drumstick meatloaf kevin burgdoggen.

by Rain Leander at November 09, 2018 01:54 PM

November 08, 2018

SUSE Conversations

Six months in and I am loving it here at SUSE. I am off to Ceph Day Berlin and the OpenStack Summit!

My six month anniversary, how time flies, is fast approaching, November 14 to be exact. And I’ll be at OpenStack Summit in Berlin to celebrate the day. I’ve learned a lot in a short period of time and I’m really excited about heading off to Berlin this coming weekend. I will be participating, along with […]

The post Six months in and I am loving it here at SUSE. I am off to Ceph Day Berlin and the OpenStack Summit! appeared first on SUSE Communities.

by Mike Dilio at November 08, 2018 08:52 PM

OpenStack Superuser

Get involved with diversity events at the Berlin Summit

At the Summit, you can get the big picture with keynotes, dive in with the working group or get up to speed over lunch on diversity and inclusion efforts in the community.

The Berlin events represent a pivot from the Women of OpenStack (WOO) to a broader focus on diversity. WOO has been folded into the Diversity and Inclusion Working Group; you can check out the events and biweekly meeting schedule here.

“WOO was seeing a decline in active participants before and after the move to IRC, while the D&I WG was maintaining a core group of participants,” Amy Marrich, a former WOO member who now runs the D&I WG, tells Superuser. “There have always been comments from allies about not attending things like the WOO networking lunch because they didn’t want to intrude. We definitely didn’t want to exclude anyone. Folding WOO into the D&I WG hopefully will allow us to continue doing the work we were doing but also allow others to feel welcome. The Mentoring Cohorts is one of the activities formerly under WOO that moved over, and with the move we’re hoping people will realize it’s for everyone and not just for women. Summit activities like Speed Mentoring and Git and Gerrit have moved over as well but have always been more openly attended.”

Here are the Berlin Summit events:

Mentoring to Foster Diversity

Joseph Sandoval, a manager at Adobe Systems, will be keynoting on Wednesday. Sandoval, who’s been in tech for a quarter century, realized 15 years ago that there just weren’t enough women and people of color in the industry. He’ll be detailing how to change that.
Details here.

Diversity and Inclusion WG Update

Meet with members of the Diversity and Inclusion Working Group to learn the history behind the group and what it’s currently working on, and help drive where the group and OpenStack are going to improve diversity and inclusion. Amy Marrich, OpenStack course author at Linux Academy, will lead the session. Details here.

Diversity Networking Lunch

Join the Diversity Working Group for lunch to meet and network with the open source community and discuss the best strategies for supporting each other. Sponsored by Intel, the lunch will celebrate accomplishments over the past year, then break into small groups to discuss updating our initiatives and other topics related to diversity and growing our community. Intel’s Melissa Evers-Hood and Madhuri Kumari will lead the session. Details here.

Speed Mentoring Lunch

If you’re new to OpenStack or would like some mentoring, this session is a great icebreaker and a way to get to know new and experienced people in the open-source community. We plan to divide the session between career, technical and community mentoring. Mentees will be organized into small groups and each group will have several 15-minute mentoring sessions. In your small group, you’ll get to know a bit about a mentor and have an opportunity to ask them a question or two about how you can grow your career, get involved in the community and make the most of the Summit. Then, after 15 minutes, a new mentor will cycle to your group and the process will repeat. Interested in being a mentor?
Fill out this form: https://openstackfoundation.formstack.com/forms/txl_speed_mentoring_mentor
Details here.

The post Get involved with diversity events at the Berlin Summit appeared first on Superuser.

by Superuser at November 08, 2018 03:05 PM

StackHPC Team Blog

Monasca comes to Kolla

Back near the dawn of time in December, 2016, Sam Yaple created a spec to add Monasca containers to Kolla. Aeons later Kolla-Ansible finally supports deploying Monasca out-of-the-box. Much like crossing the Magic Roundabout in Swindon, many things had to line up to make it happen. The ground was paved by adding support for Apache Kafka, Zookeeper, Storm and Logstash. Then came the Monasca services, rolled out one-by-one until the Fluentd firehose was coupled up to the Monasca Log API. The CI system creaked, the lights went dim and the core reviewers groaned as Zuul unleashed a colossal chunk of Ansible. No longer did one have to carefully deploy, configure and maintain an uncountable number of services. Injurious crashes were reduced by three quarters and sanity returned to the Monasca sysadmins. So what exactly did the end result look like?

Monasca overview

At this stage you might be thinking that Graphviz has just exploded on your screen, or even, has anyone keeled over and died from looking at the diagram? But if you defocus your eyes a little further, you'll see that there can actually be three, or even more of everything. Three APIs, three instances of almost anything you can see, with traffic pirouetting through a Kafka cluster in between. The only things which spoil the fun are InfluxDB which requires an enterprise license for clustering, and the Monasca fork of Grafana, which just doesn't seem to play nicely with load balancing.

So what is it like to run this monster in production? Does it deliver? Why on Earth would you want to do it? We actually have some compelling reasons which we'll summarise below:

  • Horizontally scalable
    We love working with small deployments, and supporting these matters greatly to us, but in the world of HPC, machines can get really huge. Indeed, it's not uncommon for small deployments to morph into large ones, and with Monasca, no matter where you start, you can seamlessly scale with demand.
  • Multi-tenant
    Add value to your OpenStack deployment. Through the power of automation it's true that you could stamp out a monitoring and logging solution per tenant without too much fuss. However, it's hard to beat simply logging in via a public endpoint with your OpenStack credentials.
  • Highly available / fault tolerant
    Kolla Monasca has been designed to provide a single pane of glass for monitoring the health of your OpenStack deployment. If a wheel falls off, we don't want you scrambling for the spare tyre. All critical monitoring and logging services can be deployed in a highly available and fault tolerant configuration.
  • Support for push-metrics
    In big systems there are often complex interactions and understanding these is part of the art of HPC. What's more, complex interactions don't tend to happen at fixed time intervals. Support for push-metrics allows users to stream batches of data into Monasca with a sampling frequency of whatever they like. So whether you're tuning traffic flows in your network fabric, or optimising your MPI routine, Monasca has you covered.

So without further ado, we're going to hand you over to the Kolla documentation. Unlike the Magic Roundabout you'll have two paths to follow: The brave can enable Monasca in their existing Kolla Ansible deployment, and the cautious can choose to deploy Monasca standalone and integrate, if they wish, with an external instance of Keystone which doesn't need to be provided by Kolla. We hope that you like it, and most of all we hope that you find it useful.

by Doug Szumski at November 08, 2018 11:00 AM

Aptira

Decoupling Path Computation Engine (PCE) and Switch Control Functions


Allowing Path Computation Engine (PCE) scalability without impact to the switch control plane.

As we saw in parts 1 and 2 of this series, large scale global network implementations involve complexities of size and geographical distribution, but they inherently also have less reliable network infrastructure for control, and most SDN controllers struggle with this. 

A key strength of OpenKilda is the decoupling of the Path Computation Engine from the switch control functionality. This decoupling minimises its vulnerability to high-latency, unreliable control planes that cause other SDN controllers to fall into a constant state of churn from trying to re-converge the network. 

Although complex, the two most important and computationally expensive parts of any SDN Controller are the Path Computation Engine (PCE) and the South-Bound Interface (SBI). 

The PCE is tasked with calculating routes for different traffic through the infrastructure given both hard connection requirements and current state of the links between switches, while also considering constraints such as bandwidth and quality of service (QoS). 

The SBI handles connecting to the deployed switches and updating their flow tables in accordance with the configuration calculated by the PCE. Additionally, the SBI is responsible for collecting telemetry from the switches, which is passed as an input to the PCE process. 

As discussed in earlier posts, when an SDN controlled WAN footprint grows, the overhead of maintaining the switch’s flow tables and returning telemetry feeds means separating the PCE from the core modules of the SDN Controller becomes necessary due to load alone. 

There are however a number of other reasons to separate the functions that are not as immediately obvious. 

With geographically diverse or high latency WAN implementations there is often extra information to be taken into account when calculating paths, such as WAN link cost and congestion. This information is not available through the SBI telemetry feed and must be fed into the PCE some other way. How do we ingest the extra information from external systems without needing to plumb it deep into our production network? 

Decoupling the PCE allows it to be deployed further from the switches than the SBI, logically closer to external systems (portals, databases, users) we don’t want near the switch layer, and physically closer to enterprise storage solutions. The former means we can gather information without compromising our data plane environment, sending only path updates to the SBI. The latter gives us access to storage for telemetry data, useful for capacity management, visualization, and operational support tools. 

As an additional bonus, the ability to horizontally scale the PCE infrastructure, coupled with access to historical data, opens pathways to tools from the Big Data and Machine Learning disciplines. Statistical modelling, long-term trend analysis and policy enforcement can all be added over time without affecting our underlying control of the switches themselves.

With ever more complex networks being deployed, harnessing the best available tools is critical. By separating SBI and PCE functionality, we allow a much more flexible approach to adopting tools without affecting our ability to control the data plane. This vital functionality, along with the ability to manage and process large amounts of data, whether OpenFlow messages or telemetry, using web-scale packages like Kafka, together makes OpenKilda the only SDN controller that can be classed as truly web-scale.

Remove the complexity of networking at scale.
Learn more about our SDN & NFV solutions.


The post Decoupling Path Computation Engine (PCE) and Switch Control Functions appeared first on Aptira.

by Aptira at November 08, 2018 08:25 AM

November 07, 2018

SUSE Conversations

QSC AG: A Customer Success Story with SUSE® OpenStack Cloud and SUSE Enterprise Storage

QSC AG wanted to help its colocated customers increase the flexibility and value of their services by enabling easier access to cloud computing resources. By deploying SUSE® OpenStack Cloud with SUSE Enterprise Storage, the company created a hybrid cloud that uniquely links its customers’ colocated infrastructure with cloud resources running in the same data centers. […]

The post QSC AG: A Customer Success Story with SUSE® OpenStack Cloud and SUSE Enterprise Storage appeared first on SUSE Communities.

by Mike Dilio at November 07, 2018 09:36 PM

OpenStack—The Next Generation Software-defined Infrastructure for Service Providers

Many service providers face the challenge of competing with the pace of innovation and investments made by hypercloud vendors. You constantly need to enable new services (e.g., containers, platform as a service, IoT, etc.) while remaining cost competitive. The proprietary cloud platforms used in the past are expensive and struggle to keep up with emerging […]

The post OpenStack—The Next Generation Software-defined Infrastructure for Service Providers appeared first on SUSE Communities.

by jvonvoros at November 07, 2018 06:00 PM

OpenStack Superuser

How to automatically deploy Zenko and MetalK8s on OpenStack

Our sales and support engineering teams have developed a set of Heat scripts and a Murano app to deploy MetalK8s and Zenko on OpenStack. These tools allow them to quickly get an environment running to show how Zenko can manage data across multiple cloud storage systems. The support engineering team also uses it to run stress tests and sizing experiments.

The scripts are currently used on Scality’s Platform9 private OpenStack instance but should be general enough to run on any Heat-powered cloud. If you try it out or have any questions, please come talk to us at the Berlin Summit.

The Stack/Application deploys the minimum system required to run Zenko. By default, it deploys nothing other than the instances, but it can be configured to install MetalK8s only, or MetalK8s and Zenko. It’s available under an Apache 2.0 license at https://github.com/scality/zenko-heat-template/

Get your command line going

You’ll need a functioning environment with an OpenStack CLI tool to run against your OpenStack environment for this to work.

Deploying the template only

openstack stack create --parameter key_name=<key_to_use> --parameter network=<network> -t template.yaml

Deploying the template and MetalK8s only

openstack stack create --parameter key_name=<key_to_use> --parameter install=metalk8sonly --parameter zenko_version=1.0.1 --parameter network=<network> -t template.yaml

Deploying the full stack (instances, MetalK8s and Zenko)

openstack stack create --parameter key_name=<key_to_use> --parameter install=both --parameter zenko_version=1.0.1 --parameter network=<network> -t template.yaml

Murano packages

The included script, create-murano-package.sh, will generate a zip file that can be used to deploy the stack as well. The Platform9 UI allows creating applications based on Murano packages, which can then be used to deploy Zenko/MetalK8s as an application by all your authorized Platform9 users.

The post How to automatically deploy Zenko and MetalK8s on OpenStack appeared first on Superuser.

by Superuser at November 07, 2018 04:46 PM

OpenStack @ NetApp

Understanding volume migration on OpenStack: Intercluster volume migration

Welcome to the final part in this three-part series on Cinder volume migration. So far, we have explored the basics of migrating volumes and the migration of a volume between backends that reside on the same cluster. In this post, we extend the concept of moving volumes to backends that are on different clusters. If ... Read more

The post Understanding volume migration on OpenStack: Intercluster volume migration appeared first on thePub.

by Bala RameshBabu at November 07, 2018 04:02 PM

Understanding volume migration on OpenStack: Intracluster volume migration

In part two of this series, we will look at an example scenario, where a volume is migrated between backends that lie on the same cluster. In case you haven’t been through part 1 already, you should definitely do so and obtain a general overview of Cinder volume migration. This post examines the configurations necessary ... Read more

The post Understanding volume migration on OpenStack: Intracluster volume migration appeared first on thePub.

by Bala RameshBabu at November 07, 2018 04:02 PM

Understanding volume migration on OpenStack

One of the cool features offered by OpenStack is the migration of Cinder volumes across backends. Volumes can be moved within an ONTAP cluster, between ONTAP clusters and between an ONTAP and SolidFire cluster. The good thing about migrating volumes is that the operation is transparent to the end user. From the user’s perspective, they ... Read more

The post Understanding volume migration on OpenStack appeared first on thePub.

by Bala RameshBabu at November 07, 2018 04:01 PM

StackHPC Team Blog

Kubernetes, HPC and MPI

Convergence of HPC and Cloud will not stop at the infrastructure level. How can applications and users take the greatest advantage from cloud-native technologies to deliver on HPC-native requirements? How can we separate true progress from a blind love of the shiny?

The last decade has continued the rapid movement toward the consolidation of hardware platforms around common processor architectures and the adoption of Linux as the de facto base operating system, leading to the emergence of large-scale clusters applied to the HPC market. Then came the adoption of elastic computing concepts around AWS, OpenStack and Google Cloud. While these elastic computing frameworks have been focused on the ability to provide on-demand computing capabilities, they have also introduced the powerful notion of self-service software deployments. The ability to pull from any number of sources (most commonly open source projects) for content to stitch together powerful software ecosystems has become the norm for those leveraging these cloud infrastructures.

The quest for ultimate performance has come at a significant price for HPC application developers over the years. Tapping into the full performance of an HPC platform typically involves integration with the vendor’s low-level “special sauce”, which entails vendor lock-in. For example, developing and running an application on an IBM Blue Gene system is significantly different than on an HP Enterprise or Cray machine. Even in cases where the processor and even the high-speed interconnects are the same, the operating runtime, storage infrastructure, programming environment and batch infrastructure are likely to differ in key respects. This means that running the same simulations on machines from different vendors, within or across data centers, requires significant customization effort. Further, the customer is at the mercy of the system vendor for software updates to the base operating systems on the nodes or to programming environment libraries, which in many cases significantly inhibits a customer’s ability to take advantage of the latest updates to common utilities or even entire open source component ecosystems.

For these and other reasons, HPC customers are now clamoring for the ability to run their own ‘user-defined’ software stacks using familiar containerized software constructs.

The Case for Containers

Containers hold great promise for enabling the delivery of user-defined software stacks. We have covered the state of HPC containers in a previous post.

Cloud computing users are given the freedom to leverage a variety of pre-packaged images, or even build their own images and deploy them into their provisioned compute spaces to address their specific needs. Container infrastructures have taken this a step further by leveraging the namespace isolation capabilities of contemporary Linux kernels to provide a light-weight, efficient and secure packaging and runtime environment in which to execute sophisticated applications. Container images are immutable and self-sufficient, which makes them very portable and, for the most part, immune to the OS distribution on which they are deployed.

Kubernetes - Once More Unto the Breach...

Over recent years, containerization (outside of HPC) has consolidated around two main technologies, Docker and Kubernetes. Docker provides a core infrastructure for the construction and maintenance of software stacks, while Kubernetes provides a robust container orchestrator that manages the coordination and distribution of containerized applications within a distributed environment.

Kubernetes has risen to the top in the challenge to provide orchestration and management for containerized software components, thanks to its rich ecosystem and scaling properties. Kubernetes has proven quite successful for cloud-native workloads, high-throughput computing and data analytics workflows. But what about conventional HPC workloads? As we will discuss below, there are some significant challenges to the full integration of Kubernetes with the conventional HPC problem space, but is there a path to convergence?

A Bit of History

To understand the challenges facing the full adoption of open container ecosystems for HPC, it is helpful to present some of the unique needs of this problem space. We’ve provided a survey of the current state of containers in HPC in a previous blog post.

Taxonomy of HPC Workloads

Conventionally, HPC workloads have been made up of purpose-driven applications designed to run specific scientific simulations. These simulations can consist of a series of small-footprint, short-lived ‘experiments’ whose results are aggregated to obtain a particular target result, or of large-scale, data-parallel applications that can execute across many thousands of nodes within the system. These two types of applications are commonly referred to as capacity and capability applications, respectively.

Submitted Jobs vs Requested Cores

Data from an operational HPC cluster demonstrating that the dominant usage of this resource is for sequential or single-node multi-threaded workloads. What is not shown here is that the large-scale parallel workloads have longer runtimes, resulting in a balanced mix of use cases for the infrastructure.

Capability computing refers to applications built to leverage the unique capabilities or attributes of an HPC system. This could be a special high-performance network with exceptional bisection bandwidth to support large-scale applications, nodes with large memory capacity or specialized computing capabilities (e.g., GPUs), or simply the scale of the system that enables the execution of extreme-scale applications. Capacity computing, on the other hand, refers to the ability of a system to hold large numbers of simultaneous jobs, essentially providing extreme throughput of small and modest-sized jobs from the user base.

There are several critical attributes that HPC system users and managers demand to support an effective infrastructure for these classes of jobs. A few of the most important include:

  1. High Job Throughput

    Due to the significant financial commitment required to build and operate large HPC systems, the ability to maximize these resources on the solution of real science problems is critical. In most HPC data centers, accounting for the utilization of system resources is a primary focus of the data center manager. For this reason, much work has been expended on the development of Workload Managers (WLMs) to efficiently and effectively schedule and manage large numbers of application jobs on HPC systems. These WLMs sometimes integrate tightly with system vendor capabilities for advanced node allocation and task placement to ensure the most effective use of the underlying computing resource.

  2. Low Service Overhead

    For research scientists, time to solution is key. One important example is weather modeling. Simulations have a very strict time deadline, as results must be released to the public in a timely way. The amount of computing capacity available to apply to these simulations directly impacts the accuracy, granularity and scope of the results that can be produced.

    Such large-scale simulations are commonly referred to as data-parallel applications. These applications typically process a large data set in manageable pieces, spread in parallel across many tasks. Parallelism occurs both within nodes and between nodes, for which data is exchanged between tasks over high-speed networking fabrics using communication libraries such as Partitioned Global Address Space (PGAS) or Message Passing Interface (MPI).

    These distributed applications are highly synchronized and typically exchange data after some fixed period of computation. Due to this synchronization, they are very sensitive to, among other things, drift between the tasks (nodes). Any deviation by an individual node will often delay the continuation of the overall simulation. This deviation is commonly referred to as jitter. A significant amount of work has been done to mitigate or eliminate such effects within HPC software stacks. So much so, that many large HPC system manufacturers have spent significant resources to identify and eliminate or isolate tasks that have the potential to induce jitter in the Linux kernels that they ship with their systems. As customers reap direct benefit from these changes, any containerized infrastructure would be expected to carry forward similar benefits. This presumes that any on-node presence supporting container scheduling or deployment presents minimal impact to the application workload.

  3. Advanced Scheduling Capabilities

    Many HPC applications have specific requirements about where they are executed within the system. Each task (rank) of an application may need to communicate with specific neighboring tasks, and so prefers to be placed topologically close to those neighbors to improve communication. Other tasks within the application may be sensitive to the performance of the I/O subsystem, and as such may prefer to be placed in areas of the system where I/O throughput or response times are more favorable. Finally, individual tasks of an application may require access to specialized computing hardware, including nodes with specific processor types or attached processing accelerators (e.g., GPUs). What’s more, individual threads of a task are scheduled in such a way as to avoid interference from work unrelated to the user’s job (e.g., operating system services or support infrastructure, such as monitoring). Interference with the user’s job by these supporting components has a direct and measurable impact on overall job performance.

St George and the Dragon (Wikipedia, public domain)

The Role of PMI(x)

The Message Passing Interface (MPI) is the most common mechanism used by data-parallel applications to exchange information. There are many implementations of MPI, ranging from OpenMPI, a community effort, to vendor-specific implementations that integrate closely with vendor-supplied programming environments. One key building block on which all MPI implementations are built is the Process Management Interface (PMI). PMI provides the infrastructure for an MPI application to distribute information about all of the participants across an entire application.
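
To make the programming model concrete, here is a minimal mpi4py sketch (purely illustrative, not taken from any of the projects discussed here). Each rank learns its identity from the process management layer at launch, then exchanges data with its neighbors:

from mpi4py import MPI  # run with e.g.: mpiexec -n 4 python ring.py

comm = MPI.COMM_WORLD
rank = comm.Get_rank()  # this task's identity, wired up via PMI at launch
size = comm.Get_size()  # total number of ranks in the job

# Pass each rank's ID around a ring: send right, receive from the left.
token = comm.sendrecv(rank, dest=(rank + 1) % size,
                      source=(rank - 1) % size)
print('rank %d of %d got token from rank %d' % (rank, size, token))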

PMI is a standardized interface that has gone through a few iterations, each improving support for increased job scale with reduced overhead. The most recent version, PMIx, is an attempt to develop a standardized process management library capable of supporting the exchange of connection details for applications deployed on exascale systems reaching upwards of 100K nodes and a million ranks. The goal of the project is to achieve this ambitious scaling without compromising the needs of more modest-sized clusters. In this way, PMIx intends to support the full range of existing and anticipated HPC systems.

Early evaluation of launch performance in the wire-up phase of PMIx is quite illuminating, as can be seen from this SuperComputing '17 presentation. The presentation shows the performance advantages in launch times, as the number of on-node ranks increases, of utilizing a native PMIx runtime TCP interchange to distribute wire-up information rather than Slurm’s integrated RPC capability. It then goes on to show how an additional two orders of magnitude of improvement can be gained by leveraging the native communication interfaces of the platform through the UCX communication stack. While this discussion isn’t intended to weigh the merits of one specific approach over another for launching and initializing a data-parallel application, it does help illustrate the sensitivity of these applications to the underlying distributed application support infrastructure.

Dürer's Rhinoceros (Wikipedia, public domain)

Full Integration of Open Container Frameworks with Conventional HPC Workflows

There are projects underway with the goal of integrating Kubernetes with MPI. One notable approach, kube-openmpi, uses Kubernetes to launch a cluster of containers capable of supporting the target application set. Once this Kubernetes namespace is created, it is possible to use kubectl and mpiexec to launch applications into the namespace and leverage the deployed OpenMPI environment. (kube-openmpi only supports OpenMPI, as the name suggests.)

Another framework, Kubeflow, also supports execution of MPI tasks atop Kubernetes. Kubeflow’s focus is evidence that the driving force for MPI-Kubernetes integration will be large-scale machine learning. Kubeflow uses a secondary scheduler within Kubernetes, kube-batch, to handle scheduling, and uses OpenMPI with a companion ssh daemon to launch MPI-based jobs.

While approaches such as kube-openmpi and Kubeflow provide the ability to launch MPI-based applications as Kubernetes jobs atop a containerized cluster, they essentially replicate existing ‘flat earth’ models for data-parallel application launch within the context of an ephemeral container space. Such approaches do not fully leverage the flexibility of the elastic Kubernetes infrastructure, or support the critical requirements of large-scale HPC environments, as described above.

In some respects, kube-openmpi is another example of the fixed-use approach to containers within HPC environments. For the most part there have been two primary approaches: either launch containers into a conventional HPC environment using existing application launchers (e.g., Shifter, Singularity, etc.), or emulate a conventional data-parallel HPC environment atop a container deployment (à la kube-openmpi).

While these approaches are serviceable for single-purpose environments or environments with relatively static or purely ephemeral use cases, problems arise when considering a mixed environment where consumers wish to leverage conventional workload manager-based workflows in conjunction with a native container environment. In cases where such a mixed workload is desired, the problem becomes how to coordinate the submission of work between the batch scheduler (e.g., Slurm) and the container orchestrator (e.g., Kubernetes).

Another approach to this problem is to use a meta-scheduler that coordinates the work across the disparate domains. This approach has been developed and promoted by Univa through their Navops Command infrastructure. Navops is based on the former Sun Grid Engine, originally developed by Sun Microsystems, then acquired by Oracle, and eventually landing at Univa.

While Navops provides an effective approach to addressing these mixed use coordination issues, it is a proprietary approach and limits the ability to leverage common and open solutions across the problem space. Given the momentum of this space and the desire to leverage emerging technologies for user-defined software stacks without relinquishing the advances made in the scale supported by the predominant workload schedulers, it should be possible to develop cleanly integrated, open solutions which support the set of existing and emerging use cases.

He, over all the starres doth raigne, that unto wisdome can attaine...

What Next?

So what will it take to truly develop and integrate a fully open, scalable, and flexible HPC stack that can leverage the wealth of capabilities provided by an elastic infrastructure? The following presents items on our short list:

  1. Peaceful Coexistence of Slurm with Kubernetes

    Slurm has become the de facto standard for open management of conventional HPC batch-oriented, distributed workloads. Likewise, Kubernetes dominates in the management of flexible, containerized application workloads. Melding these two leading technologies cleanly in a way that leverages the strengths of each without compromising the capabilities of either will be key to the realization of the full potential of elastic computing within the HPC problem space.

    Slurm already integrates with existing custom (and ostensibly closed) frameworks such as Cray’s Application Level Placement Scheduler (ALPS). It has been proven through integration efforts such as this that there is significant gain to be made by leveraging capabilities provided by such infrastructures. ALPS has been designed to manage application launch at scale and to manage the runtime ecosystem (including network and compute resources) required by large, hero-class applications.

    Like these scaled job launchers, Kubernetes provides significant capability for placement, management, and deployment of applications. However, it provides a much richer set of capabilities to manage containerized workflows that are familiar to those who are leveraging cloud-based ecosystems.

    While the flexibility of cloud computing allows users to easily spin up a modest-sized set of cooperating resources on which to launch distributed applications, within a conventional HPC infrastructure, designed for the execution of petascale and (coming soon) exascale applications, there are real resource constraints at play that require a more deliberate approach at controlling and managing the allocation and assignment of these resources.

    The ability to manage such a conventional workload-based placement strategy in conjunction with emerging container-native workflows has the potential of significantly extending the reach and broadening the utility of high performance computing platforms.

  2. Support for Elasticity within Slurm

    Slurm is quite effective in managing the scheduling and placement of conventional distributed applications onto nodes within an HPC infrastructure. As with most conventional job schedulers, Slurm assumes that it is managing a relatively static set of compute resources. Compute entities (nodes) can come and go during the lifetime of a Slurm cluster; however, Slurm prefers that the edges of the cluster be known a priori so that all hosts can be aware of all others. In other words, the list of compute hosts is distributed to all hosts in the cluster when the Slurm instance is initialized, and Slurm then manages the workload across this set of hosts. Management of a dynamic infrastructure within Slurm can therefore be a challenge.

  3. Mediation of Scheduler Overhead

    There is a general consensus that there are tangible advantages to the use of on-demand computing to solve high performance computing problems. There is also general consensus that the flexibility of an elastic infrastructure brings with it a few undesirable traits. The one that receives the most attention is added overhead. Any additional overhead has a direct impact on the usable computing cycles that can be applied by the target platform to the users’ applications. The source of that overhead, however, is in the eye of the beholder. If you ask someone focused on the delivery of containers, they would point to the bare-metal or virtual machine infrastructure management (e.g., OpenStack) as a significant source of this overhead. If you were to ask an application consumer attempting to scale a large, distributed application, they would likely point at the container scheduling infrastructure (e.g., Kubernetes) as a significant scaling concern. For this reason, it is common to hear comments like, “OpenStack doesn’t scale”, or “Kubernetes doesn’t scale”. Both are true… and neither are true. It really depends on your perspective and the way in which you are trying to build the infrastructure.

    This attitude tends to cause a stovepiping of solutions that address specific portions of the problem space. What is really needed is a holistic view covering a range of capabilities and solutions, and a concerted effort to provide integrated solutions. An ecosystem that exposes the advantages of each of the components of elastic infrastructure management, containerized software delivery, and scaled, distributed application support, while providing seamless coexistence of familiar workflows across these technologies, would provide tremendous opportunities for the delivery of high performance computing solutions into the next generation.

If you would like to get in touch we would love to hear from you. Reach out to us via Twitter or directly via our contact page.

by Kitrick Sheets at November 07, 2018 12:00 PM

Mirantis

Proof of Concept: A waste of time and money?

A proof of concept is a crucial part of any cloud project, and it shouldn’t be slapped together just to tick off a checkbox.

by Christian Huebner at November 07, 2018 01:39 AM

November 06, 2018

OpenStack Superuser

Must-see sessions on edge computing at the Berlin Summit

Join the people building and operating open infrastructure at the OpenStack Summit Berlin in November. The Summit schedule features over 200 sessions organized by use case, including artificial intelligence and machine learning, high performance computing, edge computing, network functions virtualization, container infrastructure and public, private and multi-cloud strategies.

Here we’re highlighting some of the sessions you’ll want to add to your schedule about edge computing. Check out the entire offering here.

Beyond the Hype: Edge Computing Working Group update

The Edge Computing Working Group has been busy defining use cases, from smart remote cameras and streaming content to augmented reality and gaming, and hammering out new ways of deploying emerging technology networks. Join this panel session to hear more about how the group has engaged with the larger OpenStack and open source communities over the past year on a number of projects, including:

  • Keystone development to support edge clouds
  • Glance support for federated models
  • Related projects such as Cyborg, Airship and StarlingX
  • Data synchronization with edge clouds
  • Work with adjacent communities: OPNFV edge cloud project, more communities to come
  • Vendor and user collaboration

Details here.

Orchestration and management for edge application with ONAP

In 2018, Akraino was proposed as the first open-source collaborative project in the Linux Foundation exclusively for edge clouds. Akraino is a framework that integrates projects like ONAP and StarlingX. The ONAP project handles orchestration and life cycle management on the top layer, while the StarlingX project is expected to serve as the edge cloud platform, integrating several OpenStack services on the bottom layer. In this presentation, Yang Yan (CMCC) and Shane Wang (Intel) will introduce the joint integration work across the edge deployment workflow and the internal mechanism of life cycle management. Details here.

Living on the Edge: Combining OpenStack, Kubernetes and Tungsten Fabric to make edge computing a reality

Edge compute deployments require optimized network and policy control, with a focus on reduced footprint.
This presentation by Marc Rapoport, Juniper Networks, will focus on the challenges introduced by these large-scale distributed deployments and on the latest enhancements introduced in Tungsten Fabric, OpenStack and Kubernetes to deliver an optimized architecture for the edge compute use case.
Details here.

Don’t Touch That! Addressing edge infrastructure management

Edge infrastructure management has many unique challenges including restricted access, number of sites, limited connectivity and amount of available overhead for management infrastructure. Each of these presents significant hurdles for running distributed sites and many operators face all of them. Rob Hirschfeld, RackN CEO, will examine these and other edge IT management challenges with an eye towards pragmatic solutions and industry parallels. He’ll spend extra time looking at how cloud deployment approaches like immutable infrastructure, blue/green deployments and continuous integration can be applied at the edge.
Hirschfeld brings his unique hardware dev-ops perspective and the opinions of his guests from “the latest shiny,” his edge-focused podcast.
Details here.

Infrastructure and network APIs at the edge

While initial software at the edge will consist of wireless as well as wireline access and core network functions, the real innovations will be driven by third-party applications such as IoT, media, analytics and AR/VR. Pervasive edge-optimized software development and fast deployment of these third-party applications, however, will require open APIs to the edge infrastructure and network services. In addition to easing onboarding, these APIs will enable ways to use information about the network and the available resources.
In this presentation, Haseeb Akhtar (Ericsson) and Gnanavelkandan Kathirvel (AT&T) will share:

  • Key architectural options of exposing infrastructure (e.g., OpenStack, Kubernetes, hardware etc.) and network (e.g., 5G RAN, Core etc.) APIs at the edge.
  • API requirements of third-party applications at the edge. What are their wants vs. needs?
  • The role of OpenStack, Kubernetes, ONAP etc. to include infrastructure information in the APIs.
  • A potential list of network APIs that can be used by these third-party applications.

Details here.

See you at the OSF Summit in Berlin, November 13-15, 2018! Register here.

The post Must-see sessions on edge computing at the Berlin Summit appeared first on Superuser.

by Superuser at November 06, 2018 05:22 PM

Trinh Nguyen

Viet OpenStack first webinar 5 Nov. 2018


Yesterday, 5 November 2018, at 21:00 UTC+7, about 25 Vietnamese developers attended the very first webinar of the Vietnam OpenStack User Group [1]. This is part of a series of upstream contribution training sessions based on the OpenStack Upstream Institute [2]. The topic was "How to contribute to OpenStack". Our goal is to guide new and potential developers to understand the development process of OpenStack and how the projects are governed.

The webinar was originally planned for Google Hangouts, but with the free version only a maximum of 10 people can join a video call, so we decided to use Zoom [3]. Because Zoom limits free accounts to 45 minutes per meeting, we split the webinar into two sessions. Thanks to the proactive support of the Vietnam OpenStack User Group administrators, the webinar went very well. Whatever works.

I uploaded the training content to GitHub [4] and will update it based on the attendees' feedback. A few pieces of feedback I got after the webinar:
  • Should have exercises
  • Find a more stable webinar tool
  • The training should happen earlier
  • The topics should be simpler for new contributors to follow
You can find the recorded videos of the webinar here:

Session 1: https://youtu.be/k3U7MjBNt-k

Session 2: https://youtu.be/nIkmIgTvfd4

We continue to gather feedback from the attendees and plan for the second webinar next month.

References:

[1] https://www.meetup.com/VietOpenStack/events/hpcglqyxpbhb/
[2] https://docs.openstack.org/upstream-training/upstream-training-content.html
[3] https://zoom.us
[4] https://github.com/dangtrinhnt/vietstack-webinars

by Trinh Nguyen (noreply@blogger.com) at November 06, 2018 01:04 PM

Searchlight weekly report - Stein R-23

The week after the Stein-1 milestone is a little bit quiet. There are only some trivial fixes for spec typos, the tox version, etc.

The main target of the Stein-2 milestone for Searchlight is to develop the use cases so that Searchlight can attract more users as well as contributors. The effort is led by Thuy Dang [1]. We also need to keep up with the community goal, the upgrade checker; that effort is being handled by the goal's champion [2]. I will review and merge the patch this week if it goes well.


References:

by Trinh Nguyen (noreply@blogger.com) at November 06, 2018 05:01 AM

November 05, 2018

Arie Bregman

OpenStack: Heat Python Tutorial

In this tutorial, we’ll focus on how to interact with OpenStack Heat using Python. Before diving deep into Heat Python examples, I suggest being familiar with Heat itself and more specifically:

  • Templates
  • Basic operations: create/delete/update stack

Still here? Let’s go 🙂

Set up Heat client

In order to work with Heat, we first need to create a Heat client.

from heatclient import client as heat_client
from keystoneauth1 import loading
from keystoneauth1 import session

# Fill in the values for your own cloud
kwargs = {
    'auth_url': '<your auth URL>',
    'username': '<your username>',
    'password': '<your password>',
    'project_name': '<your project name>',
    'user_domain_name': '<your user domain>',
    'project_domain_name': '<your project domain>'
}

loader = loading.get_plugin_loader('password')
auth = loader.load_from_options(**kwargs)
sess = session.Session(auth=auth, verify=False)

client = heat_client.Client('1', session=sess, endpoint_type='public', service_type='orchestration')

Note: if for some reason you are using auth v2 and not v3, you can drop user_domain_name and project_domain_name.

You should be able to use your heat client now. Let’s test it.

List Stacks

for stack in client.stacks.list():
    print(stack)

<Stack {
 u'description': u'',
 u'parent': None,
 u'deletion_time': None,
 u'stack_name': u'default',
 u'stack_user_project_id': u'48babe632349f9b87ac3513',
 u'stack_status_reason': u'Stack CREATE completed successfully',
 u'creation_time': u'2018-10-25T17:02:52Z',
 u'links': [
  {
   u'href': u'https://my-server',
   u'rel': u'self'
  }
 ],
 u'updated_time': None,
 u'stack_owner': None,
 u'stack_status': u'CREATE_COMPLETE',
 u'id': u'b90d0e57-05a8-4700-b2f9-905497abe673',
 u'tags': None
}>

The list method provides us with a generator that yields Stack objects. Each Stack object contains plenty of information: the name of the stack, details on the parent stack if it's a nested stack, the creation time and, probably the most useful one, the stack status, which lets us check whether the stack is ready to use.
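If you only care about stacks in a given state, a simple client-side filter over that generator does the trick. A minimal sketch:

# Keep only the stacks whose creation finished successfully
ready = [s for s in client.stacks.list()
         if s.stack_status == 'CREATE_COMPLETE']

for s in ready:
    print(s.stack_name, s.id)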

Create a Stack

In order to create a stack, we first need a template that defines what our stack will look like. I'm going to assume here that you've read the template guide and have a basic (or complex) template ready for use.

To load a template, the Heat developers have provided us with the get_template_contents method:

from heatclient.common import template_utils
import yaml

template_path = '/home/mario/my_template'

# Load the template
_files, template = template_utils.get_template_contents(template_path)

# Serialize it into a stream
s_template = yaml.safe_dump(template)

client.stacks.create(stack_name='my_stack', template=s_template)

Stack with parameters

In reality, there is a good chance your template includes several parameters that you have to pass when creating the stack. For example, take a look at this template:

heat_template_version: 2013-05-23
description: My Awesome Stack

parameters:
  flavor:
    type: string
  image:
    type: string

In order for the stack creation to complete successfully, we need to provide the flavor and image parameters. This requires a slight change in our code:

parameters = {'flavor': 'm1.large', 'image': 'Fedora-30'}

client.stacks.create(stack_name='my_stack', template=s_template, parameters=parameters)

We created a dictionary with the required parameters and passed it to the stack create method. When more parameters are added to your template, all you need to do is extend the ‘parameters’ dictionary, without modifying the create call.
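Updating or deleting a stack follows the same pattern. A minimal sketch (the new flavor value is only an example):

# Update the running stack with a changed parameter; same template, new flavor
client.stacks.update('my_stack', template=s_template,
                     parameters={'flavor': 'm1.xlarge', 'image': 'Fedora-30'})

# Delete the stack once you no longer need it
client.stacks.delete('my_stack')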

Inspect stack resources

Inspecting the stack as we previously did might not be enough in certain scenarios. Imagine you want to use some resources as soon as they are ready, regardless of the overall stack readiness. In that case, you'll want to check the status of a single resource. The following code will allow you to achieve that:

stack = client.stacks.get("my_stack")
res = client.resources.get(stack.id, 'fip')
if res.resource_status == 'CREATE_COMPLETE':
    print("You may proceed :)")

So what just happened? First, we obtain the ID of our stack. To do that, we use the stacks get method, passing our stack's name.

Now that we have the stack ID we can use it and the resource name we are interested in (‘fip’) to get the resource object.

Once we have the resource object, we can use ‘resource_status’ to check whether the resource creation has completed and proceed accordingly.
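In practice, you'll usually want to poll until the resource settles rather than check once. Here's a minimal sketch; the timeout and interval values are arbitrary:

import time

def wait_for_resource(client, stack_name, resource_name,
                      timeout=600, interval=10):
    # Poll a single resource until it completes, fails or we give up
    stack = client.stacks.get(stack_name)
    deadline = time.time() + timeout
    while time.time() < deadline:
        res = client.resources.get(stack.id, resource_name)
        if res.resource_status == 'CREATE_COMPLETE':
            return res
        if res.resource_status == 'CREATE_FAILED':
            raise RuntimeError(res.resource_status_reason)
        time.sleep(interval)
    raise RuntimeError('timed out waiting for %s' % resource_name)

# Usage: fip = wait_for_resource(client, 'my_stack', 'fip')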

Stack outputs

A quicker way to get the output we're interested in is the outputs section in Heat templates:

outputs:
  server_ip:
    value: {get_attr: [floating_ip, floating_ip_address]}
  server_ip2:
    value: {get_attr: [floating_ip2, floating_ip_address]}

In the above example, we provide the user with information about the floating IPs of two different servers. We can then access this information from Python this way:

print(stack.outputs)

[{u'output_value': u'10.2.224.22', u'output_key': u'server_ip2', u'description': u'No description given'}, {u'output_value': u'10.2.230.22', u'output_key': u'server_ip', u'description': u'No description given'}]

IP = stack.outputs[0]['output_value']
print(IP)

10.2.224.22

As you can see, each output is represented by its own item in the ‘outputs’ list, which (in my opinion at least) is a quicker way to access information than inspecting the resources.
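One more convenience worth mentioning: to avoid relying on the order of the list, you can turn the outputs into a dictionary keyed by output name. A small sketch:

# Build a {output_key: output_value} mapping for direct lookups
stack = client.stacks.get('my_stack')
outputs = {o['output_key']: o['output_value'] for o in stack.outputs}
print(outputs['server_ip'])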

by Arie Bregman at November 05, 2018 08:41 PM

OpenStack Superuser

Yes it blends: Vanilla Forums and private clouds

Moving from the public to the private cloud can be a nightmare of competing technologies and business priorities. As online community software provider Vanilla Forums grew its business, they realized that the private cloud — and OpenStack — was where they wanted to be. Luckily, the company was able to make the jump fairly easily, with COO Tim Gunter tasked with leading the transition.

Initially, the Vanilla Forums team set up their online servers using a public cloud. “It was a single server doing the web service and a single server with a database,” Gunter says, “and off they went.”

Gunter joined soon after, and the company grew from there, evolving how it hosts infrastructure. “It became pretty obvious after a few years hosted on the public cloud that we were paying too much for what we were getting,” Gunter adds. They consulted with their provider who recommended switching to a private cloud, still hosted by them, powered by OpenStack.

Vanilla Forums initially came to life in 2005, with a beta version released in 2002 by Mark O’Sullivan to support his online graphic design and developer community. The open source online forum software gained traction with those user communities through its initial release in 2005 and even powered the Mozilla add-on repository comment system for a while. Version two was written by O’Sullivan and Todd Burry in 2009 and the two ended up making a company out of the technology with the help of a Colorado startup incubator. The company now counts Electronic Arts, HootSuite, Patagonia and Adobe as customers.

The Vanilla Forums team adopted the Grizzly release in its cub days, saving Vanilla Forums a ton of money. “Right out [of] the gate we cut our hosting bill nearly in half by switching to a private cloud,” says Gunter. “We continue to grow and it’s much cheaper to add an entire hypervisor than to add a hypervisor’s worth of public cloud VMs, so we’re happy with it. Overall, it’s been much more stable than our public cloud VMs were.”

Like many folks in development, Gunter is self-taught. He’s built a strong relationship with VEXXHOST, who manages Vanilla Forums’ two private clouds, one in Montreal and one in the US. Gunter counts on the personal support he receives from VEXXHOST’s CEO Mohammed Naser, too.

“I don’t really know what I don’t know,” Gunter says. He bounces potential solutions off Naser, who offers his perspective with solutions that other VEXXHOST customers have successfully employed. “It’s more consultative than just a service provider.”

Gunter and his team also noticed that as hardware got cheaper, their price did not. “We started looking around to see if we could move to a different provider, potentially somewhere in Canada. We were looking to pay in Canadian dollars instead of American dollars, saving a bit of money and fostering a closer relationship with our hosting provider,” he said.

Vanilla Forums uses OpenStack to host their software-as-a-service product, a web application written in PHP. It runs on Nginx and MySQL on logical clusters so the company can host customers based on their specific technical, security and privacy needs. Gunter estimates there are currently about 50 different clusters, composed of cache servers, web servers, databases and so on, grouped together to serve websites as the main workload. Vanilla Forums also hosts some secondary workloads, namely a real-time asynchronous message queue, an analytics platform and a dynamically-generated icon service along the lines of Gravatar.

As Europe’s General Data Protection Regulation (GDPR) takes effect, EU customers need to work with companies that fit the legislation’s requirements around privacy and data security. “Canada has been designated by the EU as having ‘adequate privacy laws,’ which means that European companies are okay with us hosting their data here,” Gunter says. “The reasons vary, but increasingly people care where their data lives.”

Vanilla Forums started with just three hypervisors five years ago and expanded to a total of 27 now in two different regions. “We were initially only using Nova, so just VMs, but we’re now using Octavia and Barbican as well,” Gunter says. “We’re looking into some other stuff that OpenStack offers. We’re making use of floating IPs. As OpenStack matures and its features become stable, we evaluate them and see whether it makes sense to either add them or use them to replace what we’ve got. Load balancers are a great example of that.”

Gunter hosts most of his company’s SSL on Cloudflare, but still supports communication to his origin servers over SSL. The Cloudflare origin certificate gets hosted in Barbican, which is then used by the Octavia load balancers. This makes for a convenient way to store the certificate without additional servers or adding it to config management systems.

Early on, Vanilla Forums configured their Nginx servers to serve the certificates directly. They’d have to do an SSL handshake and negotiation, which could slow requests down as well as increase CPU load, reducing the company’s ability to serve concurrent page views. “Putting that on OpenStack allows us to do it outside of our actual VMs,” says Gunter. “It’s still being done by our environment, but it’s offloaded from something that we have to look at all the time and care about. It’s offloading tech debt, in a way.”

OpenStack allows the team to offload SSLs and SNI as well. “Using those two together, being able to spin up a load balancer at will and just connect the SSL certs that are already in Barbican to that load balancer saves us just an incalculable amount of time,” Gunter notes. “We previously had to send a ticket to our provider with the SSL cert, paste it into the ticket body and say, ‘Please give us a new IP and please put this cert with it, and let us know when that’s ready.’ It could be the end of the next day before they could get it done.”

What used to be an incredibly slow, manual process, is now taken care of on-site in three minutes with Octavia.

The ultimate benefits of using OpenStack, says Gunter, come down to cost. Even something as simple as having direct access to their environment and build tools lets a much smaller team stay on top of an expanding business. Even with six times its original hosting footprint, the company hasn’t had to grow its ops team.

In addition, staying vendor-agnostic allows Gunter and his team to move vendors when their business demands it. The transition from an earlier vendor to VEXXHOST was easier than he expected. “It took me half an afternoon, and suddenly we were compatible with this other OpenStack cloud,” he says. “I don’t think it would have been that way if we were on someone’s proprietary cloud.”

When the team needs to provision a new cluster, things are much improved. Instead of opening up a ticket with their vendor, they can do it in-house. “I can now do it in 10 minutes or so by myself, and it will be fully ready to roll,” Gunter notes. No vendor tickets, no outside delays.

He’s not a starry-eyed OpenStack evangelist, either, knowing that the technology still has room for improvement.  “It’s not an easy tool or architecture to use,” Gunter admits. “It’s very complicated, it doesn’t work out of the box, you have to do a ton of configuration, and it doesn’t just sit there and work on its own. You have to constantly mind it and care for it. The release frequency of new versions is rapid, so you need to constantly be patching it and deploying the latest version. It can be a full-time job for any environment of any size.”

Since much of OpenStack is still under development, the team has run into bugs and inconsistencies, too, as well as problems with documentation, conflicts between features and modules, and things that are not quite fully finished. At one point, Vanilla Forums’ Octavia implementation could shut down the entire OpenStack environment, thanks to a bug in the module’s health monitoring code. It was fixed relatively quickly but ended up representing two weeks of lost time.

The team has faced other challenges along the way, including an issue with security rules not getting applied to VMs. To fix it, they’d have to log into OpenStack, add a random security group to the VM, then delete it, at which point everything would start to work again. “It was frustrating because you’d build a cluster and then one of the VMs would just not work, and you’d spin your wheels for a day trying to figure it out,” says Gunter. “OpenStack is so complicated that no one [at our provider] could really figure out why that happened.”

Vanilla Forums is open source itself, so they’re no stranger to challenges like these, and would want it no other way. “We’re big contributors to the open source community,” Gunter says. “All of the work that we put into our core product goes straight to open source and is on GitHub for others to use.”

The company uses PHP, Nginx, and MySQL–all open source products. Plus, they contribute their enhancements to modules like the Fluentd log aggregation tool. “We’ve had to build several plugins for that in order to make it useful for us,” Gunter says, “and we’ve made those open source.”

Gunter and his team don’t contribute directly to OpenStack’s upstream efforts, but they do work with VEXXHOST, who does. Gunter’s advice for anyone considering an OpenStack implementation? “I would say do it,” he says. “If you like money, do it.”

The COO also suggests that companies hire an OpenStack expert to manage its implementation. “You’re going to need an OpenStack expert or find a third-party vendor like VEXXHOST to manage it for you,” he says. “It’s not something you can install and forget.”

According to Gunter, OpenStack works better the more services you combine within it. “We’re starting to realize [that fact] now with Barbican and Octavia and Nova all working together. It’s designed for its features to talk to each other and be more than the sum of the parts,” he says.

He describes OpenStack as an attractive, modern environment. “A lot of developers are keen to work on it, especially considering that it’s embracing containers,” says Gunter. “It can potentially have an impact on hiring, if you say, ‘We use OpenStack and we’re an OpenStack shop.’ You’re going to get candidates who you might not have gotten if you were using VMware, [for example].”

The future looks pretty bright for Vanilla Forums. On the technical front, the team is looking to deploy Gnocchi and use it to replace their third-party analytics service. They’re also talking to VEXXHOST about installing Magnum and getting Kubernetes running to work with containers.

Adding new capacity and regions has been made easier by OpenStack and VEXXHOST’s management. “The way that OpenStack works now, [regions] have been actually a lot easier to manage than I had initially thought with the previous provider,” Gunter said. “Deploying a second data center was something that I was dreading, and it’s actually been quite easy to do.”

Ultimately, the ability to grow as a company without incurring much more staff or infrastructure cost keeps Gunter and his company in the OpenStack game. Moving from the public to the private cloud was only the first step, of course, but working with VEXXHOST on the OpenStack platform made the transition and subsequent growth pretty smooth.

VEXXHOST’s CEO Mohammed Naser will be participating in four sessions — from Kubernetes to deployment tools — at the upcoming Berlin Summit. Superuser is always interested in user stories, get in touch: editorATopenstack.org

The post Yes it blends: Vanilla Forums and private clouds appeared first on Superuser.

by Rob LeFebvre at November 05, 2018 03:05 PM

November 02, 2018

OpenStack Superuser

Performance benchmarking of OpenStack-based VIM and VNFs for NFV environments

In addition to reducing CAPEX and meeting the requirement for low-latency, high-bandwidth networks, getting the most performance out of NFV infrastructure elements is critical for service providers. This post focuses on a case study evaluating NFV architecture components, i.e. VNFs (virtual network functions) and the VIM (virtual infrastructure manager), to deliver best-in-class performance to end users, and offers a valid approach to active testing.

Why performance benchmarking matters

Most communications service providers (CSPs) are evaluating or demonstrating the readiness of 5G in their networks. Some service providers have already launched 5G in selected cities. As telecom networks go through this transition, NFV, the core technology driving 5G implementations, is maturing thanks to active contributions from supporting communities and from vendors who are using it to build test cases and solutions that deliver the maximum potential benefits for a network.
Now, even with all the required technologies and reference models in place to build a 5G network, CSPs are still concerned with the end-to-end performance of network services. It will become even more important as users engage with connected devices to explore the benefits of new-age technologies like the internet of things, augmented or virtual reality, autonomous cars, etc. So performance in the live network as well as in the development environment becomes even more crucial, especially when using the network slicing feature supported in 5G, which requires delivering performance for sliced networks with different end-to-end quality of service (QoS) and quality of experience (QoE) characteristics and measurements, such as low latency, high throughput and low packet loss.

Challenges

There are a few challenges associated with testing NFV performance. NFV environments are typically built with elements (VNFs, MANO, NFVi) devised by different vendors: service providers can choose between MANO layers such as ONAP and ETSI OSM; the VIM can be a proprietary solution or the widely used OpenStack; VNFs from different vendors are incorporated or chained to build network services; and the NFVi is constructed using hardware from different platform vendors. Such an environment is highly complex, which has a major impact on the performance of network services and on the agility a service provider can deliver.
Service providers must test and benchmark the performance of NFV elements. Because VNFs are a critical part of NFV, their performance makes the difference in overall NFV operations, which has a direct impact on the network. VNFs usually come with different resource requirements, because they have different characteristics and are provided by different vendors, even if all of them share a common NFV infrastructure (NFVi). Apart from the VNFs, the performance and functionality of the VIM (virtual infrastructure manager) needs to be benchmarked against the resource and infrastructure requirements of a diverse set of VNFs.

Considerations

There are a few considerations worth making to achieve high performance and throughput from NFV elements, including:
• Performance must be monitored and tested to hunt down any errors
• Provisions should be in place to quickly get back to normal operations in case of performance degradation
• Performance testing should be carried out in the design phase to establish the infrastructure and resource requirements of VNFs. Validation checks are also needed after deployment to ensure that the allotted resources meet the requirements and the VNF delivers the expected performance.
• A DevOps or CI/CD approach should be integrated to actively track performance measures and apply fixes at runtime.

A case study

At Calsoft, we have built a demo focusing on functionality testing and performance benchmarking of an OpenStack-based VIM used for VNF deployment and performance testing.
Here are the tools and frameworks we used:
• OPNFV Functest framework for functionality validation
• OPNFV Yardstick for performance benchmarking and health tests
• VNFs used for OpenStack-based platform validation: Clearwater Metaswitch IMS, OAI EPC, Juju EPC and VyOS router
• End-to-end solution testing with commercially available vEPC VNFs on the cloud

Results

• We ran over 2,500 test cases from the Functest test suites and achieved a 95 percent success rate. These tests covered the OpenStack-based VIM as well as the open source VNFs (vIMS, VyOS vRouter, Juju EPC)
• A 90 percent pass rate on the OPNFV test cases for the VNFs: vIMS, vEPC and VyOS router

The full results are available free with registration here.


Interested in NFV? Check out these must-see sessions at the Berlin Summit.


The post Performance benchmarking of OpenStack-based VIM and VNFs for NFV environments appeared first on Superuser.

by Sagar Nangare at November 02, 2018 03:44 PM

Chris Dent

Placement Update 18-44

Good morning, it's placement update time.

Most Important

Lately attention has been primarily on specs, database migration tooling, and progress on documentation. These remain the important areas.

What's Changed

Bugs

Specs

Progress continues on reviewing specs.

Main Themes

Making Nested Useful

The nested allocations support has merged; that was the stuff that was on this topic.

There are some reshaper patches in progress.

I suspect we need some real world fiddling with nested workloads to have any real confidence with this stuff.

Extraction

There continue to be three main tasks in regard to placement extraction:

  1. upgrade and integration testing
  2. database schema migration and management
  3. documentation publishing

Most of this work is now being tracked on a new etherpad. If you're looking for something to do (either code or review), it's a good place to find something.

The db-related work is getting very close, which will allow grenade and devstack changes to merge.

Other

Various placement changes out in the world.

End

Apologies if this is messier than normal, I'm rushing to get it out before I travel.

by Chris Dent at November 02, 2018 11:44 AM

Trinh Nguyen

At the OpenInfra Day in Vietnam 25th August 2018

Last August in Hanoi, I had a chance to talk to the OpenStack Vietnam User Group (VietStack) about container monitoring, as well as to welcome contributors to the Searchlight project. The OpenInfra Day [1] was great and I made friends with many awesome OpenStack developers. You can check out the presentation slides here [2].



Reference:

[1] https://2018.vietopenstack.org/
[2] http://bit.ly/openinfra-vn-25aug2018

by Trinh Nguyen (noreply@blogger.com) at November 02, 2018 02:03 AM

November 01, 2018

OpenStack Superuser

Inside private and hybrid cloud: Must-see sessions at the Berlin Summit

Join the people building and operating open infrastructure at the OpenStack Summit Berlin in November. The Summit schedule features over 200 sessions organized by use cases including: artificial intelligence and machine learning, high performance computing, edge computing, network functions virtualization, container infrastructure and public, private and multi-cloud strategies.

Here we’re highlighting some of the sessions you’ll want to add to your schedule about hybrid and private cloud. Check out the entire offering here.

They didn’t stop to think if they should

Users come to you requesting the implementation of an additional OpenStack project, a configuration change or something else, says Sean Carlisle of Rackspace. You do the research, you draft up a plan for implementation, you get the kinks worked out, then you schedule and perform the maintenance. Things seem fine. Then one terrible day you start getting complaints about stability or performance and you realize you’ve made a huge mistake. Your users are using the new functionality in ways it was not designed for, and they want the problems fixed immediately!

A private cloud offers companies almost limitless flexibility, and with that flexibility comes the danger of going ‘off the rails’ in your environment. In this talk, Carlisle will discuss the potential pitfalls of giving in to some of these requests in an attempt to please your users, how to avoid some of these pitfalls and how to guide your users to a solution that will solve their problems and not leave you trying to fix a platform that isn’t broken.
Details here.

OpenStack Policy 101

OpenStack has had the policy file-based access control mechanism since Keystone was first introduced. Despite its maturity, developers, deployers and operators still have some confusion about what options are available and how it all ties together.

Red Hat’s Juan Osorio Robles, Harry Rybacki and Adam Young aim to clear the air by describing the following:

  • How policy works in OpenStack today with respect to developers and operators
  • The motivation behind OpenStack’s oslo.policy library and what it provides
  • How to write policies for your services incorporating oslo.policy
  • How to override a service’s default policies
  • How to use external services to evaluate policies
  • How to write oslo.policy enforcer drivers

Details here.

Ocado Technology’s robotic warehouses and grocery delivery using OpenStack

Luis Periquito from Ocado, the world’s largest online-only supermarket, based in the United Kingdom, will explain how the team uses OpenStack to power its robotic warehouses around the world. The presentation will cover the challenges faced in building and deploying in their warehouses. Details here.

Towards fully automated CERN private Cloud

Since 2012, CERN has been running an OpenStack private cloud with around 320,000 cores that supports not only the LHC but also services for the whole laboratory. The team has been scaling up the infrastructure to cover these computing needs while also increasing the service offering to include file shares, bare metal nodes and container orchestration clusters, among others.

The key aspects that have allowed the team to scale quickly and continuously adapt to user needs are automation and integration into the CERN ecosystem. Cloud architect Jose Castro Leon will review the tools that allow his team to offload most of the heavy-lifting tasks, further delegate administrative operations and react to monitoring alarms, including solutions based on Mistral and Rundeck that simplify project and resource management as well as support operations.

He’ll also look into ongoing work on services like Kubernetes jobs, Vitrage and Watcher that will further increase the automation provided. Details here.

Workday Private Cloud: Operational and Scaling challenges of growing from 50,000 to 300,000

Workday is a leader in enterprise human resources software-as-a-service (SaaS) solutions and has been active in the OpenStack community for many years. Driven by rapid customer growth and increased security needs, Workday’s OpenStack cloud has grown from a 600-server fleet in 2016 to 4,600 servers by the end of 2018. They’re planning on 45 OpenStack clusters hosting more than 22,000 virtual machines, dispersed across five data centers in different geographical regions and using 2PB of memory. A panel that includes Edgar Magana, Imtiaz Chowdhury, Howard Abrams and Sergio de Carvalho will detail how they did it. Details here.

OpenStack, Edge and AI: Creating the Digital Textile Factory at Oerlikon Manmade Fibers

The mass manufacturing of woven textiles began with the first industrial revolution in the 1700s. Today, the textile industry is experiencing the fourth industrial revolution, a transformation marked by connectivity and emerging technologies such as robotics, artificial intelligence and the Industrial Internet of Things (IIoT).

After the keynote, come to this fireside chat, where TechCrunch journalist Frederic Lardinois and Oerlikon’s Mario Arcidiacono will offer details on this exciting use case. Details here.

Oath Case Study: Zero Trust Security With Athenz

Oath has developed and open sourced a service authentication and role-based authorization system called Athenz to address zero trust principles, including situations where authenticated clients require explicit authorization to perform actions and where authorization always needs to be limited to the least privilege required. Come hear from James Penick and Mujibur Wahab about how the team is using Athenz to bootstrap instances deployed in both private and public clouds with service identities in the form of short-lived x.509 certificates that allow one service to securely communicate with another. At Oath, every OpenStack instance is powered by Athenz identities at scale. The pair will also discuss Athenz and its integration with OpenStack for RBAC and identity provisioning. Details here.
See you at the OSF Summit in Berlin, November 15-18 2018! Register here.


The post Inside private and hybrid cloud: Must-see sessions at the Berlin Summit appeared first on Superuser.

by Superuser at November 01, 2018 03:50 PM

October 31, 2018

OpenStack Superuser

Learn more about StarlingX: The edge project taking flight

If you’re ready to take flight with the OpenStack Foundation’s newest standalone project, StarlingX, here’s what you need to know.

StarlingX, an open source project that offers users services for a distributed edge cloud, recently launched its first release. The project builds on existing services in the open source ecosystem by taking components of cutting-edge projects such as Ceph, OpenStack and Kubernetes and complementing them with new services like configuration and fault management, with a focus on key requirements such as high availability (HA), quality of service (QoS), performance and low latency.

“We needed to be able to support growing new technologies like edge, NFV, containers, and machine learning with the existing community we already had,” Jonathan Bryce, executive director of the OpenStack Foundation, told SDX Central.

When it comes to edge, the debates on applicable technologies are endless, and to settle them it is crucial to be able to blend together and manage all the virtual machine (VM) and container-based workloads and bare metal environments, which is exactly what you get from StarlingX.

Here’s a breakdown of the five main components:

Configuration Management

Users get node configuration and inventory management services, with a highlight on supporting auto-discovery and configuration of new nodes, which are key when it comes to deploying and managing a large number of remote sites, some of which might be in hard-to-reach areas. This component comes with a Horizon graphical user interface and a command line interface to manage an inventory of CPUs, GPUs, memory, huge pages, crypto/compression hardware and more.

Fault Management

This framework allows users to set, clear and query custom alarms and logs for significant events for both infrastructure nodes as well as virtual resources such as VMs and networks. Users can access the Active Alarm List and Active Alarm Counts Banner from the Horizon GUI.

Host Management

The service provides life cycle management functionality to manage host machines via a REST API interface. This vendor-neutral tool detects host failures and initiates automatic recovery by providing monitoring and alarming for cluster connectivity, critical process failures, resource utilization thresholds and H/W faults. The tool also interfaces with the board management controller (BMC) for out of band reset, power-on/off and H/W sensor monitoring and shares host state with other StarlingX components.

Service Management

The Service Manager provides life cycle management of services by providing high availability (HA) through redundancy models such as N+M or N across multiple nodes. The service supports use of multiple messaging paths to avoid split-brain communication failures as well as active or passive monitoring and allows users to specify the impact of a service failure with a fully data-driven architecture.

Software Management

This service allows users to deploy software updates, both corrective content and new functionality, with a consistent mechanism applicable to all layers of the infrastructure stack – from the kernel all the way up to the OpenStack services. The module can perform rolling upgrades, including parallelization and support for host reboots, moving workloads off the node using live migration. Access is available from Horizon, a REST API or the CLI.

What’s next

At the upcoming Berlin Summit, the StarlingX community will be out in force. There are a number of sessions focusing on StarlingX; in addition to a project update and an onboarding session, they include:

  • “Ask me anything about StarlingX,” moderated by Greg Waines of Wind River Systems.
  • “Comparing Open Edge Projects” offers a detailed look into the architecture of Akraino, StarlingX and OpenCord and compares them with the ETSI MEC RA. Speakers include 99cloud’s Li Kai and Shuquan Huang and Intel’s Jianfeng JF Ding.
  • “StarlingX CI, from zero to Zuul”: Intel’s Hazzim Anaya and Elio Martinez will go over how the CI works and how to create new automated environments to extend functionality and cover new features and test cases that are not covered inside the OSF.
  • “StarlingX Enhancements for Edge Networking” covers the current state of the art and the gaps in edge networking, as well as StarlingX’s core projects, core networking features and enhancements for edge.

Get involved

Check out the code: Git repositories: https://git.openstack.org/cgit/?q=stx
Keep up with what’s happening with the mailing lists: lists.starlingx.io
There are also weekly calls you can join: wiki.openstack.org/wiki/StarlingX#Meetings
Or for questions hop on Freenode IRC: #starlingx
You can also read up on project documentation: https://wiki.openstack.org/wiki/StarlingX


The post Learn more about StarlingX: The edge project taking flight appeared first on Superuser.

by Nicole Martinelli at October 31, 2018 03:39 PM

Trinh Nguyen

Searchlight at Stein-1



At the last vPTG [1], the Searchlight team decided to do a release of the Searchlight projects at the Stein-1 milestone. Even though the release management team has agreed to move from the 'cycle-with-milestones' model to the 'cycle-with-rc' model [5], we still need a release this time to evaluate our effort of reviving Searchlight. We finally reached the milestone, and today I cut the release. The patch is being reviewed on Gerrit [2].

The projects are versioned as follows:
  • searchlight: 6.0.0.0b1
  • searchlight-ui: 6.0.0.0b1
  • python-searchlightclient: 1.4.0
Here are the features and fixes included in this release:
  • ES 5.x support [3]
  • Bug fixes
  • Versioned Nova notifications [4]
  • tox uses py3 by default
  • Docs cleanup
Awesome!!!!

References:
[1] https://etherpad.openstack.org/p/searchlight-stein-ptg
[2] https://review.openstack.org/#/c/614066/
[3] https://review.openstack.org/#/c/600287/
[4] https://review.openstack.org/#/c/453352/
[5] http://lists.openstack.org/pipermail/openstack-dev/2018-September/135088.html

Update 31st Oct, 2018: Searchlight Stein-1 released!!!! wwoohuu!!! \m/\m/\m/

by Trinh Nguyen (noreply@blogger.com) at October 31, 2018 05:05 AM

October 30, 2018

Emilien Macchi

OpenStack Containerization with Podman – Part 3 (Upgrades)

For this third episode, here are some thoughts on how upgrades from Docker to Podman could work for us in OpenStack TripleO. Don’t miss the first and second episodes where we learnt how to deploy and operate Podman containers.

Edit: the upstream code merged and we finally decided we wouldn’t remove the container during the migration from Docker to Podman. We would only stop it, and then remove containers at the end of the upgrade process. The principles remain the same and the demo is still valid at this point.

I spent some time this week investigating how we could upgrade an OpenStack undercloud that is running Docker containers so that it runs Podman containers, without manual intervention or service disruption. The way I see it at this time (the discussion is still ongoing), we could remove the Docker containers in Paunch just before starting the Podman containers and their services in systemd. It would be done per container, in serial:

for container in containers:
    docker rm container
    podman run container
    create systemd unit file && enable service

In the following demo, you can see the output of openstack undercloud upgrade with a work-in-progress prototype. You can observe HAProxy running in Docker and, during Step 1 of the container deployment, the container being stopped (top right) and immediately started in Podman (bottom right).

You might think “that’s it?”. Of course not. There are still some problems that we want to figure out:

  • Migrate containers not managed by Paunch (Neutron containers, Pacemaker-managed containers, etc).
  • Whether or not we want to remove the Docker container or just stop (in the demo the containers are removed from Docker).
  • Stopping Docker daemon at the end of the upgrade (will probably be done by upgrade_tasks in Docker service from TripleO Heat Templates).

The demo is a bit long as it shows the whole upgrade output. However, if you want to see when HAProxy is stopped in Docker and started in Podman, go to 7 minutes. Also, don’t miss the last minute of the video where we see the results (Podman containers, no more Docker containers managed by Paunch, and systemd services).

Thanks for following this series of OpenStack / Podman related posts. Stay in touch for the next one! By the way, did you know you could follow our backlog here? Any feedback on these efforts is warmly welcome!

by Emilien at October 30, 2018 04:02 PM

OpenStack Superuser

Hallo, Stackers! How to get the most from the Berlin Summit

No, it is not a typo: hello is hallo and morning is morgen in German, while who is wer and where is wo. The word “system” is quite funny to pronounce. Confused? At least let me try to make it easier for you to prepare for the next OpenStack Summit, in the hipster capital of Europe!

I’m attending the OpenStack Summit for the tenth time in a row this November, so when I was asked to write a guide, I was certain to come up with “The Best Ever OpenStack Summit, Or, Well, Any IT Conference Preparation Guide in History™” with all my accumulated knowledge and experience bringing out the most from such a big event. Then I looked at Ben Silverman’s guide to the previous conference and realized he already did the job. It’s a good compilation of general tips and tricks, so make sure that you read that, too.

In addition to his article, I’ll focus on three things that I’m looking forward to and tips that have helped me in past summits.

Set aside time for preparation

But seriously. I know, you travel a lot, you have your routine, this is your 74th conference, you still have time to look at the schedule/transportation/hotel/etc. details. But you do know that there are gotchas.

  • Events start on Saturday!

Check the schedule again: although the main event kicks off on Tuesday, two events take place before it. Hacking the Edge seems like a fun event to get your hands dirty with some OpenStack hacking, even on Raspberry Pis. For those who want to learn how to contribute to the projects, the Upstream Institute’s wonderful team will help during a free two-day course. (Okay, I’m also part of that team, but it is still a wonderful team.) Both events are free but require RSVPs.

  • Some talks are already fully booked

Rooms can be limited, so double-check whether your talk has an RSVP button in the schedule.

  • Organize your schedule

It takes a lot of time, you’re not in the mood, etc. But later you will be thankful for the discipline of your past self. If two talks that interest you overlap, no worries, add both; maybe you can’t make it to one of them, so there’s an alternative. Be nice though; don’t RSVP for parallel talks, you might take away someone else’s chance to attend.

  • Pay attention to the evening events

Free food (and drinks). A lot more to say, but you’ll still focus on the free food (and drinks).

  • Download the Summit app!

You will have an enlightening convo with some random dude in front of a room when you know you should already be running to the next talk, but have no idea where that is. The 8-core, 6GB RAM supercomputer in your pocket, a.k.a. your smartphone, will tell you where to go in a jiffy. Just install the official OpenStack Foundation Summit app.

  • Your hotel is miles away from the venue

Chances are high. The Berlin CityCube is quite far from downtown, so if your accommodation is off-site, you should allow for a 30-60 minute ride. Berlin has great public transportation, though.

  • Your room rate might not include breakfast

That’s sad, but in Europe, quite common. Might add another 20 minutes to your morning commute.

  • Which airport in Berlin are you flying to/from?

On Wikipedia, there’s a list of the many Berlin airports. As far as I know, you’ll use either TXL or SXF, but still, check before you end up at the wrong one.

My schedule in a nutshell

I LOVE attending the keynotes. The festive atmosphere provided by the thousands of people there always reminds me that OpenStack is not just a bunch of Python files – but one of the most open, cheerful, fantastic communities that you can be part of when it comes to the IT industry. I am sure we will meet there.

As someone who is interested in security and storage-related topics while digging deep in containerization, I’m looking forward to attending talks like these:

Besides my planned schedule, I’d like to share some hacks that helped me in the past:

Pro tip #1: If you know what particular topic you’re interested in, use the schedule filters to shorten the list of talks to your taste. Storage-oriented? Apply tags Cinder, Ceph, Glance. Want to learn more about automation?  Select the track CI/CD.

Pro tip #2: A lot of talks are recorded and will be published on YouTube later – if you cannot attend two talks in conflicting time slots, go to the one that’s not recorded.

Pro tip #3: Add all the talks you are interested in to your schedule. Even if you don’t attend, don’t remove them; after the Summit, on a rainy boring Wednesday afternoon you’ll have a look at your schedule and do your research / watch the video of the missed talk.

Meet your Steve!

Traveling thousands of miles, putting yourself out of your comfort zone just to sit in an overly-air-conditioned room for whole days listening to presentations would be silly, right? The OpenStack summit is so much more than this.

I have made colleagues, business partners and good friends on these events – if you think about it the basis is clearly there: shared interests, common goals, time for brainstorming and storytelling and of course a bit of booze to bring your greatest (ehm, hopefully, the greatest) self out.

And you learn from these connections, from these people a lot  – you might be a programmer, but that sales guy’s stories are hilarious. Already a manager? It’s always cool to go nerd out for a couple hours and talk about Amigas with that engineer behind the booths.

One of my best pals there is Steve, whom I might not see for long months if there are no OpenStack events, but when we do meet, we catch up from last time, for long hours. Once we even missed out on our VIP invitations just to hang out with each other. No regrets.

What I guarantee: there are another thousand Steves, Janes, Bills (and of course, Marks  😉) to make friends with during the week of an OpenStack Summit. So come to Berlin and find them!

About the author

Mark Korondi is a cloud engineer, consultant, CEE Days organizer and Upstream University instructor. Find him on Twitter at @kmarc.

Superuser is always interested in community content, get in touch at editorATopenstack.org

// CC BY NC

The post Hallo, Stackers! How to get the most from the Berlin Summit appeared first on Superuser.

by Mark Korondi at October 30, 2018 02:09 PM

October 29, 2018

OpenStack Superuser

How to create a self trust In Keystone

Let’s say you are an administrator of an OpenStack cloud, which means you are pretty much all-powerful in the deployment. Now you need to perform some operation, but you don’t want to do it with full admin privileges. Why? Well, do you work as root on your Linux box? I hope not. Here’s how to set up a self trust for a reduced set of roles on your token.

First, get a regular token, but use the --debug flag to see what the project ID, role ID and your user ID actually are:

In my case, they are … long uuids.

I’ll trim them down, both for obscurity and to make the output more legible. Here is the command to create the trust:

openstack trust create --project 9417f7 --role 9fe2ff 154741 154741

Mine returned:

+--------------------+----------------------------------+
| Field              | Value                            |
+--------------------+----------------------------------+
| deleted_at         | None                             |
| expires_at         | None                             |
| id                 | 26f8d2                           |
| impersonation      | False                            |
| project_id         | 9417f7                           |
| redelegation_count | 0                                |
| remaining_uses     | None                             |
| roles              | _member_                         |
| trustee_user_id    | 154741                           |
| trustor_user_id    | 154741                           |
+--------------------+----------------------------------+

On my system, role ID 9fe2ff is the _member_ role.

Note that, if you are Admin, you need to explicitly grant yourself the _member_ role, or use an implied role rule that says admin implies member.

Now, you can get a reduced scope token. Unset the variables that are used to scope the token, since you want to scope to the trust now.

$ unset OS_PROJECT_DOMAIN_NAME 
$ unset OS_PROJECT_NAME 
$ openstack token issue --os-trust-id 26f8d2eaf1404489ab8e8e5822a0195d
+------------+----------------------------------+
| Field      | Value                            |
+------------+----------------------------------+
| expires    | 2018-10-18T10:31:57+0000         |
| id         | f16189                           |
| project_id | 9417f7                           |
| user_id    | 154741                           |
+------------+----------------------------------+
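If you’d rather do the same from Python, keystoneauth supports trust-scoped authentication directly. Here’s a minimal sketch; the endpoint and credentials are illustrative placeholders:

from keystoneauth1.identity import v3
from keystoneauth1 import session

# Authenticate with your own credentials, scoped to the trust
auth = v3.Password(auth_url='https://keystone.example.com:5000/v3',
                   username='myuser', password='secret',
                   user_domain_name='Default',
                   trust_id='26f8d2')
sess = session.Session(auth=auth)
print(sess.get_token())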

This still requires you to authenticate with your userid and password. An even better mechanism is the new Application Credentials API. It works much the same way, but you use an explicitly new password. More about that next time.

This post first appeared on Adam Young’s blog.

For more on Keystone, an OpenStack service that provides API client authentication, service discovery and distributed multi-tenant authorization, check out the project Wiki.

Superuser is always interested in community content – get in touch: editorATopenstack.org

The post How to create a self trust In Keystone appeared first on Superuser.

by Adam Young at October 29, 2018 02:07 PM

October 26, 2018

Mirantis

Will Edge Computing Reverse Network Virtualization Momentum?

With edge, we have closed vendor solutions and open reference architectures, but nothing for VNF and software vendors to build on top of. Changing that is the first step towards virtualized edge.

by Boris Renski at October 26, 2018 07:14 PM

Cisco Cloud Blog

OpenStack Summit: Come for the Beer – Stay For the Cloud

With just over two weeks to go before the OpenStack Summit EU, the Cisco team is starting to get very excited about being back at this awesome event. For...

by Gary Kevorkian at October 26, 2018 03:39 PM

OpenStack Superuser

Inside HPC, GPU, AI: Must-see sessions at the Berlin Summit

Join the people building and operating open infrastructure at the OpenStack Summit Berlin in November. The Summit schedule features over 200 sessions organized by use cases including: artificial intelligence and machine learning, high performance computing, edge computing, network functions virtualization, container infrastructure and public, private and multi-cloud strategies.

Here we’re highlighting some of the sessions you’ll want to add to your schedule about HPC, GPU and AI. Check out all the sessions, workshops and lightning talks focusing on these three topics here.

The AI Thunderdome: Using OpenStack to accelerate AI training with Sahara, Spark and Swift

OpenStack lends itself well to big data problems, says Red Hat’s Sean Pryor; he’ll talk about how, with Swift and Ceph, data storage is easier than ever. One of the most consequential problems in the big data space is using AI to make sense of ever-increasing data volumes. OpenStack makes this a solvable problem: data stored in Swift can be accessed by a Sahara cluster, which can use GPU instances to accelerate parallel AI hyperparameter tuning. This ability allows users to spin up and down huge AI training farms at a fraction of the manual effort, and in the end, isn’t that what the cloud is all about? Details here.

NASA Goddard Private Cloud: Genesis and lessons learned

In the fall of 2016, NASA Goddard’s NASA Center for Climate Simulation (NCCS) and the Information Technology and Communications Directorate (ITCD) began a collaboration to provide an on-premises private cloud to the entire Goddard community, using hardware reclaimed from Discover, the NCCS’ traditional HPC cluster.

The Goddard Private Cloud (GPC) is on track for production availability in October 2018 running Queens; however, there are over 30 projects (and growing!) running in the prototype environment on Mitaka.
This talk from NASA’s Mike Moore will describe the challenges encountered and the innovative solutions devised on this journey, including telemetry/billing, data protection/DR, security, “cloudifying” workloads, containers and guiding HPC users through the paradigm shift to cloud computing. Details here.

Monitoring-as-a-Service in the HPC Cloud

When applications move to the cloud, the first move is to recreate the same platform on software defined infrastructure. This falls short of the true potential of cloud. OpenStack infrastructure can offer so much more – once cloud users become aware of the powerful APIs and services available to them.

In this talk, Stig Telfer of StackHPC Ltd. and Darryl Weaver of Verne Global will describe how to take HPC cloud migration to the next level. They’ll demonstrate the integration of Monasca services for monitoring and logging for performance-focussed deployments. They’ll show how this unlocks best-of-breed performance telemetry for all users, and how this opens new opportunities for users and admins to understand and optimize their applications. Details here.

Cyborg: Accelerate your cloud

As data center workloads evolve to become increasingly compute-intensive, there is a growing need for accelerators. There are a wide variety of accelerators, spanning GPUs, FPGAs, ASICs, and workload-specific ones such as TPUs. The Cyborg project in OpenStack aims to ease the adoption and lifecycle management of these diverse accelerator types.

Cyborg and Nova developers have put together an architecture to enable offload to various accelerators, says Intel’s Sundar Nadathur. The architecture includes FPGAs, which have unique needs for programming and bitstream management. The presentation will look at use cases for offloads to devices in general, programming models for FPGAs and the representation of devices (including FPGAs) in Placement. Nadathur will take a close look at the scheduling of instances that need accelerators and detail the architecture of os-acc, a library for Nova compute to interact with Cyborg. Finally, he’ll present the current status of Cyborg development. Details here.

See you at the OSF Summit in Berlin, November 15-18 2018! Register here.


The post Inside HPC, GPU, AI: Must-see sessions at the Berlin Summit appeared first on Superuser.

by Superuser at October 26, 2018 02:19 PM

Chris Dent

Placement Update 18-43

A placement update for you.

Most Important

Same as last week: The major factors that need attention are managing database migrations and associated tooling and getting the ball rolling on properly producing documentation. More on both of these things in the extraction section below.

Matt has sent out an email seeking volunteers from OpenStack Ansible or TripleO to get placement upgrade tooling in one of those projects.

Bugs

I guess it is because of various people doing upgrades, and some of the downstream projects starting to take more advantage of placement, but there's been a raft of interesting bugs recently. Many related to some of the more esoteric aspects of the ProviderTree handling in the resource tracker, the SQL in the placement service, or management of global state in WSGI servers. Initially this is a bit frustrating, but it's also a good thing: Finding and fixing bugs is the beating heart of an open source project. So thanks to everyone finding and fixing them.

Specs

The spec review sprint happened and managed to get some specs merged, so this list should have shrunk some.

Main Themes

Making Nested Useful

Work on getting nova's use of nested resource providers happy and fixing bugs discovered in placement in the process. This is creeping ahead. We recently confirmed that end-to-end success with nested providers is priority one for resource provider related work.

There's a topic for reshaper that still has some open patches:

Extraction

There continue to be three main tasks in regard to placement extraction:

  1. upgrade and integration testing
  2. database schema migration and management
  3. documentation publishing

There's been some good progress here. The grenade job works and is ready to merge independent of other things. The related devstack change is still waiting on the database management that's part of (2). As mentioned above, volunteers from OSA or TripleO are being recruited.

That db management is making some good headway with a working alembic setup but the tooling to use it needs to be formalized. The command line hack has been updated to use the alembic setup.

We have work in progress to tune up the documentation but we are not yet publishing documentation (3). The plan here is to incrementally improve things as we have attention and discover things. One goal with this is to keep the process moving and use followups to avoid nitpicking each other too much.

Other

Various placement changes out in the world.

End

It's tired around here.

by Chris Dent at October 26, 2018 01:36 PM

Trinh Nguyen

Searchlight weekly report - Stein R-25


The Searchlight team had a couple of activities last week:

  • Making Searchlight work with the versioned Nova notifications [1]: This feature originated from a Nova feature [4] proposed for the Pike release but could not be done at that time. At the last vPTG [5], the Searchlight team decided to include this feature in the Stein cycle.
  • Changing the python3.5 job to a python3.7 job on Stein+ [2]: the conversation about this [6] is still going on, so we want to wait a little bit.
  • A talk at the OpenStack Korea User Group on 19th Oct. in Gangnam, Seoul, South Korea [3]: I told everybody the story of Searchlight and why it needs more contributors. The impression was good.
That's it for last week. At the end of this week, we have to release searchlight, python-searchlightclient and searchlight-ui for the Stein-1 milestone. It is not required, but I want to do it to evaluate our effort in the new OpenStack cycle.

Beautiful!!!


Reference:

[1] https://review.openstack.org/#/c/453352/
[2] https://review.openstack.org/#/c/610775/
[3] https://www.dangtrinh.com/2018/10/at-openstack-korea-user-group-last.html
[4] https://specs.openstack.org/openstack/nova-specs/specs/pike/approved/list-instances-using-searchlight.html
[5] https://www.dangtrinh.com/2018/09/searchlight-vptg-summary.html
[6] http://lists.openstack.org/pipermail/openstack-dev/2018-October/135626.html

by Trinh Nguyen (noreply@blogger.com) at October 26, 2018 12:58 AM

October 25, 2018

OpenStack Superuser

What to expect from the upcoming OpenStack Forum

Forums were the public squares or marketplaces of ancient Roman cities where judicial activity and public business took place. At the OpenStack Forum, participants bring feedback, pain points and proposals following August’s Rocky release. The aim is to ensure the broadest coverage of topics, allowing the many facets of the community to get together and discuss key areas within our community and projects.

Quick recap: OpenStack Summits are events where the whole open infrastructure community gets together. There are keynotes, traditional presentations, training opportunities… The Forum is the part of the event where collaborative discussions happen. It focuses on strategic discussions, taking advantage of having a large cross-section of the OpenStack community present for fruitful user-dev brainstorming sessions.

The next Forum will be at the Berlin Summit. Forum sessions will include:

  • Strategic, whole-of-community discussions, to think about the big picture, looking beyond a single release cycle and at new technologies
  • Cross-team sessions, to coordinate work and decisions between various project teams, work groups or SIGs
  • Project-specific feedback, where developers can ask users specific questions about their experience, users can provide feedback from the last release, and there can be cross-community collaboration on the priorities and ‘blue sky’ ideas for the next release.

You can check out the schedule here.

How the Forum is organized

The Forum is for the entire community to come together, to create a neutral space for all contributors. Like in past Summits, we use Etherpads to brainstorm topics, starting a couple of months before the summit. Each team (or group of teams working together) should list topics, communicate with other teams and choose their most compelling ideas for formal submission. Afterward, a team of representatives from the User Committee, the Technical Committee, Foundation staff, and one from each of the new OSF projects will take the list of sessions proposed by the community and fill out the schedule.

This is not a classic conference track with speakers and presentations. OSF community members (participants in development teams, operators, working groups, SIGs, and other interested individuals) discuss the topics they want to cover and get alignment on, and we welcome your participation. The Forum is an opportunity to help shape the development of future project releases.

Practically speaking, the Forum will be three parallel rooms laid out in parliament style, running for the majority of the Summit.

There’s more detail on the wiki.

Can my Working Group or project still meet at the Summit?

There will be sessions for Working Group meetings and BoFs during all three days of the event, but these will be separate from the Forum. A Foundation staff member reaches out to Working Group leads about desired space several weeks before the Summit to schedule these sessions. If you would like to request space for your Working Group to meet, please contact summit@openstack.org.

I’m a developer, should I come?

We’d love as many developers as possible to come, but realize that some of you may have had to prioritize attending the PTG over the Summit. To achieve the objectives of the Forum as the big community interaction point, we need to have some significant representation from each project (PTLs, strategically-focused team members…). In order to keep travel costs under control for those attending both events, people physically attending the PTG receive a discount code to attend the Summit.

I’m a cloud operator, should I come?

Yes, this event will allow you to actively participate in the Open Design process. If possible, be sure to bring specific feedback from the latest release including bug links and your ideas for the next release.

I’m an application developer, should I come?

Yes, you are the reason we build the software and run the clouds. We need to know what you are trying to do and how your experience has been.

I’m a product manager, should I come?

Yes, we need your expertise to help shepherd discussions into tangible outcomes. Do note though, that OpenStack is not traditionally product-managed – we recommend you contact the Product Working Group before diving in deep!

Why are we doing this again?

  • To create the best possible software we can
  • To facilitate direct engagement between users and contributors
  • To help us be more strategic and thoughtful with planning (even beyond just one cycle!)

Check out the Wiki or the schedule for details.

The post What to expect from the upcoming OpenStack Forum appeared first on Superuser.

by Superuser at October 25, 2018 02:31 PM

Chris Dent

Quick Placement Development

One day a while back, I started blabbing out loud about some quick ways to experiment with a live placement service and was (very appropriately) reminded that I make a ton of incorrect assumptions about the familiarity people have with things I do most days. Statements like "just spin up the wsgi script against uwsgi and a stubbed out placement.conf and see what happens" are a good example of me being bad.

So, to help, here are some instructions on how to spin up the wsgi script against uwsgi and a stubbed out placement.conf, in case you want to see what happens. The idea here is that you want to experiment with the current placement code, using a live database, but you're not concerned with other services, don't want to deal with devstack, but need a level of interaction with the code and process that something like placedock can't provide.

As ever, even all of the above has lots of assumptions about experience and context. This post assumes you are someone who either is an OpenStack (and probably placement) developer, or would like to be one.

To make this go you need a unix-like OS, with a python3 dev environment, and git and mysql (or postgresql) installed. We'll be doing this work from within a virtualenv, built from the tox.ini in the placement code.

At the time of writing, some required code is not yet merged into placement, so we'll be using a patch that is currently under review. I'll update this document when that code merges so we can skip a step.

Get The Code and Deps

The placement code lives at https://git.openstack.org/cgit/openstack/placement. We want to clone that:

git clone git://git.openstack.org/openstack/placement
cd placement

Then we want to get the extra code mentioned above:

git pull https://git.openstack.org/openstack/placement refs/changes/61/600161/13

Setup The Database

That patch adds a command to create tables in a database. We need to 1) create the database, 2) create a virtualenv to have the command, 3) use it to create the tables.

The database can have whatever name you like. Whatever you choose, use it throughout this process. I chose placement. You may need a user and password to talk to your database; setting that up is out of scope for this document. I'm using a machine that has had devstack on it before so mysql is configured in what might be a familiar way:

mysql -uroot -psecret -e "DROP DATABASE IF EXISTS placement;"
mysql -uroot -psecret -e "CREATE DATABASE placement CHARACTER SET utf8;"

You may also need to set permissions:

mysql -uroot -psecret \
    -e "GRANT ALL PRIVILEGES ON placement.* TO 'root'@'%' identified by 'secret';"

Get the table create command by updating the virtualenv:

tox -epy36 --notest

Create a bare minimum placement.conf in the /etc/placement directory (which you may need to create):

[placement_database]
connection = mysql+pymysql://root:secret@127.0.0.1/placement?charset=utf8

(Note that when this command matures you will be able to name the location of the configuration file on the command line.)

Run the command to create the tables:

.tox/py36/bin/placement-manage db table_create

You can confirm the tables are there with mysqlshow placement
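
Or, using the same client flags as above for a quick sanity check (resource_providers ought to be in the list):

mysql -uroot -psecret placement -e "SHOW TABLES;"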

Run The Service

Now we want to run the service. We need to update placement.conf so it will produce debugging output and use the noauth strategy for authentication (so we don't also have to run Keystone). Make placement.conf look like this (adjusting for your database settings):

[DEFAULT]
debug = True

[placement_database]
connection = mysql+pymysql://root:secret@127.0.0.1/placement?charset=utf8

[api]
auth_strategy = noauth2

We need to install the uwsgi package into the virtualenv:

.tox/py36/bin/pip install uwsgi

And then use uwsgi to run the service. Start it with:

.tox/py36/bin/uwsgi --http :8000 --wsgi-file .tox/py36/bin/placement-api

If that worked you'll see lots of debug output and spawned uWSGI worker 1. Test that things are working from another terminal with curl:

curl -v http://localhost:8000/

Get a list of resource providers with (the x-auth-token header is required, openstack-api-version is optional but makes sure we are getting the latest functionality):

curl -H 'x-auth-token: admin' \
     -H 'openstack-api-version: placement latest' \
     http://localhost:8000/resource_providers

The result ought to look something like this:

{"resource_providers": []}

If it doesn't then something went wrong with the above and there should be more information in the terminal where uwsgi is running.

From here you can experiment with creating resource providers and related placement features. If you change the placement code, ctrl-c to kill the uwsgi process and start it up again. For testing, you might enjoy placecat.
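
For instance, a provider can be created with a POST. A sketch, using an arbitrary name (the content-type header is needed once a body is involved):

curl -X POST \
     -H 'x-auth-token: admin' \
     -H 'openstack-api-version: placement latest' \
     -H 'content-type: application/json' \
     -d '{"name": "fake-provider-1"}' \
     http://localhost:8000/resource_providers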

Here's a script to do the install for you, a sketch reassembling the steps above (the embedded original may differ; adjust database credentials to taste):
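
#!/bin/sh
# A sketch reassembling the steps above; assumes the same root/secret
# database credentials used earlier.

# Recreate the placement database.
mysql -uroot -psecret -e "DROP DATABASE IF EXISTS placement;"
mysql -uroot -psecret -e "CREATE DATABASE placement CHARACTER SET utf8;"

# Get the code, including the in-review table-create patch.
git clone git://git.openstack.org/openstack/placement
cd placement
git pull https://git.openstack.org/openstack/placement refs/changes/61/600161/13

# Write a minimal configuration.
sudo mkdir -p /etc/placement
sudo tee /etc/placement/placement.conf <<EOF
[DEFAULT]
debug = True

[placement_database]
connection = mysql+pymysql://root:secret@127.0.0.1/placement?charset=utf8

[api]
auth_strategy = noauth2
EOF

# Build the virtualenv, create the tables, and run the service.
tox -epy36 --notest
.tox/py36/bin/placement-manage db table_create
.tox/py36/bin/pip install uwsgi
.tox/py36/bin/uwsgi --http :8000 --wsgi-file .tox/py36/bin/placement-api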

by Chris Dent at October 25, 2018 12:30 PM

October 24, 2018

OpenStack Superuser

Taking a closer look at open source infrastructure

After rounding the six-year mark, the OpenStack Foundation decided to look with fresh eyes on its brand, the evolution of open source and the role of nonprofit organizations in open infrastructure.

Working with research partner ClearPath Strategies, OSF recently presented its findings. They’re the results of global quantitative and qualitative research. The survey polled some 501 respondents divided between operations and architects (30 percent), developers and dev-ops (20 percent), IT managers (25 percent) and CIOs and CTOs (25 percent). The survey was fielded in nine countries (United States, China and Hong Kong, Germany, Canada, India, Singapore, Ireland, Japan and South Korea) and in five languages (English, Chinese, German, Japanese and Korean). The surveys were evenly distributed across North America, Asia and Europe. The qualitative research comes from four focus groups in Seattle and Beijing and four in-depth interviews with open source influencers including representatives from Baidu, Google, Microsoft and Tencent.
You can check out the hour-long presentation on video, via the transcript or the slides.

Some takeaways:

  • Foundations are good for open source

The global IT professionals surveyed agreed that foundations which support projects with a common thread best serve users, who in time develop trust in their preferred foundation and come to expect consistency from its projects.
“I don’t think we want just a single foundation, foundations ought to be focused on a particular thing,” said one interviewee, described as an open-source influencer.

  • Open source empowers users to “unlock value” for companies

Open source is increasing productivity and adding value: some 57 percent of those surveyed say their open-source solutions easily integrate with their current environment. On the flip side, just eight percent “strongly agreed” with the statement that open-source technologies ‘require too much maintenance and cause more problems than they solve.’

“It’s definitely a lot more fun to operate in an open source environment where you can make dramatic changes and somebody’s used to something doing X, Y, and Z and now you can make it do a whole rainbow of things starting from the same base,” said one participant identified as an operator from Seattle.

  • The concept of “open infrastructure” is useful — if you can pin it down

For the purpose of the survey, the term was defined as “a catchall phrase for open-source infrastructure—IT infrastructure built from open-source components.” Based on that working definition, most participants were found to be only “somewhat familiar” with the term, though a majority was able to define it as pertaining to open source infrastructure. The catch? The term can encompass just about anything: a few participants thought “open infrastructure” might refer to structures like playground equipment.

Catch the entire presentation video, the transcript or the slides.

The post Taking a closer look at open source infrastructure appeared first on Superuser.

by Nicole Martinelli at October 24, 2018 02:05 PM

October 23, 2018

OpenStack Superuser

Rocking the innovation revolution with open source

PARIS — Most tech conferences rely on coffee to keep attendees alert, but OVH chairman and founder Octave Klaba picked up a guitar and hammered out a Metallica cover. Klaba then set down the guitar and returned onstage to kick off the event, outlining for a standing-room-only audience of 7,000 people, including livestream viewers, the revolution his company is leading.

Innovation for Freedom

Today’s explosion of data is fueling a revolution,  Klaba says. OVH is riding this wave by changing their motto from “Innovation is Freedom” to “Innovation for Freedom.”  Although they tweaked the shortest word in the slogan, the shift is significant. This subtle change represents a focus on what their customers and their customers’ users want to achieve and leveraging open source and their product portfolio to deliver the innovation that continues to open doors and leave them open.

As a first step, OVH has been working closely with their four million clients to learn about their use cases, gather feedback and develop four “universes” that encompass their use cases and product portfolio.

OVH Market

  • Digital toolbox for companies with 20-30 employees. These organizations need to be able to work better together and work better with their clients. They will have access to IP, email and customer relationship management services. The toolbox improves productivity and provides a digital workflow.

OVHSpirit

  • Core of legacy activity. This is infrastructure for people who are into hardware and networks so they have the tools they need to build their private cloud.

OVHStack

  • This is the OpenStack-powered public cloud intended for dev-ops—it’s an API-driven world. This is growing every day and while OVH may be behind the big three in the United States, Klaba says they are catching up.

OVHEnterprise

  • OVHEnterprise is for big companies who need OVHSpirit or OVHStack, but require a much larger scale.

“You can grow your experience depending on where you are in your digital transformation journey,” Klaba said. “With the universes, we have the foundation to address the specificities of each partner so that you are successful too.” While this strategy is a worldwide initiative, Klaba assured the audience that the execution would remain localized to account for regional legal restrictions.

Klaba turned to the audience and asked if anyone did not see themselves represented by the new strategy. Although zero hands were raised, he went the extra step of putting his email address on the keynote screen to welcome any feedback or concerns.

Retaining the DNA

To connect Klaba’s vision to day-to-day activities, OVH CEO Michel Paulin underscored how the personality of OVH remains constant despite the shift in strategy. He assured attendees that OVH’s rapid growth will continue, delivering the best cloud at the best prices.

“Thanks to this DNA, OVH’s cloud is different, it’s smart,” Paulin said as he explained how “smart” was an acronym that represented their innovation-driven approach to cloud that is distributed across 28 data centers worldwide.

The OVH cloud strategy is:

  • Simple, easy-to-deploy. The four universes allow users to have all of the tools they need to deploy and move applications easily.
  • Multi-local, as OVH has implementations and support on four continents, plus partnerships.
  • Accessible.
  • Reversible – your cloud should be flexible and liberated. OVH works with the Open Cloud Foundation to ensure that the cloud is open so that customers have choice.
  • Transparent: OVH shoulders the responsibility for data – their customers’ data and their customers’ clients’ data.

Turning to the audience, Paulin reminded them that this sense of innovation should empower them. “Migrating your apps should not imprison you in an irreversible model,” he said.

Turning to their own innovation, he discussed how large-scale water cooling is a strategic initiative that they are investing in, as well as robots in their server-building factories and ongoing research and development. OVH has invested 300 million euros (roughly US$344 million) to continue this growth. He assured the audience that they will continue to grow without losing their DNA, saying that the investment of today is the growth of tomorrow.

“The OVH team is mobilized,” he said. “With you, thanks to you, we’re going to be disruptive. It’s going to be seriously, seriously disruptive.”

Cool, clear water

Most people know OVH as a leader in cloud services, with a substantial OpenStack public cloud footprint. What they don’t know is that OVH also makes their own physical servers – a significant feat that only a handful of companies (tech titans like Google, Microsoft and Facebook) are bold enough to attempt.

François Sterin, OVH EVP and chief industrial officer, took the stage to discuss the innovation behind their hardware, announcing that they have recently produced their one millionth server. They have also created a new robotized factory to sustain their future server production needs, giving them the velocity required to bring features to their users faster.

“Our teams are motivated and we are playing on a worldwide stage,” Sterin said.

His proof lies in the water cooling. To create a more efficient energy-consumption model, OVH cools its processors with water to overcome heat density, a need that only grows with use cases like artificial intelligence (AI) and big data, which consume ever more CPU.

“We just invented autonomous bays, taking the water cooling behind the rack and added cooling doors so the servers are completely independent from the outside environment,” he said. “Agility, simplicity – this is all in our model. This is why we are called industrial and not just infrastructure.”

“Forever Trust in Who We Are, Nothing Else Matters”

To close out the keynotes, Klaba circled back to the inspiration behind OVH’s innovative approach to technology.

When he tells people about his company and his goals, people generally sympathize, but peg him as another hopeless dreamer. It’s seen as impossible to compete with giants while being based in Europe.

“Everyday there’s a challenge and yes, it’s work and it’s not always easy,” he said. “Europeans need to change the paradigm and change the way we work, not just replicating the US or Asia. Attempting the impossible is not crazy.”

Connecting to open source, Klaba discussed how open source is an ecosystem that is based on a sense of trust that cannot be achieved with proprietary solutions.

“A standard belonging to the community will generate more trust than what a proprietary solution can get on the stock exchange,” he said. “Here there’s momentum and we need to create something and we need to create it together.” He went on to encourage the creation of a virtual European giant of the Internet, based on a network of smaller European players linked by trust more than capitalistic ties. People may think this is a crazy objective, but it was definitely crazy to start OVH out of nothing in 1999 and grow it into the giant it is today.

To drive the point home, Klaba launched into a guitar solo and was then joined by the other band members for the only appropriate keynote closing: a hard-charging rendition of  Metallica’s “Nothing Else Matters.”

#impossible is not crazy! –#OVHsummit 2018 Tks Octave!! pic.twitter.com/HozIMortfL

— Vincent Carre (@vincarre) October 18, 2018

The post Rocking the innovation revolution with open source appeared first on Superuser.

by Allison Price at October 23, 2018 02:00 PM

Trinh Nguyen

At the OpenStack Korea User Group last Friday (19 Oct 2018)

Last Friday, I had an opportunity to tell the story of Searchlight to the OpenStack Korea User Group in Seoul. My ultimate goal is to attract new contributors and revive Searchlight. In just 30 minutes I walked people through the history of Searchlight, its architecture and the current situation. Everybody seemed to get the idea of why Searchlight needs their help. Even though not all of the attendees could understand my English, the communication was great thanks to the help of Ian Y. Choi, the organizer and a core member of the OpenStack Docs and I18N teams. Hopefully, I will have another chance to discuss Searchlight further with everybody.

by Trinh Nguyen (noreply@blogger.com) at October 23, 2018 09:00 AM

October 22, 2018

OpenStack Superuser

Inspiring the next generation of contributors to open infrastructure

Open source is fueled by the ongoing arrival of new contributors who offer fresh talent and diverse perspectives. Mentorship programs are critical in inspiring this next generation of contributors, and they enable those more experienced within a community to give back. The number and variety of mentorship programs that serve the OpenStack community is impressive, designed to fit a vast range of time and resource commitments and address the needs of newcomers, regardless of their entry point into the community.

Participants at the mentoring session of the Vancouver Summit.

One of these programs—the Speed Mentoring Workshop—was kicked off by the Women of OpenStack at the Austin Summit, and has since become a mainstay at the summits. Usually held towards the beginning of each conference, it’s a great way for newcomers to kick off the week, and it gives mentors a way to ‘pay it forward’ without an extensive time commitment. Featuring multiple 15-minute rotational rounds across career, community and technical tracks, these workshops are designed to address a wide range of needs and interests among those new to the community, or perhaps new to different teams and groups within the larger OpenStack community.

How to get involved

As we look forward to the OpenStack Summit in Berlin, we’re excited about hosting another workshop there to bring mentors and mentees together. These speed mentoring workshops would not be possible without a wealth of top-notch mentors who are generous with their time, knowledge and expertise, or without eager mentees willing to dive in, roll up their sleeves and contribute to the vibrant OpenStack community. Please join us and participate—we look forward to seeing you there!

OpenStack Summit Berlin

Speed Mentoring Lunch

Tuesday, November 13, 12:30-1:40 pm

Hall 7, Level 1, Room 7.1b / London 1

Click here for more details!

What people are saying about these programs

“These speed mentoring sessions allow attendees to make contact with individuals willing to answer their questions long past the end of the session,” Ell Marquez, mentee.

“A healthy community survives through its members, which is why the speed mentoring sessions actively prepare go-getting team players and future leaders. It also reminds me of my responsibilities towards others; how to build a healthy community,” Armstrong Foundjem, open source advocate.

“Passing on the wisdom learned from years of experience is an important element of this speed mentoring event. And both the mentor and mentee benefit from continuing and sustaining open source knowledge,” Nithya Ruff, senior director, Comcast Open Source Practice & Board Director, Linux Foundation.

“The breadth of people participating as mentors reflects the interest of our fellow stackers to help the next generation of stackers feel part of the community. The mentoring process offered during the OpenStack Summit is a user-friendly means to present fellow stackers with the tools, technologies and human connections to ease their growth in the community, find the right project, SIG or community to best use the talents they are offering,” Martial Michel, Ph.D., Chief Scientific Officer at Data Machines Corp., OpenStack Scientific Special Interest Group co-chair.

“Last year, Nalee Jang, a previous Korea user group leader guided me to attend a Women-of-OpenStack event during Boston Summit 2017. Thanks to her, I am now a successful leader in Korea user group. It has been great to share how diversity has affected my community career, and how to get more involved in OpenStack projects like Internationalization team (another diversity part),” Ian Y. Choi, mentor.

About the author
Nicole Huesman is a community and developer advocate at Intel. In this role, she works to increase awareness and strengthen the impact of Intel’s role in open source across cloud, containers, IoT, robotics and the web, through solid marketing strategies and cohesive storytelling.

The post Inspiring the next generation of contributors to open infrastructure appeared first on Superuser.

by Nicole Huesman at October 22, 2018 02:08 PM

October 19, 2018

OpenStack Superuser

Pairing OpenStack and open source MANO for NFV deployments

OpenStack for NFV

As we know, OpenStack is mainly known as the largest pool of open source projects, which collectively form the software platform for cloud computing infrastructure. This infrastructure is widely used for private clouds by many enterprises. After the introduction of NFV by ETSI, OpenStack emerged as a key infrastructure platform for NFV. In most NFV deployments, OpenStack is used at the VIM (virtual infrastructure manager) layer to provide a standardized interface for managing, monitoring and assessing all resources within the NFV infrastructure.

Various OpenStack projects (like Tacker, Neutron, Nova, Astara, Congress, Mistral, Senlin, etc.) are capable of managing the virtualized infrastructure components of an NFV environment. As an example, Tacker is used to build a generic VNF manager (VNFM) and NFV orchestrator (NFVO), which help in the deployment and operation of VNFs within the NFV infrastructure. Additionally, the integration of OpenStack projects brings various features to the NFV infrastructure: performance features like huge pages, CPU pinning, NUMA topology and SR-IOV; service function chaining; network slicing; scalability; high availability; resiliency; and multisite enablement.

Telecom service providers and enterprises have implemented their NFV environment with OpenStack: AT&T, China Mobile, SK Telecom, Ericsson, Deutsche Telekom, Comcast, Bloomberg, etc.

Open Source Mano (OSM) for NFV

The MANO layer is responsible for orchestration and complete life-cycle management of hardware resources and virtual network functions (VNFs). In other words, the MANO layer coordinates NFV infrastructure (NFVI) resources and maps them efficiently to various VNFs. Various options are available for the MANO software stack, but the ETSI-hosted OSM is largely preferred due to its active community, mature framework, production readiness, ease of getting started and the constant stream of use cases fed in by members.

The virtual network functions (VNFs) that form a network service may need updates to add features or patch functionality. OSM provides a method to invoke the VNF upgrade operation with minimal impact on the running network service.

With the continuous community support and involvement for feature innovation, OSM has now evolved to bring CI/CD (continuous integration and continuous delivery) framework at MANO layer.

The latest release of OSM (release four) brought a large set of features and enhancements to the framework, improving functionality, user experience and maturity, and advancing NFV MANO from both a usability and an interoperability perspective.

OSM has steadily adopted cloud-native principles and can be easily deployed in the cloud, as the installation is container-based and runs with the help of a container orchestration engine. A new northbound interface, aligned with the ETSI NFV specification SOL005, provides dedicated control of the OSM system. Monitoring and closed-loop capabilities have also been enhanced.
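
As a concrete illustration (not from the talk itself), installing Release FOUR comes down to fetching and running the installer on a supported Ubuntu host; the download path below follows ETSI's release naming and may change:

wget https://osm-download.etsi.org/ftp/osm-4.0-four/install_osm.sh
chmod +x install_osm.sh
./install_osm.sh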

The next version, OSM release five, is expected to launch in November 2018 and arrive bundled with more 5G-related features, like network slicing and container-based VNFs.

Why OpenStack + open source MANO for the MANO layer in NFV?

Both OpenStack and OSM have large communities that innovate for NFV at a rapid pace, with substantial contributions from companies to enhance current features and develop new capabilities for the core projects.

In the case of NFV, OpenStack standardizes the interfaces between NFV elements and infrastructure. OpenStack is used in commercial offerings by companies like Canonical/Ubuntu, Cisco, Ericsson, Huawei, IBM, Juniper, Mirantis, Red Hat, Suse, VMware and Wind River. A large percentage of VIM deployments are based on OpenStack, thanks to the simplicity of handling and operating the various projects that provide the full potential of storage, compute and networking for the NFVI.

With the last two releases (three and four), OSM has evolved a lot to support a cloud-native approach by bringing CI/CD frameworks into the orchestration layers. OSM's cloud readiness is the key benefit, complementing OpenStack's proven architecture for private as well as public clouds. OSM deployment into NFV infrastructure has become very lean: one can start by importing Docker containers into production. OpenStack, for its part, is known for making virtualized and containerized infrastructure simple to manage. Organizations can realize the full benefits of an NFV MANO built on OSM and OpenStack thanks to this lean, simple management and deployment.

References

https://www.openstack.org/assets/presentation-media/Achieving-end-to-end-NFV-with-OpenStack-and-Open-Source-MANO.pdf

https://osm.etsi.org/images/OSM-Whitepaper-TechContent-ReleaseFOUR-FINAL.pdf 

https://www.openstack.org/assets/telecoms-and-nfv/OpenStack-Foundation-NFV-Report.pdf


This article is based on a session by Gianpietro Lavado (solution architect, Whitestack) at the 2018 OpenStack Summit in Vancouver. He is a leading contributor to ETSI Open Source MANO as well.

For more on NFV and OpenStack, check out the dedicated track at the upcoming Summit Berlin.

About the author

Sagar Nangare, a digital strategist at Calsoft Inc., is a marketing professional with over seven years of experience in strategic consulting, content marketing and digital marketing. He’s an expert in technology domains like security, networking, cloud, virtualization, storage and IoT.

This post first appeared on the Calsoft blog.

The post Pairing OpenStack and open source MANO for NFV deployments appeared first on Superuser.

by Superuser at October 19, 2018 02:06 PM

Chris Dent

Placement Update 18-42

After a gap while I was away last week, here's this week's placement update. The situation this week remains much the same as last week: focus on specs and the bigger issues associated with extraction.

Most Important

The major factors that need attention are managing database migrations and associated tooling and getting the ball rolling on properly producing documentation. More on both of these things in the extraction section below.

What's Changed

mnaser found an issue with the migrations associated with consumer ids. A fix was created in nova and ported into placement, but it raised some questions about what to do with those migrations in the extracted placement. Some work also needs to be done to make sure the solutions will work in PostgreSQL, as they might tickle its stricter handling of GROUP BY clauses.
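
For the unfamiliar, the strictness looks like this (a generic illustration with a hypothetical pets table, nothing from the placement schema): MySQL, without ONLY_FULL_GROUP_BY, will often accept the query below, while PostgreSQL rejects it because name is neither aggregated nor listed in the GROUP BY.

# PostgreSQL: ERROR: column "pets.name" must appear in the GROUP BY clause
psql -c "SELECT owner_id, name, count(*) FROM pets GROUP BY owner_id;"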

Bugs

Specs

There's a spec review sprint this coming Tuesday. This may be missing some newer specs because I got exhausted keeping tabs on the ones that already exist.

Main Themes

Making Nested Useful

Work on getting nova's use of nested resource providers happy and fixing bugs discovered in placement in the process. This is creeping ahead, but feels somewhat stalled out, presumably because people are busy with other things.

I feel like I'm missing some things in this area. Please let me know if there are others. This is related:

Extraction

There continue to be three main tasks in regard to placement extraction:

  1. upgrade and integration testing
  2. database schema migration and management
  3. documentation publishing

The upgrade aspect of (1) is in progress with a patch to grenade and a patch to devstack. This is very close to working. A main blocker is needing a proper tool for managing the creation and migration of database tables (more below).

My experiments with using gabbi-tempest are getting a bit closer.

Successful devstack is dependent on us having a reasonable solution to (2). For the moment a hacked-up script is being used to create tables. Ed has started some work on moving to alembic.

We have work in progress to tune up the documentation but we are not yet publishing documentation (3). We need to work out a plan for this. Presumably we don't want to be publishing docs until we are publishing code, but the interdependencies need to be teased out.

Other

Various placement changes out in the world.

End

Hi!

by Chris Dent at October 19, 2018 12:00 PM

October 18, 2018

OpenStack Superuser

How open source communities are coming together to build open infrastructure

At the upcoming Summit Berlin, you’ll find a large contingent of open-source projects meeting up to work together. They include Ansible, Ceph, Docker, Kata Containers, Kubernetes, ONAP, OpenStack, Open vSwitch, OPNFV and Zuul. Also check out the open source community track which covers community management, diversity and inclusion, mentoring, open source governance, ambassadors and roadmap development.

Here are a few picks from the packed schedule of 200 sessions plus workshops:

Kubernetes Administration 101: From zero to hero

Do you already know OpenStack and some Docker, but are new to Kubernetes? This daylong hands-on training will teach you the main concepts and daily administration tasks of Kubernetes and give your career a boost.

Taught by Laszlo Budai of Component Soft Ltd., it will cover Linux containers and Kubernetes, accessing Kubernetes and access control, workloads, accessing applications and persistent storage. Space is limited and RSVP is required. Details here.

The evolution of Open vSwitch integration for OpenStack

Open vSwitch (OVS) has been an important component of the most commonly used networking backend for OpenStack Neutron for several years. Both the OpenStack and Open vSwitch projects have evolved quite a bit. OVN (Open Virtual Network) is a new implementation of virtual networking from Open vSwitch that can be used by OpenStack, but also other projects such as Kubernetes.

This session with Daniel Alvarez Sanchez and Numan Siddique of Red Hat walks through the journey of Open vSwitch in OpenStack and covers the latest state of the OVN integration with OpenStack. They will discuss how OVN differs from the original OVS and OpenStack integration, as well as how to migrate an existing deployment to OVN. Details here.

Zuul at BMW: Large scale automotive software development

Since the introduction of software in cars, the complexity of automotive software has been constantly rising. BMW manages parts of that complexity with continuous integration (CI) systems by automating all stages of the software lifecycle. But some huge software projects, like autonomous driving, are starting to be limited by the performance of available CI solutions. Tobias Henkel will give an overview of the CI requirements for software projects at BMW and how the automaker uses Zuul to develop software at large scale. Details here.

Open source orchestrators for NFV – What’s going on?

There are so many orchestrators that operators wonder whether it makes sense to build one or choose from competing models including ONAP, OSM, OPNFV, Tacker, TOSCA, YAML and NETCONF. In the rapidly changing landscape of open source orchestration for NFV, it’s easy to get confused about which project is focused on what and what each community’s strengths, weaknesses and key areas of focus are.

Join Vanessa Little, Layer123 NFV Advisory Panel, committee member, for an interactive session that offers a clear-eyed view on what’s out there and where these projects are headed. Details here.

Artificial intelligence-based container monitoring, healing and optimizing

Monitoring the container ecosystem becomes critical for large business applications with complex use cases, making it challenging for human brains to troubleshoot problems. Though traditional monitoring tools are available for containers, recurring problems caused by business use-case flows cannot be monitored or healed in traditional Docker monitoring systems.

With the help of AI, containers can be monitored in a way where operators can inject rules depending on their use case, saving sapiens from troubleshooting. Moreover, AI gives use-case architects the flexibility to define their commonly known problems in a Docker environment and define rules accordingly to mitigate them dynamically, with better prediction algorithms gradually optimizing the containers in the long run.

This presentation from Cisco’s Sreekrishna Senthilkumar, Aman Sinha and Sachin Joshi shows how to apply the benefits of AI in the container ecosystem to the business problems of container monitoring and healing. Details here.

Spectre/Meltdown at eBay Classifieds Group: Rebooting 80,000 cores

eBay Classifieds Group has a private cloud distributed across two geographical regions (with plans for a third), around 1,000 hypervisors and a capacity of 80,000 cores.

After Spectre and Meltdown, the team needed to patch the hypervisors in four availability zones per region with the latest kernel, KVM version and BIOS updates. During these updates the zones were unavailable and all the instances restarted automatically. The entire process was automated using internally created Ansible playbooks, with the OpenStack API driving the operations.

Bruno Bompastor and Adrian Joian will walk attendees through all the work done to shut down, update and successfully boot a fully patched infrastructure without data loss. They’ll also go into the OpenStack challenges, missing features and workarounds. The pair will also discuss the management of their SDN (Juniper Contrail) and LBaaS (Avi Networks) when restarting this massive number of cores. Details here.


The post How open source communities are coming together to build open infrastructure appeared first on Superuser.

by Superuser at October 18, 2018 03:48 PM

Fleio Blog

Fleio billing 1.1 adds OpenStack Rocky support and domain name registration

We’ve just released Fleio billing 1.1 introducing support for OpenStack Rocky, domain name registration and many more features. This is the second Fleio stable release and it marks our change in direction to a more general billing solution for cloud computing and classic web hosting solutions. Main new features include: We now officially support Rocky, […]

by adrian at October 18, 2018 06:10 AM

Adam Young

Creating a Self Trust In Keystone

Let’s say you are an administrator of an OpenStack cloud. This means you are pretty much all-powerful in the deployment. Now you need to perform some operation, but you don’t want to give it full admin privileges. Why? Well, do you work as root on your Linux box? I hope not. Here’s how to set up a self trust for a reduced set of roles on your token.

First, get a regular token, but use the --debug flag to see what the project ID, role ID and your user ID actually are. With the standard python-openstackclient, that is along the lines of:
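
# --debug prints the full auth exchange, including the project,
# role and user IDs embedded in the token.
openstack token issue --debug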

In my case, they are … long uuids.

I’ll trim them down, both for obscurity’s sake and to make things more legible. Here is the command to create the trust.

openstack trust create --project 9417f7 --role 9fe2ff 154741 154741

Mine returned:

+--------------------+----------------------------------+
| Field              | Value                            |
+--------------------+----------------------------------+
| deleted_at         | None                             |
| expires_at         | None                             |
| id                 | 26f8d2                           |
| impersonation      | False                            |
| project_id         | 9417f7                           |
| redelegation_count | 0                                |
| remaining_uses     | None                             |
| roles              | _member_                         |
| trustee_user_id    | 154741                           |
| trustor_user_id    | 154741                           |
+--------------------+----------------------------------+

On my system, role_id 9fe2ff is the _member_ role.

Note that, if you are Admin, you need to explicitly grant yourself the _member_ role, or use an implied role rule that says admin implies member.
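
That grant looks something like this (IDs trimmed as above):

openstack role add --project 9417f7 --user 154741 _member_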

Now, you can get a reduced scope token. Unset the variables that are used to scope the token, since you want to scope to the trust now.

$ unset OS_PROJECT_DOMAIN_NAME 
$ unset OS_PROJECT_NAME 
$ openstack token issue --os-trust-id  26f8d2eaf1404489ab8e8e5822a0195d
+------------+----------------------------------+
| Field      | Value                            |
+------------+----------------------------------+
| expires    | 2018-10-18T10:31:57+0000         |
| id         | f16189                           |
| project_id | 9417f7                           |
| user_id    | 154741                           |
+------------+----------------------------------+

This still requires you to authenticate with your user ID and password. An even better mechanism is the new Application Credentials API. It works much the same way, but you use an explicitly created secret. More about that next time.
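
As a teaser, creating one is a single command; a sketch, with an arbitrary credential name:

openstack application credential create --role _member_ my-reduced-cred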

by Adam Young at October 18, 2018 02:44 AM

October 17, 2018

OpenStack Superuser

What’s the value of contributing to open source?

Nordix is a new effort to encourage more organizations in Nordic countries to participate in open source and drive global innovation.

Johan Christenson, CEO of City Network, and Chris Price, president of Ericsson Software Technology, shared their personal passion for open source and introduced the organization they recently founded at the OpenStack Days Nordic event.

According to Christenson, very few enterprises in the region engage with open source. Most are consuming proprietary software and services, which means they are missing out on innovation and opportunities. Without open source, technology choices and even business models are much more limited.

Christenson reminded the audience of the open-source legacy from the Nordics, including both Linux and MySQL. He also pointed to Ericsson as a shining example when it comes to open source contributors in the region.

But that can be a challenge for Price as he tries to spread the open-source message. When he talks to other organizations about getting involved, they often say “sure, but you’re Ericsson. You have the scope and scale. We can’t do that, because we only have one developer.”

“But there are lots of organizations with ‘one developer,’” said Price. “Our mission is to make it easier for them to get involved and drive this innovation.” And contributing doesn’t just mean code. There are plenty of other ways to get involved, including engaging in community mailing lists or events to make sure the right use cases are represented.

So what’s the value of using and contributing to open source software?

By contributing, you help influence the software roadmap and have more control over landing the features you need for your use case.

But it’s more than simply landing code. If you can articulate your use case and requirements, it’s an opportunity to collaboratively solve your problems. As a result, you don’t just get your feature, but can influence the community’s thinking, which will have a broader impact on future development because they will have more knowledge and context for your use case and approach.

There’s also the benefit of learning how to better operate and use the open source technology. Quoting professor Frank Nagle, who has done a significant amount of research about open source and crowdsourcing at Harvard Business School, “companies who contribute and give back learn how to better use the open-source software in their own environment.”

Nordix will focus on education and making it easier for people to get started contributing. One example is Upstream Institute, a hands-on workshop to help new open source contributors get set up with the right tools and land their first patch. The program was run by the OpenStack Foundation and community volunteers and was hosted alongside OpenStack Days Nordic. The next chance to participate (for free!) will be November 11-12 at the Berlin Summit.

If you are in the Nordic region and want to encourage more open source collaboration, check out www.nordix.org.

The post What’s the value of contributing to open source? appeared first on Superuser.

by Lauren Sell at October 17, 2018 02:07 PM

NFVPE @ Red Hat

Setup an NFS client provisioner in Kubernetes

Setup an NFS client provisioner in Kubernetes One of the most common needs when deploying Kubernetes is the ability to use shared storage. While there are several options available, one of the most common and easiest to set up is an NFS server. This post will explain how to set up a dynamic NFS client provisioner on Kubernetes, relying on an existing NFS server on your systems. Step 1. Setup an NFS server (sample for CentOS) First thing you will need, of course, is to have an NFS server. This can be easily achieved with some easy steps: Install nfs package:…
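
On CentOS 7, that server side amounts to roughly the following (a sketch, with an arbitrary export path):

# Install the NFS server bits and export a directory to the network.
yum install -y nfs-utils
mkdir -p /srv/nfs/kube
echo "/srv/nfs/kube *(rw,sync,no_root_squash)" >> /etc/exports
systemctl enable --now nfs-server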

by Yolanda Robla Mota at October 17, 2018 01:14 PM

The Official Rackspace Blog

How Private Cloud as a Service Can Help Your Security Posture

Cloud computing has transformed how organizations need to think about securing their data and assets. As businesses continue to adopt cloud-based initiatives to support artificial intelligence, internet of things, big data and more, they remain apprehensive about protecting critical assets. After all, as our Chief Operations and Product Officer David Meredith noted in a post […]

The post How Private Cloud as a Service Can Help Your Security Posture appeared first on The Official Rackspace Blog.

by Dan Houdek at October 17, 2018 12:00 PM

October 16, 2018

OpenStack Superuser

Vote now for the Berlin Summit Superuser Award

Who do you think should win the Superuser Award for the Berlin Summit? Cast your vote before October 21 at 11:59 p.m. Pacific Standard Time.

When evaluating the nominees for the Superuser Award, take into account the unique nature of use case(s), as well as integrations and applications of OpenStack by each particular team.

Check out highlights from the five nominees and click on the links for the full applications:

  • Adform, CloudServices Team
    “We have three OpenStack deployments for different tiers in seven regions all over the world. In total there are over 4,500 VMs on over 200 hosts. It’s used by several hundred company developers to provide service to millions of users.”
  • City Network’s R&D, Professional Services, Education and Engineering teams
    “We run our public OpenStack based cloud in eight regions across three continents. All of our data centers are interconnected via private networks. In addition to our public cloud, we provide a pan-European cloud for verticals where regulatory compliance is paramount (e.g. banking and financial services, government, healthcare) addressing all regulatory challenges. Over 2,000 users of our infrastructure-as-a-service solutions run over 25,000 cores in production.”
  • Cloud&Heat
    “Cloud&Heat developed their own server hardware, water-cooled and operated by OpenStack. The heat of the water is used for energy optimization in households, offices and commercial enterprises, making for a huge use case of how OpenStack can save the planet.” Cloud&Heat was nominated by a third party and did not submit a complete application. Judges will take their partial application into account when evaluating the nominees. More on how they operate here.
  • Linaro
    “Thanks to OpenStack and Ceph we have been able to share hardware with different open source teams and vendors that are trying to make their software multi-architecture aware…In addition, we are changing the culture of cross compiling for Arm architecture, assisting anyone that has to cross compile their Arm binaries to build natively.”
  • ScaleUp Technologies
    “We currently have three production OpenStack installations in both Hamburg and Berlin, Germany. Together with another hosting partner we are currently building a third OpenStack cloud in Dusseldorf, Germany. These infrastructures will soon be connected via a dedicated 10-gigabit backbone ring. Earlier this year, we started working on some edge-related activities and are now building OpenStack-based edge clouds on hyper-converged hardware for customers.”

Cast your vote here! The deadline is Sunday, October 21 at 11:59 p.m. Pacific Standard Time.

Previous winners include AT&T, CERN, China Mobile, Comcast, NTT Group, the Ontario Institute for Cancer Research and the Tencent TStack Team.

The Berlin Summit Superuser Awards are sponsored by Zenko, the open source multi-cloud data controller.

The post Vote now for the Berlin Summit Superuser Award appeared first on Superuser.

by Superuser at October 16, 2018 05:43 PM

Berlin Superuser Awards Nominee: Adform, CloudServices Team

It’s time for the community to help determine the winner of the OpenStack Berlin Summit Superuser Awards, sponsored by Zenko. Based on the community voting, the Superuser Editorial Advisory Board will review the nominees and determine the finalists and overall winner.

Now, it’s your turn.

Adform, CloudServices Team is one of  five nominees for the Superuser Awards. Review the nomination criteria below, check out the other nominees and rate the nominees before the deadline October 21 at 11:59 p.m. Pacific Standard Time.

Cast your vote here!

Who is the nominee?

Adform, CloudServices Team

Team members: Edgaras Apšega, Andrius Cibulskis, Donatas Fetingis, Dovilas Kazlauskas, Jonas Nemeikšis, Arvydas Opulskis, Tadas Sutkaitis, Matas Tvarijonas.

How has open infrastructure transformed your business? 

OpenStack, together with other open source solutions, gave us a unified user experience across all tiers (dev, staging, production). It allows users to use the same images and tools for faster and more automated development, deployment and testing. HA increased significantly (availability zones, regions, storage cluster) and self-service opportunities arose as well. Scaling became fast and easy and, thanks to projects, resource control is more transparent and security is increased.

At the beginning, we had several hundred manually deployed VMs in different platforms. Now, we have over 4,500 VMs running in seven regions all over the world with HA storage clusters, unified images and flexible self-service for our clients.

How has the organization participated in or contributed to an open infrastructure community? 

We are participating in our local IT community events by sharing our discoveries, experience and infrastructure design. We gave a few talks about our infrastructure at a few conferences. We reported some bugs and participated in mailing list communications as well.

What open-source technologies does the organization use in its IT environment?

Besides OpenStack, our organization uses technologies like CentOS, Kubernetes, Ceph, Prometheus, Zabbix, Grafana, SaltStack, Puppet, Ansible, Nginx, HAProxy, Foreman, Jenkins, ELK and much more.

What’s the scale of the OpenStack deployment? 

We have three OpenStack deployments for different tiers in seven regions all over the world. In total there are over 4,500 VMs on over 200 hosts. It’s used by several hundred company developers to provide service to millions of users.

What kind of operational challenges have you overcome during your experience with open infrastructure? 

Because we started with vanilla OpenStack, we had to build our own configuration management and deployment mechanism on SaltStack. We had to design and implement HA, self-service tools for our users, and metric and alert collection (from infrastructure hosts, compute nodes and the VMs themselves). Old VMs were on different platforms, so we needed a migration tool to move those VMs to OpenStack.

How is this team innovating with open infrastructure? 

The team provides a scalable private cloud as a service, running in seven data centers on three continents. In our infrastructure we use OpenStack, Kubernetes, Prometheus, Consul, Terraform, Nginx, Jenkins, Ceph and other solutions.

We also mix technologies: for example, some Kubernetes clusters run on OpenStack, while the OpenStack metrics exporters run on a separate bare-metal Kubernetes cluster.

In addition to that, we run a multi-regional load balancer cluster which handles up to 800,000 ops.

How many Certified OpenStack Administrators (COAs) are on your team?

None yet.

Voting is limited to one ballot per person and closes October 21 at 11:59 p.m. Pacific Standard Time.

The post Berlin Superuser Awards Nominee: Adform, CloudServices Team appeared first on Superuser.

by Superuser at October 16, 2018 05:30 PM

Berlin Superuser Awards Nominee: City Network

It’s time for the community to help determine the winner of the OpenStack Berlin Summit Superuser Awards, sponsored by Zenko. Based on the community voting, the Superuser Editorial Advisory Board will review the nominees and determine the finalists and overall winner.

Now, it’s your turn.

City Network’s research and development, professional services, education and engineering teams are one of five nominees for the Superuser Awards. Review the nomination criteria below, check out the other nominees and rate the nominees before the deadline October 21 at 11:59 p.m. Pacific Standard Time.

Cast your vote here!

Who is the nominee?

The team: Marcus Murwall, Tobias Rydberg, Magnus Bergman, Tobias Johansson, Alexander Roos, Johan Hedberg, Joakim Olsson, Emil Sundstedt, Erik Johansson, Joel Svensson, Ioannis Karamperis, Johan Hedberg, Namrata Sitlani, Christoffer Carlberg, Daniel Öhberg, Florian Haas, Adolfo Brandes, Priscila Prado.

How has open infrastructure transformed your business? 

With emphasis on regulatory compliance and data protection, we are a European leader, promoter and enabler of OpenStack-based compliant cloud solutions. These solutions are tailored for regulatory challenged industries such as banks, insurance companies, healthcare and governments. The pace of innovation within these industries has always been dictated by the heavy demand for control, data protection, auditability and other factors specific to the nature of the information they care for. With our OpenStack-based Compliant Cloud solutions, we prove that some of the largest banks, insurance companies and digital identity management companies in the world can increase their pace of innovation while still being regulatory compliant.

How has the organization participated in or contributed to an open infrastructure community? 

  • Our CEO is a member of the OpenStack Foundation Board
  • City Network initiated OpenStack Days Nordic and has run it three years in a row. We are also involved in OpenStack Days Israel and India and attend multiple OpenStack Days events across the globe.
  • We have participated in every Summit for the past six years and in the PTGs, and we contribute to the public cloud working group and the security project.
  • We provide OpenStack training and strive to bridge the OpenStack and Open edX communities through mutual collaboration with the common ambition of providing quality education to everybody with access to a browser and an internet connection.
  • Members of our team have contributed code since 2012, focused on training, documentation, code contribution and bug fixes to various Official projects.

What open-source technologies does the organization use in its IT environment?

We are very pro open source and use it wherever it's a viable option.

A selection of the open-source technologies we currently use: CentOS, OpenBSD, Ubuntu, Nginx, Apache, PHP, Python, Ansible, MySQL, MariaDB, MongoDB, Ceph and Open edX.

What’s the scale of the OpenStack deployment? 

We run our public OpenStack-based cloud in eight regions across three continents. All of our data centers are interconnected via private networks. In addition to our public cloud, we provide a pan-European cloud for verticals where regulatory compliance is paramount (e.g. banking and financial services, government, healthcare), addressing all regulatory challenges. Over 2,000 users of our infrastructure-as-a-service solutions run over 25,000 cores in production.

What kind of operational challenges have you overcome during your experience with open infrastructure? 

Since we began running OpenStack as a public IaaS, there have been a lot of hurdles to overcome, as OpenStack is not yet fully adapted for public clouds. We had to build our own APIs to get network connectivity across several sites to work, and we had to add features such as volume copy and the ability to move volumes between sites. We have also had our fair share of issues upgrading to new OpenStack versions, though we feel this process has been getting better with each upgrade.

We're also embarking on a huge OpenStack-Ansible (OSA) conversion this fall, which we believe will make things much easier moving forward.

How is this team innovating with open infrastructure? 

Our true innovation lies in the fact that we have managed to build a global, open-source-based cloud solution fit for regulatory-challenged enterprises. These are enterprises that haven't really been able to utilize the true potential of cloud computing until they met us.

An enterprise building their own private cloud because they don’t have a choice is one thing. But unless running a cloud is part of their core business, they are not fully focused on their mission.

A regulatory-challenged enterprise that is able to utilize cloud computing on a pay-as-you-go model through a vendor, just like any other organization, is a whole other ball game. That organization can focus 100 percent on its core business and stand a fighting chance in our era of digitization.

How many Certified OpenStack Administrators (COAs) are on your team?

None.

Voting is limited to one ballot per person and closes October 21 at 11:59 p.m. Pacific Standard Time.


The post Berlin Superuser Awards Nominee: City Network appeared first on Superuser.

by Superuser at October 16, 2018 05:30 PM

Berlin Superuser Awards Nominee: Linaro Datacenter and Cloud Group (LDCG)

It’s time for the community to help determine the winner of the OpenStack Berlin Summit Superuser Awards, sponsored by Zenko. Based on the community voting, the Superuser Editorial Advisory Board will review the nominees and determine the finalists and overall winner.

Now, it’s your turn.

Linaro Datacenter and Cloud Group (LDCG) is one of five nominees for the Superuser Awards. Review the nomination criteria below, check out the other nominees and rate the nominees before the deadline October 21 at 11:59 p.m. Pacific Standard Time.

Cast your vote here!

Who is the nominee?

The Linaro Datacenter and Cloud group (LDCG) team involved in cloud enablement:

  • Software-defined infrastructure (userspace layers and OpenStack): Marcin Juszkiewicz (RedHat), Xinliang Liu (HiSilicon), Tone Zhang (Arm), Kevin Zhao (Arm), Eugene Xie (Arm), Herbert Nan (HXT), Masahisa Kojima (Socionext), Gema Gomez (Linaro)
  • Linaro developer cloud (DevOps, production cloud): Jorge Niedbalski (Linaro)
  • Server architecture (firmware, kernel, HW enablement): Graeme Gregory (Linaro), Radoslaw Biernacki (Cavium), Ard Biesheuvel (Linaro), Leif Lindholm (Linaro)
  • LDCG director: Martin Stadler (Linaro)

How has open infrastructure transformed your business? 

Open infrastructure is helping enable new architectures and entire ecosystems in our case. Thanks to OpenStack and Ceph, we've been able to share hardware with different open-source teams and vendors who are trying to make their software multi-architecture aware. OpenStack is also helping with CI/CD across the industry. In addition, we're changing the culture of cross-compiling for the Arm architecture, helping anyone who has to cross-compile their Arm binaries to build natively instead. Metrics about our patches are available here: http://patches.linaro.org/team/team-leg/?months=12

Without the Linaro Developer Cloud, many open-source projects that are using the cloud for their build and test capabilities wouldn’t be producing AArch64 binaries.

How has the organization participated in or contributed to an open infrastructure community? 

Linaro has participated in OpenStack events and has been contributing changes to make OpenStack architecture-aware, ensuring equivalent behavior on different architectures. As a public cloud operator, we contribute cloud resources to many open-source projects, including openstack-infra.

Over the past several years we have had a presence at PTG events, been part of upstream projects and contributed CI/gate testing when none was available for Arm64. We've also been approaching and working with different upstream projects to help them build their own Arm64 binaries without needing our help going forward. Enablement, for us, means giving projects the tools and the access to hardware they need to become truly multi-architecture aware without relying on us.

What open-source technologies does the organization use in its IT environment?

Linaro uses and contributes to the Linux kernel, Debian and Debian derivatives, libvirt, QEMU, Ceph, OpenStack, OpenBMC, CCIX, TianoCore, OpenHPC, big data projects, container technologies, Kubernetes and any other project required in between to enable all of these technologies.

The big data and data science team has donated two Arm-based developer cloud nodes to the Apache Bigtop project for its CI/CD, producing Arm-based deb and rpm packages and Docker images. Apache Bigtop is a project for the development of packaging and tests of the Apache Hadoop ecosystem. OpenStack instances are also used to test the portability of big data and data science projects (Apache Hadoop, Spark, HBase, Hive, ZooKeeper, Cassandra, Elasticsearch, Arrow, etc.) to Arm, and to benchmark and optimize them. The big data projects are tested on top of Docker containers, all running on OpenStack.

What’s the scale of the OpenStack deployment? 

We operate three OpenStack clouds in three different geographic locations (UK, US, China) with ~100 Arm64 hosts using CPUs from a variety of Linaro members (Cavium, HiSilicon, Qualcomm and others), adding up to close to 1,600 CPU cores and five terabytes of memory in total. The service is used by around 70 developers working for a range of external organizations on a variety of work across regions, with a varied set of CI/CD pipelines. They are mostly working on open-source projects.

What kind of operational challenges have you overcome during your experience with open infrastructure? 

Our main challenge with OpenStack was that we needed OpenStack working in order to test it. We have learned about and fixed issues in libvirt, kernel drivers and firmware, and we have been observing and fixing problems related to the VM life cycle on Arm64. No amount of CI/CD is equivalent to running VMs in production with users who rely on them for their operations and CI, so operating a cloud is also giving us great insight into the problems lurking around long-standing VMs and systems.

We have also learned about upgrades with Kolla-Ansible and are able to upgrade from a previously existing setup (fast-forward upgrade) to the latest release without issues.

Rocky is our first truly interoperable release (2018.02 guidelines).

How is this team innovating with open infrastructure? 

We are enabling as many data center related technologies as we can to work smoothly on Arm64 and be architecture aware.

How many Certified OpenStack Administrators (COAs) are on your team?

None.

Voting is limited to one ballot per person and closes October 21 at 11:59 p.m. Pacific Standard Time.


The post Berlin Superuser Awards Nominee: Linaro Datacenter and Cloud Group (LDCG) appeared first on Superuser.

by Superuser at October 16, 2018 05:29 PM

Berlin Superuser Awards Nominee: ScaleUp Technologies

It’s time for the community to help determine the winner of the OpenStack Berlin Summit Superuser Awards, sponsored by Zenko. Based on the community voting, the Superuser Editorial Advisory Board will review the nominees and determine the finalists and overall winner.

Now, it’s your turn.

ScaleUp Technologies is one of five nominees for the Superuser Awards. Review the nomination criteria below, check out the other nominees and rate the nominees before the deadline October 21 at 11:59 p.m. Pacific Standard Time.

Cast your vote here!

Who is the nominee?

ScaleUp Technologies

Team members: Frank Gemein, Christoph Streit, Oliver Klippel, Gihan Behrmann, Julia Streit.

How has open infrastructure transformed your business? 

As ScaleUp is a hosting provider, we started offering a cloud hosting solution back in 2009. After issues with the cloud platform technology that we used back then (regarding licensing, etc.), we began using OpenStack for our cloud services in 2014.

This experience has shown us that it's better for us to rely on an open-source project such as OpenStack, with its very vibrant community, than on a proprietary solution.

How has the organization participated in or contributed to an open infrastructure community? 

We have been very interested in giving back to the community pretty much from the beginning, as we have ourselves received a lot of help and support from the OpenStack community. As we're a rather small company and team, we don't have the resources to contribute code to OpenStack, but we have talked about our learnings and experiences running OpenStack on several occasions (OpenStack Summits in Austin and Boston, local OpenStack conferences, etc.).

We have been running the OpenStack meetup group in Hamburg since 2017, and since 2018 we've also been running the meetup group in Berlin. In addition, we have started offering OpenStack workshops teaching the fundamentals of OpenStack (free of charge to local meetup groups); we held several of these workshops in multiple cities in 2017.

What open-source technologies does the organization use in its IT environment?

We use a lot of open-source tools: Linux (mostly Ubuntu and Debian); Check_MK (Nagios) and Elasticsearch with Kibana for monitoring and analysis; and tools like Postfix and Mattermost for communications.

What’s the scale of the OpenStack deployment? 

We currently run production OpenStack installations in Hamburg and Berlin, Germany, and together with another hosting partner we're building a third OpenStack cloud in Dusseldorf, Germany. These infrastructures will soon be connected via a dedicated 10-gigabit backbone ring.

Earlier this year we started working on some edge-related activities and are now building OpenStack-based edge clouds on hyper-converged hardware for customers.

What kind of operational challenges have you overcome during your experience with open infrastructure? 

We mostly struggle with having only a small team, so we have worked on ways to involve our other system administrators who aren't working on OpenStack yet by breaking down tasks. Since OpenStack builds on many well-known open-source/Linux tools, many issues can be resolved by tackling the underlying problems (database problems, problems with libvirt/KVM, etc.). We gave a presentation about this way of working at one of the recent OpenStack Summits.

How is this team innovating with open infrastructure? 

Since getting involved with OpenStack, we've also started working on other new technologies. For example, we are currently working on a managed Kubernetes platform running on top of OpenStack.

How many Certified OpenStack Administrators (COAs) are on your team?

None.

Voting is limited to one ballot per person and closes October 21 at 11:59 p.m. Pacific Standard Time.


The post Berlin Superuser Awards Nominee: ScaleUp Technologies appeared first on Superuser.

by Superuser at October 16, 2018 05:29 PM

Pablo Iranzo Gómez

Contributing to OSP upstream a.k.a. Peer Review

Table of contents

  1. Introduction
  2. Upstream workflow
    1. Peer review
    2. CI tests (Verified +1)
    3. Code Review+2
    4. Workflow+1
    5. Cannot merge, please rebase
  3. How do we do it with Citellus?

Introduction

In the article "Contributing to OpenStack" we covered how to prepare your accounts and get your changes ready for upstream submission (and even how to find low-hanging fruit to start contributing with).

Here, we'll cover what happens behind the scenes to get a change published.

Upstream workflow

Peer review

Upstream contributions to OSP and other projects are based on peer review: once a new set of code has been submitted, several validation steps must happen before it gets merged.

The last command executed (git-review) in the submit sequence (from the prior article) effectively submits the patch to the defined Git review service (git-review -s does the required setup) and prints a URL that can be used to access the review.
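In practice, the submit sequence boils down to a couple of commands; the output below is illustrative, reusing the review URL from the example later in this article:

git-review -s    # one-time setup: configures the gerrit remote from .gitreview
git-review       # submits the current change for review
# remote: New Changes:
# remote:   https://review.gerrithub.io/380646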

Each project might use a different review platform: for OSP it's usually https://review.openstack.org, while for other projects it can be https://gerrit.ovirt.org, https://gerrithub.io, etc. (this is defined in the .gitreview file in the repository).

A sample .gitreview file looks like:

[gerrit]
host=review.gerrithub.io
port=29418
project=citellusorg/citellus.git

For a review example, we'll use one from GerritHub for the Citellus project:

https://review.gerrithub.io/#/c/380646/

Here we can see that we're on review 380646, and that's the link that lets us check the submitted changes (the one printed when executing git-review).

CI tests (Verified +1)

Once a review has been submitted, the bots are usually the first to pick it up and run the defined unit tests against the new changes, to ensure nothing breaks (based on what is defined to be tested).

This is a critical point as:

  • Tests need to be defined when new code is added or modified, to ensure that later updates don't break it without anyone being aware.
  • The infrastructure should be able to test it (for example, you might need specific hardware to test a card or a network configuration).
  • The environment should be sane, so that prior runs don't affect the validation.

OSP CI can be checked at Zuul (http://zuul.openstack.org/), where you can enter the number of your review and see how the different bots are running CI tests on it, or whether it's still queued.

If everything is OK, the bot will vote Verified +1 on your change, letting others see that, based on the tests performed, it should not break anything.

In the case of OSP, there are also third-party CIs that validate the change against external systems. For some of them the votes count towards or against the proposed change; for others it's just a comment to take into account.

Sometimes you know your code is right but there's a failure caused by the infrastructure; in those cases, writing a new comment saying recheck will schedule a new CI test run.

This usually happens during busy periods, when it's harder for the scheduler to get available resources for the review validation. Sometimes there are also errors in the CI configuration that must be fixed before those changes can be validated.

Note: if you hit issues, you can validate faster by running some of the tests on your own system with tox. It sets up virtual environments for the tests to run in, so it's easier to catch issues before upstream CI does (which is why it's always a good idea to run tox even before submitting the review with git-review, to detect errors early).
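As a quick sketch, a typical local run (assuming tox is installed, for example via pip) looks like this:

pip install --user tox   # only needed if tox isn't already available
tox -e pep8              # run a single environment, here the style checks
tox                      # run every environment defined in tox.ini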

This is, however, not always possible, as some changes involve requirements like testing upgrades or full environment deployments that cannot be done without the required preparation steps, or even the infrastructure itself.

Code Review+2

This is probably the 'longest' step. It requires peers to be added as reviewers (you can get an idea of the right names from other reviews submitted for the same component), or they will pick up new reviews as they pop up on notification channels or in pending queues.

Here, you must be mentally prepared for anything... developers could suggest a different approach, highlight other problems, or just leave small nit comments on things like formatting, spacing, variable naming, etc.

After each suggested comment or change, repeat the workflow to submit a new patchset, making sure you use the same review ID (by keeping the Change-Id line that is appended to the commit message). This lets the code review platform identify the change as an update to a prior one, allows you to compare changes across versions, and notifies the prior reviewers of new changes.
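In practice, that means amending the existing commit rather than creating a new one. A minimal sketch (the file name here is hypothetical):

git add path/to/reviewed_file   # stage the fixes requested by reviewers
git commit --amend              # amend the commit, keeping the Change-Id: line intact
git-review                      # upload a new patchset to the same review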

Once reviewers are OK with your code, and with some 'core' developers also agreeing, you'll see voting happen (-2..+2), indicating whether or not they like the change in its current form.

Once you get Code Review +2, together with the prior Verified +1, you're almost ready to get the change merged.

Workflow+1

OK, the last step is to have someone with Workflow permissions give a +1. This 'seals' the change, confirming that everything is OK (as it had CR +2 and Verified +1) and the change is valid...

This vote triggers another CI build and, when it finishes, the change is merged into the upstream code. Congratulations!

Cannot merge, please rebase

Sometimes your change touches the same files that other contributors have modified, so there's no way to automatically rebase it. In this case, the bad news is that you need to:

git checkout master       # change to the master branch
git pull                  # pull the latest upstream changes
git checkout yourbranch   # get back to your change branch
git rebase master         # apply your changes on top of the current master

After this step, you may need to manually fix the code to resolve the conflicts, following the instructions git gives you to mark them as resolved.
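The conflict-resolution steps git asks for look roughly like this:

git status                        # list the conflicted files
# edit each conflicted file to resolve the <<<<<<< / >>>>>>> markers, then:
git add path/to/conflicted_file   # mark the conflict as resolved
git rebase --continue             # resume the rebase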

Once that's done, remember to do as with any patchset you submit afterwards:

git commit --amend # commit the new changes, keeping the same Change-Id you used
git-review # upload a new version of the patchset

This will start the process over, but once completed, it will get the change merged.

How do we do it with Citellus?

In Citellus we've replicated, more or less, what we have upstream... even the use of tox.

Citellus uses https://gerrithub.io (a free service that hooks into GitHub and provides Gerrit-style code review).

We've set up a machine running Jenkins to do 'CI' on the tests we've defined (mostly for the Python wrapper plus some other tests); what it effectively does is run tox. We also use the https://travis-ci.org free tier to repeat the same on another platform.

Tox is a tool that lets you define several commands that are executed inside Python virtual environments; without touching your system libraries, new packages can be installed or removed just within the boundaries of that test. It helps run:

  • pep8 (Python formatting compliance)
  • py27 (python 2.7 environment test)
  • py35 (python 3.5 environment test)

The py tests simply validate that the code can run on both base Python versions; what they do is run the defined unit-testing scripts under each interpreter.
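For illustration, a minimal tox.ini covering these three environments could look like the following (a sketch only; the paths and dependencies are assumptions, not Citellus' actual configuration):

# sketch only: test paths and dependency files are assumptions
[tox]
envlist = pep8,py27,py35

[testenv]
deps = -r{toxinidir}/test-requirements.txt
commands = python -m unittest discover tests

[testenv:pep8]
deps = flake8
commands = flake8 .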

For local testing, you can run tox and it will go through the different tests defined and report their status... if everything is OK, your new code review should also pass CI.

Jenkins will give the +1 on Verified, and 'core' reviewers will give +2 and merge the change once validated.

Hope you enjoy!

Pablo

by Pablo Iranzo Gómez at October 16, 2018 05:32 AM

October 15, 2018

OpenStack Superuser

Meet the newest members of the Superuser Editorial Advisory Board

We're excited to announce three new members of our board. They'll be joining the other five members to weigh in on this edition of the Superuser Awards, as well as to contribute ideas and shape editorial content.

Here’s a bit more about the new members, in alphabetical order:

Mark Korondi

“When I'm consulting with clients, I recommend FOSS solutions for all their needs when possible. And building datacenter infrastructure is possible with OpenStack…I am keen to work with them on these kinds of projects and integrations into their current systems. I especially like the concept of the open, standardized platform APIs, which makes their architecture future-proof and removes a lot of vendor lock-in. As an OpenStack enthusiast, I attend the local meetup groups and even organize quite a large event, the OpenStack CEE Days in Budapest. I take the opportunity to explain and help people understand why the emphasis is on the _open_ infrastructure.” He's also been involved with OpenStack Swift and taught at the OpenStack Upstream Institute.
Linkedin profile
Twitter: kmarc

Trinh Nguyen

“For a very long time in Vietnam, the big companies and the government have controlled the Internet infrastructure because of their resources and capital. That stops many promising individuals and small companies from growing and offering innovative services that could make the world better. Open infrastructure will open up a lot of opportunities for underprivileged people in Vietnam and other countries. As a technologist, I see it as my responsibility to help push the open infrastructure movement forward and change the world.” He's worked on Tacker, Freezer, Fenix, Searchlight and Kolla and is currently the project team lead for Searchlight.
His website: http://me.dangtrinh.com

Ben Silverman

“My interests vary from open hardware architecture to open infrastructure platforms (OpenStack mostly) to open NFV and telco-specific operating environments. I am hands-on with many of these platforms, since I lead a solution and sales engineering group inside my company, and by night I'm frequently busy in my own labs…I've been involved with OpenStack for the past five years and have contributed heavily to the OpenStack Foundation's documentation. Most recently I was tasked with re-writing most of the content in the OpenStack Architecture Guide… I am also active with the OpenStack Edge and Telco and Use Case special interest groups and am involved directly and indirectly with the architecture discussions that bridge OpenStack edge concepts with CORD and OPNFV.”
LinkedIn profile
Twitter: @bensilverm
Books “OpenStack for Architects,” “OpenStack: Design and Implement Cloud Infrastructure”


We are always interested in content about open infrastructure – get in touch: editorATopenstack.org

The post Meet the newest members of the Superuser Editorial Advisory Board appeared first on Superuser.

by Nicole Martinelli at October 15, 2018 02:03 PM

Trinh Nguyen

Searchlight weekly report - Stein R-26

It's been a busy week, so not much work has been done (Stein R-26, Oct. 8-12). Here is some news from the community this week that somewhat affects the Searchlight project:
  • Assigning new liaisons to projects: basically, the TC will assign who takes care of which project. That person will update the project's statuses, important activities, etc. on this page.
  • Proposed changes for library releases: in short, for each cycle-with-intermediary library deliverable, if it was not released during that milestone timeframe, the release team would automatically generate a release request early in the week of the milestone deadline.
  • Proposed changes for cycle-with-milestones deliverables: to summarize the discussion:
    • Teams would no longer be required to request a release for each milestone
    • Beta releases would be optional
    • Release candidates would still require a tag, needing a "+1" from the PTL or release liaison
    • Each team would add a person's name to a "manifest" of sorts for the release cycle
    • The cycle-with-milestones release model would be renamed to something like cycle-with-rc

This week we will continue working on these tasks:

1. Complete these stories:

by Trinh Nguyen (noreply@blogger.com) at October 15, 2018 06:47 AM

October 12, 2018

OpenStack Superuser

Kayobe and Rundeck: Operational hygiene for infrastructure as code

Rundeck is an infrastructure automation tool aimed at simplifying and streamlining operational processes when it comes to performing a particular task, or 'job'. That sounds pretty grand, but basically what it boils down to is being able to click a button on a web page or hit a simple API in order to drive a complex task. For example, something that would otherwise involve SSH'ing into a server, setting up an environment and then running a command with a specific set of options and parameters which, if you get them wrong, can have catastrophic consequences.

This can be the case with a tool as powerful and all-encompassing as Kayobe. The flexibility and agility of the CLI is wonderful when first configuring an environment, but what about when it comes to day-two operations and business as usual (BAU)? How do you ensure that your cloud operators are following the right process when reconfiguring a service? Perhaps you introduced 'run books', but how do you ensure a rigorous degree of consistency in this process? And how do you glue it together with some additional automation? So many questions!

Of course, when you can’t answer any or all of these questions, it’s difficult to maintain a semblance of ‘operational hygiene’. Not having a good handle on whether or not a change is live in an environment, how it’s been propagated, or by whom, can leave infrastructure operators in a difficult position. This is especially true when it’s a service delivered on a platform as diverse as OpenStack.

Fortunately, there are applications which can help with solving some of these problems – and Rundeck is precisely one of those.

Integrating Kayobe

Kayobe has a rich set of features and options, but in practice – especially in BAU – often only a subset of these options and their associated parameters is required. For our purposes at StackHPC, we've mostly found those to be confined to:

  • Deployment and upgrade of Kayobe and an associated configuration;
  • Sync. of version controlled kayobe-config;
  • Container image refresh (pull);
  • Service deployment, (re)configuration and upgrade.

This isn't an exhaustive list, but these have been the most commonly run jobs with a standard set of options, i.e. those targeting a particular service. A deployment will eventually end up with a 'library' of jobs in Rundeck capable of handling the majority of Kayobe's functionality, but in our case, in the early stages, we found it useful to focus on what's immediately required in practical terms, refactoring and refining as we go.

Structure and usage

Rundeck has no shortage of options when it comes to triggering jobs, including the ability to fire off Ansible playbooks directly – which in some ways makes it a poor facsimile of AWX. Rundeck's power, though, comes from its flexibility, so having considered the available options, the most obvious solution seemed to be a simple wrapper script around kayobe itself, acting as the interface between the two – managing the initialization of the working environment and passing a set of options based on the selections presented to the user.

Rundeck allows you to call jobs from other projects, so we started off by creating a library project containing common jobs that are referenced elsewhere, such as this Kayobe wrapper (sketched below). The individual jobs then take a set of options and pass them through to our script, with an action that reflects the job's name. This keeps things reasonably modular and is a nod towards DRY principles.
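To give an idea, such a wrapper could be as simple as the sketch below; the paths, environment file and action names are illustrative assumptions, not our production script:

#!/bin/bash
# kayobe-wrapper.sh ACTION [ARGS...]
# Thin interface between Rundeck jobs and the Kayobe CLI.
set -euo pipefail
KAYOBE_CONFIG_ROOT=${KAYOBE_CONFIG_ROOT:-/opt/kayobe-config}
# Initialise the Kayobe working environment (assumed env file location).
source "${KAYOBE_CONFIG_ROOT}/kayobe-env"
ACTION=${1:?"usage: $0 ACTION [ARGS...]"}; shift
case "${ACTION}" in
  sync)        git -C "${KAYOBE_CONFIG_ROOT}" pull ;;
  pull)        kayobe overcloud container image pull "$@" ;;
  reconfigure) kayobe overcloud service reconfigure "$@" ;;
  *)           echo "Unknown action: ${ACTION}" >&2; exit 1 ;;
esac

Each Rundeck job then maps one-to-one onto an action, passing any user-selected options through as additional arguments.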

The other thing to consider is the various 'roles' of operators (and I use this in the broadest sense of the term) within a team, or the different hats that people need to wear during the course of their working day. We've found that three roles have been sufficient so far: the omnipresent administrator, a role for seeding new environments and a 'read-only' role for BAU.

Finally, it's worth mentioning Rundeck's support for concurrency. It's entirely possible to kick off multiple instances of a job at the same time; however, this is something to be avoided when implementing workflows based around tools such as Kayobe.

With those building blocks in place we were then able to start to build other jobs around these on a per-project (environment) basis.

Example

Let’s run through a quick example, in which I pull in a change that’s been merged upstream on GitHub and then reconfigure a service (Horizon).

The first step is to synchronize the version-controlled configuration repository from which Kayobe will deploy our changes. There aren’t any user-configurable options for this job (the ‘root’ path is set by an administrator) so we can just go ahead and run it:

[Screenshot: triggering the synchronisation job in Rundeck]

The default here is to ‘follow execution’ with ‘log output’, which will echo the (standard) output of the job as it’s run:

Note that this step could be automated entirely with webhooks that call out to Rundeck to run the job when our pull request has been merged (with the requisite passing tests and approvals).
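For instance, a post-merge webhook receiver could boil down to a single call against Rundeck's job-run API endpoint; the host, token and job ID below are placeholders:

curl -X POST \
     -H "X-Rundeck-Auth-Token: ${RUNDECK_TOKEN}" \
     "https://rundeck.example.com/api/24/job/${JOB_ID}/run"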

With the latest configuration in place on my deployment host, I can now go ahead and run the job that will reconfigure Horizon for me:

[Screenshot: running the Horizon reconfiguration job in Rundeck]

And again, I can watch Kayobe’s progress as it’s echoed to stdout for the duration of the run:

Note that jobs can be aborted, just in case something unintended happens during the process.

Of course, no modern DevOps automation tool would be complete without some kind of Slack integration. In our #rundeck channel we get notifications from every job that’s been triggered, along with its status:

Once the service reconfiguration job has completed, our change is then live in the environment – consistency, visibility and ownership maintained throughout.

CLI

For those with an aversion to using a GUI: since Rundeck has a comprehensive API, you'll be happy to learn that there's a CLI tool you can use to interact with it and do all of the above from the comfort of your favourite terminal emulator. Taking the synchronisation job as an example:

[stack@dev-director nick]$ rd jobs list | grep -i sync
2d917313-7d4b-4a4e-8c8f-2096a4a1d6a3 Kayobe/Configuration/Synchronise

[stack@dev-director nick]$ rd run -j Kayobe/Configuration/Synchronise -f
# Found matching job: 2d917313-7d4b-4a4e-8c8f-2096a4a1d6a3 Kayobe/Configuration/Synchronise
# Execution started: [145] 2d917313-7d4b-4a4e-8c8f-2096a4a1d6a3 Kayobe/Configuration/Synchronise <http://10.60.210.1:4440/project/AlaSKA/execution/show/145>
Already on 'alaska-alt-1'
Already up-to-date.

Conclusions and next steps

Even with just a relatively basic operational subset of Kayobe's features exposed via Rundeck, we've already added a great deal of value to the process of managing OpenStack infrastructure as code. Leveraging Rundeck gives us a central point of focus for how change, no matter how small, is delivered into an environment. This provides immediate answers to those difficult questions posed earlier, such as when a change was made and by whom, all while streamlining the process and exposing these new operational functions via Rundeck's API, offering further opportunities for integration.

Our plan for now is to standardise – at least in principle – our approach to managing OpenStack installations via Kayobe with Rundeck. Although it has already proved useful, further development and testing are required to refine the workflow and expand its scope to cover operational outliers. On the subject of visibility, the next thing on the list for us to integrate is ARA.

If you fancy giving Rundeck a go, getting started is surprisingly easy thanks to the official Docker images as well as some configuration examples. There's also this repository, which comprises some of our own customisations, including a minor fix for the integration with Ansible.

Kick things off via docker-compose and in a minute or two you’ll have a couple of containers, one for Rundeck itself and one for MariaDB:

nick@bluetip:~/src/riab> docker-compose up -d
Starting riab_mariadb_1 ... done
Starting riab_rundeck_1 ... done
nick@bluetip:~/src/riab> docker-compose ps
     Name                  Command             State                Ports
---------------------------------------------------------------------------------------
riab_mariadb_1   docker-entrypoint.sh mysqld   Up      0.0.0.0:3306->3306/tcp
riab_rundeck_1   /opt/boot mariadb /opt/run    Up      0.0.0.0:4440->4440/tcp, 4443/tcp

Point your browser at port 4440 on the host where you've deployed these containers and, all being well, you'll be greeted with the login page.

Feel free to reach out on Twitter or via IRC (#stackhpc on Freenode) with any comments or feedback!

This post first appeared on the blog of Stack HPC.

Superuser is always interested in community content, email: editorATopenstack.org

// CC BY NC

The post Kayobe and Rundeck: Operational hygiene for infrastructure as code appeared first on Superuser.

by Superuser at October 12, 2018 02:06 PM

Opensource.com

From hype to action: Next steps for edge computing

Get an update on the status of edge computing from the OpenStack Foundation working group.

by ildikov at October 12, 2018 07:02 AM

About

Planet OpenStack is a collection of thoughts from the developers and other key players of the OpenStack projects. If you are working on OpenStack technology you should add your OpenStack blog.

Subscriptions

Last updated:
November 12, 2018 10:52 PM
All times are UTC.

Powered by:
Planet