April 18, 2019

Emilien Macchi

Day 2 operations in OpenStack TripleO (episode 1: scale-down)

Scale-up and scale-down are probably the most common operations performed after the initial deployment. Let’s look at how they are being improved. This first episode is about scale-down specifically.

How it works now

Right now, when an operator runs the “openstack overcloud node delete” command, it updates the Heat stack to remove the resources associated with the node(s) being deleted. This can be problematic for services like Nova, Neutron and the Subscription Manager, which need to be torn down before the server is deleted.

Proposal

The idea is to create an interface where we can run Ansible tasks that will be executed during scale-down, before the nodes are deleted by Heat. The Ansible tasks will live alongside the deployment / upgrade / … tasks that are in TripleO Heat Templates. Here is an example with Red Hat Subscription Management:
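A minimal sketch of what such pre-delete tasks could look like (the task name and structure here are assumptions for illustration, not the actual patch content):

```yaml
# Hypothetical scale-down task for Red Hat Subscription Management:
# unregister the node before Heat deletes it, so its subscription
# entitlement is not leaked.
- name: Unregister the node from RHSM before deletion
  command: subscription-manager unregister
  become: true
  ignore_errors: true
```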

It involves 3 changes:

What’s next?

  • Getting reviews & feedback on the 3 patches
  • Implement scale-down tasks for Neutron, Nova and Ceph, which are waiting for this feature
  • Looking at scale-up tasks

Demo

by Emilien at April 18, 2019 04:45 PM

OpenStack Superuser

Take a deep dive into Ceph block storage

In under 20 minutes, Intel’s Mahati Chamarthy offers a deep dive into Ceph’s object storage system. The object storage system allows users to mount Ceph as a thin-provisioned block device known as the RADOS Block Device (RBD). Chamarthy, a cloud software engineer who previously contributed to Swift and is an active contributor to Ceph, delves into RBD, its design and features in this talk at the recent Vault ’19 event.

Meet RBD images

Ceph is software-defined storage designed to scale horizontally. That means there’s no single point of failure, and object, block and file storage are available in one unified system. RBD is the software that facilitates the storage of block-based data in Ceph distributed storage. RBD images are thin-provisioned, resizable images that store data by striping it across multiple OSDs in a Ceph cluster. RBD offers two libraries: a userspace library, librbd, typically used in virtual machines, and a kernel module used in container and bare-metal environments.
Here’s a look at a somewhat simplified sample flow for the read/write request environment:

Features

By default, Ceph enables striping and layering for users. Other useful features include exclusive lock, object map (which keeps track of where the data resides, speeding up I/O operations as well as importing and exporting), fast-diff (an object-map property that helps generate diffs between snapshots) and deep flatten (which resolves issues with snapshots taken from cloned images).
RBD has two image formats. Additional capabilities include:

  • Mirroring (available per pool and per image; uses journaling and exclusive_lock)
  • In-memory librbd cache (other RO and RWL caching work is in progress)

Here’s a look at what gets created with an RBD image, with more details here.


Chamarthy also goes over the details of striping, how snapshots work, layering and use cases for it, and RBD with libvirt/qemu, as well as how to configure them with virtual machines. Check out the full video here.

Get involved

For more on Ceph, check out the code, join the IRC channels and mailing lists or peruse the documentation.
The Ceph community is participating at the upcoming Open Infrastructure Summit – check out the full list of sessions ranging from “Ceph and OpenStack better together” to “Storage 101: Rook and Ceph.”

The post Take a deep dive into Ceph block storage appeared first on Superuser.

by Superuser at April 18, 2019 02:03 PM

Galera Cluster by Codership

Galera Cluster with new Galera Replication library 3.26 and MySQL 5.6.43, MySQL 5.7.25 Generally Available (GA)

Codership is pleased to announce a new Generally Available (GA) release of Galera Cluster for MySQL 5.6 and 5.7, consisting of MySQL-wsrep 5.6.43 and MySQL-wsrep 5.7.25 with a new Galera Replication library 3.26 (release notes, download), implementing wsrep API version 25. This release incorporates all changes to MySQL 5.6.43 (release notes, download) and 5.7.25 respectively (release notes, download).

The Galera Replication library, compared to the previous 3.25 release, has a few new features and enhancements: GCache page store fixes around an early release of the GCache page, a check for duplicate UUIDs to prevent a node from joining with the same UUID, and improvements to the internal handling of IPv6 addresses. From a compilation standpoint, dynamic symbol dispatch was disabled in libgalera_smm.so to avoid symbol conflicts during dynamic loading.

MySQL 5.6.43 with Galera Replication library 3.26 is an updated rebase, with a few notes: on Ubuntu 16.04 Xenial, note that the server cannot be bootstrapped with systemd and must rely on the SysV init scripts. However normal server operations will work with systemd. On Debian 9 (Stretch), the service command cannot be used to start the server.

MySQL 5.7.25 with Galera Replication library 3.26 is an updated rebase, with additional packages added for openSUSE Leap 15 and Ubuntu 18.04 LTS (Bionic Beaver). Similarly to the 5.6.43 release, the service command cannot be used to start the server on Debian 9 (Stretch). Note that performing a State Snapshot Transfer (SST) between a MySQL 5.6 node and a MySQL 5.7 node is not supported. It is also worth noting that InnoDB tablespaces outside the data directory (typically /var/lib/mysql) are not supported, as they may not be copied over during an SST operation.

You can get the latest release of Galera Cluster from http://www.galeracluster.com. There are package repositories for Debian, Ubuntu, CentOS, RHEL, OpenSUSE and SLES. The latest versions are also available via the FreeBSD Ports Collection.

by Colin Charles at April 18, 2019 09:36 AM

Aptira

Software Interlude. Part 4 – What is Software Development?

Aptira Software Interlude - What is Software Development?

In our last post, we discussed the different perspectives of software and how it affects the approach to Open Networking solutions by different stakeholder groups. In this Part 4 of the Open Networking Software Interlude, we look at the software development process – but what is software development? 

What is Software Development? 

It seems that when people talk about “software development,” they are mostly thinking about the process of creating source code (editing text files), and then compiling, debugging and building these text files to produce the final executable software components. This view is so strong that some software teams use it as a trope. 


That’s a common view if the person has at least some level of understanding of the software process. If not, there can be a view that developers just “do stuff” and software happens.

But is that what programmers do?

What do Software Developers do? 

Software developers do a lot of things when they are working, e.g. they read documents, they talk to target end-users, they talk to each other and have meetings, they review user stories, they have coffee and go to the gym. And sometimes they just sit at their computers, or just sit. And sometimes they write code. 

Unless you’re already experienced in software development projects, much of what developers do may not actually look like working. 

But, unlike many other professions, all these activities can be taking place at the same time as actual software development is taking place. Some analogous professions (such as writing and music) have similar characteristics but they are few. How can that be? 

Also, there is one very significant distinction from all other analogous professions: none of them have to take the results of these mentally intensive processes and feed them into an unforgiving, rule-based technical realisation process, i.e. a software build that produces executable code, before the results of this work can be captured. There is no sub-editor as strict and ruthless as a compiler. 

No, Really, what do Developers do? 

One important and useful idea of software development was described by Peter Naur as “theory building”, i.e. developers are building a theory of the world that generates the problems that need solving.

…it is concluded that the proper, primary aim of programming is, not to produce programs, but to have the programmers build theories of the manner in which the problems at hand are solved by program execution.

-- Peter Naur, Danish computer pioneer

I would argue that developers are creating multiple “theories” concurrently: one of the current “real world” problem domain, another of the real-world solution domain, and a third model of how the software itself should be designed. 

Researchers have noted that this idea of building a “theory” is essentially creating a “mental model” of the problem to be solved and potential solutions. Jorge Aranda wrote a PhD thesis about mental models in software development teams: 

Naur’s programmer’s theories are essentially mental models … the overarching goal of a software development organization is to build those models (or theories) during the life of the project.

-- Jorge Aranda, https://catenary.wordpress.com/2011/04/19/naurs-programming-as-theory-building/

A “mental model” is a cognitive construct that describes the way in which someone thinks about part of the real world in which they exist and operate.  Like other models it represents a subset of the real object(s).  “Mental models” govern behaviour, reasoning and decision-making, amongst other aspects of human thinking. 

Thus developers need to build their own internal mental model of the domain being automated; and, working in teams, those mental models need to be closely aligned and consistent with the mental models of other developers and other key stakeholders in the development process, i.e. building “shared mental models”. 

The better formed and more detailed the developer’s mental model is, the more likely the software is to perform its job correctly. This relates directly to the quality of the information available to the developers. 

What does this mean? 

This “theory building” or “model building” concept of software development is very powerful: it informs many of the agile software development principles that are predicated on both direct access to business representatives and constant dialog between them and the developers. It explains many of the reasons why earlier approaches to software development (which were dependent on document specifications as input to the software development process) have been so problematic. 

We have described two very different, and largely incompatible perspectives on the process of software development. On the one hand a focus on the external, physical tasks of writing and producing code and working (essentially) from other stakeholder’s analyses of the problem domain.  

On the other hand we posit a highly internalised cognitively intensive view of software development as “theory building” which also requires direct access on the part of the developers to the actual problem domain (people and processes). 

This difference in development perspectives and the understanding of software development dynamics are tremendously important to the development of Open Networking solutions. 

We’ll describe the implications of that problem in future posts. 

Stay tuned. 

Become more agile.
Get a tailored solution built just for you.

Find Out More

The post Software Interlude. Part 4 – What is Software Development? appeared first on Aptira.

by Adam Russell at April 18, 2019 06:48 AM

April 17, 2019

OpenStack Superuser

Testing cloud-native applications

Testing cloud-native micro-services is critical for creating robust, reliable and scalable applications. It is crucial to have an automated CI pipeline that tests the main branch of the application and creates a checkpoint for pull requests from other branches. There are well-defined and established levels of testing in the industry; however, in this tutorial we’ll dive into testing for cloud-native micro-services.

Before getting into the levels of testing, let’s consider creating a sample API. This API will be created with the perspective of organizing, retrieving and analyzing information on books in a library. A sample book-server was designed with cloud-native micro-service architecture in mind and will be used for defining different levels of testing in this section.

Note: The source code of the book-server API can be found here: https://gitlab.com/TrainingByPackt/book-server.

Keeping the micro-service architecture in perspective, we need to design and implement self-sufficient services for accomplishing business needs. To that end, we’ll create a micro-service that works as a REST API to interact with the book information. We do not need to consider how book information is stored in a database, since it should be a separate service in the architecture. In other words, we need to create a REST API server that works with any SQL-capable database and design it to be compatible with other types of databases in the future.

Before exploring the details of book-server, let’s take a look at the structure of the repository by using the tree command. With the following command, all of the files and folders are listed, except for the vendor folder, where the source code of Go dependencies are kept:

tree -I vendor -U

You can view the files and folders in the following output:

By following the best practices in Go development, the book-server is structured as follows:

  • The cmd folder includes main.go, which creates the executable for the book-server
  • The docker folder includes Dockerfiles that are used for different testing levels, such as static code check, unit test, smoke test and integration test
  • Dockerfile is the container definition used for building book-server
  • Makefile contains the commands to test and build the repository
  • README.md includes documentation on the repository
  • The pkg folder consists of the source code of the book-server
  • pkg/books defines the database and book interfaces for extending book-server
  • pkg/commons defines the option types that are used within book-server
  • pkg/server consists of the HTTP REST API code and related tests

For our API, book-server is a Go REST API micro-service that connects to a database and serves information about books. In the cmd/main.go file, it can be seen that the service only works with three parameters:

  • Log level
  • HTTP port
  • Database address

In the following init function, in main.go, these three parameters are defined as command-line arguments as log-level, port, and db:

func init() {
    pflag.StringVar(&options.ServerPort, "port", "8080", "Server port for listening REST calls")
    pflag.StringVar(&options.DatabaseAddress, "db", "", "Database instance")
    pflag.StringVar(&options.LogLevel, "log-level", "info", "Log level, options are panic, fatal, error, warning, info and debug")
}

It is expected that you will have different levels of logging in micro-services so that you can better debug services running in production. In addition, ports and database addresses should be configurable on the fly, since these are the concerns of the users, not the developers.

In the pkg/books/model.go file, Book is defined, and an interface for the book database, namely BookDatabase, is provided. It is crucial for micro-services to work with interfaces instead of implementations, since interfaces enable plug-and-play capability and create an open architecture. You can see how Book and BookDatabase are defined in the following snippet:

type Book struct {
    ISBN   string
    Title  string
    Author string
}

type BookDatabase interface {
    GetBooks() ([]Book, error)
    Initialize() error
}
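To illustrate the plug-and-play capability the interface provides, here is a rough sketch (not part of the book-server source; the type name and seed data are assumptions) of an in-memory implementation that could stand in for the SQL-backed one in unit tests:

```go
package main

// Book and BookDatabase mirror the definitions in pkg/books/model.go.
type Book struct {
	ISBN   string
	Title  string
	Author string
}

type BookDatabase interface {
	GetBooks() ([]Book, error)
	Initialize() error
}

// InMemoryBookDatabase is a hypothetical implementation that satisfies
// BookDatabase without any SQL backend, so tests need no real database.
type InMemoryBookDatabase struct {
	books []Book
}

func (imd *InMemoryBookDatabase) Initialize() error {
	// Seed the store, mirroring what the SQL implementation does
	// with its CREATE TABLE and INSERT statements.
	imd.books = []Book{
		{ISBN: "978-0134190440", Title: "The Go Programming Language", Author: "Donovan & Kernighan"},
	}
	return nil
}

func (imd *InMemoryBookDatabase) GetBooks() ([]Book, error) {
	return imd.books, nil
}
```

Because the server only depends on the BookDatabase interface, swapping this in requires no change to the HTTP layer.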

Note: The code files for this section can be found here: https://bit.ly/2S92tbr.

In the pkg/books/database.go file, an SQL-capable BookDatabase implementation is developed as SQLBookDatabase. This implementation enables the book-server to work with any SQL-capable database. The Initialize and GetBooks methods show how SQL primitives are utilized to interact with the database. The following code fragment includes the GetBooks and Initialize implementations, along with their SQL usage:

func (sbd SQLBookDatabase) GetBooks() ([]Book, error) {
    books := make([]Book, 0)
    rows, err := sbd.db.Query("SELECT * FROM books")
    //[…]
    return books, nil
}

func (sbd SQLBookDatabase) Initialize() error {
    var schema = "CREATE TABLE books (isbn text, title text, author text);"
    //[…]
    return nil
}

Finally, in the server/server.go file, an HTTP REST API server is defined and connected to a port for serving incoming requests. Basically, this server implementation interacts with the BookDatabase interface and returns the responses according to HTTP results.

In the following fragment of the Start function in server.go, endpoints are defined and then the server starts to listen on the port for incoming requests:

func (r *REST) Start() {
    //[…]
    r.router.GET("/ping", r.pingHandler)
    r.router.GET("/v1/init", r.initBooks)
    r.router.GET("/v1/books", r.booksHandler)
    r.server = &http.Server{Addr: ":" + r.port, Handler: r.router}
    //[…]
    err := r.server.ListenAndServe()
    //[…]
}
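As a hedged, stdlib-only sketch of the same pattern (the handler name, the mux in place of book-server's router, and the "pong" response body are assumptions here), including the kind of in-process smoke check this tutorial builds toward:

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
)

// pingHandler is a hypothetical health-check endpoint of the kind
// registered in Start(); the real book-server uses a router abstraction
// rather than net/http's ServeMux directly.
func pingHandler(w http.ResponseWriter, r *http.Request) {
	fmt.Fprint(w, "pong")
}

// newServer wires the endpoint to a port, mirroring the &http.Server
// construction in the fragment above.
func newServer(port string) *http.Server {
	mux := http.NewServeMux()
	mux.HandleFunc("/ping", pingHandler)
	return &http.Server{Addr: ":" + port, Handler: mux}
}

// smokeTestPing exercises the handler in-process, without opening a
// real socket, the way a unit or smoke test in pkg/server might.
func smokeTestPing() string {
	srv := newServer("8080")
	rec := httptest.NewRecorder()
	req := httptest.NewRequest("GET", "/ping", nil)
	srv.Handler.ServeHTTP(rec, req)
	return rec.Body.String()
}
```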

Note: The complete code can be found here: https://bit.ly/2Cm9Mag.

Static code analysis

In the preceding section, a cloud-native micro-service application, namely book-server, was presented, along with its important features. In this section, we will begin with static code analysis so that we can test this application comprehensively.

Reading code to find flaws is cumbersome and requires many engineering hours. It helps to use automated code analysis tools that analyze the code and find potential problems. It’s a crucial step and should be factored into the very first stages of the CI pipeline. Static code analysis is essential because correctly working code written in the wrong style can cause more damage than non-functional code.

It’s beneficial for all levels of developers and quality teams to follow standard guidelines in the programming languages and create their styles and templates only if necessary. There are many static code analyzers available on the market as services or open source, including:

  • Pylint for Python
  • FindBugs for Java
  • SonarQube for multiple languages and custom integrations
  • The IBM Security AppScan Standard for security checks and data breaches
  • JSHint for JavaScript

However, when choosing a static code analyzer for a cloud-native micro-service, the following three points should be considered:

  • The best tool for the language: It is common to develop micro-services in different programming languages; therefore, you should select the best static code analyzer for the language rather than employing one-size-fits-all solutions.
  • Scalability of the analyzer: Similar to cloud-native applications, tools in software development should also be scalable. Therefore, select only those analyzers that can run in containers.
  • Configurability: Static code analyzers are configured to run and find the most widely accepted errors and flaws in the source code. However, the analyzer should also be configured to different levels of checks, skipping some checks or adding some more rules to check.
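As an illustration of that configurability, a hypothetical golangci-lint configuration (the file name and linter selection here are assumptions, not taken from the book-server repository) might look like:

```yaml
# Hypothetical .golangci.yml: enable or disable specific linters and
# skip vendored code, tuning the level of checks the CI step enforces.
linters:
  enable:
    - errcheck
    - govet
  disable:
    - lll
run:
  skip-dirs:
    - vendor
```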

Exercise: Performing static code analysis in containers

In this exercise, a static code analyzer for the book-server application will be run in a Docker container. Static code analysis will check the source code of book-server and list problematic cases, such as not checking the errors returned by functions in Go. To complete this exercise, the following steps have to be executed:

Note: All tests and build steps are executed for the book-server application in the root folder. The source code of the book-server is available on GitLab: https://gitlab.com/TrainingByPackt/book-server. The code file for this exercise can be found here: https://bit.ly/2EtB0Ny.

1. Open the docker/Dockerfile.static-code-check file from the GitLab interface and check the container definition for the static code analysis:

FROM golangci/golangci-lint
ADD . /go/src/gitlab.com/onuryilmaz/book-server
WORKDIR /go/src/gitlab.com/onuryilmaz/book-server
RUN golangci-lint run ./...

2. Build the container in the root directory of book-server by running the following code:

docker build --rm -f docker/Dockerfile.static-code-check .

In the preceding file, the golangci/golangci-lint image is used as the static code analysis environment and the book-server code is copied. Finally, golangci-lint is run for all folders to find flaws in the source code.

The following output is obtained once the preceding code is run with no errors, with a “successfully built” message at the end:

3. Change the Initialize function in pkg/books/database.go as follows by removing error checks in the SQL statements:

func (sbd SQLBookDatabase) Initialize() error {
    var schema = "CREATE TABLE books (isbn text, title text, author text);"
    sbd.db.Exec(schema)
    var firstBooks = "INSERT INTO books …"
    sbd.db.Exec(firstBooks)
    return nil
}

With the modified Initialize function, the return values of the sbd.db.Exec calls are not checked. If these executions fail, the errors are not inspected and not sent back to the caller functions. This is bad practice and a common mistake in programming, mostly caused by the assumption that the code will always run successfully.

4. Run the following command, as we did in step two:

 docker build --rm -f docker/Dockerfile.static-code-check .

Since we modified the code in step three, we should see a failure as a result of this command, as shown in the following screenshot:

As we can see, errcheck errors are expected, since we’re not checking for the errors during SQL executions.

Revert the Initialize function to the original version with error checks, so that static code analysis completes successfully; otherwise, the static code analysis step will always fail in the pipeline and the subsequent steps will never run.

Hope you enjoyed reading this article. If you want to learn more about continuous integration and delivery, check out the online course “Cloud-Native Continuous Integration and Delivery.” Developed by author Onur Yilmaz, the class begins with an introduction to cloud-native concepts, teaching participants the skills to create a continuous integration and delivery environment for their applications and deploy them using tools such as Kubernetes and Docker.

This content was provided by Packt Pub.

The post Testing cloud-native applications appeared first on Superuser.

by Superuser at April 17, 2019 02:02 PM

Fleio Blog

Fleio 2019.04: instance traffic billing, end-user router management, right to left languages

Fleio version 2019.04 is now available. Just like every month, we’re adding features and improvements to Fleio – OpenStack billing and control panel for public cloud service providers. Read on for the major features in 2019.04. New major features in 2019.04 are: OpenStack compute instance traffic pricing Overall OpenStack project traffic pricing rules was already […]

by adrian at April 17, 2019 12:32 PM

James Page

OpenStack Stein for Ubuntu 18.04 LTS

The Ubuntu OpenStack team at Canonical is pleased to announce the general availability of OpenStack Stein on Ubuntu 18.04 LTS via the Ubuntu Cloud Archive. Details of the Stein release can be found here.

You can enable the Ubuntu Cloud Archive pocket for OpenStack Stein on Ubuntu 18.04 LTS installations by running the following commands:

    sudo add-apt-repository cloud-archive:stein
    sudo apt update

The Ubuntu Cloud Archive for Stein includes updates for:

aodh, barbican, ceilometer, ceph (13.2.4), cinder, designate, designate-dashboard, glance, gnocchi, heat, heat-dashboard, horizon, ironic, keystone, magnum, manila, manila-ui, mistral, murano, murano-dashboard, networking-bagpipe, networking-bgpvpn, networking-hyperv, networking-l2gw, networking-odl, networking-ovn, networking-sfc, neutron, neutron-dynamic-routing, neutron-fwaas, neutron-lbaas, neutron-lbaas-dashboard, neutron-vpnaas, nova, nova-lxd, octavia, openstack-trove, openvswitch (2.11.0), panko, sahara, sahara-dashboard, senlin, swift, trove-dashboard, vmware-nsx, watcher, and zaqar.

For a full list of packages and versions please refer to the Stein UCA version report.

Python 3

The majority of OpenStack packages now run under Python 3 only; notable exceptions include Swift.  Python 2 packages are no longer provided for the majority of projects.

Branch package builds

If you would like to try out the latest updates to branches, we deliver continuously integrated packages on each upstream commit via the following PPAs:

    sudo add-apt-repository ppa:openstack-ubuntu-testing/rocky
    sudo add-apt-repository ppa:openstack-ubuntu-testing/stein

Reporting bugs

If you have any issues please report bugs using the ‘ubuntu-bug’ tool to ensure that bugs get logged in the right place in Launchpad:

sudo ubuntu-bug nova-conductor

Thanks to everyone who has contributed to OpenStack Stein, both upstream and downstream. Special thanks to the Puppet OpenStack modules team and the OpenStack Charms team for their continued early testing of the Ubuntu Cloud Archive, as well as the Ubuntu and Debian OpenStack teams for all of their contributions.

Have fun and see you all for Train!

Cheers

James

(on behalf of the Ubuntu OpenStack team)

by JavaCruft at April 17, 2019 10:50 AM

Aptira

Software Interlude. Part 3 – Software Aint Software

In our last post, we discussed the essential nature of software and why it is different from any of the other “stuff” with which we build Open Network solutions. In this Part 3 of the Open Networking Software Interlude, we look at different perspectives of software and how this impacts the development of Open Networking solutions.

We saw that misunderstanding the core nature of software leads to potential issues in Open Networking projects. In this post, we will outline how misunderstanding can also be generated by the way software is used as a component in a solution, which gives different people different perspectives on what software is (or isn’t).

How people use and interact with software components in a solution, anywhere in its lifecycle, can be described as the perceived “form factor” of the component.

A spectrum of software form factors

I think of four main “form factors” for Open Network components, which influence the perspective of all stakeholders (from designers through to operators).

These form factors are:

  • Hardware + “Hard (Firmware) Config”
  • Hardware + “Soft” Config
  • Software as Hardware
  • Software as Software

Hardware + “Hard (Firmware) Config”

All the functionality is embodied as firmware in hardware components  and is fixed except for the user-accessible hardware settings that can be modified. Changes are limited to upgrades in firmware via specialised chipsets (e.g. flash memory), if at all.

Hardware + “Soft” Config

The functionality is embedded as firmware in hardware but can be controlled via configuration definitions that are editable as text by local engineers. Device functionality varies significantly based on the settings of configuration options. The range of config variation is much larger and may extend to a command set allowing logic-based decisions on input parameters from other components.

Software as Hardware

All the functionality is in software but is a software proxy for the “Hardware + Soft Config”. Component functionality is treated as a “black box”. Software enables much greater flexibility such as easy distribution of software images to required network locations and configuring the device by editing and / or loading its config.

Software as Software

All of the functionality is embodied in a software application that can be compiled and re-installed at all stages of the solution lifecycle.

Modifications can be made at both the software source code level and the configuration level.  Delivery / implementation is via the build process from source code through to deployment.  Source code and configuration are managed in the same manner: via a code repository. This model fully captures the benefits of CI/CD.

The implications of these “form factors”

All of these form factors are driven by software, with defined functionality that can be modified by local configuration, but they offer very different experiences for solution integrators and operators. They broadly break down into two categories: Black Box and Open Box.

Black Box

This category covers the first 3 form factors, excluding “Software as Software”.

For these form factors, development process is isolated at the vendor, and update cycles tend to be slow; functionality is effectively frozen. Vendors can and do update firmware, but whether new versions of this type of code can be retrofitted onto existing installed equipment is variable.

A solution designer can only vary component functionality via configuration options: in “Hard Config” this variation is limited to specific flags or values updated via an editing interface.

The “software” experience of stakeholders who use these components is limited to their knowledge of the (frozen) functionality and their level of skill in the configuration syntax.

Although “Software as Hardware” introduces 100% software components to the mix, the interaction between stakeholders and the components is identical to hardware.

This gives the stakeholders who use these components a very limited view of software and what can be achieved.

Open Box

Only “Software as Software” delivers a true open experience. Changing the functionality of a component may be achieved by configuration or by changing the source code and regenerating the component.

Open Networking and Software

What does all this mean for Open Networking?

Firstly, Open Networking solutions are likely to include components of all four “form factors” in their design. The three “black box” form factors are the most common, but “Software as Software” components are growing rapidly in availability.

Secondly, there is a growing mismatch between skills requirements and availability: the experience base of teams in an Open Networking solution is most likely to be in the “black box” form factors. Software development skills are often quite rare in the organisations that run Open Network projects.

How does this inform and affect Open Networking projects?

We’ll cover that in future posts.

Stay tuned.

The post Software Interlude. Part 3 – Software Aint Software appeared first on Aptira.

by Adam Russell at April 17, 2019 10:41 AM

April 16, 2019

OpenStack @ NetApp

My Name Is Stein!

The OpenStack community is ready with its latest release! As part of the Stein development cycle, NetApp is proud to have contributed in the development of Cinder and Manila. The Stein release includes the following feature enhancements that support the integration of NetApp’s storage portfolio into your OpenStack deployment:    1. Manila Manage/Unmanage Share Servers […]

The post My Name Is Stein! appeared first on thePub.

by Bala RameshBabu at April 16, 2019 04:17 PM

OpenStack Superuser

Inside container infrastructure: Must-see sessions at the Open Infrastructure Summit

Join the people building and operating open infrastructure at the inaugural Open Infrastructure Summit.  The Summit schedule features over 300 sessions organized by use cases including: artificial intelligence and machine learning, continuous integration and deployment, containers, edge computing, network functions virtualization, security and public, private and multi-cloud strategies.

In this post we’re highlighting some of the sessions you’ll want to add to your schedule about container infrastructure.  Check out all the sessions, workshops and lightning talks focusing on this topic here.

Rook: A new and easy way to run Ceph storage on Kubernetes

Ceph is one of the most popular storage backends for OpenStack and it has a reputation for being complex to set up and to manage. Rook is an open-source project incubated by the Cloud Native Computing Foundation with a goal of providing Ceph block, file and object storage inside of Kubernetes. It’s more than just running Ceph containers; it’s an intelligent operator that can help deploy and manage Ceph clusters, keeping them healthy with less work. In this session, Blaine Gardner, Rook-Ceph maintainer and SUSE engineer, along with colleague Stephen Nemeth, will introduce Rook and explore the features important for OpenStack storage.
Details here.

Container use-cases and developments at the CERN cloud

Starting with only four OpenStack projects and a few hypervisors in 2012, the CERN cloud has evolved significantly, running 16 OpenStack projects, 9,000 hypervisors, over 400 Kubernetes clusters and two production regions. The CERN team works closely with the OpenStack, Kubernetes and Ceph communities to deliver a self-service cloud for virtual machines, containers, storage and bare metal. Containers have revolutionized the way application developers work; in turn, Kubernetes made management of thousands of containers easy, and access to Kubernetes is one OpenStack Magnum API request away. In this talk, Tim Bell and Spyros Trigazis will cover the latest developments in the CERN cloud, focusing on the latest container use cases in high energy physics and recent scale tests to prepare for the upgrades of the Large Hadron Collider and the experiments.
Details here.

Dealing with Kubesprawl – Tetris style!

Kubesprawl is one of the big pain points for organizations operating Kubernetes. The emerging pattern of having Kubernetes clusters per team or even per developer, along with separate environments for dev, test, staging and production can mean that operations teams are dealing with many dozens or even hundreds of Kubernetes clusters. Aside from the complexities of deployment, life cycle management and automation, this can lead to significant expense, in both virtual and physical environments. In this session, Mesosphere’s Matt Jarvis will introduce how we solved that problem in DC/OS, by bin packing multiple Kubernetes clusters onto the same underlying infrastructure to build high density infrastructure. This approach has a number of interesting problems in itself, including handling complex isolation and networking and he’ll offer deep dive into how this works.
Details here.

Tailor-made security: Building a container specific hypervisor

One of the many benefits of the recently introduced Kubernetes RuntimeClass feature is the ability for operators to run hypervisor isolated container workloads and build secure multi-tenant deployments. While projects like Kata Containers allow operators to run their container workloads through a growing list of hypervisors, none of them is exclusively targeting container and Kubernetes specific use cases.

This session from Samuel Ortiz, Intel, and Andreea Florescu, AWS, will describe how to improve container workloads performance, security and density by building a containers dedicated hypervisor. They’ll describe what running a container runtime compatible hypervisor requires by looking more specifically at the Kubernetes runtime interface (CRI). They’ll also show how the recently formed rust-vmm project allows for designing KVM based hypervisors for very customized use cases, including the container ones.
Details here.

See you at the Open Infrastructure Summit in Denver, April 29-May 1! Register here.

The post Inside container infrastructure: Must-see sessions at the Open Infrastructure Summit appeared first on Superuser.

by Superuser at April 16, 2019 02:03 PM

Opensource.com

Building a DNS-as-a-service with OpenStack Designate

Learn how to install and configure Designate, a multi-tenant DNS-as-a-service (DNSaaS) for OpenStack.

by ayaseen at April 16, 2019 07:03 AM

April 15, 2019

OpenStack Superuser

Bringing open source to the bare metal edge

Kubespray, a community project providing Ansible playbooks for the deployment and management of Kubernetes clusters, recently merged support for the bare metal cloud Packet. This new support allows Kubernetes clusters to be deployed across next-generation edge locations, including cell-tower-based micro data centers.

Packet, which is unique in its bare metal focus, expands Kubespray’s existing support beyond the traditional AWS, GCE, Azure, OpenStack, vSphere and Oracle Cloud infrastructure. Kubespray removes the complexities of standing up a Kubernetes cluster through automation via Terraform and Ansible. Terraform provisions the infrastructure and installs the prerequisites for the Ansible installation. Terraform provider plugins allow support for a variety of different cloud providers. The Ansible playbooks then deploy and configure Kubernetes.
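The Terraform-then-Ansible flow can be sketched as follows. The paths and inventory names here are illustrative, so treat this as an outline rather than exact commands; the Kubespray documentation has the authoritative steps:

```shell
# Provision bare metal on Packet with Terraform (illustrative paths)
cd kubespray/contrib/terraform/packet
terraform init
terraform apply -var-file=cluster.tfvars

# Then install and configure Kubernetes on those hosts with Ansible
cd ../../..
ansible-playbook -i inventory/mycluster/hosts cluster.yml --become
```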

Since there are already detailed instructions online for deploying with Kubespray on Packet, I’ll focus this post on why bare metal support is important for Kubernetes and what it takes to make it happen.

Why bare metal?

Historically, Kubernetes deployments have relied upon the “creature comforts” of a public cloud, or fully managed private cloud, to provide virtual machines and networking infrastructure upon which to run Kubernetes. This added a layer of abstraction (e.g. a hypervisor with virtual machines) that Kubernetes itself doesn’t necessarily need. In fact, Kubernetes began its life on bare metal as Google’s Borg.

As we move workloads closer to the end user (in the form of edge computing) and deploy to ever-more diverse environments (including hybrid and on-premises infrastructure of different architectures and sizes), relying on a homogenous public cloud substrate isn’t always possible or ideal. For instance, with edge locations being resource constrained, it’s more efficient and practical to run Kubernetes directly atop bare metal.

Mind the gaps!

Without a full-featured public cloud underneath a bare metal cluster, some traditional capabilities such as load balancing and storage orchestration will need to be managed within the Kubernetes cluster itself. Luckily, there are projects such as MetalLB and Rook that provide just such support for Kubernetes.

MetalLB, a Layer 2 and Layer 3 load balancer, has already been integrated into Kubespray. Support for Rook, which orchestrates Ceph to provide distributed and replicated storage for a Kubernetes cluster, can be easily installed onto a bare metal cluster. In addition to enabling full functionality, this “bring your own” approach to storage and load balancing removes the reliance upon specific cloud services, helping users avoid lock-in with an approach that can be installed anywhere.
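As an illustration of the “bring your own” approach, a MetalLB Layer 2 setup of this era is configured through a ConfigMap like the sketch below; the address range is an example and must be routable on your network:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      # Example pool; LoadBalancer Services get IPs from this range
      - 192.168.1.240-192.168.1.250
```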

One gap that we don’t have to overcome is support for Arm64 processors, since that is already in place with Kubespray. The Arm architecture (which is starting to show up regularly in datacenter grade hardware, SmartNICs, and other custom accelerators) has a long history in mobile and embedded devices, making it well-suited for Edge deployments.

Going forward, I’m hoping to see deeper integration with MetalLB and Rook as well as bare metal CI of daily builds on a number of different hardware configurations. With access to automated bare metal at Packet, we now have the opportunity to test and maintain support across various processor types, storage options, and networking setups. This will help to ensure that Kubespray-powered Kubernetes can be deployed and managed confidently across public clouds, bare metal and edge environments.

It takes a village

As an open-source project driven by the community, it’s important that we thank the core developers and contributors to Kubespray, as well as the folks who assisted with the Packet integration. Notably, Maxime Guyot and Aivars Sterns for the initial commits and code reviews, Rong Zhang and Ed Vielmetti for document reviews, as well as Tomáš Karásek (who maintains the Packet Go library and Terraform provider) and John Studarus (who tries not to mess up Karásek’s code too much with his pull requests).

Learn More

At the upcoming Open Infrastructure Summit, there are over 20 sessions on edge computing including “The Open Micro Edge Data Center”, discussing using open source across bare metal edge locations.

About the author

John Studarus, president of JHL Consulting, provides cloud security product development and cloud security consulting services. Within the open source communities, he volunteers his time managing the community supported Packet CI cloud and running numerous user groups across the U.S. as an OpenStack Ambassador.

The post Bringing open source to the bare metal edge appeared first on Superuser.

by John Studarus at April 15, 2019 01:49 PM

April 12, 2019

Chris Dent

Placement Update 19-14

Placement update 19-14 is here. There will be no 15, 16 or 17 due to various bits of travel. There will be some PTG-related summaries.

Most Important

The Virtual Pre-PTG is in full swing and making some good progress towards making sure that we only hit the hard stuff at the in-person PTG. Apologies if it has been a bit overwhelming. The hope is that by paying the price of a bit more whelm now we will have less whelm at the PTG. If you have questions, please ask.

Links to all the threads are in the PTG etherpads:

There's also a retrospective etherpad.

What's Changed

  • The 0.12.0 release of os-traits is pending. It switches that package to using the independent release policy. os-resource-classes will get the same treatment when we next need to release it.

Specs/Features

There are four specs in flight in the placement repo and one pending to be ported over from nova:

There are also several nova-specs that were visited in the nova spec review day earlier this week. Some are listed below.

Bugs

osc-placement

osc-placement is currently behind by 13^w11 microversions. -1 since last week. Support for 1.19 has just merged. Oh wait, no, -3. 1.21 has just merged. There was nothing to do for 1.20.

Pending changes:

Main Themes

More work remains in the pre-PTG discussions to try to drive towards some themes. The specs above capture some of it, but it appears like a lot of the work will be a) supporting other projects doing things with Placement, b) fixing bugs they discover.

Other Placement

Mostly specs in progress (listed above) for now.

  • https://review.openstack.org/#/c/645255/ This is a start at unit tests for the PlacementFixture. It is proving a bit "fun" to get right, as there are many layers involved. Making sure seemingly unrelated changes in placement don't break the nova gate is important. Besides these unit tests, there's discussion on the PTG etherpad of running the nova functional tests, or a subset thereof, in placement's check run.

    On the one hand this is a pain and messy, but on the other consider what we're enabling: Functional tests that use the real functionality of an external service (real data, real web requests), not stubs or fakes.

    There's a pre-PTG thread for this.

  • https://review.openstack.org/641404 Use code role in api-ref titles

  • https://review.openstack.org/#/q/topic:refactor-classmethod-diaf A sequence of refactorings, based off discussion in yet another pre-PTG thread.

Other Service Users

New discoveries are added to the end. Merged stuff is removed.

End

There's a lot going on.

by Chris Dent at April 12, 2019 05:52 PM

April 11, 2019

Emilien Macchi

OpenStack Containerization with Podman – Part 2 (SystemD)

In the first post, we demonstrated that we can now use Podman to deploy a containerized OpenStack TripleO Undercloud. Let’s see how we can operate the containers with SystemD.

Podman, by design, doesn’t run any daemon to manage the containers’ lifecycle, while Docker runs dockerd-current and docker-containerd-current, which take care of a bunch of things, such as restarting containers when they fail (if configured to do so with restart policies).

In OpenStack TripleO, we still want our containers to restart when they are configured to, so we thought about managing the containers with SystemD. I recently wrote a blog post about how Podman can be controlled by SystemD, and we finally implemented it in TripleO.

The way it works, as of today, is that any container managed by Podman with a restart policy in its Paunch container configuration will be managed by SystemD.

Let’s take the example of Glance API. This snippet is the configuration of the container at step 4:

As you can see, the Glance API container was configured to always try to restart (so Docker would do so). With Podman, we re-use this flag and we create (+ enable) a SystemD unit file:

How it works underneath:

  • Paunch will run podman run –conmon-pidfile=/var/run/glance_api.pid (…) to start the container, during the deployment steps.
  • If there is a restart policy, Paunch will create a SystemD unit file.
  • The SystemD service is named after the container, so if you were used to the old service names before containerization, you’ll have to refresh your mind. By choice, we decided to go with the container name to avoid confusion with the podman ps output.
  • Once the containers are deployed, they need to be stopped / started / restarted by SystemD. If you run Podman CLI to do it, SystemD will take over (see in the demo).

Note about PIDs:

If you configure the service to start the container with “podman start -a” then systemd will monitor that process for the service. The problem is that this leaves podman start processes around which have a bunch of threads and is attached to the STDOUT/STDIN. Rather than leaving this start process around, we use a forking type in systemd and specify a conmon pidfile for monitoring the container. This removes 500+ threads from the system at the scale of TripleO containers. (Credits to Alex Schultz for the finding).
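Putting the notes above together, the generated unit file looks roughly like the sketch below; the unit name, paths and timeouts are illustrative, and the real template lives in Paunch:

```ini
# /etc/systemd/system/tripleo_glance_api.service (illustrative)
[Unit]
Description=glance_api container managed by Podman
After=network.target

[Service]
Restart=always
# Forking type plus a conmon pidfile avoids leaving a
# "podman start -a" process (and its threads) per container.
Type=forking
PIDFile=/var/run/glance_api.pid
ExecStart=/usr/bin/podman start glance_api
ExecStop=/usr/bin/podman stop -t 10 glance_api

[Install]
WantedBy=multi-user.target
```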

Stay in touch for the next post in the series of deploying TripleO and Podman!

by Emilien at April 11, 2019 02:46 AM

April 10, 2019

OpenStack Superuser

Get started with Oracle Container Runtime for Kata Containers

In just about five minutes, you can get an overview of Oracle Container Runtime for Kata Containers.

The 6:43 tutorial explains basic Kubernetes cluster operation and shows how using Kata Containers with Kubernetes enhances the container orchestration environment for the delivery of microservices and next-generation application development. The video from the Oracle Learning Library also goes over how a Kubernetes cluster works and how Kata Containers work within the cluster to support containerized applications. It offers an overview of the software and packages needed to build an Oracle-supported Kubernetes cluster implementing Kata Containers as well.

Oracle has adopted Kata Containers, an OpenStack Foundation project that runs each container in a lightweight virtual machine, providing workload isolation and security for containers deployed in the container infrastructure. (Oracle’s container runtime is available on GitHub under dual licenses: Oracle’s Universal Permissive License and the Apache 2 license.) Kata combines the benefits of containers and virtual machines and it’s OCI (Open Container Initiative) compliant, as are Docker containers.

Kata Containers are lightweight container VMs created to provide isolation and the separate kernel for each container. Each container with its namespace is within its own lightweight VM and has its own kernel. Kata Containers work and perform like typical containers, but offer resource isolation and security advantages of regular virtual machines. The lightweight VM used for each container addresses the security concerns of the shared kernel used by traditional containers.
To support the creation of Kata Containers, a Kubernetes cluster is first created to orchestrate and manage the deployment of containers. The Kubernetes cluster comprises master nodes and worker nodes; the master node manages the cluster and schedules the deployment of container pods and services.

Oracle Kata Containers are implemented by integrating with an Oracle Container Services for use with Kubernetes cluster. To launch and deploy the containers in Kata virtual machines, the Kubernetes cluster is built with a minimum of Oracle Linux 7 Update 5 and Unbreakable Enterprise Kernel Release 5. In the Kubernetes cluster, users build the master and worker nodes using the Oracle Container Services for use with Kubernetes tools. To launch containers from the Kubernetes cluster, users must also register with the Oracle Container Registry and, on each node, log in to the registry through Docker. Oracle Container Runtime for Docker is installed and used in the cluster for building and containerizing applications. To get things up and running with Kata, users need to set up Kubernetes cluster worker nodes and install Oracle Container Runtime for Kata, QEMU virtualization and CRI-O.
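Once CRI-O is configured with a Kata handler, a pod can opt into the Kata runtime via Kubernetes’ RuntimeClass feature. The handler and image names in this sketch are illustrative:

```yaml
# RuntimeClass mapping to the Kata handler configured in CRI-O
apiVersion: node.k8s.io/v1beta1
kind: RuntimeClass
metadata:
  name: kata
handler: kata
---
# A pod that opts into the Kata runtime
apiVersion: v1
kind: Pod
metadata:
  name: kata-demo
spec:
  runtimeClassName: kata
  containers:
  - name: app
    image: nginx
```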

Learn more

Kata Containers is a fully open-source project: check out Kata Containers on GitHub and join the channels below to find out how you can contribute.

There are also a number of sessions featuring Kata Containers at the upcoming Open Infrastructure Summit, ranging from project onboarding to “Tailor-made security: Building a container specific hypervisor.” See all the sessions here.

Additional resources:

Step-by-step tutorial of Oracle container runtime for Kata

CRI-O project site – https://cri-o.io

Oracle Container Registry — https://container-registry.oracle.com

The post Get started with Oracle Container Runtime for Kata Containers appeared first on Superuser.

by Nicole Martinelli at April 10, 2019 02:04 PM

StackHPC Team Blog

Blazar 3.0.0: Highlights of the Stein Release

Blazar is a resource reservation service for OpenStack. Initially started in 2013 under the name Climate, Blazar was revived during the Ocata release cycle and became an official OpenStack project during the Queens release cycle. It has just shipped its third official release (the fifth since the revival of the project) as part of the OpenStack Stein release.

While Blazar’s ambition has always been to provide reservations for the various types of resources managed by OpenStack, it has only supported compute resources so far, in the form of instance reservations and physical host reservations. Both were supported purely by integrating with Nova. This is changing in Stein in two ways.

First, the Blazar community has added support for reserving floating IPs by integrating with Neutron. Public IPv4 addresses are usually scarce resources which need to be carefully managed. Users can now request to reserve one or several floating IPs for a specific time period to ensure their future availability, and even bundle a floating IP reservation with a reservation of compute resources inside the same lease. While the implementation of this feature is not fully complete in Stein and is thus classified as a preview, most of the missing pieces are in client support and documentation, and should be completed soon. Chameleon, a testbed for large-scale cloud research, has already made this new feature available to its users.

Second, the instance reservation feature is now leveraging the Placement API service. Originally introduced within Nova, OpenStack Placement provides an HTTP service for managing, selecting, and claiming providers of classes of inventory representing available resources in a cloud. Placement was extracted from Nova in the Stein release and is now a separate project. This change allows Blazar to support all types of affinity policies for instance reservation, instead of being limited to anti-affinity as in previous releases. While Blazar initially leverages Placement only for instance reservation, it paves the way for extending reservation to other types of resources when they integrate with Placement themselves. It will also help Blazar to provide reservation of bare-metal nodes managed by Ironic.
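For context, a host reservation is requested through the Blazar CLI along these lines; the flags shown are illustrative and the exact syntax is in the Blazar client documentation:

```shell
# Reserve one physical host for a fixed time window (illustrative)
blazar lease-create \
  --physical-reservation min=1,max=1 \
  --start-date "2019-05-01 10:00" \
  --end-date "2019-05-01 18:00" \
  my-lease

# Inspect the lease and its reservations
blazar lease-show my-lease
```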

Blazar also includes a new Resource Allocation API, allowing operators to query the reserved state of their cloud resources. This provides a foundation for developing new tools such as a graphical calendar view, which we hope can be made available upstream in a future release.

More details about all the notable changes in Stein are available in the Blazar release notes.

On May 1, two of the Blazar core reviewers will be presenting a Project Update at the Denver 2019 Open Infrastructure Summit. Join them to learn more about these changes and discuss how reservations can make better use of cloud resources!

With the Train release on the horizon, the Blazar community is planning to go full steam ahead by:

  • extending its integration with Neutron with reservation of network segments (e.g. VLANs and VXLANs);
  • making Blazar compatible with bare-metal nodes managed by Ironic, possibly without using Nova;
  • providing a graphical reservation calendar within Horizon;
  • integrating with preemptible instances.

StackHPC sees resource reservation as one of OpenStack’s functional gaps for meeting the needs of research computing. Blazar can provide a critical service, enabling users to reserve in advance enough resources for running large-scale workloads.

Blazar project mascot

by Pierre Riteau at April 10, 2019 08:00 AM

Aptira

Software Interlude. Part 2 – What is Software?

In our last post we outlined the need for a “Software Interlude” before we moved on to a description of the third domain (Open Network Integration). 

In this Part 2 of the Open Networking Software Interlude, we look at the key question: “What is Software?”. This might seem simplistic but understanding the answer to this question is important if we are to understand the broader value chain of software development and the application of software to real-world problems. 

What is Software?

The answer to that question should be common knowledge, given that these days people are introduced to software in primary or high school. We all probably know the basics, that software is:  

data or computer instructions that tell the computer how to work.

Wikipedia, https://en.wikipedia.org/wiki/Software

We all probably know that software is created by people (with titles like programmers, coders or software developers) by creating language statements in a text editor and using other software applications to translate that text into the “data or computer instructions” as above. 

We all probably know that there are different categories of software: Applications software, Networking Software, Operating Systems, Virtualisation Software, Graphics and Visualisation software and so forth. Each category of software not only solves different problems but requires a different type of knowledge, algorithms and supporting software to create them. 

And we all probably know that there are various incompatible technical platforms that software is built to run on: Windows PC’s and Apple Mac’s and Unix machines and so forth. 

So, back to our question: “what is software?”. Is software the instructions that run in a computer’s CPU? Or is software the source code files that developers edit? Or is it the algorithms and flow-charts that solve the problems that software encodes and implements? 

While these explanations may sound satisfactory, they are fairly superficial and don’t come close to the core nature of software or inform us about what it takes to build software that works correctly. 

Attributes of Software

Let’s look at some important aspects of software that differentiate it from the other media and types of materials used to build the things that we use. 

  • Software is intangible to normal human senses: You can’t get an idea of its properties by looking at it. Even if you identify the source code files as “tangible”, you still won’t get much just by looking at them, as compared to say a brick or a pipe, whose tangible form implies function. 
  • Software is transient: it only shows its full function and behaviour when it is executing, which can be for very short periods of time. In its normal operating state you can’t freeze it and poke around to see what’s happening. 
  • Software is infinitely malleable: it has no natural structure. There are few if any built-in constraints that guide how a program is constructed. 
  • Software is highly interdependent: internally and with other software components. The internal structure of software can create many interdependencies which are not easily detected. 
  • Software is opaque at the external interface: a program can appear to run correctly but this gives no indication of the internal structure or quality of how the software is designed and implemented – it could produce the required results but be implemented so poorly as to be unmodifiable and unmaintainable or may perform erratically when an edge-case input is found. 

All of the above characterise software. These are attributes of software but they aren’t software itself. Software is unlike physical components that are, at the very least, constrained by the physical properties of their components and of the material from which they are made. 

Software is a Model

Notwithstanding all the characteristics and attributes of software they are not the essence. Fred Brooks gives us a strong hint: 

Software is pure mind stuff, that is its allure and the source of all its frustrations.

Fred Brooks

Software is in fact a model. It is an executable model of some real-world process which it either implements, interacts with, or influences. Its properties and attributes enable software to model anything. 

This is the essence of software and what makes it so useful. We may get close when we describe algorithms, data and processes but these are just components. Just as a person is holistically more than a collection of organs with specific names and capabilities, so is software more than its components and attributes. 

Software creates value by the quality and effectiveness of this model and its fidelity with the real-world processes that it implements, interacts with, or influences. There are many ways to influence the quality of the model positively and negatively.  

We will cover the implications of this in greater detail in future posts. 

Stay tuned. 

Let us make your job easier

Find out how Aptira's managed services can work for you.

Find Out Here

The post Software Interlude. Part 2 – What is Software? appeared first on Aptira.

by Adam Russell at April 10, 2019 02:35 AM

April 09, 2019

OpenStack Superuser

Inside open infrastructure: The latest from the OpenStack Foundation

Welcome to the latest edition of the OpenStack Foundation Open Infrastructure newsletter, a digest of the latest developments and activities across open infrastructure projects, events and users. Sign up to receive the newsletter and email community@openstack.org to contribute.

Spotlight on OpenStack Stein

Congrats to the 1,400 contributors from more than 150 organizations that contributed to Stein, the 19th release of OpenStack that launched this week!

Check out a full list of features and contributors and download the OpenStack Stein release here.

Among the dozens of enhancements in Stein, highlights include:

Delivering core functionality for Kubernetes users

Kubernetes is the number one container orchestration framework running on OpenStack, with 61% of OpenStack deployments indicating they integrate the two platforms, according to the 2018 OpenStack User Survey.

In Stein, OpenStack continues to deliver the core infrastructure management features delivering the bare metal and network functionality that containers need:

  • OpenStack Magnum, a Certified Kubernetes installer, improved Kubernetes cluster launch time significantly—down from 10-12 minutes per node to five minutes regardless of the number of nodes.
  • With the OpenStack cloud provider, you can now launch a fully integrated Kubernetes cluster with support from the Manila, Cinder and Keystone services to take full advantage of the OpenStack cloud it’s created on.
  • Neutron, OpenStack’s networking service, has faster bulk port creation, targeting container use cases, where ports are created in groups.
  • Ironic, the bare metal provisioning service, continues to improve deployment templates for standalone users to request allocations of bare metal nodes and submit configuration data as opposed to pre-formed configuration drives.

Networking enhancements for 5G, edge computing and NFV use cases

  • Within Neutron, Network Segment Range Management enables cloud administrators to manage segment type ranges dynamically via a new API extension, as opposed to the previous approach of editing configuration files. This feature benefits StarlingX and edge use cases, where ease of management is critical.
  • For network-heavy applications, it is crucial to have a minimum amount of network bandwidth available. Work began during the Rocky cycle to provide scheduling based on minimum bandwidth requirements, and the feature was delivered in Stein. As part of the enhancements, Neutron treats bandwidth as a resource and works with the OpenStack Nova compute service to schedule the instance to a host where the requested amount is available.
  • API improvements boost flexibility, adding support for aliases to Quality of Service (QoS) policy rules that enable callers to delete, show and update QoS rules more efficiently.
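To illustrate the minimum-bandwidth feature, the rule is created and attached with the OpenStack client roughly as follows; the names and values are examples:

```shell
# Create a QoS policy with a minimum-bandwidth rule (values illustrative)
openstack network qos policy create min-bw-policy
openstack network qos rule create --type minimum-bandwidth \
  --min-kbps 1000000 --egress min-bw-policy

# Attach the policy to a port; Nova and Placement then schedule the
# instance onto a host where that bandwidth is available
openstack port create --network net0 --qos-policy min-bw-policy nic0
```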

OpenStack Foundation news

  • Open Infrastructure Summit Denver, April 29 – May 1
      • Registration prices increase THIS WEEK: Thursday, April 11 at 11:59pm PT. Buy your tickets now!
      • Interested in learning to contribute to OpenStack upstream? Sunday, before the Summit begins, OpenStack Upstream Institute (OUI) will run as a day long training teaching you the basics of contribution- everything from the tools we use to collaborate to how our community is structured and how releases work. There’s still room left! Register for OUI here.
      • The Speed Mentoring Lunch on Monday at the Summit is looking for mentors. Pay it forward by signing up to share your experiences and give advice on careers, technical topics, or the community. Lunch will be provided thanks to our sponsor, Intel.
      • There are still a few tickets left for the Project Teams Gathering following the Summit, May 2-4. Take a look the list of teams meeting at the event and register to participate in these contributor team meetings.

OpenStack Foundation project news

Airship

  • The Airship 1.0 release is getting ready to land. Keep a look out on the skies in the coming weeks as the Airship team does a flight check on the final features.
  • Join us at the Open Infrastructure Summit for a wealth of sessions on how Airship is being used right now, how you can get started on your own journey with Airship and some exciting announcements about the growing Airship community.

StarlingX

  • Join us in Denver for the StarlingX hands-on workshop to learn how to deploy and use the project. Don’t forget to RSVP.
  • Got edge? Tell us about your StarlingX story and give feedback to the community by filling out a short survey.

Zuul

  • Meet the Zuul Community at the Open Infrastructure Summit, April 29 – May 1 in Denver, Colorado. Topics include a project update and opportunities to hear from users of Zuul across a variety of communities, technologies and industries (Airship, Finance, Kubernetes, OpenLab, SR-IOV).

Kata Containers

  • This week marks a major milestone for the Kata community! On April 8, the OSF Board of Directors voted and approved Kata Containers as the first new top-level Open Infrastructure project in the Foundation. Learn how Kata Containers got started and what it means to be an OSF confirmed project.
  • Join the Kata community in celebration at the upcoming Open Infrastructure Summit in Denver. Kata will also be featured at several other upcoming events, including Container World and both KubeCon + CloudNativeCon events in Europe and China. See a full lineup of upcoming Kata talks here. In the meantime, explore Kata on GitHub, KataContainers.io and connect with the community via Slack or IRC Freenode: #kata-dev, mailing list, weekly meetings and Twitter.

Questions / feedback / contribute

This newsletter is edited by the OpenStack Foundation staff to highlight open infrastructure communities. We want to hear from you!
If you have feedback, news or stories that you want to share, reach us through community@openstack.org. To receive the newsletter, sign up here.

The post Inside open infrastructure: The latest from the OpenStack Foundation appeared first on Superuser.

by OpenStack Foundation at April 09, 2019 08:51 PM

Inside HPC, artificial intelligence and machine learning: Must-see sessions at the Open Infrastructure Summit

Join the people building and operating open infrastructure at the inaugural Open Infrastructure Summit.  The Summit schedule features over 300 sessions organized by use cases including: artificial intelligence and machine learning, continuous integration and deployment, containers, edge computing, network functions virtualization, security and public, private and multi-cloud strategies.

In this post we’re highlighting some of the sessions you’ll want to add to your schedule about HPC, artificial intelligence and machine learning. Check out all the sessions, workshops and lightning talks focusing on these topics here.

Getting a neural network up and running with OpenLab

Access to hardware for AI/ML can be difficult for the everyday developer to obtain and maintain, even for the most rudimentary applications and testing, and the problem only grows once you need to go beyond a single local development machine. OpenLab is curated infrastructure, accessible to open-source projects and the individuals working on them, designed to address this use case. With access to GPU, FPGA, IoT and more, HPC, AI/ML, deep learning or other testing and applications can be explored quickly. In this beginner-level presentation, Huawei’s Melvin Hillsman will walk through getting an account with OpenLab, obtaining resources and getting a simple neural network up and running with an application that should bring back great childhood memories. Details here.

Facial recognition in five minutes with OpenStack

In this session, Red Hat’s Erwan Gallen and Sylvain Bauza will demonstrate how pairing a facial recognition program with graphics processing units (GPUs) can maximize its speed and accuracy, as well as how to tune frame decoding and inference time. They’ll also show application developers how to set up face detection networks quickly and choose the correct neural network design options, while operators will learn how to provide resources such as GPUs and virtual GPUs. Details here.

HPC using OpenStack

High-performance computing (HPC) proposes to design a supercomputer around the use of parallel processing to run advanced application programs efficiently, reliably and quickly. OpenStack is a set of software tools for building and managing cloud computing platforms for public and private clouds.

If you’re looking for common deployment models for HPC & OpenStack, come and interact with the community in this recurring panel. This session will be an opportunity for architects and operators pairing HPC with OpenStack to get together and discuss best practices and common deployment models, pain points, war stories and wish lists. Check out the Etherpad for pre-panel questions here.  Details on the session here.

Accessible ML: Combining open source and open data

If you think that only big tech companies or PhD scientists can use ML & AI, this session aims to show you that an individual open-source enthusiast can build and train a model on commodity hardware using open data – and then scale that up on a public cloud. If you’re a gamer and a python developer, you might already have all the tools you need!

* fast.ai, an easy-to-learn Python ML framework
* nvidia-docker on an Ubuntu Gaming PC
* Public-domain GIS imagery
* A couple terabytes of storage space and a fast internet connection

This talk grew out of the Firewise project, which Aeva van der Veen helped bootstrap last year. The project aimed to use public-domain satellite imagery to help predict and prevent forest fires. Even though the founders chose not to pursue this as a business, it’s an excellent example of how easily open source and public data can be combined to benefit society. Details here.
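The session’s point is that a small model is just code anyone can run on commodity hardware. As a purely hypothetical sketch (not the session’s fast.ai material), here is a tiny two-layer network trained on XOR with nothing but numpy and full-batch gradient descent:

```python
import numpy as np

rng = np.random.default_rng(42)

# XOR truth table: inputs and target outputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 8 units, randomly initialized.
W1 = rng.normal(size=(2, 8))
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(10000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass for a squared-error loss.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent updates.
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0)

out = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
print("predictions:", (out > 0.5).astype(int).ravel())
```

Scaling the same idea up to satellite imagery is then mostly a matter of swapping numpy for a GPU-backed framework and the XOR table for real data.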

See you at the Open Infrastructure Summit in Denver, April 29-May 1! Register here.

The post Inside HPC, artificial intelligence and machine learning: Must-see sessions at the Open Infrastructure Summit appeared first on Superuser.

by Superuser at April 09, 2019 05:01 AM

April 08, 2019

OpenStack Superuser

How open source hackathons are fast-tracking growth

Hackathons featuring OpenStack, Kubernetes and Ceph are fast-tracking open-source growth in China. Launched in 2015 with the goal of bridging the familiarity gap with open source, what started as a bug smash has since developed into a twice-yearly skill-sharing event sponsored by companies like Huawei, Tencent, Intel and others.

In a post on Medium, organizers Fred Li, principal engineer at Huawei Technologies, Ruan He, chief architect of TStack Cloud and OpenStack Board Director at Tencent and Jianfeng “JF” Ding, software engineering manager at Intel sum up the achievements and what’s next for these popular events.

“Today, the open source world is evolving,” they say. “OpenStack is expanding to support open infrastructure and emerging use cases, like artificial intelligence and machine learning, are driving the need for us to collaborate across many different projects and communities. Similarly, a focus on customer needs requires a focus on the full stack for use cases like machine learning.”

And it’s not just about teaching newbies how to submit patches or get up to speed on upstream, either. Take the example of edge computing.

“Some people are knowledgeable about devices and interfaces to extract data from the cloud, but lack expertise about cloud or edge cloud. Likewise, other people understand cloud or edge cloud, but lack device knowledge,” organizers write. “If we invite them to sit together, to learn from each other as a trial, or pilot, maybe we can help bridge this knowledge and solve some of the issues through an exchange of ideas. ‘I know device. You know cloud. But we don’t know each other so much. How about we sit together and share what we know.’ We can help create spaces and opportunities to connect different communities — the OpenStack, Kata Containers, StarlingX, Kubernetes and Ceph communities, among others — and for folks from these different communities, and different companies, to get to know each other, learn from each other and collaborate on open-source projects.”

Read the full story over on Medium.

The post How open source hackathons are fast-tracking growth appeared first on Superuser.

by Superuser at April 08, 2019 02:14 PM

CERN Tech Blog

Helm Plugin for Secret Management

Helm is a popular package manager for Kubernetes hosted by the CNCF at incubation-level. Applications managed by Helm are defined as Charts, allowing easy reuse, versioning and sharing. The charts are published in repositories such as stable with more than 250 charts. Others include the incubator or cern where we keep charts for internal components or other dependencies. The helm hub tries to keep track of what is available.

by CERN (techblog-contact@cern.ch) at April 08, 2019 07:00 AM

Sean McGinnis

April 2019 OpenStack Board Notes

These are just some of my notes from the OpenStack Foundation Board of Directors meeting that took place on April 8, 2019.

Upcoming and past OSF board meeting information is published on the wiki and the meetings are open to everyone. Occasionally there is a need to have a private, board member only portion of the call to go over any legal affairs that can’t be discussed publicly, but that should be a rare occasion.

This meeting was added to discuss some of the incubating project confirmations ahead of our next face-to-face “joint leadership” meeting with the Technical Committee and User Committee on April 28, the Sunday prior to the next Open Infrastructure Summit in Denver, CO.

April 8, 2019 OpenStack Foundation Board Meeting

The original agenda can be found here and Jonathan Bryce sent out some unofficial minutes to the Foundation mailing list. The April 8th notes can be found here.

Project Confirmation

After the typical administrative activities of roll call and approving prior meeting minutes, our first order of business for the day was to review proposals from the two pilot projects that are ready for confirmation to become full OpenStack Foundation top-level projects: Zuul and Kata Containers.

This was initially supposed to be a walk through of the presentations with part of the goal of seeing what other information we might need to be able to fully evaluate them before making anything official. At least in the case of Kata Containers, we felt there really wasn’t anything more needed. So after some quick discussion of whether to defer until the meeting in Denver, we decided to hold the vote today. More details below.

These presentations by each team were driven by answering the factors called out in the OSF Project Confirmation Guidelines. This was our first time going through a real-world case of answering them, and I think the guidelines held up well and ended up being a good way to cover most areas of concern and guide the discussions.

Kata Containers

Eric Ernst from Intel presented for Kata Containers.

Mission Statement

Openly collaborate across a diverse, global community to define and implement secure, versatile container solutions that combine the benefits of virtualization with the performance and ease of containers.

Basically, Kata is a container runtime that leverages lightweight virtual machines to provide the isolation and security of VMs with the speed and flexibility of containers.

Governance

The Kata Containers community is made up of Contributors and Maintainers, with an Architecture Committee responsible for overall architectural decisions and making final decisions if Maintainers disagree. This seems roughly equivalent to OpenStack’s Contributors and Core Reviewers, with the Architecture Committee equivalent to what some have argued the OpenStack Technical Committee should really be.

It was really great to hear that they have policies in place ensuring no more than two members of the AC can be from the same company, and that they have already been holding elections every six months using the same Condorcet-method CIVS voting as the OpenStack community.

Technical Best Practices

Kata has documentation available and it is considered an ongoing and active focus.

Code contributed to the project goes through peer code review before being accepted.

Test and CI/CD is enforced on code changes to help ensure quality.

Bug/issue tracking is in place, and a security vulnerability management team (VMT) handles security issues.

Open Collaboration

I was really glad to see the Kata Community is following the “4 Opens”:

  • Open Source
  • Open Design
  • Open Development
  • Open Community

I think this is really the cornerstone of what has united the OpenStack Community, so my concern with expanding the umbrella of the OpenStack Foundation was making sure the new projects coming on also believed in these principles. It’s really great to see the Kata Community believes in them too and considers all four as important elements to an open source project and community.

Active Engagement

Big kudos to the team for the effort they’ve put into getting out and driving active engagement from contributors and users. They’ve had a presence at the past OpenStack Summits and will be at the upcoming Open Infrastructure Summit. They will also be holding sessions at the Project Teams Gathering following the Summit in Denver.

They’ve also made an effort to get out to other communities. They’ve been attending OpenStack and Open Infra Days events, but have also been active at Kubecon, KVM-Forum, Open Source Summit, and DevSecCon. All great areas for this type of project.

Voting

All questions were answered openly and clearly, and there really didn’t seem to be much reason to wait to do a vote. Concern was raised by some board members that we had said we would be doing the vote at the end of the month, so we should stick to what we had said, but after a quick poll of the board members it was decided to move ahead with the approval vote.

With one abstaining (due to the change in our stated voting plan) we did approve the Kata Containers confirmation. Kata is now an official full top level project of the OpenStack Foundation.

Zuul

The Zuul CI project grew out of (and in many ways helped shape) the OpenStack project and community, so this seemed like it would be even less of a discussion.

Presentation

Monty Taylor presented the slides that the Zuul team has made available. I believe the plan is to keep them available, so I won’t go over each section here. I think it’s enough to say that all the areas the board is concerned about for confirming a project have been followed and shaped by the Zuul team since well before we even started talking about other top-level projects.

Voting

Voting did hit a snag for this project though. One piece of Zuul, the zuul-preview component does get compiled with some GPLv3 libraries. The OpenStack Governance Bylaws do specifically call out the Apache 2.0 license.

I think it may be a matter of properly phrasing things, but since no one had a clear answer as to whether the use of GPLv3 is acceptable in this situation, we plan on working through the legal aspects and regrouping for further discussion at the face-to-face meeting in Denver.

Bylaws Change

A potential change to the wording of another bit of the bylaws was brought up, with the plan to hold the actual vote on the change in Denver.

4.16 Open Meetings and Records. Except as necessary to protect attorney-client privilege, sensitive personnel information, discuss the candidacy of potential Gold Member and Platinum Members, and discuss the review and approval of Open Infrastructure Projects, the Board of Directors shall: (i) permit observation of its meetings by Members via remote teleconference or other electronic means, and (ii) publish the Board of Directors minutes and make available to any Member on request other information and records of the Foundation as required by Delaware Corporate Law.

This was in reaction to past meetings where it was brought up that some board members may have some reasons where they can’t speak publicly about a project or backing company being considered and would need a private “closed door” session to be able to raise their concerns.

Setting aside for now that I’m against doing any of the new project evaluations behind closed doors, I did raise on the mailing list before the meeting that I would really rather this be reworded to make clear that these meetings would happen only on an exception basis, and that as much as possible all discussions about new projects should be done in the open for all to see. The way this is worded now reads, to me at least, like these meetings are expected to be done in private. I know that is not the intent of everyone involved right now, but I would rather have that intent made clear in the way this is added to the bylaws, versus just leaving it open for interpretation.

I’m sure there will be plenty more discussion around the wording, and even the need for this change, before we meet in Denver.

by Sean McGinnis at April 08, 2019 12:00 AM

April 05, 2019

Chris Dent

etcd-compute refresh

A while back I wrote about etcd-compute. It's a collection of Python that combines Placement, etcd, and libvirt to provide a very simple system for starting VMs on a collection of hosts. It started out as a non-working learning and exploration tool, but it keeps getting closer to something useful.

This evening I had some time to refresh the code a bit and do a few useful cleanups so now seemed like a good time to give an update on how things are going. The recent changes are:

  • Placement and etcd run in two docker containers. The dockerenv file used for placement was out of date with the much more consistent configuration system that is now available.

  • Prior to today, when a VM management process (now called ecompute) was shut down its inventory in placement was orphaned. Now, ecompute can declare and reuse a UUID so that between different runs it is using the same inventory and when it is shut down with a SIGINT or Ctrl-C, it will lock its inventory so that nothing will try to schedule to it while it is down. When it comes back up later, it will unlock itself.

  • When a VM is destroyed, its disk is destroyed too.

  • Console-script entry-points are now used for ecompute and eschedule. This will eventually allow for tidier packaging.

Some areas continue to need help. Much of this is in areas that require expertise I’ve yet to have the opportunity to acquire. Help wanted. Patches accepted. Etc.

  • Many of the commands spawn sub-processes to virt-install and related command line tools. That's icky.

  • Image management is less than ideal. Handling is naive, cumbersome, and slow (unless you turn off image resizing in config).

  • Network handling is also rather naive. Some instructions on how to make use of a bridge are available, but incomplete.

  • A metadata server is present, and can provide an ssh key and other data to the instance, but it is poorly managed.

But, despite all that, it works pretty well. I can spin up several VMs across several hosts, quickly and easily. The README describes how to get started. If you look at it, it will be obvious that it has grown organically and could do with a reset now that it can actually do things. Suggestions, as always, welcomed.

by Chris Dent at April 05, 2019 09:00 PM

OpenStack Superuser

What’s new in latest release of the OpenStack Cloud Provider for Kubernetes

The OpenStack Special Interest Group (SIG-OpenStack) is excited to announce the v1.14 release of Cloud-Provider-OpenStack (CPO). CPO provides an interface between a Kubernetes cluster and a host OpenStack cloud, allowing for advanced management and introspection of OpenStack resources. This release is matched to the recent Kubernetes v1.14 release. The K8s-SIG-OpenStack team has been hard at work on adding new features, including extended support for Kubernetes secrets with Barbican integration, ingress controllers with Octavia integration, CSI-conformant volumes through Cinder, and Keystone based authentication. The release is available for immediate download as source code, compiled binaries, and Docker images.

If you’d like to learn more or get involved with OpenStack and Kubernetes integrations:

For a deeper dive on OpenStack and container integrations, check out the white paper  “Leveraging OpenStack and Containers: A Comprehensive Review,” written by the SIG-K8s community.

1.14 release notes

In-tree support for the OpenStack cloud provider is deprecated and scheduled for removal by the end of the year. If you depend on the in-tree provider that ships with Kubernetes core, now’s the time to start on your migration strategy to the external cloud-provider-openstack.

As part of that deprecation, the in-tree volume provider code is being removed in favor of proxying to the out-of-tree Cinder CSI provider. This will not impact the in-tree Cinder APIs, but will require the Cinder CSI provider to be present on deployment nodes.
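For deployments preparing for that migration, consuming the out-of-tree driver from Kubernetes is mostly a matter of pointing a StorageClass at it. A minimal, hypothetical sketch follows; the availability parameter name is an assumption that may vary by release:

```yaml
# Illustrative StorageClass backed by the external Cinder CSI driver.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cinder-csi
provisioner: cinder.csi.openstack.org  # name registered by the Cinder CSI plugin
parameters:
  availability: nova  # Cinder availability zone (assumption)
reclaimPolicy: Delete
```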

The v1.14 release of OpenStack cloud provider for Kubernetes includes the following features and bug fixes:

  • Keystone Authentication
    • Improved argument handling for keystone client auth.
    • Added support for clouds.yaml.
    • Added multi-cloud selection support in clouds.yaml.
    • Added support for client certificates in keystone auth.
    • Fixed keystone client auth error.
    • Fixed mountpoint for cloud.conf.
    • Improved failure logging.
  • Cinder Volume/CSI Storage
    • Reduced size of Cinder CSI images.
    • Improved CSI status reporting.
    • Added Certificate of Authority (CA) support in CSI.
    • Updated Cinder driver to the CSI 1.0.0 spec.
    • Added snapshot support for CSI.
    • Added volume stage and unstage capabilities.
    • Added support for topology-aware dynamic volume provisioning.
    • Fixed volume snapshot and restore.
  • Neutron networking
    • Added internal-network-name option.
    • Improved ingress naming to improve resource management and cleanup.
    • Fixed floating ip descriptions.
  • Load balancer
    • The name of the load balancer created by a Service has been changed; it is now more meaningful, including the cluster name, Service namespace and Service name. Existing Services are not affected.
    • Introduced a new Service annotation ‘loadbalancer.openstack.org/x-forwarded-for’; if set to “true”, the backend HTTP service can get the real source IP address of the request from the HTTP headers (X-Forwarded-For).
    • Introduced a new Service annotation ‘loadbalancer.openstack.org/port-id’ for Service of LoadBalancer type to specify a particular Neutron port as the Octavia load balancer VIP, which is useful for automation.
  • Octavia ingress controller
    • Standardized use of ‘kebab case’ for the setting configuration options of Octavia ingress controller.
    • Improved floating IP and security group management in octavia-ingress-controller.
    • Added support for creating internal or external ingress by setting the annotation ‘octavia.ingress.kubernetes.io/internal’; if true, the load balancer created in Octavia won’t have a floating IP associated. The default value is true.
  • Secret management
    • Added support for Kubernetes secrets.
    • Added support for cloud configuration as Kubernetes secret.
    • Simplified cloud configuration secret generation.
    • Additional support for Barbican secrets.
  • Manila file storage
    • Added support for Manila RBAC permissions for endpoints.
    • Improved Manila options validation.
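As an illustration, the two new load-balancer annotations described above are set directly on the Service manifest. A hypothetical sketch, with a placeholder UUID standing in for a pre-created Neutron port:

```yaml
# Illustrative Service using the new cloud-provider-openstack annotations.
apiVersion: v1
kind: Service
metadata:
  name: web
  annotations:
    loadbalancer.openstack.org/x-forwarded-for: "true"
    loadbalancer.openstack.org/port-id: "203d1dd9-0000-0000-0000-000000000000"  # placeholder Neutron port UUID
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
```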

The post What’s new in latest release of the OpenStack Cloud Provider for Kubernetes appeared first on Superuser.

by Chris Hoge and Aditi Sharma at April 05, 2019 02:06 PM

Chris Dent

Placement Update 19-13

Placement update 19-13 is brought to you by the letters P and U.

Most Important

The Virtual Pre-PTG starts next week. Watch out for emails to start different threads throughout the week. Also next week there will be a Nova pre-PTG spec review. Plenty of the pending work touches on placement.

In the meantime here are a couple of etherpads for the PTG:

What's Changed

  • There were some lingering docs and log fixes so we released an RC 3 of Placement. That candidate will become 1.0.0 on April 10th.

  • Microversion 1.32, for forbidden aggregates merged.

  • We've decided it would be nice to a) release os-traits and os-resource-classes in an independent fashion and, b) make them available to both placement and nova as tox-siblings. We'll do (a) once release time settles. For (b) the strategy is still a bit up in the air. There's some email discussion.

Specs/Features

Bugs

Once Stein has settled, we'll figure out a good time to have a bug cleanup and consolidation.

osc-placement

osc-placement is currently behind by 14 microversions. +1 since last week.

Pending changes:

Main Themes

Be thinking about what you'd like the main themes to be. Put them on the PTG etherpad.

Other Placement

  • https://review.openstack.org/#/c/645255/ This is a start at unit tests for the PlacementFixture. It is proving a bit "fun" to get right, as there are many layers involved. Making sure seemingly unrelated changes in placement don't break the nova gate is important. Besides these unit tests, there's discussion on the PTG etherpad of running the nova functional tests, or a subset thereof, in placement's check run.

    On the one hand this is a pain and messy, but on the other consider what we're enabling: Functional tests that use the real functionality of an external service (real data, real web requests), not stubs or fakes.

  • https://review.openstack.org/641404 Use code role in api-ref titles

  • https://review.openstack.org/649618 Removing some unused code.

Other Service Users

New discoveries are added to the end. Merged stuff is removed.

Since last week 2 removals (by merge), 7 new discoveries.

End

Latency is the mind-killer.

by Chris Dent at April 05, 2019 12:27 PM

April 04, 2019

Mirantis

What you need to know about compliance audits

A compliance audit is quite literally an audit to see how closely you're following the rules and regulations to which your company is subject, but it's also more than that.

by Jason James at April 04, 2019 10:25 PM

OpenStack Superuser

Inside edge computing: Must-see sessions at the Open Infrastructure Summit

Join the people building and operating open infrastructure at the inaugural Open Infrastructure Summit.  The Summit schedule features over 300 sessions organized by use cases including: artificial intelligence and machine learning, continuous integration and deployment, containers, edge computing, network functions virtualization, security and public, private and multi-cloud strategies.

In this post we’re highlighting some of the sessions you’ll want to add to your schedule about edge computing.  Check out all the sessions, workshops and lightning talks focusing on this topic here.

Far edge with virtual machines and containers

With both OpenStack and Kubernetes-based infrastructures like OpenShift, there are separate platforms for VMs and containers. There are a couple of ways to build a unified platform: add container support to OpenStack, or add VM support to Kubernetes.

In this lightning talk, Red Hat’s Eric Lajoie, Timo Jokiaho and Bertrand Rault will share the use case for common open infrastructure with both VMs and containers plus how KubeVirt allows VMs to run in an existing k8s environment. Details here.
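To make the KubeVirt side of that comparison concrete, a VM is declared to Kubernetes like any other resource. A minimal, hypothetical sketch; the API version and demo image are assumptions that vary by KubeVirt release:

```yaml
# Illustrative KubeVirt VirtualMachine definition.
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  name: demo-vm
spec:
  running: true
  template:
    spec:
      domain:
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
        resources:
          requests:
            memory: 128Mi
      volumes:
        - name: rootdisk
          containerDisk:
            image: kubevirt/cirros-container-disk-demo  # demo image (assumption)
```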

Precision Time Protocol (PTP) on StarlingX

StarlingX is a complete cloud infrastructure software stack for the edge used by the most demanding applications in industrial internet of things, telecom, video delivery and other ultra-low latency use cases. To meet the requirements of these use cases, StarlingX has developed an integrated solution for supporting the IEEE 1588 Precision Time Protocol (PTP) as the primary clock source to provide sub-microsecond accuracy to the application. This intermediate session with Alexander Kozyrev and Matt Peters, both of Wind River Systems, offers a demo of the PTP deployment architecture and demonstrates the clocking accuracy within StarlingX. Details here.

Kata Containers on edge cloud

Kata Containers wraps container workloads in extremely lightweight VMs, combining the speed and flexibility of containers with the isolation and security of VMs, which makes it an ideal candidate for multi-tenant deployments. In this session, Intel’s Yuntong Jin, Yu Bai and Zhiming Hu will show how to use Kata Containers to implement FaaS; the service will be deployed at the edge using a real-world case from the Baidu edge team. Details here.

Edge Computing Working Group update

The Edge Computing Working Group, formed a little over two years ago, continues to define use cases, identify project synergies and architect a minimum viable product based on the work previously done to define use cases. The panel will discuss what the group has accomplished, how it has engaged with the larger OpenStack, open infrastructure and open source communities over the past year and what it will be working on in the coming year. In addition to use cases, the session will touch on Keystone federation development to support edge use cases and Glance support for federated models. Details here.

Container networking at the edge with Kuryr

Running containerized micro-services at the network edge is quickly gaining popularity, but how can users provide networking to these containers?
Kuryr can provide the bridge between your Container Orchestration Engine (COE) and Neutron’s network abstraction layer, say Dell EMC’s David Paterson, Mark Beierl and Daniel Mellado. OpenStack already gives users the ability to deploy OpenStack compute nodes to the edge via Nova’s existing availability zones functionality. Kuryr allows users to leverage the existing Neutron network layer between the Central Data Center (CDC) control plane and the remote compute node. Details here.

See you at the Open Infrastructure Summit in Denver, April 29-May 1! Register here.

Photo // CC BY NC

The post Inside edge computing: Must-see sessions at the Open Infrastructure Summit appeared first on Superuser.

by Superuser at April 04, 2019 02:01 PM

April 03, 2019

Chris Dent

Open Infra Days UK Thoughts

I attended Open Infra Days UK in London. When asked to extract a theme from the two days my response boiled down to "people want to run Kubernetes clusters and want the bits underneath to be low fuss." There were many more ideas passed around, but that one was there, either staring me in the face, or lurking behind the superficies.

This might tell us something about where the OpenStack community should be directing its development energy but, as usual, the true picture is more complex than that. For one thing, you can put a Kubernetes cluster on a collection of elastic and easily available VMs. OpenStack has been rather devoted to exactly that for years. The OpenStack provider for the Kubernetes cluster API is active and healthy. A presentation on Monday from Spyros Trigazis of CERN, explained how Magnum is being used to bootstrap many Kubernetes clusters on their very large OpenStack cloud.

But — at least for me — this was not a very telco-heavy or people-selling-to-telcos-heavy gathering. At those gatherings, milking every last hertz, bit and millisecond of CPU and network performance, achieved by describing the available hardware in excruciating detail, is the apotheosis of a vision of OpenStack as infrastructure manager.

This division between a) adequate (but not best) for many purposes (including Kubernetes) while simple to manage and consume, and b) capable of being the best for specific use cases while requiring complexity and knowledge to manage and consume has been commonplace in the history of computing. I can run several games well enough on an off-the-shelf PC, but if I have specific demands for resolution and frame rate I need to make well-informed choices about the details.

Throughout its history, OpenStack has been trying to balance the sometimes conflicting goals of being a general purpose cloud, simple to manage and use; and being a, well, I’m not even sure what to call it: a manager for software-defined data centers that vary in size from a little box at a cell tower to geographically dispersed racks around the world.

There are plenty of people who work on OpenStack who want it to be (or it might be accurate to say "keep it") the open source cloud solution, but the economics of the situation don't always support that. Many OpenStack developers are employees of companies that somehow make money from OpenStack; by selling a packaged version or supporting or consulting on some piece of OpenStack that is being used in a private deployment. Yes, there's an increasingly visible and valuable section of contribution coming from individuals who are involved with running public OpenStack clouds, but over the history of the project these people have not been the majority.

This means that development is often driven by features that either give or merely give the impression of giving value to private and not-so-general purpose use cases. That's because those people have money to spend on use cases that can't be satisfied elsewhere. The big three cloud providers are so cheap for general purpose computing that it is difficult for OpenStack to compete. If you don't truly care about open source, OpenStack has to present either some very low barriers to use, or unique value propositions.

There's more money in the latter, so we've spent more energy on that than on lowering the barriers to use. It was inevitable, we shouldn't feel too bad about it. But we may now have an opportunity to change things: The basic orchestration functionality provided by Kubernetes is actually good. There will be demand for places to put clusters. Simple VMs or simple bare metal are a good choice. Having a diversity of options is good, OpenStack is one good option and it could be better. The scale of the demand could overcome the aforementioned economic limitations.

That suggests that we could do some things to help this along. Some of this is already in progress:

  • Highlight and give greater attention to the several ways it's possible to build a Kubernetes cluster on OpenStack: Magnum, cluster API on VMs, and cluster API on bare metal.

  • In some situations, invert the traditional relationship between Nova and Ironic. Make it easy to host VMs, or not. Whatever works. First comes Ironic, then if you need VMs, comes Nova.

  • Explore simpler "get me a VM" solutions that operate at the same level as Nova. The end user gets a choice. If they want a fancy, hardware-aware, compute service, they use Nova. If they don't, they can use something else.

I started exploring the "something else" with etcd-compute. It started out as a joke but then it worked and was something other than funny.

by Chris Dent at April 03, 2019 06:06 PM

OpenStack Superuser

How the StarlingX project opens new doors for telecoms

As the telecom industry pushes beyond traditional functions, it will look to the edge to provide new business opportunities.

That’s where StarlingX, a fully featured and high performance edge cloud software stack, comes in says Glenn Sieler, vice president of open source strategy at Wind River, in a post on RCR Wireless News. (Wind River is contributing technology from Wind River Titanium Cloud, a virtualization software platform, to StarlingX.)

“Some examples of telecom-oriented edge-hosted applications and functions that are generating wide interest are multi-access edge computing or MEC (video caching, AR/VR, retail and more), uCPE, vCPE, and vRAN. For example, by bringing content and applications to mini-data centers in the radio access network (RAN), MEC allows service providers to introduce new types of services that are unachievable with cloud-hosted architectures because of latency or bandwidth constraints. The service provider opportunity is to enable new services directly at the point of the consumer at location-based, high-density venues such as stadium events and shopping malls,” Sieler writes.

Virtualization is another cost-cutting buzzword that comes into play with StarlingX, in this case with the radio access network (RAN). By leveraging standard server hardware that cost-effectively scales processor, memory and I/O resources based on changes in demand, vRAN infuses RAN with the capacity for application intelligence, significantly improving service quality and reliability, Sieler notes. With vRAN, service providers can achieve a combination of cost savings, dynamic capacity scaling, better quality of experience and rapid instantiation of new services.

Get involved

There are about 15 sessions centering on StarlingX at the upcoming Open Infrastructure Summit Denver, ranging from a hands-on workshop to “StarlingX: Hardened Managed Kubernetes Platform for the Edge.”  Check them all out here.

Check out the code on Git repositories: https://git.openstack.org/cgit/?q=stx
Keep up with what’s happening with the mailing lists: lists.starlingx.io
There are also weekly calls you can join: wiki.openstack.org/wiki/StarlingX#Meetings
Or for questions hop on Freenode IRC: #starlingx
You can also read up on project documentation: https://wiki.openstack.org/wiki/StarlingX

Via RCR Wireless News

Photo // CC BY NC

The post How the StarlingX project opens new doors for telecoms appeared first on Superuser.

by Superuser at April 03, 2019 01:55 PM

April 02, 2019

OpenStack Superuser

Superuser Awards: Make your opinion count!

It’s a close call as the deadline approaches for community voting on this edition of the Superuser Awards.

When evaluating the nominees for the Superuser Award, take into account the unique nature of use case(s), as well as integrations and applications of OpenStack by each particular team.

If you haven’t weighed in yet, please take a moment to rate the four nominees. You have until Tuesday, April 2 at 11:59 p.m. Pacific Standard Time.

Check out highlights from the nominees and click on the links for the full applications:

  • EnterCloudSuite, a public OpenStack platform launched in 2013, has been working with OpenStack since the Cactus release. They’ve worked to position OpenStack as the conquering David in the Goliath of Europe’s public cloud arena, winning the confidence of the EU government to provide cloud services and organizing OpenStack Italy days and ops meetups.
  • The National Supercomputer Center in Guangzhou (NSCC-GZ) Sun Yat-Sen University first deployed OpenStack in June, 2013 with 256 nodes. They are currently running 1,000 nodes on the Tianhe-2 cloud platform along with five control nodes and three MariaDB nodes, one StackWatch and 30,000 virtual machines.
  • VEXXHOST has been contributing to the OpenStack community since 2011. The company’s offering is fully open source without any proprietary licensed technology. Among many other technologies, they currently use Nova with KVM with Libvirt, Ceph centralized storage, Pacemaker for high availability, MySQL Galera for database and Puppet for config management.
  • Whitestack is a startup focused on promoting the adoption of cloud computing in emerging markets, aiming to bring benefits of hyper-scalability to places where it’s still uncommon. The team regularly contributes to Superuser and other blogs and speaks at important industry events, like the Open Infrastructure Summit and ETSI OSM events. In addition, they contribute code to the OpenStack and Open Source Mano projects.

Each community member can rate the nominees once by April 2 at 11:59 p.m. Pacific Standard Time.

The Open Infrastructure Summit Denver Superuser Awards are sponsored by Zenko.

Previous winners include AT&T, CERN, China Mobile, Comcast, NTT Group and the Tencent TStack Team.

Cover Photo by The Magic Tuba Pixie // CC BY NC

The post Superuser Awards: Make your opinion count! appeared first on Superuser.

by Superuser at April 02, 2019 12:30 PM

Trinh Nguyen

Searchlight for Train


As we are reaching the final weeks of the Stein cycle, I would like to discuss a little bit about what we've done in Stein and planning for the Train cycle.

Stein cycle highlights, as listed in [1]:
  • Searchlight now works with Elasticsearch 5.x
  • We have released a new vision to make Searchlight a multi-cloud application [2]. Moreover, we compared our vision with the overall OpenStack technical vision [3][4]
  • Functional test setup has been improved
  • Searchlight now can work and be tested with Python 3.7
And for the Train cycle, we would like to accomplish these main goals to fulfill the vision:
  • Make searchlight work with multiple cloud platforms including multiple OpenStack clouds [6], Azure [9], Google Cloud [10], AWS [11]. 
  • Add support for other OpenStack resources: Tacker [7], Octavia [8]
  • Deprecate support for Elasticsearch 2.x [5]
There is a lot of work to be done, so I will continue putting effort into encouraging new contributors to Searchlight and exploring the value it could bring to the world.

Let's rock it!!!

Yours,

References:

[1] https://releases.openstack.org/stein/highlights.html#searchlight-search-service
[2] https://docs.openstack.org/searchlight/latest/contributor/searchlight-vision.html
[3] https://docs.openstack.org/searchlight/latest/contributor/vision-reflection.html
[4] https://governance.openstack.org/tc/reference/technical-vision.html
[5] https://storyboard.openstack.org/#!/story/2004904
[6] https://storyboard.openstack.org/#!/story/2004840
[7] https://storyboard.openstack.org/#!/story/2004968
[8] https://storyboard.openstack.org/#!/story/2004383
[9] https://storyboard.openstack.org/#!/story/2004718
[10] https://storyboard.openstack.org/#!/story/2004996
[11] https://storyboard.openstack.org/#!/story/2004719

by Trinh Nguyen (noreply@blogger.com) at April 02, 2019 03:38 AM

April 01, 2019

OpenStack Superuser

Putting down roots in education: Open Infra Institute Day Pune

PUNE, India — Digital disruption has left few industries untouched: service providers, companies and telecom operators are all transforming their operations into software-centric, virtualized resources. All are looking to have systems that can be seamlessly deployed, centrally managed and with minimum capital and operational investment.
OpenStack is proving to be a valid solution and an ideal community-driven project to address the digital business needs of service providers and companies alike.

Recently, Prakash Ramchandran (Dell) along with India’s leading contributor Digambar Patil (Calsoft Inc.) and the Open Tech Foundation organized the Open Infra Institute Day at the D.Y.Patil College of Engineering, Akurdi (Pune, Maharashtra). The idea was to introduce the world’s biggest community-driven software project – OpenStack – to students and highlight how it can influence data centers of any industry of any scale.

 

A keynote by J.A. Gokhale (Intellysys) on the basics of cloud computing and OpenStack kickstarted the day. Ruturaj Kadikar (SRM Institute of Science and Technology, Chennai) conducted a session on the core modules of OpenStack and a walkthrough of service orchestration on OpenStack-enabled infrastructure.

Other sessions featured:

  • A demo of OpenStack core components by Red Hat’s Punit Kundal and Nilesh Chandekar
  • An overview of Red Hat OpenStack Platform 13.0 by Ganesh Hiregoudar (Dell)
  • An introduction to containers, Ansible and Kubernetes by Omprakash (Red Hat)
  • Use cases and real deployment stories of OpenStack by Jaison Raju and Ravi Trivedi (both Red Hat)
  • An overview of OpenStack projects like Cyborg, Ironic, StarlingX, Kata Containers, Magnum and Zuul.

A panel discussion featuring women in OpenStack concluded the day. Participants shared their experience working on several OpenStack-based projects and contributing to several flavors of OpenStack.

At the same event, the OpenStack User Group Pune announced new core members Ganesh Kadam (Red Hat) and Deepali Gothwal (D.Y.Patil College of Engineering).

About the author

Sagar Nangare is a technology blogger, focusing on data center technologies (networking, telecom, cloud, storage) and emerging domains (edge computing, IoT, machine learning, AI). He works at Calsoft Inc. as a digital strategist.

 

The post Putting down roots in education: Open Infra Institute Day Pune appeared first on Superuser.

by Sagar Nangare at April 01, 2019 01:55 PM

March 29, 2019

Chris Dent

Placement Update 19-12

Placement update 19-12. Nearing 1/4 of the way through the year.

I won't be around on Monday, if someone else can chair the meeting that would be great. Or feel free to cancel it.

Most Important

An RC2 was cut earlier this week, expecting it to be the last, but there are a couple of patches which could be put in an RC3 if we were inclined that way. Discuss.

We merged a first suite of contribution guidelines. These are worth reading as they explain how to manage bugs, start new features, and be a good reviewer. Because of the introduction of StoryBoard, processes are different from what you may have been used to in Nova.

Because of limited time and space and conflicting responsibilities the Placement team will be doing a Virtual Pre-PTG.

What's Changed

  • The contribution guidelines linked above describe how to manage specs, which will now be in-tree. If you have a spec to propose or re-propose (from stein in nova), it now goes in doc/source/specs/train/approved/.

  • Some image type traits have merged (to be used in a nova-side request pre-filter), but the change has exposed an issue we'll need to resolve: os-traits and os-resource-classes are under the cycle-with-intermediary style release, which means that at this time in the cycle it is difficult to make a release, and that can delay work. We could switch to independent. This would make sense for libraries that are basically lists of strings. It's hard to break that. We could also investigate making os-traits and os-resource-classes required-projects in job templates in zuul. This would allow them to be "tox siblings". Or we could wait. Please express an opinion if you have one.

  • In discussion in #openstack-nova about the patch to delete placement from nova, it was decided that rather than merge that after the final RC, we would wait until the PTG. There is discussion on the patch which attempts to explain the reasons why.

Specs/Blueprint/Features

Bugs

We should do a bug squash day at some point. Should we wait until after the PTG or no?

Note that the contribution guidelines have some information on how to evaluate new stories and what tags to add.

osc-placement

osc-placement is currently behind by 13 microversions.

Pending changes:

Main Themes

Be thinking about what you'd like the main themes to be. Put them on the PTG etherpad.

Other Placement

  • https://review.openstack.org/#/q/topic:2005297-negative-aggregate-membership Negative member of aggregate filtering resource providers and allocation candidates. This is nearly ready.

  • https://review.openstack.org/#/c/645255/ This is a start at unit tests for the PlacementFixture. It is proving a bit "fun" to get right, as there are many layers involved. Making sure seemingly unrelated changes in placement don't break the nova gate is important. Besides these unit tests, there's discussion on the PTG etherpad of running the nova functional tests, or a subset thereof, in placement's check run.

    On the one hand this is a pain and messy, but on the other consider what we're enabling: Functional tests that use the real functionality of an external service (real data, real web requests), not stubs or fakes.

  • https://review.openstack.org/641404 Use code role in api-ref titles

Other Service Users

There's a lot here, but it is certain this is not all of it. If I missed something you care about, follow up mentioning it.

by Chris Dent at March 29, 2019 02:06 PM

OpenStack Superuser

Inside open infrastructure: The latest from the OpenStack Foundation

Welcome to the latest edition of the OpenStack Foundation Open Infrastructure newsletter, a digest of the latest developments and activities across open infrastructure projects, events and users. Sign up to receive the newsletter and email community@openstack.org to contribute.

OSF Project Releases

  • Kata Containers
    The Kata Containers community recently announced its 1.6.0 release. Features include OpenTracing support; changes to enable virtio-fs so the agent can mount virtio-fs shared directories; NVDIMM support on arm64; and honoring CPU cgroups in the sandbox, including user-defined paths and the limit of hypervisor vCPU threads. The project also updated to the Linux kernel 4.19.x as its preferred kernel version.

  • Zuul
    With the recent 3.7 release, Zuul executors can now manage multiple Ansible versions, allowing jobs to choose which one to use; Ansible 2.6 and 2.7 are now supported. Priority review focus is now shifting to zuul-runner, an effort to simplify local reproduction of builds from job configuration.

Upcoming releases

  • OpenStack
    We’ll be toasting the OpenStack Stein release in just two weeks! Release candidates were just produced for all deliverables. Please help test them and report release-critical issues or regressions that might have slipped through automated testing. Meanwhile, the election of project team leads for the upcoming Train cycle just concluded. About 38 percent of teams changed their project leads, including Swift, where Tim Burke takes the helm replacing John Dickinson, who guided the project since the Diablo cycle.
  • Airship
    • The Airship team continues to work towards its 1.0 release with a focus on solid documentation, including the Treasure Map project which gives users a tested starting point for production deployments.
    • At the weekly design meetings, the Airship team has been focusing on building out a new workflow for managing bare metal. At the Open Infrastructure Summit in Denver, attendees can learn more about how Airship will employ OpenStack Ironic within Kubernetes to manage bare metal in the session “Bare Metal Provisioning In Airship: Ironic It’s Not Just For OpenStack Anymore.”

StarlingX

  • The next StarlingX release is scheduled for the week of May 20 featuring components from the latest OpenStack release, Stein.
  • The community has been working with several OpenStack projects to add functionality including the new network segment range management feature in Neutron to support edge use cases.

OpenStack Foundation news

Open Infrastructure Summit updates

The agendas for the Denver Summit, Forum and PTG are now live. Register before April 11 to save $300 USD.

Sponsorship opportunities for the Shanghai Summit are currently available. Information on registration and the call for papers will be available in the upcoming weeks.

An upcoming Board of Directors meeting on April 8 will focus on reviewing presentations from OSF pilot projects applying for confirmation.  Audio from the meeting will be made available to the community.

Open infrastructure community events

  • Find out why software-defined networking is so important in telco cloud deployments at an event hosted by Open Infrastructure CDMX User Group.
  • Explore open infrastructure and open source solutions with Intel, Huawei, 99cloud and more at the OpenInfra event hosted by the China Open Infrastructure Meetup group.
  • Learn about open hardware, containers and automation at OpenInfra Day UK, April 1-2.
  • Find the OpenStack Foundation at SUSE Con April 1-5 and Foundation members at Open Networking Summit North America on April 3-5.

Questions / feedback / contribute

This newsletter is edited by the OpenStack Foundation staff to highlight open infrastructure communities. We want to hear from you!
If you have feedback, news or stories that you want to share, reach us through community@openstack.org. To receive the newsletter, sign up here.

The post Inside open infrastructure: The latest from the OpenStack Foundation appeared first on Superuser.

by OpenStack Foundation at March 29, 2019 01:49 PM

March 28, 2019

Robert Collins

Continuous Delivery and software distributors

Back in 2010 the continuous delivery meme was just grabbing traction. Today it's extremely well established… except in F/LOSS projects.

I want that to change, so I’m going to try and really bring together a technical view on how that could work – which may require multiple blog posts – and if it gets traction I’ll put my fingers where my thoughts are and get into specifics with any project that wants to do this.

This is however merely a worked model today: it may be possible to do things quite differently, and I welcome all discussion about the topic!

tl;dr

Pick a service discovery mechanism (e.g. environment variables), write two small APIs – one for flag delivery, with streaming updates, and one for telemetry, with an optional aggressive data hiding proxy, then use those to feed enough data to drive a true CI/CD cycle back to upstream open source projects.

Who is in?

Background

(This assumes you know what C/D is – if you don’t, go read the link above, maybe wikipedia etc, then come back.)

Consider a typical SaaS C/D pipeline:

git -> build -> test -> deploy

Here all stages are owned by the one organisation. Once deployed, the build is usable by users – it's basically the simplest pipeline around.

Now consider a typical on-premise C/D pipeline:

git -> build -> test -> expose -> install

Here the last stage, the install stage, takes place in the user's context, but it may be under the control of the creator, or it may be under the control of the user. For instance, Google Play updates on an Android phone: when one selects ‘Update Now’, the install phase is triggered. Leaving the phone running with power and Wi-Fi will trigger it automatically, and security updates can be pushed anytime. Continuing the use of Google Play as an example, the expose step here is an API call to upload precompiled packages, so while there are three parties, the distributor – Google – isn’t performing any software development activities (they do gatekeep, but not develop).

Where it gets awkward is when there are multiple parties doing development in the pipeline.

Distributing and C/D

Let's consider an OpenStack cloud underlay circa 2015: an operating system, OpenStack itself, some configuration management tool (or tools), a log egress tool, a metrics egress handler, and hardware mgmt vendor binaries. And let's say we're working on something reasonably standalone. Say horizon.

OpenStack for most users is something obtained from a vendor. E.g. Cisco or Canonical or RedHat. And the model here is that the vendor is responsible for what the user receives; so security fixes – in particular embargoed security fixes – cannot be published publicly and then slowly propagate. They must reach users very quickly: often, ideally, before the public publication.

Now we have something like this:

upstream ends with distribution, then vendor does an on-prem pipeline


Can we not just say ‘the end of the C/D pipeline is a .tar.gz of horizon at the distribute step’? Then every organisation can make its own decisions?

Maybe…

Why C/D?

  • Lower risk upgrades (smaller changes that can be reasoned about better; incremental enablement of new implementations to limit blast radius, decoupling shipping and enablement of new features)
  • Faster delivery of new features (less time dealing with failed upgrades == more time available to work on new features; finished features spend less time in inventory before benefiting users).
  • Better code hygiene (the same disciplines needed to make C/D safe also make more aggressive refactoring and tidiness changes safer to do, so it gets done more often).

1. If the upstream C/D pipeline stops at a tar.gz file, the lower-risk upgrade benefit is reduced or lost: the pipeline isn’t able to actually push all the way to installation, and thus we cannot tell when a particular upgrade workaround is no longer needed.

But Robert, that is the vendors problem!

I wish it was: in OpenStack so many vendors had the same problem they created shared branches to work on it, then asked for shared time from the project to perform C/I on those branches. The benefit is only realised when the developer who is responsible for creating the issue can fix it, and can be sure that the fix has been delivered; this means either knowing that every install will transiently install every intermediary version, or that they will keep every workaround for every issue for some minimum time period; or that there will be a pipeline that can actually deliver the software.

2. .tar.gz files are not installed and running systems. A key characteristic of a C/D pipeline is that it exercises the installation and execution of software; the ability to run a component up is quite tightly coupled to the component itself: for all that the ‘this is a process’ interface is very general, the specific ‘this is server X’ or ‘this is CLI utility Y’ interfaces are very concrete. Perhaps a container based approach, where a much narrower interface can in many ways be defined, could be used to mitigate this aspect. Then even if different vendors use different config tools to do last mile config, the dev cycle knows that configuration and execution works. We need to make sure that we don’t separate the teams and their products though: the pipeline upstream must only test code that is relevant to upstream – and downstream likewise. We may be able to find a balance here, but I think more work articulating what that looks like is needed.

3. It will break the feedback cycle if the running metrics are not received upstream; yes we need to be careful of privacy aspects, but basic telemetry: the upgrade worked, the upgrade failed, here is a crash dump – these are the tools for sifting through failure at scale, and a number of open source projects like Firefox, Ubuntu and Chromium have adopted them, with great success. Notably all three have direct delivery models: their preference is to own the relationship with the user and gather such telemetry directly.

C/D and technical debt

Sidebar: ignoring public APIs and external dependencies, because they form the contract that installations and end users interact with, which we can reasonably expect to be quite sticky, the rest of a system should be entirely up to the maintainers right? Refactor the DB; Switch frameworks, switch languages. Cleanup classes and so on. With microservices there is a grey area: APIs that other microservices use which are not publically supported.

The grey area is crucial, because it is where development drag comes in: anything internal to the system can be refactored in a single commit, or in a series of small commits that is rolled up into one, or variations on this theme.

But some aspect that another discrete component depends upon, with its own delivery cycle: that cannot be fixed that way, and unless it was built with the same care public APIs were, it may well have poor scaling or performance characteristics that make fixing it very important.

Given two C/D’d components A and B, where A wants to remove some private API B uses, A cannot delete that API from its git repo until all B’s everywhere that receive A via C/D have been deployed with a version that does not use the private API.

That is, old versions of B place technical debt on A across the interfaces of A that they use. And this actually applies to public interfaces too – even if they are more sticky, we can expect the components of an ecosystem to update to newer APIs that are cheaper to serve, and laggards hold performance back, keep stale code alive in the codebase for longer and so on.

This places a secondary requirement on the telemetry: we need to be able to tell whether the fleet is upgraded or not.
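For instance, the question "can A finally delete that private API B used?" reduces to asking whether any deployed B still reports a version older than the first release that stopped calling it. A minimal sketch of that check, with hypothetical names and assuming telemetry has been reduced to (version, weight) pairs for B's fleet:

```python
# Hypothetical sketch: deciding when component A may delete a private API
# that old versions of B still call. The data shape is an assumption:
# telemetry reduced to (version, weight) pairs for B's deployed fleet.

def parse(version):
    # Compare dotted versions numerically, so "1.10.0" sorts after "1.9.2".
    return tuple(int(part) for part in version.split("."))

def api_removable(b_fleet, first_clean_version):
    """True only when every reported B version has stopped using the API,
    i.e. no deployment is older than the release that dropped the call."""
    return all(parse(v) >= parse(first_clean_version) for v, _weight in b_fleet)

fleet = [("2.3.0", 0.80), ("2.4.1", 0.15), ("1.9.2", 0.05)]
print(api_removable(fleet, "2.0.0"))  # → False: a 1.x laggard holds A back
```

The weights matter for prioritisation (is the laggard 5% of the fleet or 50%?) but not for correctness: a single old deployment is enough to keep the debt alive.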

So what does a working model look like?

I think we need a different diagram than the pipeline; the pipeline talks about the things most folk doing an API or some such project will have directly in hand, but it's not actually the full story. The full story is rounded out with two additional features: feature flags and telemetry. And since we want to protect our users, and distributors probably will simply refuse to provide insights into actual users, let's assume a near-zero-trust model around both.

Feature flags

As I discussed in my previous blog post, feature flags can be used for fairly arbitrary purposes, but in this situation, where trust is limited, I think we need to identify the crucial C/D enabling use cases, and design for them.

I think that those can be reduced to soft launches – decoupling activating new code paths from getting them shipped out onto machines, and kill switches – killing off flawed / faulty code paths when they start failing in advance of a massive cascade failure; which we can implement with essentially the same thing: some identifier for a code path and then a percentage of the deployed base to enable it on. If we define this API with efficient streaming updates and a consistent service discovery mechanism for the flag API, then this could be replicated by vendors and other distributors or even each user, and pull the feature API data downstream in near real time.
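A minimal sketch of that flag model (names are illustrative, not from any real flag service): hash a stable instance identifier into a bucket, and enable the code path when the bucket falls under the rollout percentage. A kill switch is then just streaming the percentage down to zero.

```python
# Hypothetical sketch of the flag model above: a code-path identifier plus
# a rollout percentage, nothing more. Assumes each deployment has a stable
# instance id; hashing (flag, id) gives a sticky, roughly uniform bucket,
# so the same instance always makes the same decision for a given rollout.
import hashlib

def flag_enabled(flag_name, instance_id, percentage):
    digest = hashlib.sha256(f"{flag_name}:{instance_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # deterministic bucket in 0..99
    return bucket < percentage

# Soft launch: ship the code everywhere, activate it on ~10% of the fleet.
ten_percent = flag_enabled("new-scheduler", "host-0042", 10)
# Kill switch: streaming percentage=0 turns the path off everywhere;
# 100 enables it everywhere.
assert not flag_enabled("new-scheduler", "host-0042", 0)
assert flag_enabled("new-scheduler", "host-0042", 100)
```

Because the decision is a pure function of (flag, instance, percentage), any downstream replica of the flag API that serves the same percentage produces the same fleet-wide behaviour, which is exactly what the replication story above needs.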

Telemetry

The difficulty with telemetry APIs is that they can egress anything. OTOH this is open source code, so malicious telemetry would be visible. But we can structure it to make it harder to violate privacy.

What does the C/D cycle need from telemetry, and what privacy do we need to preserve?

This very much needs discussion with stakeholders, but at a first approximation: the C/D cycle depends on knowing what versions are out there and whether they are working. It depends on knowing what feature flags have actually been activated in the running versions. It doesn't depend on absolute numbers of either feature flags or versions.

Using Google Play again as an example, there is prior art – https://support.google.com/firebase/answer/6317485 – but I want to think truly minimally, because the goal I have is to enable C/D in situations with vastly different trust levels than Google Play has. However, perhaps this isn’t enough; perhaps we do need generic events and the ability to get deeper telemetry to enable confidence.

That said, let us sketch what an API document for that might look like:

project:
version:
health:
flags:
- name:
  value:

If that was reported by every deployed instance of a project, once per hour, maybe with a dependencies version list added to deal with variation in builds, it would trivially reveal the cardinality of reporters. Many reporters won’t care (for instance QA testbeds). Many will.

If we aggregate through a cardinality hiding proxy, then that vector is addressed – something like this:

- project:
  version:
  weight:
  health:
  flags:
  - name:
    value:
- project: ...

Because this data is really only best effort, such a proxy could be backed by memcache or even just an in-memory store, depending on what degree of ‘cloud-nativeness’ we want to offer. It would receive accurate data, then deduplicate to get relative weights, round those to (say) 5% as a minimum to avoid disclosing too much about long tail situations (and yes, the sum of 100 1% reports would exceed 100 :)), and then push that up.
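The proxy's core step could look like this (a hedged sketch, not a real service: field names follow the report document above, with the health and flags fields omitted for brevity, and the 5% floor as just described):

```python
# Hedged sketch of the cardinality-hiding proxy: collapse accurate
# per-instance reports into relative weights per (project, version),
# rounded to 5% steps with a 5% floor so the long tail doesn't reveal
# how few reporters it has. (As noted above, floored weights can sum
# past 100.)
from collections import Counter

FLOOR = 5  # minimum published weight, in percent

def aggregate(reports):
    counts = Counter((r["project"], r["version"]) for r in reports)
    total = sum(counts.values())
    return [
        {"project": project, "version": version,
         "weight": max(FLOOR, round(100 * n / total / FLOOR) * FLOOR)}
        for (project, version), n in counts.items()
    ]

reports = ([{"project": "horizon", "version": "15.0.0"}] * 97
           + [{"project": "horizon", "version": "14.0.2"}] * 3)
print(aggregate(reports))
# 15.0.0 is published at weight 95; the 3-instance tail sits at the 5 floor.
```

Since only the rounded weights leave the proxy, the accurate counts can live in memcache or a plain in-memory store and be discarded on restart, which fits the best-effort nature of the data.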

Open Questions

  • Should library projects report, or are they only used in the context of an application/service?
    • How can we help library projects answer questions like ‘has every user stopped using feature Y so that we can finally remove it’?
  • Would this be enough to get rid of the fixation on using stable branches everyone seems to have?
    • If not why not?
  • What have I forgotten?

by rbtcollins at March 28, 2019 08:18 PM

March 27, 2019

OpenStack Superuser

Meet the Denver Open Infrastructure Superuser Award nominees

Who do you think should win the Superuser Award for the Open Infrastructure Summit Denver?

When evaluating the nominees for the Superuser Award, take into account the unique nature of use case(s), as well as integrations and applications of open infrastructure by each particular team. Rate the nominees before April 2 at 11:59 p.m. Pacific Standard Time.

Check out highlights from the four nominees and click on the links for the full applications:

  • EnterCloudSuite, a public OpenStack platform launched in 2013, has been working with OpenStack since the Cactus release. They’ve worked to position OpenStack as the conquering David against the Goliaths of Europe’s public cloud arena, winning the confidence of the EU government to provide cloud services and organizing OpenStack Italy days and ops meetups.
  • The National Supercomputer Center in Guangzhou (NSCC-GZ), Sun Yat-Sen University first deployed OpenStack in June 2013 with 256 nodes. They are currently running 1,000 nodes on the Tianhe-2 cloud platform, along with five control nodes, three MariaDB nodes, one StackWatch node and 30,000 virtual machines.
  • VEXXHOST has been contributing to the OpenStack community since 2011. The company’s offering is fully open source without any proprietary licensed technology. Among many other technologies, they currently use Nova with KVM via Libvirt, Ceph centralized storage, Pacemaker for high availability, MySQL Galera for the database and Puppet for config management.
  • Whitestack is a startup focused on promoting the adoption of cloud computing in emerging markets, aiming to bring benefits of hyper-scalability to places where it’s still uncommon. The team regularly contributes to Superuser and other blogs and speaks at important industry events, like the Open Infrastructure Summit and ETSI OSM events. In addition, they contribute code to the OpenStack and Open Source Mano projects.

Each community member can rate the nominees once by April 2 at 11:59 p.m. Pacific Standard Time.

Previous winners include City Network, AT&T, CERN, China Mobile, Comcast, NTT Group and the Tencent TStack Team.

The Open Infrastructure Summit Denver Superuser Awards are sponsored by Zenko.

The post Meet the Denver Open Infrastructure Superuser Award nominees appeared first on Superuser.

by Superuser at March 27, 2019 04:22 PM

Denver Superuser Awards Nominee: EnterCloudSuite

It’s time for the community to help determine the winner of the Open Infrastructure Summit Denver Superuser Awards, sponsored by Zenko. The Superuser Editorial Advisory Board will review the nominees and determine the finalists and overall winner after the community has had a chance to review and rate nominees.

Now, it’s your turn.

The EnterCloudSuite team is one of four nominees for the Superuser Awards. Review the nomination criteria below, check out the other nominees and rate the nominees before the deadline April 2 at 11:59 p.m. Pacific Standard Time.

Rate them here!

Who is the nominee?

The EnterCloudSuite team is made up of 14 people; four of them will be in Denver: Mariano Cunietti, Federico Minzoni, Jacopo Secchiero and Mattia Scalvini. EnterCloudSuite has been a public OpenStack platform ever since launching in August 2013. Enter, based in Milan, Italy, was acquired in June 2018 by a larger infrastructure group called Irideos. (Until the merger is complete, our company is officially Enter/Irideos.)

How has open infrastructure transformed EnterCloudSuite’s business? 

The team started in 2011 working on the Cactus release. It completely changed our views on infrastructure, moving from a hardware- and vendor-based model to an open-source, software-based one.

For the last several years, the team has faced the challenge of delivering fast and reliable changes to their public cloud. They built a CI/CD process using GitLab, Jenkins and Docker to deliver rapid updates in production. They have also moved from one OpenStack upgrade per year to three per year (yes, we know there are only two releases every year, but we were lagging behind).

The ability to run their own infrastructure with no support contract outside the company made them very accountable, competent and fast. This value was transferred to their customers when building their infrastructures on their cloud, resulting in 50 percent year-over-year growth.

How has EnterCloudSuite participated in or contributed to an open source project? 

Our team is mainly made up of infrastructure and operations people. With only two developers, it hasn’t been possible to contribute code back to the community. What the team did instead was to provide early feedback on new functionalities (especially in Neutron) and to collaborate on building real-world scenarios for the developers, including pitching in with testing and running them in production. They’ve also built a lot of connections in the European community, trying to position OpenStack as the conquering David against the Goliaths of Europe’s public cloud arena. In 2015, they won the European Commission DIGIT Cloud I Tender — the only open-source, only-OpenStack player to meet the criteria, making them official providers for EU institutions. They have also been co-organizers of the OpenStack Day in Italy since 2013, as well as organizing ops meetups.

What open source technologies does your company use in its open infrastructure environment?

The EnterCloudSuite stack is 100 percent open source. Some of the tech we use includes:

  • Compute – KVM
  • Networking – OpenvSwitch and L3 agent
  • Block Storage – Ceph
  • Object Storage – Swift
  • Hardware – open compute gear
  • On the SCM/ALM level, GitLab, Jenkins, Portus.
  • For automation, Terraform, Ansible and Go, all baked together in a tool we named “Automium” after the Atomium monument in Brussels (basically the symbol of Europe).
  • Monitoring and logging: Prometheus, Icinga, InfluxDB, ElasticSearch, Grafana, Kibana

In the process of adopting these technologies, the team has severed relationships with vendors. We’re now a vendor-free company (except for working with SwiftStack but, hey: Joe Arnold is our friend!)

What’s the scale of your open infrastructure environment?

As a public cloud service provider, we can’t publish precise metrics around customer or platform consumption. We can divulge that our platform has been engineered and deployed to support public cloud workloads capable of serving the whole EU private and public sector market and beyond. OpenStack is the compute, storage and networking engine that drives our cloud-native infrastructure product line, which has been engineered specifically for today’s digital community focused on delivering cloud-native applications.

What kind of operational challenges has EnterCloudSuite overcome during its experience with open infrastructure? 

The team has learned that customers are much more relaxed and collaborative if they know what’s going on and can prepare ahead. That’s why every time they plan (or screw up!), they do it publicly on enter.statuspage.io.

That means they usually don’t need to do rolling upgrades or live migrations but try to keep the maintenance windows as narrow as possible. If you automate all the things and do your homework (dry runs and planning), no one is going to complain or even notice what you’re doing.

To mention some challenges: increasing the placement group (PG) count on Ceph (it took hours), some early OpenStack upgrades (around Kilo), bug fixing on the beta OFED firmware for Mellanox NICs (they were great in supporting us) and scaling RabbitMQ on the controllers when the workload grew with increased demand.

How is EnterCloudSuite innovating with open infrastructure? 

The team truly believes that cloud is changing how organizations work. Automation means industrialization of processes, and it requires standardization of processes first. Also, the way people collaborate in the cloud space and in collaborative communities is something they believe will change the way all industries work today.

From a technical standpoint, they are moving quickly to Kubernetes as an alternative to new standards that hide dangerous lock-ins from the big players. In addition, they’re exploring the implications and capabilities of OpenStack in terms of edge computing, following the acquisition by a network infrastructure company. They expect OpenStack and Kubernetes to play a big role in that field.

Each community member can rate the nominees once by April 2 at 11:59 p.m. Pacific Standard Time.

The post Denver Superuser Awards Nominee: EnterCloudSuite appeared first on Superuser.

by Superuser at March 27, 2019 04:05 PM

Denver Superuser Awards Nominee: National Supercomputer Center in Guangzhou, Sun Yat-Sen University

It’s time for the community to help determine the winner of the Open Infrastructure Summit Denver Superuser Awards, sponsored by Zenko. The Superuser Editorial Advisory Board will review the nominees and determine the finalists and overall winner after the community has had a chance to review and rate nominees.

Now, it’s your turn.

The National Supercomputer Center in Guangzhou (NSCC-GZ), Sun Yat-Sen University is one of four nominees for the Superuser Awards. Review the nomination criteria below, check out the other nominees and rate the nominees before the deadline April 2 at 11:59 p.m. Pacific Standard Time.

Rate them here!

Who is the nominee?

A team at the NSCC-GZ, Sun Yat-Sen University.

How has open infrastructure transformed your organization? 

The existing cloud platform is based on the secondary development of OpenStack. At present, more than 1,000 nodes have been deployed on the Tianhe-2 cloud platform and the construction of virtualized resource pools has been launched successfully. To further utilize the resource capacity of the NSCC-GZ and deepen scientific research innovation and industrial cultivation, the Centre has also launched a new upgrade of the cloud service platform software project.

How has your organization participated in or contributed to open source projects? 

NSCC-GZ is an end-user of OpenStack infrastructure, so we have not contributed to open source projects directly.

What open source technologies does the NSCC-GZ use in its open infrastructure environment?

We’ve used MariaDB, StackWatch, RabbitMQ, InfluxDB and notifications. We’ve also used the following core projects of OpenStack: Nova, Neutron, Cinder, Glance, Keystone, Horizon, Heat and Ceilometer.

What’s the scale of your open infrastructure environment?

The cluster architecture of the first project: five control nodes, three MariaDB nodes, one StackWatch node, 512 computing nodes and 30,000 virtual machines running at the same time during tests.

What kind of operational challenges has the NSCC-GZ overcome during your experience with open infrastructure? 

  • Cross-version upgrading
  • Multi-regional management
  • Tenant management
  • Administering public clouds
  • Cloud platform security

How is the NSCC-GZ innovating with open infrastructure? 

The implementation of an OpenStack cloud platform on the customized infrastructure of Tianhe-2 met many challenges in terms of architecture and optimization. Because it’s difficult for traditional network nodes to meet the bandwidth and stability requirements of supercomputing, a two-layer architecture was devised for the network. In it, computing nodes are allowed to send traffic directly to and from the external network, mitigating the data traffic through the network nodes by avoiding the complicated traffic routing of a conventional OpenStack network. We’re also using ARP-MAC binding to reduce network interruption risks caused by cyber attacks. Three-layer switching, with gateways implemented on switches, also improves overall performance and stability.

Each community member can rate the nominees once by April 2 at 11:59 p.m. Pacific Standard Time.

The post Denver Superuser Awards Nominee: National Supercomputer Center in Guangzhou, Sun Yat-Sen University appeared first on Superuser.

by Superuser at March 27, 2019 03:55 PM

Denver Superuser Awards Nominee: VEXXHOST

It’s time for the community to help determine the winner of the Open Infrastructure Summit Denver Superuser Awards, sponsored by Zenko. The Superuser Editorial Advisory Board will review the nominees and determine the finalists and overall winner after the community has had a chance to review and rate nominees.

Now, it’s your turn.

VEXXHOST is one of four nominees for the Superuser Awards. Review the nomination criteria below, check out the other nominees and rate the nominees before the deadline April 2 at 11:59 p.m. Pacific Standard Time.

Rate them here!

Who is the nominee?

VEXXHOST is a leading Canadian public, private and hybrid cloud provider with an infrastructure powered by 100 percent vanilla OpenStack. Since the migration to an OpenStack infrastructure in 2011, VEXXHOST has been offering infrastructure-as-a-service without any vendor lock-in or proprietary technology. The team, led by CEO Mohammed Naser, delivers a high level of expertise to help users optimize cloud infrastructure so they can focus on their core competencies.

How has open infrastructure transformed your business? 

OpenStack allows our company to speed up the delivery of new services to customers by accelerating our ability to innovate. The team can focus resources to deliver quality services without dealing with infrastructure issues which are solved by OpenStack. Every release of OpenStack has offered new features that were utilized to deliver higher performance.

How has your company participated in or contributed to an open source project? 

VEXXHOST has been contributing to the OpenStack community since its second release in 2011. The company has also had a presence in the community by regularly attending OpenStack Summits and participating in the interop challenge during the Boston summit in 2017. The team has also hosted community events, including OpenStack Canada Day, and helped organize the Montreal OpenStack meetup. Our co-founder Mohammed Naser, the project team lead (PTL) for Puppet OpenStack, has given talks at Montreal, Ottawa and Toronto OpenStack meetups.

We also play a part in the community by actively contributing upstream code and sharing feedback with PTLs and developers. When we encounter bugs, we report them, diagnose them and work with the community to get a full fix in. We’re active on the mailing list and provide feedback and fixes.

What open source technologies does your company use in its open infrastructure environment?

We run OpenStack services exclusively across our entire infrastructure. Our offering is fully open source without any proprietary licensed technology. Among many others, in our IT environment we use Nova with KVM via Libvirt, Ceph centralized storage, Pacemaker for high availability, MySQL Galera for the database and Puppet for config management.

What is the scale of your open infrastructure environment?

As a public cloud provider, we can’t disclose metrics regarding the scale of users’ consumption. Our public cloud can handle several production-grade, enterprise-scale workloads, with private and hybrid cloud solutions delivering the same level of stability and robustness as our public cloud. Both our infrastructure and users’ production workloads are powered by OpenStack. OpenStack’s compute, network and storage are the backbone that powers all our managed solutions.

What kind of operational challenges has your company overcome during its experience with open infrastructure? 

Initially, we faced some challenges with rolling upgrades, though they’ve become much easier with new releases. After upgrading our infrastructure to Pike, we found a bug in the code, which we reported. The developers at OpenStack were very responsive and happy to cooperate — as they always are — to help fix the bug. The bug was fixed in less than 24 hours in trunk and less than 48 hours in the stable branches.

How is your company innovating with open infrastructure? 

As a public and private cloud provider, we’re heavily invested in improving and extending our list of managed services. Using OpenStack has increased innovation in our managed services. In August 2017, we launched Kubernetes services using Magnum on the Pike release. We’ve worked with the Magnum project team to ensure delivery of the best possible Kubernetes and OpenStack experience. VEXXHOST is currently one of the few cloud providers to offer Magnum. OpenStack has also facilitated the delivery of big data solutions with the help of Sahara integration. We were also able to speed up the deployment of clusters with the help of transient clusters, which provide huge cost savings.

Each community member can rate the nominees once by April 2 at 11:59 p.m. Pacific Standard Time.

The post Denver Superuser Awards Nominee: VEXXHOST appeared first on Superuser.

by Superuser at March 27, 2019 03:54 PM

Denver Superuser Awards Nominee: Whitestack

It’s time for the community to help determine the winner of the Open Infrastructure Summit Denver Superuser Awards, sponsored by Zenko. The Superuser Editorial Advisory Board will review the nominees and determine the finalists and overall winner after the community has had a chance to review and rate nominees.

Now, it’s your turn.

Whitestack is one of four nominees for the Superuser Awards. Review the nomination criteria below, check out the other nominees and rate the nominees before the deadline April 2 at 11:59 p.m. Pacific Standard Time.

Who is the nominee?

Whitestack is a startup focused on promoting the adoption of cloud computing in emerging markets (currently focused on the telecom industry), aiming to bring the benefits of hyper-scalability to places where it’s still uncommon.

The Whitecloud team is a distributed team of developers who created their own OpenStack distribution (available in the OpenStack Foundation Marketplace), designed with an emphasis on simplifying deployment.

Whitestack also has other teams working on similar challenges for other solutions, like NFV orchestration based on Open Source Mano (OSM) or SDN control based on Open Source Network Operating System (ONOS).

How has open infrastructure transformed your business? 

Emerging markets face the challenge of pursuing cost efficiencies and scaling fast due to technical difficulties that large organizations can easily overcome. Our vision of making OpenStack easy fills a big gap, allowing small organizations to start using OpenStack, learn from it in practice and scale when required.

With successful deployments in Latin America and Europe, the Whitestack model, where organizations deploy stacks with no secrets, has proved to be something customers appreciate and value despite the adoption risk. These are the results after three years positioning the most important open source projects.

Our customers are split between small outfits with people who are excited to work with innovative open technologies and traditional telecoms shifting to open technologies.

How has Whitestack participated in or contributed to an open source project? 

We always share our vision of building telecom clouds with open technologies. As such, we regularly contribute to Superuser and other blogs, and speak at important industry events, like the Open Infrastructure Summit and ETSI OSM events.

We are leading the first Open NFV showcase (with OpenStack and OSM) to demonstrate that multi-vendor NFV over a unified and open operator-controlled platform is possible! Thanks to these efforts, we’re one of the most respected and knowledgeable cloud organizations in Latin America and have built OpenStack deployments for very large telecoms.

We contribute code to the OpenStack and OSM projects. In OSM, we lead the development of critical monitoring and NFV auto-scaling modules, and we lead VNF onboarding efforts.

What open source technologies does Whitestack use in its open infrastructure environment?

The portfolio of solutions we are positioning in the market, as part of mission-critical services, is based on these projects:

  • OpenStack (core projects, plus Kolla, Designate, Heat, Aodh)
  • ManageIQ for hybrid cloud
  • OpenNMS for network performance
  • Open Source Mano for NFV Orchestration
  • ONOS for SDN underlay control
  • Ceph for software-defined storage
  • Kubernetes for container orchestration
  • ELK, Prometheus, and Grafana for operations simplification

Being able to offer a complete solution, based on open standards, is part of our mission, and for that we leverage the most robust and successful components.

What is the scale of Whitestack’s open infrastructure environment?

Our customers’ scale is not high yet, but it is diverse, covering Latin America and Europe. We are looking first for broader adoption, which will drive higher scale as a consequence (and not the opposite).

What kind of operational challenges have you overcome during your experience with open infrastructure? 

Initially, most of our problems came during deployment and initial troubleshooting. We determined that those issues were going to be OpenStack blockers in many organizations. Therefore, we decided to fix the deployment process, and we are now facing the challenge of improving troubleshooting.

Our current deployment mechanism is so easy that we have asked customers to try it by just sending the installer and basic directions in an email!

How is Whitestack innovating with open infrastructure? 

Whitestack is innovating by positioning these technologies in traditionally very conservative markets: providing high-quality training and content, developing engagement activities and connecting different organizations by creating multi-vendor industry events.

As a result, we have telecoms and companies using OpenStack, testing interesting (non-traditional) use cases, and working very hard to get more and more adoption, because we strongly believe that infrastructure must be open!

Each community member can rate the nominees once by April 2 at 11:59 p.m. Pacific Standard Time.

The post Denver Superuser Awards Nominee: Whitestack appeared first on Superuser.

by Superuser at March 27, 2019 03:51 PM

Aptira

Software Interlude. Part 1 – Let’s talk about Softwareisation

Aptira Open Networking Interlude: Software

In the last post we completed our description of the second functional domain of Open Networking: Open Network Software. 

The fundamental trend in the evolution of Open Networking (outlined in the whole series to date) has been the progressive shift towards the software implementation of network functionality.

The common (and clumsy but evocative) term for this trend is the “softwareisation of the network”. 

Although this series has described the historical innovations and key developments of softwareisation, there are multiple dilemmas: software is broadly used but not coherently conceived or developed across or within different industry domains. Software as a problem-solving tool is still widely misunderstood and its practices often misapplied.

In order to successfully describe the remaining domains of Open Networking in this series, we need to invest a bit more time in this series exploring the nature of software. This will help to reduce potential misunderstanding about its significant role in the successful delivery of Open Network solutions. 

So, we’re going to take a short interlude to discuss the nature of software and how it impacts and informs the field of Open Networking. 

In particular we are going to drill down into key differences between the traditional networking world and the software world that cause problems in many implementation projects. 

The Problem with Software

As pervasive as software has become in all aspects of life, we’d like to think that all is right with the software world. But unfortunately, that’s not the case. Leaving aside any experiences you may have had with imperfect software projects, you only have to look at recent events in the world to see the impacts. For example, the two accidents involving Boeing 737 MAX aircraft (in Indonesia in 2018 and Ethiopia in 2019): although the causes of these accidents have not been determined, the ongoing narrative is about software.

Our civilization depends critically on software, and we have a dangerously low degree of professionalism in the computer fields.

Bjarne Stroustrup, original designer of C++

It’s almost a truism that software development projects are prone to run into difficulties, and this is even more true in the case of integrated solutions involving hardware, software and other components. And typically, the more organisations involved in the process, the harder it is to do well.

Open Networking solutions occupy this latter space: multi-component, multi-technology, multi-vendor projects which can be technically quite complex.  All the good aspects of Open Networking that we’ve seen over the life of this series to date contribute towards this complexity, as they open up new layers or interfaces, or combine previously distinct technology and practice domains together and create a need for those domains to work together. 

In Open Networking, software is both the glue and the new foundation that holds this complexity together. We need a common understanding of the nature of software and of the aspects that make it both hard to develop and so very valuable when it is done well.

There may be only a few ways to write software well, but there are many projects that find new ways to write software badly; or just find a way to re-use an old bad way of doing it. 

Evolving practices in software development, mostly but not exclusively centred around Agile, are helping to improve the success rate of software projects, but we are still in the early days and the end-to-end process will need to evolve much further to improve quality and success rates. 

The Software Interlude

Firstly, we look at the key question: what is software? This might seem overly simplistic, but it is an important definitional starting point from which we have to be aligned for all else that follows to also be aligned, in the upcoming post: “What is Software?” 

Secondly, we’ll look at different perspectives of software and how it is used in Open Networking solutions, in the post: “Software Ain’t Software”. 

Thirdly, we’ll look at the key aspects of software development and break this process down so we can compare and contrast with related but different practices, in the post: “What is Software Development?”. 

Fourthly, we’ll look at why software development (and managing software development projects) is so hard, in the post: “Why is Software Development Hard?”. 

And lastly, we’ll look at the development processes of software and how that relates to development processes in other technical domains in Open Networking, in the post: “Software Development Paradigms”. 

After this Software Interlude, we will move on to the Open Network Integration domain. 

Stay tuned for these upcoming posts. 


The post Software Interlude. Part 1 – Let’s talk about Softwareisation appeared first on Aptira.

by Adam Russell at March 27, 2019 03:54 AM

March 26, 2019

Fleio Blog

Fleio 2019.03: OpenStack region revenue report, domain registration option and more

We have just released Fleio version 2019.03. Read on and see what’s new. Here’s a highlight of what’s new in Fleio 2019.03: domain registration/transfer/use-existing-domain options when ordering web hosting services; an option to show flavors (or VPS packages) as cards on the instance create form; and a revenue report per OpenStack region, useful for tax purposes […]

by adrian at March 26, 2019 02:25 PM

OpenStack Superuser

What’s next in OpenStack networking: Smart NIC support, Cyborg and guaranteed minimum bandwidth

The latest release of OpenStack, Stein, will be served up soon. Named in honor of a Berlin street — and also conveniently abbreviated with the 🍺 emoji — the community will celebrate the release April 10.

What’s on tap for Neutron, the networking-as-a-service project run by most OpenStack users? The main updates include single-root I/O virtualization (SR-IOV) VF-to-VF mirroring, hookups to project Cyborg and smart NIC support. Other improvements include better scalability and performance and integration across communities including OPNFV, MidoNet, OpenDaylight, Tungsten Fabric, BaGPipe and BGP VPN. Here are a few highlights from a post by the OSF’s Ildiko Vancsa over at opensource.com


Smart NIC support

The Neutron team is hard at work on providing support for smart NICs that will enable bare-metal networking with feature parity with the virtualization use case. The result will increase the number of bare-metal compute hosts per deployment, eliminating the need for an agent running on the hosts and for using remote procedure calls (RPC) as a communication channel between software components.

Cyborg

OpenStack has a new project to provide a hardware acceleration framework that will be crucial to use cases like 5G and virtual reality: Cyborg. The Cyborg and Neutron teams are working together to provide joint management of NICs with field-programmable gate array (FPGA) capabilities to make it possible to bind Neutron ports with these type of cards.

Guaranteed minimum bandwidth

“Work began during the Rocky cycle to provide scheduling based on minimum bandwidth requirements. The team already showed a demo of this new feature and plans to finalize it by the time Stein is released. As part of the enhancements, Neutron treats bandwidth as a resource and works with the Nova OpenStack compute service to schedule the instance to a host where the requested amount is available,” Vancsa writes.
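To make the "bandwidth as a resource" idea concrete, here is a toy sketch (hypothetical names; not actual Neutron, Nova or Placement code) of scheduling an instance only to a host whose NIC can still guarantee the requested minimum:

```python
def pick_host(hosts, requested_kbps):
    """Return the name of a host whose NIC still has enough free bandwidth.

    hosts maps host name -> {"nic_capacity_kbps": int, "allocated_kbps": int}.
    Very loosely mirrors treating minimum bandwidth as an inventory to be
    consumed by the scheduler, rather than a best-effort QoS setting.
    """
    for name, nic in sorted(hosts.items()):
        free = nic["nic_capacity_kbps"] - nic["allocated_kbps"]
        if free >= requested_kbps:
            # Claim the bandwidth so later requests see updated inventory.
            nic["allocated_kbps"] += requested_kbps
            return name
    return None  # no host can guarantee the requested minimum
```

In the real feature the inventory lives in the Placement service and Neutron reports it per physical NIC; this toy model only shows the filter-and-claim pattern.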

Get involved
Use the openstack-discuss at lists.openstack.org mailing list with the tag [neutron]
To get code, ask questions, view blueprints, etc., see: Neutron Launchpad Page
Check out Neutron’s regular IRC meetings on the #openstack-meeting channel: http://wiki.openstack.org/Network/Meetings or read the logs here.

Read more from the OSF’s Ildiko Vancsa over at opensource.com

Photo // CC BY NC

The post What’s next in OpenStack networking: Smart NIC support, Cyborg and guaranteed minimum bandwidth appeared first on Superuser.

by Superuser at March 26, 2019 01:07 PM

Trinh Nguyen

Searchlight RC1 released



Yahooo!!! We just released Searchlight Stein RC1 last week [1][2]. The stable/stein branch has been created for all of the projects. Here are the versions:

- searchlight: 6.0.0.0rc1
- searchlight-ui: 6.0.0.0rc1
- python-searchlightclient: 1.5.0

Moreover, we also added some highlights for Searchlight in this Stein cycle [3]. There will not be much going on for the rest of the cycle, only minor changes. And, since we're busy preparing for the next term with more features to fulfill Searchlight's vision [4], we will focus on designing the architecture and making Searchlight more stable.

BTW, I will continue serving as Searchlight's PTL for Train :) So, let's rock it!!!


References:

[1] https://review.openstack.org/#/c/644359/
[2] https://review.openstack.org/#/c/644358/
[3] https://releases.openstack.org/stein/highlights.html#searchlight-search-service
[4] https://docs.openstack.org/searchlight/latest/contributor/searchlight-vision.html

by Trinh Nguyen (noreply@blogger.com) at March 26, 2019 05:54 AM

March 25, 2019

OpenStack Superuser

Report: 5G will run on open source, virtualization and edge

5G networks promise a lot: breakneck speed, self-driving cars, telemedicine and more. A new report says that most telecom insiders expect them to be powered by open source, virtualization and edge computing.

This is one of the top takeaways from a recent Futurum Research study sponsored by Intel. The 22-page report, free with email registration here, polled 500 telecom industry insiders. Half of them are based in North America and half in Western Europe, with over 40 percent working at companies with more than 5,000 employees.

When asked what specific 5G revenue and monetization opportunities telcos were most interested in pursuing, virtual networks universally came up as the number one response, followed by automation, cloud and edge computing, open source and cloud-native applications. This follows, the report authors note, since the main drivers of 5G rollouts so far are mobile broadband and the internet of things. Of those polled, nearly 40 percent are primary decision makers for planning, deployment management or oversight for 5G in their companies. And the tech is coming fast: Nearly one third of first generation 5G rollouts are scheduled to occur in H1 2019, according to the report.

Speed bumps

Of the challenges experts say they face for this ambitious rollout schedule, first on the list is lack of a clear 5G deployment strategy (38 percent) followed by lack of adequate budget (32 percent) followed by leadership apathy (30 percent.) Rounding out the list are a pair of perennial problems, a lack of 5G training and specialized knowledge (26.7 percent) and inadequate technology investments (17.5 percent.)

From the Futurum Research report.

“Again, we bump into an agility problem,” report authors Daniel Newman and Olivier Blanchard write. “Organizations lacking the right skills and the right tools to leverage new technologies and the revenue models they can drive are unable to operationalize their strategy and execute on it in the real world.”

Get ahead

The upcoming Open Infrastructure Summit offers a number of sessions on 5G, plus tracks dedicated to edge computing and telecoms and NFV. Speakers come from companies including Intel, Nokia, China Mobile, Verizon, Ericsson and Lenovo.

Photo // CC BY NC

The post Report: 5G will run on open source, virtualization and edge appeared first on Superuser.

by Superuser at March 25, 2019 02:21 PM

March 22, 2019

OpenStack Superuser

Inside CI/CD: Must see sessions at the Open Infrastructure Summit

Join the people building and operating open infrastructure at the inaugural Open Infrastructure Summit.  The Summit schedule features over 300 sessions organized by use cases including: artificial intelligence and machine learning, continuous integration and deployment, containers, edge computing, network functions virtualization, security and public, private and multi-cloud strategies.

In this post, we’re highlighting some of the sessions you’ll want to add to your schedule about continuous integration and continuous delivery. Check out all the sessions, workshops and lightning talks focusing on these topics here.

Bare metal provisioning in Airship, or Ironic: It’s not just for OpenStack anymore

Airship is a collection of open source tools for automating cloud provisioning and management for the three levels of abstraction: containers (using Kubernetes), VMs (using OpenStack) and bare metal (currently using MaaS).
Ironic is OpenStack’s bare metal provisioning service, but it’s also capable of operating in standalone mode. It’s already used in that mode in containerized OpenStack deployment projects such as Kayobe, and the Airship community is eager to have Ironic as an additional bare metal provisioning driver for their DryDock component.
This talk discusses the reasoning behind integrating Ironic into Airship and the issues involved in making it happen. Details here.

Testing Jenkins configuration changes

Many people use Jenkins for testing changes in their software automatically. But how many people test changes in Jenkins itself? Using the Jenkins Configuration as Code plugin, the job DSL and the pipelines mechanism allows users to store configurations in a programmer-friendly way. As a result, it’s easier to introduce a proper workflow with change reviews. During this session, OVH’s Szymon Datko and Roman Dobosz will cover the topic of such verification. In this intermediate-level talk, they’ll discuss not only basic syntax checking, but also the idea of more sophisticated scenario/integration tests with different services like Gerrit. Details here.

Zuul project update

Zuul is a program that drives continuous integration, delivery, and deployment systems with a focus on project gating and interrelated projects. James Blair, principal software engineer with Red Hat and founding member of the Zuul project team, will walk attendees through what’s new in the latest release and what you can expect to see from the project in the upcoming release. Details here.

Profiling and optimizing container image builds

In Tungsten Fabric, whose CI is based on Zuul, we build over 5,000 container images per day. Every improvement in this process reduces the load on our infrastructure and gives users faster CI jobs. It’s a well understood correlation, but without convenient tooling it may be hard to effectively profile builds and detect code changes that significantly impact performance, leading to undesired pipeline bloat.

In this session, Codilime’s Jarek Lukow and Paweł Kopka will show how to track the performance of image builds in terms of time and storage, what tools to use to easily identify the most problematic points and how to measure and quantify image quality as well as possibilities for improvement. They’ll start from the standard Dockerfile workflow to arrive at new tools that allow for greater control of the builds. All of this will be served in a “spicy automation sauce” for use both in personal projects and at scale in the CI system of a relatively large open-source project. Details here.

Continuous integration for the enterprise: Teaching an old dog new tricks

As organizations adopt CI/CD into their software practices, it often stops at building artifacts and running test suites. In the typical enterprise, there are many more processes surrounding a software release such as change management and product management sign-off. Integrating these business processes into CI/CD pipelines allows software teams to spend more time delivering value to customers and less time filling out paperwork.
In this session, Red Hat’s Patrick Easters will walk you through how one of their teams journeyed to fully integrate business processes into their CI/CD pipeline. Details here.

See you at the Open Infrastructure Summit in Denver, April 29-May 1! Register here.

Photo // CC BY NC

The post Inside CI/CD: Must see sessions at the Open Infrastructure Summit appeared first on Superuser.

by Superuser at March 22, 2019 02:06 PM

Chris Dent

Placement Update 19-11

Placement update 19-11! Our little friend has come so far: a stable/stein branch was cut this week.

Most Important

Soon after RC1 was created, we discovered an issue in the PlacementFixture, used by nova. This was fixed and backported so there will be an RC2, toward the end of next week. In the meantime we should be trying to find release critical bugs and looking for feedback from others.

There are (thus far) two PTG related etherpads where you may want to leave some placement-planning-related thoughts:

What's Changed

House Ordering

  • The Monday nova scheduler meeting has been renamed to placement. I'll be taking over from Eric as the usual chair as he's busy with other things. While we are still establishing ourselves as a new project, the meeting seems like a good idea, but we should decide if/how we want to phase it out (or into office hours). Feel free to respond with your thoughts (on this or anything else in the post).

  • I sent out a message earlier in the week asking Nova cores to confirm if they'd like to stay on as Placement core. I'll give it a few more days and then remove nova-cores as an included member.

  • In-tree specs are going to happen (see next section).

  • A message was sent asking for volunteers for cross project liaisons.

Specs/Blueprint/Features

A review is in progress for in-tree specs, also mentioned in an email. As mentioned there, some specs need to be re-proposed for Train.

Bugs

We've got a StoryBoard project group now. I've started using it. Tagging bugs with bug and also making use of cleanup and rfe tags to indicate things that need to be cleaned up or feature requests.

Please be prepared for these structures to evolve as we gain some understanding of how StoryBoard works.

There are still bugs in launchpad and we need to continue to watch there:

osc-placement

osc-placement is currently behind by 13 microversions.

Pending changes:

Prepping the RC

Everything listed last week has been done except for:

  • Ensuring the install docs are sane and complete. I have asked packaging-related people for their input, as they're the ones who know how their packages are (or will be) set up.

Main Themes

Be thinking about what you'd like the main themes to be. Put them on the PTG etherpad.

With regard to the PTG, because we will have limited time, we should do as much of the discussion in email prior to the PTG so that when we get to the PTG we are resolving the difficult problems, not discovering what they are.

Other Placement

  • https://review.openstack.org/#/q/topic:cd/gabbi-tempest-job Gabbi-based integration tests of placement. These recently found a bug that none of the functional, grenade, nor tempest tests did. Not release related, but useful testing.

  • https://review.openstack.org/#/q/topic:bp/negative-aggregate-membership Negative member of aggregate filtering resource providers and allocation candidates. Work on this can go ahead now that stable/stein has been cut.

  • https://review.openstack.org/#/c/645255/ This is a start at unit tests for the PlacementFixture. It is proving a bit "fun" to get right, as there are many layers involved. Making sure seemingly unrelated changes in placement don't break the nova gate is important. Besides these unit tests, there's discussion on the PTG etherpad of running the nova functional tests, or a subset thereof, in placement's check run.

    On the one hand this is a pain and messy, but on the other consider what we're enabling: Functional tests that use the real functionality of an external service (real data, real web requests), not stubs or fakes.

Other Service Users

We'll hold off here until the final RC is cut. In the future if you stick "placement" somewhere in your commit message I'll probably eventually find your in-progress placement-related changes. A quick scan indicates there's quite a lot of interesting work in progress.

End

🍺🐦

by Chris Dent at March 22, 2019 10:38 AM

Opensource.com

New features in OpenStack Neutron

OpenStack's Stein release offers a variety of network connectivity-as-a-service enhancements to support 5G, the IIoT, and edge computing use cases.

by ildikov at March 22, 2019 07:00 AM

March 21, 2019

OpenStack Superuser

How Kata Containers boost security in Docker containers

Docker owes much of its popularity to the fact that it removes hurdles for developers who need to distribute their software. Pairing it with Kata Containers can make it even more secure.

Kata Containers is an open-source project and community working to build a standard implementation of lightweight Virtual Machines (VMs) that feel and perform like containers, but provide the workload isolation and security advantages of VMs.

Nils Magnus, a cloud architect at Open Telekom Cloud, ran a recent hour-long webinar (the recording is free with email registration here) on how Kata can improve Docker security. It includes a demo installation of Kata and Docker containers,  plus how to configure them and verify that they’re up and running.

Here are the main sections of the webinar, along with their timestamps:

• Overview of containers (at the 9:45 mark), Docker and Kata architecture (15:01)


• Installation (30:32) on a bare metal server at Open Telekom Cloud with 16 cores and 256 GB of RAM; though Magnus says such a machine isn’t necessary to run the demo, he wanted to show it on a “decent machine” capable of running several hundred containers.

• Configuration (37:04), verification (39:00), containers in action (40:00), inside the VM (41:29), troubleshooting commands (42:36). The scripts used in the demo for the automated installation and simple benchmark of the Kata Container runtime are available on GitHub.

• Performance benchmarks (47:06) and results (49:17)

• Use cases (50:10)
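The configuration step in the demo boils down to registering Kata as an additional Docker runtime. A minimal sketch, assuming the standard kata-runtime package path (adjust to your installation):

```shell
# Tell Docker about the Kata runtime (path assumes the default package install)
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "runtimes": {
    "kata-runtime": {
      "path": "/usr/bin/kata-runtime"
    }
  }
}
EOF
sudo systemctl restart docker

# Containers started with this runtime run inside a lightweight VM,
# so the kernel reported inside differs from the host's
docker run --rm --runtime=kata-runtime busybox uname -r
```

That last command is a quick way to verify the isolation: under the default runc runtime it prints the host kernel version, while under Kata it prints the guest kernel of the lightweight VM.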

So just what are the security advantages? “In a situation where you have several classic Docker containers running, one of your apps has some vulnerabilities and an attacker can access one of the apps inside one of your containers. If the hacker can identify some vulnerabilities in the host kernel, they could access the whole system itself or another one of the containers – an isolation breach.” In Kata Containers, it’s different, Magnus says. “Even if we have an attacker in one of the apps and even if the kernel contains a vulnerability, the attacker wouldn’t be able to leave the VM. This is the main difference.”

As for what could be improved in the Kata-Docker pairing, Magnus would like to see an improved memory footprint, further reduction of the guest image, removal of the shim between container and runtime and further reduction of the hypervisor footprint.

Get involved

Kata Containers is a fully open-source project––check out Kata Containers on GitHub and join the channels below to find out how you can contribute.

There are also a number of sessions featuring Kata Containers at the upcoming Open Infrastructure Summit, ranging from project onboarding to “Tailor-made security: Building a container specific hypervisor.” See all the sessions here.

The post How Kata Containers boost security in Docker containers appeared first on Superuser.

by Superuser at March 21, 2019 02:06 PM

March 20, 2019

OpenStack Superuser

How OpenStack and Ceph team up for distributed hyper-converged edge deployments

Leading tech giants and multiple enterprises are investing heavily in edge computing solutions.
Edge computing will enable businesses to act faster on the data they consume and stay ahead of the competition. The faster responses expected by innovative applications will need near real-time access to data, processing at nearby edge nodes and generating insights to feed the cloud and originating devices. Edge solution vendors are building solutions to reduce the impact of latency on a number of use cases.

The goal of an edge-computing-enabled network should be to maintain end-to-end quality of service and user experience continuity in a network where edge nodes are active. For example, considering that edge will be mainstream in 5G telecom networks, a 5G subscriber should not lose active services while moving within edge premises. Also, new services need to be pushed in real time, irrespective of the edge zone in the network. Subscribers will demand not just new services on a consistent basis, but faster delivery of them to realize the full outcome of real-time applications. As IoT evolves in the technology market landscape, such low-latency demands on network operators and solution providers will only grow.

Along these same lines, the Red Hat team has devised an integrated solution to reduce latency and maintain user experience continuity within a 5G network enabled with edge nodes. Let’s take a closer look at it.

Co-locating Ceph and OpenStack for hyperconvergence

5G networks are characterized by a distributed cloud infrastructure in which services are delivered at every part of the network, i.e. from the central data center/cloud to regional and edge sites. But having distributed edge nodes connected to a central cloud comes with constraints in the case of 5G networks.

  • The basic requirement for service providers is life cycle management of network services to every node in the network, centralized control of those functions and end-to-end orchestration from a central location.
  • A 5G network should provide lower latency and higher bandwidth, along with resiliency (failure and recovery at a single node) and scalability (of services as per increasing demand) at the edge level.
  • Service providers will need to provide faster and more reliable services to consumers with minimum hardware resources, especially at regional nodes and edge nodes.
  • A huge amount of data processing and analysis will take place at edge nodes. This will require storage systems that can store every type of data in all available ways and faster access to that data.

To address the above needs, Red Hat’s Sean Cohen, Giulio Fidente and Sébastien Han proposed a solution architecture at the OpenStack Summit in Berlin. This architecture amalgamates OpenStack’s core and storage-related projects with Ceph in a hyperconverged way. The resulting architecture will support distributed NFV (the backbone technology for 5G) and emerging use cases with fewer control planes, and distribute VNFs (virtual network functions, or network services) across all regional and edge nodes involved in the network.

The solution employs the Akraino Edge Stack (a Linux Foundation project), with a typical edge architecture consisting of central site/data center/cloud, regional site and a far edge site.

Figure 1 – Edge Architecture

A central cloud is the backbone of all operations and management of a network, where all processed data can be stored. Regional sites or edge nodes can be mobile towers, nodes dedicated to specific premises or other telco-centric facilities. Far edge nodes are the endpoints of a network, which can be digital equipment like mobiles, drones, smart devices, autonomous vehicles and industrial IoT equipment. Shared storage is available in the edge zone to make data persistent so it survives a node failure.

Sample deployment

In this proposed solution, the Red Hat team refers to an edge point of delivery (POD) architecture for telco service providers to explain where Ceph clusters can be placed alongside OpenStack projects in a hyperconverged way.

Figure 2 – Point of Delivery (POD)

Based on the above diagram, let’s take a further look at the deployment and operations scenarios.

OpenStack

In the case of figure 2 above, OpenStack already covers support for the Cruiser and Tricycle POD types. However, for edge deployments, different OpenStack projects can be utilized for various operations.

TripleO: The proposed TripleO architecture aims to reduce the control planes from the central cloud to the far edge nodes by using an OpenStack TripleO controller node at the middle layer. The proposal is to make TripleO capable of deploying the non-controller nodes that sit at the edge. With the power of TripleO, OpenStack can have central control over all the edge nodes participating in the network.

Figure 3 – TripleO Architecture

Glance API: It is mainly responsible for workload delivery, in the form of VM images, from the central data center to the far edge nodes. Glance is set up at the central data center and deployed on the middle edge node where the controller resides. A Glance API with a cache can be pushed to the far edge site, which is hyperconverged. This way, images can be pulled to far edge nodes from the central data center.

Ceph

Ceph provides different interfaces to access your data as object, block or file storage. In this architecture, Ceph can be deployed in containers as well as on the hypervisor. Containerizing Ceph clusters brings additional benefits for dynamic workloads, like better isolation, faster access to applications and better control over resource utilization.

Figure 4 – Ceph for the proposed architecture

Hyperconverged Ceph should be deployed at the Unicycle and Satellite PODs (refer to figure 2), that is, the edge nodes right after the central cloud. The resulting architecture, which depicts the co-location of containerized Ceph clusters at a regional site, looks like the figure below.

Figure 5 – Distributed Compute Nodes with Ceph

Ceph is deployed with two daemons, the monitor and the manager, which bring monitoring benefits such as gathering cluster information, and managing and storing cluster maps.

The graphic shows how the control plane is detached from the descendant nodes and put on a central site.
This brings a number of benefits, including:

  • Reduction of hardware resources and cost at the edge, since edge nodes are hyperconverged and no control plane is required to manage each node
  • Better utilization of compute and storage resources
  • Reduction of deployment complexity
  • Reduction in operational maintenance as the control plane will be similar across all edge nodes and a unified life cycle will be followed for scaling, upgrades, etc.

Final architecture (OpenStack + Ceph Clusters)

Here is the overall architecture from the central site to far edge nodes comprising the distribution of OpenStack services with integration in Ceph clusters. The representation shows how projects are distributed; control plane projects stack at central nodes and data stacks for far edge nodes.

Figure 6 – Final Architecture showing OpenStack projects + Ceph Clusters in HCI Way.

There are a few considerations and items of future work for the upcoming OpenStack release, Stein. They include keeping services running when edge nodes disconnect, removing the storage requirement at the far edge, HCI with Ceph monitors using container resource allocations, the ability to deploy multiple Ceph clusters with TripleO, etc.

Conclusion

Hyperconvergence of hardware resources is expected to be a fundamental architecture for mini data centers, i.e. edge nodes. The Red Hat team came up with an innovative hyperconvergence of OpenStack projects along with Ceph software-defined storage. As this solution shows, it’s possible to gain better control of all edge nodes by reducing control planes, yet maintain the continuity and sustainability of a 5G network along with the performance required by the latest applications.

This article was first published on TheNewStack.

About the author

Sagar Nangare is a technology blogger, focusing on data center technologies (networking, telecom, cloud, storage) and emerging domains (edge computing, IoT, machine learning, AI). He works at Calsoft Inc. as a digital strategist.

The post How OpenStack and Ceph team up for distributed hyper-converged edge deployments appeared first on Superuser.

by Sagar Nangare at March 20, 2019 01:09 PM

Mirantis

Introduction to Kustomize, Part 2: Overriding values with overlays

Now it's time to move on and look at overriding Kubernetes object parameters using Kustomize overlays.
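As a taste of what the full article covers, an overlay is just a kustomization that points at a base and patches selected fields. A minimal sketch (file names and the replica count are illustrative; the `bases`/`patchesStrategicMerge` syntax follows kustomize as of this writing):

```shell
mkdir -p base overlays/prod

# Base: a Deployment plus a kustomization listing it
cat > base/deployment.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.15
EOF
cat > base/kustomization.yaml <<'EOF'
resources:
- deployment.yaml
EOF

# Overlay: patch only the field being overridden
cat > overlays/prod/replicas.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
EOF
cat > overlays/prod/kustomization.yaml <<'EOF'
bases:
- ../../base
patchesStrategicMerge:
- replicas.yaml
EOF

# Render the merged manifests: the base Deployment with the overlay's replicas
kustomize build overlays/prod
```

The base never changes; each environment gets its own overlay directory patching only what differs.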

by Nick Chase at March 20, 2019 12:26 PM

Aptira

One Small Step Just Won’t Cut It

Tristan Good Company Step Challenge Winner

As you may have already noticed, many of our Solutionauts like to run. We run in the City to Surf. We run around conferences. We run all up in ur Cloud.

So we recently setup a platform to hold company step challenges. Aptira Solutionauts from all over the world stepped out from behind their computers to take the title of Aptira’s Greatest Stepper! Here’s how it went down.

Farzaneh and John were first out of the gates, with John taking part in a craft beer tour in Manly – racking up lots of steps in-between breweries and downing a few well deserved pints along the way. This is an excellent use for your Aptira bottle opener thongs by the way!

Tom got lost in the Taiwanese mountains while Bharat was traversing the Indian subcontinent and Jarryd hit the beach. Kat came in first out of the girls, almost hitting 30k steps, and Jess conveniently broke her toe the day before the challenge, coming in last place. Excuses much??

Our winner took first place with an unbelievable 122,578 steps. Seriously Tristan, are you even human?

The next step challenge will be taking place soon. No Tristans allowed. Stay tuned to find out who wins!

The post One Small Step Just Won’t Cut It appeared first on Aptira.

by Jessica Field at March 20, 2019 02:41 AM

March 19, 2019

OpenStack Superuser

Meet the newest members of the OpenStack Technical Committee

The OpenStack Technical Committee provides technical leadership for OpenStack as a whole. Responsibilities include enforcing OpenStack ideals (such as openness, transparency, commonality, integration and quality), deciding on issues that impact multiple programs, providing an ultimate appeals board for technical decisions and general oversight. It’s a fully-elected Committee that represents contributors to the project.

Made up of 13 directly elected members, the community hits refresh with partial elections every six months. There were nine candidates for the seven spots open this round. While some have served on the TC before, we wanted to highlight their thoughts on where the OpenStack community is headed.

Here are the newly elected members, in alphabetical order, with excerpts from their candidacy statements:

Zane Bitter, Red Hat, was elected for a second term on the TC. Some of the efforts he backed in his first term include a document on code-review techniques, work on the Vision for OpenStack Clouds document and input on adding new projects to the OSF. “Because the TC is the only project-wide elected body, leading the community to all move in the same direction is something that cannot happen without the TC. I plan to continue trying to do that and encouraging others to do the same.”

Thierry Carrez, VP of engineering at the OSF, admits that he’s been on the TC “forever,” and while new insights are important an historical perspective matters right now. “OpenStack is in the middle of a transition — from hyped project driven by startups and big service providers to a more stable project led by people running it or having a business depending on it. A lot of the systems and processes that I helped put in place to cope with explosive growth are now holding us back. We need to adapt them to a new era and I feel like I can help bringing the original perspective of why those systems were put in place, so hopefully we do not end up throwing the baby with the bath water.”

Graham Hayes, who currently works at Microsoft, is the project team lead for Designate. This is his second term on the TC, and while stressing that turnover is important, he underlines his recent experience as a reason for returning. “I have spent time recently working on a very large OpenStack cloud in a day-to-day operations role and I think that this experience is important to have on the Technical Committee. The experience that a lot of our users have is very different to what we may assume and knowing how end users deal with bugs, deployment life cycles and vendors should guide us.”

Rico Lin, a software engineer at EasyStack, has been involved with OpenStack since 2014. His main goals for serving on the TC include cross-community integration (Kubernetes, CloudFoundry, Ceph, OPNFV), strengthening the structure of Special Interest Groups (SIGs) and cross-cooperation between users, operators and developers.

Mohammed Naser, CEO of Vexxhost, is also serving a second term. “I think that we should work on increasing our engagement with other communities. In addition, providing guidance and leadership for groups and projects that need help to merge their features, even if it involves finding the right person and asking them for a review. I’d like to personally have a more “hands-on” approach and try to work more closely with the teams, help shape and provide input into the direction of their project while looking at the overall picture.”

Jim Rollenhagen, principal software engineer at Oath. A new member of the TC, he’s been involved with OpenStack since 2014, primarily upstream on as PTL and core reviewer. One of his suggestions is to encourage more part-time contributors. “People like someone scratching an itch in their lab at home, a user getting curious about a bug, or an operator that finds an edge case. I think it’s easier for these types of people to contribute today than it has been in the past, but I believe we can keep improving on this. Our onboarding process can continue to improve. We should have more people willing to walk a new contributor through their first patch (kudos to the people doing this already!)”

Alexandra Settle, who works at SUSE, is also a first-timer on the TC. She’s been an active contributor to OpenStack manuals since early 2014 and been a core member since early 2015. She aims to focus on three areas: breaking down barriers between projects (new and old) and contributors (new and old); the openness of the community and maintaining that focus; and embracing change. “Over the years, there has been an unspoken consensus that we are all aiming for the success of OpenStack as free and open software, fully developed and used by a welcoming and supportive community. I hope to further promote this statement.”

So who gets a vote in these elections? All Foundation individual members who are also committers for one of the official project teams repositories during the release time frame (for this round, it was Train). More on the process here, including how to get involved for next time.

“Having a good group of candidates helps engage the community in our democratic process,” says Foundation staffer Kendall Nelson, one of the election officials. “Thank you to all who voted and who encouraged others to vote. We need to ensure your voices are heard!”

The post Meet the newest members of the OpenStack Technical Committee appeared first on Superuser.

by Superuser at March 19, 2019 02:06 PM

CERN Tech Blog

OpenStack Day CERN

CERN, the European Organization for Nuclear Research, is organizing an OpenStack Day (OSD) on May 27th, 2019. The event theme is “Accelerating Science with OpenStack.” In this event we would like to gather the OpenStack community for a one-day discussion on how OpenStack is helping thousands of scientists around the world. Talks from CERN, SKA and SWITCH are already confirmed! There are opportunities for lightning talks on topics around the use of OpenStack in scientific domains.

by CERN (techblog-contact@cern.ch) at March 19, 2019 10:21 AM

March 18, 2019

OpenStack Superuser

Inside open infrastructure: The latest from the OpenStack Foundation

Welcome to the latest edition of the OpenStack Foundation Open Infrastructure newsletter, a digest of the latest developments and activities across open infrastructure projects, events and users. Sign up to receive the newsletter and email community@openstack.org to contribute.

Spotlight on… The Project Teams Gathering (PTG) in Denver

In open collaboration, it is important for contributors to regularly meet in person. It allows open source projects to build shared understandings, discuss common priorities, iterate quickly on solutions for complex problems, and make fast progress on critical issues. It is a major step to establish a project identity beyond each separate organization participating.
The Project Teams Gathering (PTG) is a work event for contributors to various open source projects, special interest groups or working groups, organized by the OpenStack Foundation. It provides meeting facilities allowing those various groups to meet face-to-face, exchange and get work done in a productive setting. The co-location of those various meetings, combined with the dynamic scheduling of the event, make it easy to cross-pollinate between groups, or participate in multiple team meetings.

Historically, the PTG was organized as a separate event, run at a different time and location from our other events. For the first time in Denver in May 2019, the PTG will be run just after the Summit, in the same venue. This should make it accessible to a wider set of contributors.

As the OpenStack Foundation evolved to more broadly support openly developing open infrastructure, the PTG is now open to a larger set of open source projects. In Denver we’ll obviously have various OpenStack project teams taking the opportunity to meet, but also OSF pilot projects like Kata Containers, StarlingX and Airship. Beyond that, the event is open to other open infrastructure projects: at the last event we welcomed a Tungsten Fabric developers meeting, and in Denver we’ll have Rust-VMM developers leveraging the event to meet in person. Rust-VMM is a nascent open collaboration to develop common Rust virtualization crates, reusable between CrosVM and Firecracker.

You can learn more about the upcoming PTG, and see the full list of teams that will meet there by visiting the PTG website. If you are a contributor to one of those projects, we’d really like to see you there!

OpenStack Foundation news

  • Here are the latest updates on the Open Infrastructure Summit in Denver, April 29 – May:
    • The schedule is live and registration is open. Check out the lineup of speakers and get your tickets now before prices increase on April 11 at 11:59 p.m. PT.
    • After Denver, the Open Infrastructure Summit heads to Shanghai, the week of November 4. Sponsor Sales are now open, learn more here.
  • Last week, the OpenStack Foundation Board of Directors reviewed confirmation guidelines for new Open Infrastructure Projects under the Foundation. After reviewing the process by which the guidelines were drafted and their current state, the Board unanimously approved the guidelines.

OpenStack Foundation Project News

OpenStack

Airship

StarlingX

  • The community reached their first milestone to containerize the control plane services of StarlingX for the upcoming release. For details, check out the Wiki.
  • There will be a hands-on workshop at the Open Infrastructure Summit. If you’re interested in learning how to deploy StarlingX and trying out some of the cool features of the platform, sign up for the workshop in Denver.

Zuul

Questions / feedback / contribute

This newsletter is edited by the OpenStack Foundation staff to highlight open infrastructure communities. We want to hear from you!
If you have feedback, news or stories that you want to share, reach us through community@openstack.org and to receive the newsletter, sign up here.



The post Inside open infrastructure: The latest from the OpenStack Foundation appeared first on Superuser.

by OpenStack Foundation at March 18, 2019 12:26 PM

CERN Tech Blog

Splitting the CERN OpenStack Cloud into Two Regions

Overview The CERN Cloud Infrastructure has been available since 2013 for all CERN users. During the last 6 years it has grown from a few hundred to more than 300k cores. The Cloud Infrastructure is deployed in two data centres (Geneva, Switzerland and Budapest, Hungary). Back in 2013 we decided to have only one region across both data centres for simplicity. We wanted to offer an extremely simple solution to be adopted easily by our users.

by CERN (techblog-contact@cern.ch) at March 18, 2019 12:00 PM

March 15, 2019

OpenStack Superuser

Strengthening open infrastructure: Integrating OpenStack and Kubernetes

OpenStack and Kubernetes are currently the most popular open infrastructure solutions, so it’s worthwhile to provide users access to a platform that provides both services, using a single personal account. Currently this is hardly possible, since the two systems provide different authentication mechanisms. OpenStack uses its own identity system, Keystone, while Kubernetes delegates authentication to external providers through a mechanism of plug-ins.

Previous attempts at integration assumed password-based authentication for OpenStack and enabled Kubernetes users to authenticate with their OpenStack passwords through Keystone.

However, there are other means of authenticating to Keystone, for example federated authentication, a more secure and scalable solution that redirects to an external trusted identity provider. In particular, the federated GARR cloud platform uses federated authentication through EduGain, enabling SSO for all users of the worldwide research community.

To provide a general solution, we developed an innovative technique, based on a novel feature of OpenStack called application credentials that became fully available with the Rocky release.

The proposed solution requires a modified version of one of the official SDK libraries for OpenStack. This change has been approved by the project maintainers and will be released as part of the next official distribution of the library.

The implementation of Keystone authentication for Kubernetes relies on a WebHook, one of the authentication modules provided by Kubernetes authentication. When WebHook authentication is enabled, the Kubernetes API redirects requests to a RESTful service. In our case, we use an OpenStack-Keystone service as authentication provider.

To simplify usage, we’ve extended the OpenStack dashboard by adding a button for downloading a config file, ready to use with kubectl, that includes the application credentials.

GARR deployed a multi-tenant Kubernetes cluster on bare metal to reduce management overhead, resource fragmentation and delays in cluster creation. We used the same declarative modeling tools for deploying the cluster with MaaS and Juju, side by side with our OpenStack infrastructure. This facilitates maintenance and scaling of the infrastructure with a single set of tools. Multi-tenancy restrictions, though, limit users' access to common extensions. Therefore, we provide a set of specific roles and bindings that give normal users, who have no privileges on the kube-system namespace, the rights to perform installations, for example through Helm.

What follows is the architecture of the solution, its components and their implementation in a real-world production environment, as well as an installation and configuration guide for users. We conclude with suggestions for future extensions to deal with role-based access control (RBAC).

Introduction

We’ll start with how to integrate authentication between OpenStack, an infrastructure-as-a-service (IaaS) provider, and Kubernetes, a container deployment service to allow OpenStack users to seamlessly access Kubernetes services.

In Kubernetes, processing of a request goes through the following stages:

  • Authentication
  • Authorization
  • Admission control

Kubernetes supports several authentication strategies that may invoke external authenticator providers (e.g. LDAP or OpenID Connect) available through plug-ins, as shown in step one in the following diagram:

Kubernetes authentication architecture.

Each plug-in, which is invoked as an external command through the library k8s.io/client-go, implements its protocol-specific logic, then returns opaque credentials. Credential plug-ins typically require a server-side component with support for WebHook token authentication to interpret the credential format produced by the client plug-in.
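To make the plug-in contract concrete: a client-go credential plug-in prints an ExecCredential object on stdout, and the opaque token it carries is what the client then presents as a bearer token. A minimal sketch of building that object (the token value is a placeholder):

```python
import json

def make_exec_credential(token: str) -> str:
    """Build the ExecCredential JSON that a client-go credential
    plug-in (such as kubectl-keystone-auth) writes to stdout."""
    return json.dumps({
        "apiVersion": "client.authentication.k8s.io/v1beta1",
        "kind": "ExecCredential",
        "status": {"token": token},
    })
```

client-go reads this output and attaches the token to subsequent API requests as `Authorization: Bearer <token>`.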

The solution we developed to provide Keystone authentication for Kubernetes consists of the following modules:

  • A credential plug-in for Keystone authentication
  • A service implementing a WebHook for authentication
  • A service implementing a WebHook for authorization (currently the same as module two)

Workflow of Kubernetes authentication through Keystone.

Here are the steps for the authentication process:

  1. A user issues a kubectl command or issues an API call, which is handled by client-go.
  2. The credential plug-in obtains the user’s Keystone credential, either from the kubeconfig file or by prompting the user, and requests a token from Keystone using those credentials.
  3. The token from Keystone returns to the client through the credential plug-in.
  4. The client uses this token as a bearer token against the Kubernetes API server.
  5. The Kubernetes API server uses the WebHook token authenticator to validate the token against the Keystone service.
  6. The Keystone service verifies the token and returns the user’s username and groups.
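For the webhook call in step 5, the Kubernetes API server wraps the bearer token in a TokenReview object, and the authenticator answers with a TokenReview status carrying the user's identity (step 6). A sketch of both shapes, with illustrative user data:

```python
def token_review_request(bearer_token: str) -> dict:
    """TokenReview the Kubernetes API server POSTs to the
    authentication webhook (step 5)."""
    return {
        "apiVersion": "authentication.k8s.io/v1beta1",
        "kind": "TokenReview",
        "spec": {"token": bearer_token},
    }

def token_review_response(username: str, groups: list) -> dict:
    """Affirmative TokenReview status the webhook returns once
    Keystone has verified the token (step 6)."""
    return {
        "apiVersion": "authentication.k8s.io/v1beta1",
        "kind": "TokenReview",
        "status": {
            "authenticated": True,
            "user": {"username": username, "groups": groups},
        },
    }
```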

The solution we present is of general interest, since it allows cloud providers to offer both a container deployment platform based on Kubernetes and IaaS services provided by OpenStack, both accessible through a single set of credentials.

An earlier solution for integrating Kubernetes authentication with OpenStack relied on password authentication. But OpenStack can be configured to use federated authentication, like the one used in the GARR Federated Cloud Platform, provided by Idem or EduGain. Consequently, password authentication isn’t available for normal users in this scenario.

A development team from SWITCH and GARR worked jointly to find a more general solution. The recent Queens release for Keystone introduced the mechanism of application credentials. Through this mechanism, an application can request a token that can be used thereafter to validate user requests before performing operations on his behalf. Furthermore, in the Rocky release of the Horizon dashboard, a panel has been added allowing users to create application credentials.

The key idea of this solution is to use an application credential obtained from Keystone and pass it to Kubernetes for validating user requests. This requires exploiting the plug-in architecture provided by Kubernetes to insert suitable steps in the authentication process. In particular, Kubernetes needs to convert credentials into a token and later use that token whenever needed to validate each individual request before performing it.

The ability to obtain credentials directly from the dashboard allows users to be completely autonomous in setting up integrated Kubernetes/Keystone authentication. For example, the given credentials can be inserted in the user configuration file for kubectl, the standard command-line interface for operating on Kubernetes. Afterwards, the user can access Kubernetes without any further complications.

A limitation of the current solution is that it requires installing a plug-in on the user’s machine, which has these drawbacks:

  • Binary versions for each machine architecture and for each Kubernetes release must be maintained
  • Mobile devices are not supported

Keystone authentication with application credentials for Kubernetes

Since the Queens release of OpenStack, Keystone has supported application credentials. These can be used by applications to authenticate through Keystone with the privileges assigned by the user who created them. In particular, such credentials can be used by the Kubernetes API to authenticate and authorize operations.

In the solution presented here, authentication is performed by a plugin (kubectl-keystone-auth), while authorization is delegated by the Kubernetes API through a WebHook to a RESTful web service (k8s-keystone-auth).

In the next section, we describe how to use Keystone application credentials for authenticating to Kubernetes and use them for Kubernetes services.

Create application credentials with Horizon

The following screenshots illustrate the steps needed to create an application credential through the OpenStack Horizon dashboard.

Select Application Credentials in the Identity Panel:

Fill out the form to create an application credential:

Download both an openrc file to set OpenStack environment variables for using the generated application credential and a configuration file for kubectl:

The button “Download kubeconfig file” is an extension that we developed for the Horizon dashboard, which creates a preconfigured ~/.kube/config file ready for working on Kubernetes. It contains the application credentials and other parameters for connecting to the Kubernetes API server.

The code for this extension is available on GitLab and mirrored on GitHub.
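For illustration, the generated file can be thought of as a kubeconfig whose user entry invokes the credential plug-in via an exec stanza. The sketch below is an assumption about its shape, not the extension's actual output; in particular, the environment variable names passed to kubectl-keystone-auth are illustrative:

```python
def make_kubeconfig(server, keystone_url, cred_id, cred_secret):
    """Sketch of a kubeconfig that wires the kubectl-keystone-auth
    credential plug-in to an application credential (shape and
    env var names are assumptions for illustration)."""
    return {
        "apiVersion": "v1",
        "kind": "Config",
        "clusters": [{"name": "k8s", "cluster": {"server": server}}],
        "contexts": [{"name": "k8s",
                      "context": {"cluster": "k8s", "user": "keystone"}}],
        "current-context": "k8s",
        "users": [{
            "name": "keystone",
            "user": {"exec": {
                "apiVersion": "client.authentication.k8s.io/v1beta1",
                "command": "kubectl-keystone-auth",
                "env": [
                    {"name": "OS_AUTH_URL", "value": keystone_url},
                    {"name": "OS_APPLICATION_CREDENTIAL_ID", "value": cred_id},
                    {"name": "OS_APPLICATION_CREDENTIAL_SECRET", "value": cred_secret},
                ],
            }},
        }],
    }
```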

Enable Kubernetes authentication via application credentials

Once the application credential is created, you can download the kubectl config file with the button “Download kube config file”.

The credential plugin kubectl-keystone-auth is required in order to enable application credentials authentication. It can be either downloaded or compiled from sources.

Download the credential plugin

Download kubectl-keystone-auth for your architecture from:

https://git.garr.it/cloud/charms/kubernetes-keystone/blob/master/bin/linux-amd64/kubectl-keystone-auth

Install it in a folder accessible by kubectl, for example:

$ mkdir -p ~/.kube/bin

$ cp -p kubectl-keystone-auth ~/.kube/bin

Build the credential plugin

A working installation of Golang is needed to build the plugin. Follow the instructions at: https://golang.org/doc/install#install.

Clone the repository for cloud-provider-openstack:

$ git clone https://github.com/kubernetes/cloud-provider-openstack \
    $GOPATH/src/kubernetes/cloud-provider-openstack

$ cd $GOPATH/src/kubernetes/cloud-provider-openstack

Build the plugin with:

$ sudo make client-keystone-auth

Install it in a folder accessible by kubectl, for example:

$ mkdir -p ~/.kube/bin

$ cp -p client-keystone-auth ~/.kube/bin/kubectl-keystone-auth

Setting up Keystone authentication

This section describes the steps that a cloud administrator needs to perform to set up Keystone authentication in a Kubernetes cluster.

The Kubernetes API server must be configured with WebHook token authentication to invoke an authenticator service for validating tokens with Keystone. The service to be invoked cannot be Keystone itself, since the payload produced by the WebHook has a different format than the requests expected by the Keystone API for application credentials.

Here’s an example of a WebHook payload:

{
  "apiVersion": "authorization.k8s.io/v1beta1",
  "kind": "SubjectAccessReview",
  "spec": {
    "resourceAttributes": {
      "namespace": "kittensandponies",
      "verb": "get",
      "group": "unicorn.example.org",
      "resource": "pods"
    },
    "user": "jane",
    "group": ["group1"]
  }
}

The token validation request to Keystone, by contrast, has an empty payload; its parameters are passed in headers: the token of an authorized user in the X-Auth-Token request header, and the token to validate in the X-Subject-Token request header. The response has the following form:

{
  "token": {
    "audit_ids": [
      "mAjXQhiYRyKwkB4qygdLVg"
    ],
    "expires_at": "2015-11-05T22:00:11.000000Z",
    "issued_at": "2015-11-05T21:00:33.819948Z",
    "methods": [
      "password"
    ],
    "user": {
      "domain": {
        "id": "default",
        "name": "Default"
      },
      "id": "10a2e6e717a245d9acad3e5f97aeca3d",
      "name": "admin",
      "password_expires_at": null
    }
  }
}
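The authenticator's job is precisely to bridge these two formats: take the token from the WebHook payload, validate it against Keystone using the two headers described above, and translate Keystone's answer back. A sketch of that translation (the function names are ours, not k8s-keystone-auth's):

```python
def keystone_validation_headers(service_token, subject_token):
    """Headers for the Keystone token validation call: the service's
    own token authorizes the request, the user's token is validated."""
    return {"X-Auth-Token": service_token,
            "X-Subject-Token": subject_token}

def to_tokenreview_status(keystone_response):
    """Translate Keystone's validation response into the status the
    Kubernetes API server expects back from the webhook."""
    user = keystone_response["token"]["user"]
    return {
        "authenticated": True,
        "user": {"username": user["name"], "uid": user["id"]},
    }
```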

The program that implements the authenticator service is called k8s-keystone-auth. The steps for obtaining it are described below.

Configure the Kubernetes API server

The Kubernetes API receives a request including a Keystone token. In the Kubernetes language, this is a Bearer Token. To validate the Keystone token the Kubernetes API server will use a WebHook. The service invoked through the WebHook will in turn contact the Keystone service that generated the token in order to validate it.

Here we describe how to configure the Kubernetes API server to invoke the k8s-keystone-auth authenticator through a WebHook.

Create the following file in /path/to/webhook.kubeconfig:

apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: KEYSTONE_URL
  name: webhook
contexts:
- context:
    cluster: webhook
    user: webhook
  name: webhook
current-context: webhook
kind: Config
preferences: {}
users:
- name: webhook

where KEYSTONE_URL is the endpoint of the Keystone service.

Execute the following command in the master Kubernetes API node to configure it:

$ sudo snap set kube-apiserver authentication-token-webhook-config-file=/path/to/webhook.kubeconfig

If you do not use snap, edit the file /etc/kubernetes/manifests/kube-apiserver.yaml and add this line as a parameter to the kube-apiserver command:

- --authentication-token-webhook-config-file=/path/to/webhook.kubeconfig

Install the Keystone authenticator service

The Keystone authenticator service is the component in charge of validating requests containing bearer tokens.

The Keystone authenticator service is implemented by the program k8s-keystone-auth. You can either download a pre-compiled version or build it from source.

Download the Keystone authenticator

You can find pre-compiled versions of k8s-keystone-auth for different architectures in the following repository:

https://git.garr.it/cloud/charms/kubernetes-keystone/raw/master/bin/

Deploy via Juju

In order to deploy the Keystone authorization service on a cluster managed through Juju, we provide a charm that automates its deployment. The service will be automatically replicated on all the Kubernetes Master units, ensuring high availability. The charm is available on the public repository: https://git.garr.it/cloud/charms/kubernetes-keystone.

The k8s-keystone-auth service can be deployed by doing:

$ juju deploy cs:~csd-garr/kubernetes-keystone \
    --config keystone-url='KEYSTONE_URL' \
    --config k8s-keystone-auth-url='DOWNLOAD_URL' \
    --config authn-server-url='AUTHN_URL' \
    --config authz-server-url='AUTHZ_URL'

$ juju add-relation kubernetes-master kubernetes-keystone

The configuration parameters are:

KEYSTONE_URL URL of the Keystone endpoint.

DOWNLOAD_URL URL for downloading the Keystone authenticator server program.

AUTHN_URL URL of the WebHook authentication service.

AUTHZ_URL URL for the WebHook authorization service.

Configuration parameters can also be passed through a YAML file as explained here: https://docs.jujucharms.com/2.4/en/charms-config.

Alternatively, the WebHook authenticator service can be deployed as a Kubernetes pod. This requires a Docker image for k8s-keystone-auth to be deployed within a Docker container.

The steps for building the Docker image are described in the section “Build the Keystone authenticator,” including the following:

$ make image-k8s-keystone-auth

The following deployment file is used for deploying the WebHook authenticator service on Kubernetes itself.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8s-keystone-auth
  namespace: kube-system
  labels:
    app: k8s-keystone-auth
spec:
  replicas: 1
  selector:
    matchLabels:
      app: k8s-keystone-auth
  template:
    metadata:
      labels:
        app: k8s-keystone-auth
    spec:
      nodeSelector:
        dedicated: k8s-master
      hostNetwork: true
      containers:
      - name: k8s-keystone-auth
        image: rdil/k8s-keystone-auth:latest
        imagePullPolicy: Always
        args:
        - ./bin/k8s-keystone-auth
        - --tls-cert-file
        - /etc/kubernetes/pki/apiserver.crt
        - --tls-private-key-file
        - /etc/kubernetes/pki/apiserver.key
        - --keystone-url
        - KEYSTONE_URL
        - --sync-config-file
        - /etc/kubernetes/pki/identity/keystone/syncconfig.yaml
        volumeMounts:
        - mountPath: /etc/kubernetes/pki
          name: k8s-certs
          readOnly: true
        - mountPath: /etc/ssl/certs
          name: ca-certs
          readOnly: true
        ports:
        - containerPort: 8443
      volumes:
      - name: k8s-certs
        hostPath:
          path: /etc/kubernetes/pki
          type: DirectoryOrCreate
      - name: ca-certs
        hostPath:
          path: /etc/ssl/certs
          type: DirectoryOrCreate
---
kind: Service
apiVersion: v1
metadata:
  name: k8s-keystone-auth-service
  namespace: kube-system
spec:
  selector:
    app: k8s-keystone-auth
  ports:
  - protocol: TCP
    port: 8443
    targetPort: 8443

where KEYSTONE_URL is https://keystone.cloud.garr.it:5000/v3 for the GARR Cloud Platform.

In order to deploy the component on the master node (the master node contains the sync config file, and the WebHook forwards the auth request to localhost), we labeled the master node (kubectl label nodes name_of_your_node dedicated=k8s-master).

The pod can then be scheduled on the master node (after kubectl taint nodes --all node-role.kubernetes.io/master-), and we added a section to the deployment specification that selects the master node for the deployment.

We also added “hostNetwork: true” to put the pod on the same network as the master node, making communication with it possible.

OpenStack client using application credentials

Application credentials can also be used with the OpenStack client for authenticating to Keystone.

In order to use application credentials, the following variables must be set:

export OS_AUTH_TYPE=v3applicationcredential

export OS_AUTH_URL=KEYSTONE_URL

export OS_APPLICATION_CREDENTIAL_NAME=CREDENTIAL_NAME

export OS_APPLICATION_CREDENTIAL_SECRET=CREDENTIAL_SECRET

where KEYSTONE_URL for the GARR Cloud Platform is https://keystone.cloud.garr.it:5000/v3, while CREDENTIAL_NAME and CREDENTIAL_SECRET are the values from the application credential.

With this configuration, the openstack command will authenticate successfully:

$ openstack token issue
+------------+----------------------------------+
| Field      | Value                            |
+------------+----------------------------------+
| expires    | 2018-09-04T09:58:33+0000         |
| id         | 99e74e7a8ec14b1bb945672662580ea7 |
| project_id | daadb4bcc9704054b108de8ed263dfc2 |
| user_id    | 4ae5e9b91b1446408523cb01e5da46d5 |
+------------+----------------------------------+
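Under the hood, OS_AUTH_TYPE=v3applicationcredential makes the client request a token from the Keystone /auth/tokens endpoint using the application_credential method. A sketch of the request body it sends (the values are placeholders; when authenticating by credential name, the owning user must also be identified):

```python
def app_credential_auth_body(name, secret, user_id):
    """Keystone v3 token request body for the application_credential
    auth method, as sent to POST {KEYSTONE_URL}/auth/tokens."""
    return {
        "auth": {
            "identity": {
                "methods": ["application_credential"],
                "application_credential": {
                    "name": name,
                    "secret": secret,
                    "user": {"id": user_id},
                },
            }
        }
    }
```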

Conclusions

We’ve presented a solution for integrating Kubernetes with OpenStack, exploiting Keystone as a common authentication service and using application credentials provided by Keystone.

In a follow-up post, we’ll describe how we set up a multi-tenant Kubernetes cluster on bare metal using the automation tools MaaS and Juju. The cluster is shared among users of the GARR community, providing better performance and reduced costs. However, we needed to provide some common functionality to enable users to install services that are normally installed in the system namespace kube-system. In particular, we’ll show how to handle the creation of the Kubernetes dashboard and how to exploit Helm for installing packaged containerized applications.

About the authors

The work was carried out by GARR‘s Giuseppe Attardi, Alberto Colla, Alex Barchiesi, Roberto Di Lallo, Fulvio Galeazzi and Claudio Pisa, and by Saverio Proto at SWITCH, as part of the GÉANT project (GN4-2).

The post Strengthening open infrastructure: Integrating OpenStack and Kubernetes appeared first on Superuser.

by Giuseppe Attardi, Alberto Colla, Alex Barchiesi, Roberto Di Lallo, Fulvio Galeazzi, Claudio Pisa, Saverio Proto at March 15, 2019 02:15 PM

Chris Dent

Placement Update 19-10

Placement update 19-10 is here. We're fast approaching placement's first official release.

Most Important

There are several tasks left before we can cut the release, mostly related to documentation and other writing related things. I've attempted to enumerate them in a "Prepping the RC" section below. These are things that need to be done before next Thursday, preferably sooner.

It's also important to be thinking about how placement would like to engage (as a group) with the PTG (the Forum is already decided: there will be an extraction related Forum session).

What's Changed

  • Oh, hey, I'm, like, the placement PTL. Mel and I decided early in the week that whatever the official timetable, I'll take the baton from here. Thanks to everyone who helped to get placement to where we are now.

  • The stack of code that got rid of the List classes in favor of module level methods, which also happened to move each "object" type to its own module, has merged. I'm glad we got this in before release as it ought to make debugging and digging around a bit easier.

  • Lots of little documentation tuneups (from story 2005190) have merged, including pointing to storyboard for bugs. These changes scrape the surface of what remains (listed below).

  • I wrote up a blog post on profiling wsgi apps which I'd been doing to confirm that the many refactorings that have happened recently weren't having a negative impact (they are not).

  • We decided to wait for Train for the negative-member-of functionality and the allocation ratio change in osc-placement.

  • Kolla has merged several changes for extracted placement. Thanks!

Specs/Blueprint/Features

Skipping this section until after the release candidate(s) are done.

Bugs

We've got a StoryBoard project group now. I've started using it. Tagging bugs with bug and also making use of a cleanup tag to indicate things that need to be cleaned up. There are worklists for both of these:

Please be prepared for these structures to evolve as we gain some understanding of how StoryBoard works.

There are still bugs in launchpad and we need to continue to watch there:

Many of these are about nova's use of placement. At some point after RC we should do a bug review, and port placement-only things to StoryBoard.

osc-placement

osc-placement is currently behind by 13 microversions.

Pending changes:

Prepping the RC

Things that need to happen so we can cut a placement release candidate:

  • Anything currently open that we want in. There are only 6 pending patches that might be options (everything else is either waiting for Train or already +W), so a quick look at them is worth the effort.

  • We've started a cycle-highlights etherpad, as announced by this email. We've probably got enough, but feel free to add to it if you think of something.

  • There's a story for preparing placement docs for stein. The story includes several tasks, many of which are already merged. Have a look and assign yourself a task if you can commit to having it done by early next week. There are some biggies:

    • Creating the canonical how to upgrade from placement-in-nova to placement-in-placement document. As stated very well by Tetsuro, this is effectively translating the grenade upgrade script to English.

    • Ensuring the install docs are sane and complete. I have asked packaging-related people for their input, as they're the ones who know how their packages are (or will be) set up, but there's also an "install from-pypi" hole that needs to be filled.

  • The releasenotes need to be evaluated for correctness and effective annotation of upgrade concerns. They will also need a prelude, probably pointing to the "upgrading from nova" doc mentioned above. For a sample, see nova's rocky prelude.

Main Themes

We'll come back to themes once the RC is cut.

Other Placement

Other Service Users

We'll also hold off here until the RC is cut. In the future if you stick "placement" somewhere in your commit message I'll probably eventually find your in-progress placement-related changes.

End

Once the release is released, it will be time to start thinking about what we want Train to look like. There are pending Stein feature specs that we will want to do (and will need to be put in our specs directory, once it exists), but other than the various ideas about ways to do multi-nova/cloud partitioning of resource providers and multi-service partitioning of allocations (both of which need much more well-defined use cases before we start thinking about the solutions) I've not heard a lot of clamouring from services and operators for features in Placement. If you have heard, or are clamouring, please make yourself known. I'd personally like us to focus on enabling existing services that use or want to use placement (nova, neutron, blazar, cyborg) and its existing features rather than new features. No need to have any immediate thoughts or decisions on this, but some background thinking is warranted.

Also, OMG, we need a logo. How about an Australian Magpie? They make a cool noise.

by Chris Dent at March 15, 2019 02:03 PM

March 14, 2019

SUSE Conversations

Keep an Open Mind in an Open Source World

 Blog written by Ryan Hagen, Consulting Manager, Global SUSE Services     Change is inevitable.  We’ve said goodbye to VCRs, cassette tapes, and proprietary code and hello to Netflix, Spotify, and open source.  Why?  Because today’s always-on, digitally-defined world demands increased efficiency, improved automation, immediate satisfaction and innovation. It is no secret that VMware owns the […]

The post Keep an Open Mind in an Open Source World appeared first on SUSE Communities.

by stacey_miller at March 14, 2019 05:20 PM

Adam Young

Building the Kolla Keystone Container

Kolla has become the primary source of containers for running OpenStack services. Since it has been a while since I tried deliberately running just the Keystone container, I decided to build the Kolla version from scratch and run it.

UPDATE: Ozz wrote it already, and did it better: http://jaormx.github.io/2017/testing-containerized-openstack-services-with-kolla/

I had a clone of the Kolla repo already, but if you need one, you can get it by cloning:

git clone git://git.openstack.org/openstack/kolla

All of the dependencies you need to run the build process are handled by tox. Assuming you can run tox elsewhere, you can use that here, too:

tox -e py35

That will run through all the unit tests. They do not take that long.

To build all of the containers you can active the virtual environment and then use the build tool. That takes quite a while, since there are a lot of containers required to run OpenStack.

$ . .tox/py35/bin/activate
(py35) [ayoung@ayoungP40 kolla]$ tools/build.py 

If you want to build just the keystone containers….

 python tools/build.py keystone

Building this with no base containers cached took me 5 minutes. Delta builds should be much faster.

Once the build is complete, you will have a bunch of container images defined on your system:

REPOSITORY                                       TAG     IMAGE ID       CREATED          SIZE
kolla/centos-binary-keystone                     7.0.2   69049739bad6   33 minutes ago   800 MB
kolla/centos-binary-keystone-fernet              7.0.2   89977265fcbb   33 minutes ago   800 MB
kolla/centos-binary-keystone-ssh                 7.0.2   4b377e854980   33 minutes ago   819 MB
kolla/centos-binary-barbican-keystone-listener   7.0.2   6265d0acff16   33 minutes ago   732 MB
kolla/centos-binary-keystone-base                7.0.2   b6d78b9e0769   33 minutes ago   774 MB
kolla/centos-binary-barbican-base                7.0.2   ccd7b4ff311f   34 minutes ago   706 MB
kolla/centos-binary-openstack-base               7.0.2   38dbb3c57448   34 minutes ago   671 MB
kolla/centos-binary-base                         7.0.2   177c786e9b01   36 minutes ago   419 MB
docker.io/centos                                 7       1e1148e4cc2c   3 months ago     202 MB

Note that the build instructions live in the git repo under docs.

by Adam Young at March 14, 2019 03:43 PM

About

Planet OpenStack is a collection of thoughts from the developers and other key players of the OpenStack projects. If you are working on OpenStack technology you should add your OpenStack blog.
