September 24, 2020

OpenStack Superuser

Inside Open Infrastructure: The latest from the OpenStack Foundation

Spotlight on:

Open Infrastructure Summit schedule is live!

The schedule features keynotes and sessions from users like Volvo Cars Corporation, Workday, GE Digital, Société Générale, LINE, Ant Group, and more! The event, held virtually for the first time, takes place October 19-23 and includes more than 100 sessions around infrastructure use cases like cloud computing, edge computing, hardware enablement, and security, as well as hands-on training and an opportunity to interact with vendors in the Open Infrastructure Marketplace.

10,000+ attendees are expected to participate, representing 30+ open source communities and more than 110 countries. Keynotes begin at 10am Central Time on Monday, October 19 and again on Tuesday, October 20.

Sessions for the Summit focus on the most important open infrastructure issues and challenges facing global organizations today:

  • AI / Machine Learning, HPC
  • Container Infrastructure
  • CI/CD
  • 5G
  • NFV and Edge Computing
  • Cloud Computing: Public, Private and Hybrid
  • Security

View the full Summit schedule!

Still on the fence?

Read this Superuser article and check out some of the most anticipated Summit sessions that you might love!

Make sure to subscribe to the OpenStack Foundation (OSF) YouTube channel to get exclusive content on how the Summit is being organized!

The Superuser Awards nominees are now available for community review! Check out the 8 nominees and rate your favorites.

Get my free Summit ticket!

OpenStack Foundation news

Open Infrastructure Summit, October 19-23, 2020

Project Teams Gathering (PTG), October 26-30, 2020

Airship: Elevate your infrastructure

  • Airship in the news! Read how AT&T is using Airship to deploy all of the operator’s network clouds running its 5G workloads.
  • The Technical Committee is pleased to announce that completion of the Airship 2.0 beta milestone is imminent. The milestone includes 135 issues, with the last open issues targeted for completion by the end of September.
  • Updated meeting cadence!
    • Airship SIG UI: design topics will be included during Airship Open Design meetings, and grooming sessions will now occur during the Airship Flight Plan Call. Read the announcement here.
    • Airship Slack/IRC meeting: Change from weekly to bi-weekly. Please see the original announcement here.

Kata Containers: The speed of containers, the security of VMs

  • The community has just tagged the 2.0.0-rc0 Kata Containers release, the first release candidate for Kata Containers 2.0. Next, we will focus on fixing bugs and stabilizing it for the upcoming 2.0.0 release. Check out the 2.0.0-rc0 release highlights.
  • We have four candidate submissions for the Architecture Committee election, where three seats are available, so we will proceed to the Q&A and then the voting phases. The current period (September 18th – 27th) is for asking questions of candidates via this mailing list.

OpenStack: Open source software for creating private and public clouds

  • We are in the final stages of preparation for the Victoria release: this week is the deadline for the first release candidates, ahead of the final release on October 14.
  • Elections for OpenStack leadership for the upcoming Wallaby development cycle (PTLs and TC seats) began September 22nd with the nomination period extending for one week, until September 29th. Polling will take place October 6th through the 13th. For more details on the technical elections, check out the election site here.
  • A new SIG has been proposed to gather people interested in discussions around packaging. The rpm-packaging team will be transitioned into this SIG; however, the Packaging SIG can also include packagers for other distributions such as Debian or Ubuntu. This patch discusses the creation of the SIG.
  • Discussion around Wallaby release goals has begun! Graham Hayes sent out a call for updates to goals previously suggested and for goal champions to draft them into proposals for acceptance for Wallaby. 
  • Are you looking for OpenStack-related jobs? Set yourself apart from other candidates by taking the Certified OpenStack Administrator (COA) exam. See more details here!

StarlingX: A fully featured cloud for the distributed edge

  • The fall elections are approaching quickly! The community will elect the Project and Technical Leads as well as some of the TSC seats. If you are interested in nominating yourself or in following the process, you can find more information on the elections webpage.
  • The StarlingX community is currently in the 5.0 release cycle. The community is deciding on release maintenance periods, which are currently planned for 12 months, and is also planning a maintenance release, targeting October 2020 for the availability of 3.0.1. For more information see the release wiki.

Zuul: Stop merging broken code

  • Get prepared for the next version of Zuul. The next release will drop support for Python 3.5 and Ansible 2.7. Redeploy your Zuul services on Python 3.6 or newer and migrate any jobs to Ansible 2.8 or 2.9 before your next Zuul Upgrade. See the release notes for more details.

Check out these Open Infrastructure Community Events!

For more information about these events, please contact denise@openstack.org.

Questions / feedback / contribute

This newsletter is written and edited by the OSF staff to highlight open infrastructure communities. We want to hear from you! If you have feedback, news or stories that you want to share, reach us through community@openstack.org. To receive the newsletter, sign up here.

The post Inside Open Infrastructure: The latest from the OpenStack Foundation appeared first on Superuser.

by Sunny Cai at September 24, 2020 08:32 PM

Galera Cluster by Codership

Galera Clustering in MariaDB 10.5 and beyond

Continuing our series of coverage of what happened at the recent MariaDB Server Fest 2020, we will now focus on the next talk by Seppo Jaakola, titled Galera Clustering in MariaDB 10.5 and beyond.

A quick overview includes not just information about Galera 4 in MariaDB 10.4, but also the new features available in MariaDB Server 10.5, namely: GTID consistency, cluster error voting, XA transactions within the cluster, non-blocking DDL operations and Black Box. There is also a focus on MariaDB Server 10.6 planning.

There have been many more wsrep-API changes (which require rolling upgrades) than there have been major versions of Galera; currently Galera 4 is the latest major version, and the latest wsrep-API version is 26. Rolling upgrades are key in Galera to ensure that there is no downtime within the cluster at all when upgrades happen. The biggest feature in Galera 4 is streaming replication, which helps you execute large (greater than 2GB) and long-running transactions. The feature was ready in MariaDB 10.3 but did not quite catch the release train, so it ended up in MariaDB 10.4.

Global Transaction ID (GTID) compatibility and consistency is a new feature implemented by Mario Karuza, as there were GTID incompatibilities between Galera and MariaDB. Galera stores GTIDs as <uuid>:<sequence number>, whereas MariaDB stores them as <domain-id>:<node-id>:<sequence number>. Galera Cluster now uses the same domain and node ID, and the software now stores and shows only the MariaDB-format GTID. Galera Cluster can also operate as a replica (slave) of a MariaDB primary (master) server, and the GTID coming from the MariaDB primary is preserved in the Galera Cluster (you can find the same GTIDs in the binary log files). You can read more at: Using MariaDB GTIDs with MariaDB Galera Cluster.
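
As a hedged illustration of the consistency this gives you (the domain, server ID and sequence number below are hypothetical), you can compare the MariaDB-format GTID reported on every node of the cluster:

```sql
-- Illustrative only: check that every Galera node reports the same
-- MariaDB-format GTID (domain-server_id-seqno). Values are hypothetical.
SELECT @@wsrep_gtid_domain_id, @@gtid_binlog_pos;
-- e.g. 4000 | 4000-13-512 on every node of the cluster

-- The same GTIDs are also written to the binary log:
SHOW BINLOG EVENTS LIMIT 5;
```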

Cluster Error Voting (which started life as MDEV-17048) is a new feature implemented by Alexey Yurchenko, and it is a protocol for nodes to decide how the cluster will react to problems in replication. It helps when one or several nodes have an issue applying an incoming transaction (e.g. a suspected inconsistency). In a 5-node cluster, if two nodes fail to apply the transaction, they are removed, and a DBA can go in and fix what went wrong so that the nodes can rejoin the cluster.

XA Transaction Support is a feature implemented by Daniele Sciaccia and Leandro Pacheco de Sousa, and the goal is for Galera Cluster to operate as a resource manager in an XA infrastructure. With a transaction coordinator in place, Galera Cluster can act as an XA Resource Manager, meaning it should be able to prepare and then rollback or commit a transaction. XA transactions are supported thanks to the implementation of streaming replication, which is a foundation for it. The work was ready for MariaDB 10.4 but was not accepted into the main tree; it was then targeted for MariaDB 10.5 but missed that train as well (there was other work done by MariaDB Server for XA support and a conflict between the two teams’ work). So it will now be in MariaDB Server 10.6.
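
For context, this is the standard XA two-phase commit flow that a resource manager has to support (a generic sketch; the transaction identifier and table are made up for illustration):

```sql
-- Generic XA two-phase commit flow (the xid 'trx1' and the table are hypothetical)
XA START 'trx1';
INSERT INTO accounts (id, balance) VALUES (42, 100);
XA END 'trx1';
XA PREPARE 'trx1';   -- phase 1: the resource manager guarantees it can commit
XA COMMIT 'trx1';    -- phase 2: the coordinator commits (or issues XA ROLLBACK 'trx1')
```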

Non-blocking DDL is a feature implemented by Teemu Ollakka, and it is currently only included in the MariaDB Enterprise Server edition. Seppo decided not to go through the details of this feature since it is not in the regular MariaDB Server. It will be available in MariaDB Server eventually, but for now it is an Enterprise-only feature. You might be interested in the documentation for this: Performing Online Schema Changes with Galera Cluster.

Black Box is a feature implemented by Pekka Lampio, and it is also a MariaDB Enterprise Server edition feature. It allows you to store debug messages in a main memory (shm) ring buffer, and it helps with troubleshooting a crashed server or even in cluster testing.

MariaDB 10.6 planning includes XA transaction support testing and documentation work, as well as making it work with SPIDER (since the SPIDER storage engine is part of MariaDB and also depends on XA support). This helps MariaDB 10.6 become a “sharding cluster”, which would also enable the extreme write scalability that could come from such a setup.

There is also ongoing work around test system improvements (this does not show up for the end user, but extending the test coverage is extremely important for development), dynamic SSL/TLS encryption (so you could change an SSL implementation at runtime), as well as further optimisations to streaming replication.

If you would like to see other new features in MariaDB 10.6 or even in Galera Cluster, do not hesitate to drop us a line at info@codership.com or reach out via our GitHub.

by Sakari Keskitalo at September 24, 2020 09:06 AM

Introduction to MariaDB Galera Cluster at MariaDB Server Fest 2020

Seppo Jaakola, CEO of Codership, makers of Galera Cluster, recently gave a talk titled Introduction to MariaDB Galera Cluster at the MariaDB Server Fest 2020, and it is truly one of the best introductions to the software currently available, up-to-date as of September 2020. It covers an overview, configuration, the feature differences compared with asynchronous replication, as well as the releases and release cycles that are available.

As Seppo says in the talk, we work closely with MariaDB to ensure consistent support and services around Galera Cluster, which is also why we have a strong partnership, from an engineering to a services standpoint.

Galera is a replication plugin, and in my.cnf you need to configure where the Galera plugin resides. This is wsrep_provider, and you can also set wsrep_provider_options. You start with one node, and after that you start another node, telling it the other cluster addresses via wsrep_cluster_address. In principle, all nodes have wsrep_cluster_address set to the full list of members (it is a good idea to keep my.cnf in sync across nodes). You also need a State Snapshot Transfer (SST) method, set via wsrep_sst_method (for example, wsrep_sst_method=rsync) – this is how newly joining nodes get a copy of the entire database. In this case the joiner node gets an rsync copy of all the data once the handshake occurs; once the copy is completed, the server pairs up and operates as a node in the synchronous database cluster.
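
As a minimal sketch of such a configuration (the addresses, cluster name and plugin path below are hypothetical and vary by distribution), the Galera section of my.cnf on each node could look roughly like this:

```ini
# Minimal Galera section of my.cnf -- a sketch with hypothetical values
[galera]
wsrep_on                 = ON
wsrep_provider           = /usr/lib/galera/libgalera_smm.so   # where the Galera plugin resides
wsrep_cluster_name       = example_cluster
# Full list of cluster members; keep this identical on every node
wsrep_cluster_address    = gcomm://10.0.0.1,10.0.0.2,10.0.0.3
wsrep_node_address       = 10.0.0.1
# How a joining node receives a full copy of the database (State Snapshot Transfer)
wsrep_sst_method         = rsync
binlog_format            = ROW
default_storage_engine   = InnoDB
innodb_autoinc_lock_mode = 2
```

The very first node is typically bootstrapped on its own (for example with galera_new_cluster) before the remaining nodes are started and join via SST.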

Two nodes are naturally not very functional for a Galera Cluster; we recommend three nodes as the minimum cluster size, and three is also the minimum number of nodes for effective voting (quorum). Every node in a cluster is a full backup of every other node due to synchronous replication. All writes in the cluster are replicated; all nodes can be used for reading.

Galera Cluster is based on a generic replication plugin for database servers and uses the replication API to interact with the Database Management System (DBMS) via the wsrep API (the project is open source on GitHub). MariaDB 10.1 and later have Galera Cluster built in, which makes it easy to get started. Seppo goes through the complete list of wsrep configuration options, but in practice you need very few to get going.

MariaDB 10.4 has 66 wsrep-specific status variables, and when it comes to monitoring, the most common ones are wsrep_ready, wsrep_cluster_status and wsrep_cluster_size. Besides configuration and status variables, there are also three Galera tables located in the mysql schema: wsrep_cluster, wsrep_cluster_members (the active members of the cluster) and wsrep_streaming_log (new in Galera 4 and MariaDB 10.4; streaming replication is a method for replicating very big or long transactions).
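
A quick sketch of how those common checks look from the SQL prompt (the output noted in the comments is illustrative):

```sql
-- Basic cluster health checks (example output shown in comments)
SHOW GLOBAL STATUS LIKE 'wsrep_ready';           -- ON: the node can accept queries
SHOW GLOBAL STATUS LIKE 'wsrep_cluster_status';  -- Primary: the node is in the primary component
SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';    -- e.g. 3: number of nodes currently in the cluster

-- Galera tables in the mysql schema (MariaDB 10.4 and later)
SELECT * FROM mysql.wsrep_cluster;
SELECT * FROM mysql.wsrep_cluster_members;
SELECT * FROM mysql.wsrep_streaming_log;
```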

With streaming replication, transactions are processed in small chunks on each node in the cluster. Their status is stored and kept in the mysql.wsrep_streaming_log table, so in case of issues, state information about long-running transactions is kept there. It can also be used to monitor transactions and how long they have been processing, which makes it a good method for troubleshooting misbehaving transactions.
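
As a small example of how streaming replication can be enabled for a session in MariaDB 10.4 (the fragment size below is an arbitrary choice):

```sql
-- Replicate this session's transactions in fragments of 10,000 rows
-- (an arbitrary example value); setting the size to 0 disables streaming replication
SET SESSION wsrep_trx_fragment_unit = 'rows';
SET SESSION wsrep_trx_fragment_size = 10000;
-- A long-running bulk operation is now replicated and certified in chunks,
-- and its progress can be observed in mysql.wsrep_streaming_log.
```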

Some interesting features that Galera Cluster brings: synchronous replication (close to fully synchronous – the commit is currently only done on one node in the cluster, so it is the look and feel of synchronous replication, and there is wsrep_sync_wait to help with this). Flow control keeps node progress even, and all nodes are equal, allowing read and write access. Conflicting writes are handled (hence multi-master use is possible – Galera picks which write commits, rolls back the conflicting write with a deadlock error, and the transaction can be retried based on configuration). There is obviously also automatic node joining.
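
For instance, a session that needs strict read-your-writes behaviour across nodes can turn on causality checks for reads (a small sketch; the query is hypothetical):

```sql
-- Enforce causality checks before read statements in this session, trading
-- a little latency for fully up-to-date reads on any node of the cluster
SET SESSION wsrep_sync_wait = 1;
SELECT balance FROM accounts WHERE id = 42;  -- hypothetical query
```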

What is different between asynchronous replication (the standard in MariaDB) and synchronous replication (Galera replication)? Basically, there are a few key differences:

  1. Asynchronous replication allows writes only to the master (primary) server and you end up with a master-slave (primary-secondary) topology.
  2. Secondary nodes may fall behind and suffer replication lag, as there is no flow control.
  3. Changing the primary server requires failover management, which is a burden one has to handle via external software.

Galera development started in 2008 and MariaDB support came in the 5.5 and 10.0 versions. MariaDB 10.1 includes Galera Cluster in the main branch (so one download is all you need, and it comes with the relevant libraries). Feature-wise, Galera replication has had major version numbers 1, 2, 3, and 4, and the current major version 4 has been present in MariaDB Server 10.4 and greater. Changes to the wsrep API also mean that you will need to perform a rolling upgrade (e.g. if you were migrating from MariaDB Server 10.3 to MariaDB Server 10.4). It is important to note that Galera can be upgraded, but rolling downgrades are not supported (e.g. if you decided to go from MariaDB Server 10.4 with Galera 4 to MariaDB Server 10.3 with Galera 3).

For MariaDB Server 10.6, there may be some major changes that go into Galera; this will be decided quite soon, within the next one to two months.

Galera Cluster also works in WAN and cloud installations, and MariaDB 10.4 has significant new Galera 4 features as stated in this blog post, so it is highly recommended that you use MariaDB 10.4 and later for evaluation.

Watch out for our next post about the other session Seppo gave, which focuses on what is in MariaDB Server 10.5 and what is coming in MariaDB Server 10.6.

by Sakari Keskitalo at September 24, 2020 08:56 AM

September 22, 2020

OpenStack Superuser

Meet the 2020 Superuser Awards nominees

Who do you think should win the Superuser Award for the 2020 Open Infrastructure Summit?

When evaluating the nominees for the Superuser Award, take into account the unique nature of use case(s), as well as integrations and applications of open infrastructure by each particular team. Rate the nominees before September 28 at 11:59 p.m. Pacific Daylight Time.

Check out highlights from the eight nominees and click on the links for the full applications:

  • Adobe Platform Infrastructure Team ranked 14th among the largest corporate open source contributors (per GitHub data) in May 2019. They make concerted efforts to free their employees to participate in open source development and streamline the approval process for employees to contribute code. Adobe is committed to open infrastructure and has been actively involved in related communities, including OpenStack since 2013 and Kubernetes since 2019. Adobe IT OpenStack has five clusters spread across three locations in North America and Asia. Of these clusters, three are production. Over the last five years, it grew 1,000% and presently hosts 13,000+ VMs on 500+ physical hypervisors. The underlying Ceph infrastructure of 3.5 PB actively serves 200,000+ IOPS on a regular basis. Besides OpenStack, their Hadoop and Kubernetes implementations grew exponentially in the last few years and now account for thousands of nodes.
  • China Mobile‘s network cloud includes more than 60,000 physical servers and 1,440,000 cores so far, all based on OpenStack and KVM. These servers are distributed in eight regions across the country and support core network services for more than 800 million users across China. Their self-developed AUTO platform has been used in every region across the country. So far, they have tested 68 resource pools, covering more than 60,000 servers, 11,000 switches, and more than 500,000 network connections in a CI/CD manner. The CI/CD pipeline in China Mobile’s lab is based on Jenkins and other open source CI/CD tools. It now supports continuous deployment and test iteration for four vendors, covering more than 500 test cases each time.
  • Leboncoin started using Zuul for open source CI two years ago with Zuulv2 and Jenkins. In the beginning, they only used Gerrit and Jenkins, but as new developers joined Leboncoin every day, this solution was no longer enough. After some research and a proof-of-concept, they gave Zuul a try, running between Gerrit and Jenkins. In less than a month (and without much official documentation) they set up a complete new stack. They ran it for a year before moving to Zuulv3. In terms of compute resources, they currently have 480 cores, 1.3 TB of RAM and 80 TB available in their Ceph clusters. In terms of jobs, they run around 60,000 jobs per month, which means around 2,500 jobs per day. The average job time is less than five minutes.
  • LINE uses OpenStack to do 80% of their new instance creation. Their 50,000+ physical servers, including baremetal servers and hypervisors, across four regions and 67,000+ VM instances, give them the capability to reach over 180 million users while decreasing operational costs and decreasing delivery time from weeks to minutes.
  • SK Telecom 5GX Labs, on top of contributing upstream to OpenStack and Airship, an open source project supported by OSF, developed a containerized OpenStack on Kubernetes solution called SKT All Container Orchestrator (TACO), based on OpenStack-helm and Airship. TACO is a containerized, declarative, cloud infrastructure lifecycle manager that enables them to provide operators the capability to remotely deploy and manage the entire lifecycle of cloud infrastructure and add-on tools and services by treating all infrastructure like cloud native apps. They deployed it to SKT’s core systems including telco mobile network, IPTV services (5.5 million subscriptions); also for external customers (next generation broadcasting system, VDI, etc). Additionally, the team strongly engaged in community activity in Korea, sharing all of their technologies and experiences to regional communities (OpenStack, Ceph, Kubernetes, etc).
  • StackHPC was formed about five years ago, with a vision of the opportunities offered by open infrastructure for scientific and research computing. In that time, the team has grown with the growth of open infrastructure, but has remained true to its roots, and everything it does is contributed upstream where possible. As such, the company is not just transformed but entirely inspired by open infrastructure. Their next vision is the software-defined supercomputer. They will be building giant machines, among the most powerful computers in the world, designed to solve some of the most challenging problems faced by science today. They will provide scientists and users with new ways of interacting with high-performance computing to help them get straight to the science.
  • Trendyol Tech, the largest e-commerce company in Turkey, is growing exponentially, and that scale growth is driven directly by the Trendyol Tech Team. Using a wide variety of open source technology including OpenStack Keystone, they plan to deploy their third region and increase their total core count to around 50,000 by the end of this year.
  • Workday Private Cloud Team has been actively involved in open infrastructure projects by participating in all the Open Infrastructure Summits since the inception of its private cloud team. The team has presented Workday’s stories on scalability, deployment, performance, and operational challenges in the past six OpenStack and Open Infrastructure Summits. Workday engineering recently added support for encryption at rest on Ceph. It contributed to Chef cookbooks used for deploying open infrastructure, submitted bug fixes, and participated in code reviews. Workday has also actively participated in several operator events and meetups. In 2018, Workday organized several open infrastructure meetup events in the East Bay Area. WPC is currently running 43 open infrastructure clusters across five different data centers in the U.S. and Europe. The current number of cores is 422,000. The number of virtual machines running in production is 30,000. The number of Kubernetes clusters is 70.

Each community member can rate the nominees once by September 28 at 11:59 p.m. Pacific Daylight Time.

Previous winners include Baidu, AT&T, City Network, CERN, China Mobile, Comcast, NTT Group, the Tencent TStack Team, and VEXXHOST.

The post Meet the 2020 Superuser Awards nominees appeared first on Superuser.

by Superuser at September 22, 2020 10:33 PM

2020 Superuser Awards Nominee: StackHPC

It’s time for the community to help determine the winner of the 2020 Open Infrastructure Summit Superuser Awards. The Superuser Editorial Advisory Board will review the nominees and determine the finalists and overall winner after the community has had a chance to review and rate nominees.

Now, it’s your turn.

StackHPC is one of eight nominees for the Superuser Awards. Review the nomination criteria below, check out the other nominees and rate them before the deadline September 28 at 11:59 p.m. Pacific Daylight Time.

Rate them here!

Who is the nominee?

StackHPC

How has open infrastructure transformed the organization’s business? 

StackHPC was formed about five years ago, with a vision of the opportunities offered by open infrastructure for scientific and research computing.

In that time, our team has grown with the growth of open infrastructure, but we remain true to our roots and everything we do is contributed upstream where possible.

As such, the company is not just transformed but entirely inspired by open infrastructure.

How has the organization participated in or contributed to an open source project?

  • All of the above, since the moment of our creation!
  • We contribute code, bug reports, reviews, and documentation to open source projects.
  • We participate on the community mailing lists, IRC and Slack channels.
  • We contribute blueprints and implementations for major pieces of new functionality.
  • We present our work at open source conferences and meetups.

What open source technologies does the organization use in its open infrastructure environment?

The main open source technologies in our ecosystem are:

  • OpenStack
  • Ceph
  • Linux
  • Open vSwitch
  • Ansible
  • Kubernetes

What is the scale of your open infrastructure environment?

We work with clients with a broad range of use cases and scales:

  • From tens to thousands of compute nodes.
  • Virtualized, containerized, and bare metal workloads.

What kind of operational challenges have you overcome during your experience with open infrastructure? 

We have worked on issues with performance and scale in a number of areas, including:

  • Provisioning bare metal compute nodes using Ironic at large scale.
  • Telemetry and monitoring of open infrastructure at large scale.
  • High performance virtualization for compute intensive workloads.
  • Using Ansible to manage open infrastructure at large-scale.

How is this team innovating with open infrastructure?

Our next vision is the software-defined supercomputer. We will be building giant machines, among the most powerful computers in the world, designed to solve some of the most challenging problems faced by science today. We will provide scientists and users with new ways of interacting with high-performance computing to help them get straight to the science.

And we will do all this using open infrastructure.

Each community member can rate the nominees once by September 28 at 11:59 p.m. Pacific Daylight Time.

The post 2020 Superuser Awards Nominee: StackHPC appeared first on Superuser.

by Superuser at September 22, 2020 10:32 PM

2020 Superuser Awards Nominee: Trendyol Tech

It’s time for the community to help determine the winner of the 2020 Open Infrastructure Summit Superuser Awards. The Superuser Editorial Advisory Board will review the nominees and determine the finalists and overall winner after the community has had a chance to review and rate nominees.

Now, it’s your turn.

Trendyol Tech is one of eight nominees for the Superuser Awards. Review the nomination criteria below, check out the other nominees and rate them before the deadline September 28 at 11:59 p.m. Pacific Daylight Time.

Rate them here!

Who is the nominee?

Trendyol Tech

How has open infrastructure transformed the organization’s business? 

Trendyol Tech is growing exponentially, and that scale growth is driven directly by the Trendyol Tech Team. We value culture before anything else and welcome all people who say “We” before “I,” improve continuously, take ownership of matters, and have a “Let’s Do It” mindset. To cope with the scale growth, we continuously improve ourselves and excel in our technical skill set.

Our aim is to make engineering happen inside the company by developing systems and building new projects so we can grow together. We have a cloud structure that follows the growth of the company day to day. Our plan for the future is to focus on faster time-to-market.

How has the organization participated in or contributed to an open source project?

As a team, we attend OpenStack Foundation events to stay updated on current topics and sponsor events such as OpenInfra Days Turkey, where one of our colleagues was a speaker.

Also, we are organizing meetups to share our knowledge and implementations among Tech enthusiasts who would like to learn more about our technologies.

We also contribute to upstream projects and one of our colleagues is a core reviewer of the Kolla project.

Check out some of Trendyol Tech’s contributions here:

What open source technologies does the organization use in its open infrastructure environment?

MAAS, OpenStack, Ceph, Kubernetes, PostgreSQL, Cassandra, Ansible, Terraform, SaltStack, Consul, Kafka, RabbitMQ, HAProxy, Tengine, Istio, Grafana, Elasticsearch, Prometheus, Golang, Java, Python.

What is the scale of your open infrastructure environment?

We have two regions up and running, and the third region will be deployed soon. We use shared Keystone at large scale. The total core count will be ~50,000 by the end of this year.

Here is a brief detail about our services:

  • Kubernetes: 1,040 VM & 100 Clusters in the first region, 2,000 VM & 100 Clusters both in the second and third regions
  • Couchbase: 750 VM & 100 Clusters in each region
  • ElasticSearch: 536 VM & 64 Clusters in the first region, 1000 VM & 120 Clusters both in the second and third regions
  • HA Proxy: 334 VM & 150 Clusters only in the first region
  • PostgreSQL: 300 VM & 60 Clusters in each region
  • Cassandra: 10 VM & 2 Clusters in the first region, 100 VM & 20 Clusters both in the 2nd and 3rd regions
  • Kafka: 103 VM & 12 Clusters in the first region, 20 VM & one Cluster both in the 2nd and 3rd regions

What kind of operational challenges have you overcome during your experience with open infrastructure? 

The main challenge is often the Linux distribution itself. We use Ubuntu and try to work with the upstream. Another challenge is the architecture for a large-scale cloud. Also, some vendors do not meet our automation criteria. We are going to contribute to the Large-scale SIG to share our experiences.

Rolling upgrades are not a big issue. All our processes go through heavy testing before production.

How is this team innovating with open infrastructure?

  • The biggest change is transforming the virtualization technology to KVM.
  • We also succeeded in the transformation from a legacy CDN architecture to an object storage powered CDN. With the power of Ceph, the teams can develop cloud native applications.
  • Our DNS environment now runs on Designate.
  • Another ongoing process is testing the OpenStack Barbican project for production use.

Each community member can rate the nominees once by September 28 at 11:59 p.m. Pacific Daylight Time.

The post 2020 Superuser Awards Nominee: Trendyol Tech appeared first on Superuser.

by Superuser at September 22, 2020 10:32 PM

2020 Superuser Awards Nominee: Leboncoin

It’s time for the community to help determine the winner of the 2020 Open Infrastructure Summit Superuser Awards. The Superuser Editorial Advisory Board will review the nominees and determine the finalists and overall winner after the community has had a chance to review and rate nominees.

Now, it’s your turn.

Leboncoin is one of eight nominees for the Superuser Awards. Review the nomination criteria below, check out the other nominees and rate them before the deadline September 28 at 11:59 p.m. Pacific Daylight Time.

Rate them here!

Who is the nominee?

Leboncoin

How did your organization get started with Zuul?

We started using Zuul for open source CI two years ago with Zuulv2 and Jenkins. At the beginning, we only used Gerrit and Jenkins, but as new developers joined Leboncoin every day, this solution was no longer enough. After some research and a proof-of-concept, we gave Zuul a try, running between Gerrit and Jenkins. In less than a month (and without much official documentation) we set up a complete new stack. We ran it for a year before moving to Zuulv3. Zuulv3 is more complex in terms of setup but brings us more features using up-to-date tools like Ansible or OpenStack.

Describe how you’re using it:

We’re using Zuulv3 with Gerrit. Our workflow is close to the OpenStack one. For each review, Zuul is triggered on three “check” pipelines: quality, integration and build. Once the results are correct, we use the gate system to merge the code into repositories and build artifacts.
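
The article does not include the underlying Zuul configuration, but a generic check-plus-gate pipeline pair in Zuul v3 looks roughly like the sketch below (pipeline names, Gerrit labels and vote values are illustrative, not Leboncoin's actual setup):

```yaml
# A generic sketch of Zuul v3 check and gate pipelines driven by Gerrit
# (names and vote values are illustrative only)
- pipeline:
    name: check
    manager: independent
    trigger:
      gerrit:
        - event: patchset-created
    success:
      gerrit:
        Verified: 1
    failure:
      gerrit:
        Verified: -1

- pipeline:
    name: gate
    manager: dependent
    trigger:
      gerrit:
        - event: comment-added
          approval:
            - Workflow: 1
    success:
      gerrit:
        Verified: 2
        submit: true
    failure:
      gerrit:
        Verified: -2
```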

We are using two small OpenStack clusters (3 CTRL / 3 STRG / 5 COMPUTE) in each datacenter. Zuul is currently set up on all Gerrit projects and some GitHub projects too. Below is our Zuulv3 infrastructure in production and in the case of datacenter loss.

Zuulv3 infrastructure in production.

Zuulv3 infrastructure in the case of DC loss.

What is your current scale?

In terms of compute resources, we currently have 480 cores, 1.3 TB of RAM and 80 TB available in our Ceph clusters. In terms of jobs, we run around 60,000 jobs per month, which means around 2,500 jobs per day. The average job time is less than five minutes.

What benefits has your organization seen from using Zuul?

As Leboncoin is growing very fast (and its microservices too 🙂 ), Zuul allows us to ensure everything can be tested, and at scale. Zuul is also able to work with both Gerrit and GitHub, which lets us open our CI to more teams and workflows.

What have the challenges been (and how have you solved them)?

Our big challenge was to migrate from Zuulv2 to Zuulv3. Even though everything uses Ansible, it was very tiresome to migrate all our CI jobs (around 500 Jenkins jobs). With the help of the Zuul folks on IRC, we reused some Ansible roles and playbooks used by OpenStack, but the migration took about a year.

What are your future plans with Zuul?

Our next steps are to use Kubernetes backend for small jobs like linters and improve Zuul with GitHub.

How can organizations who are interested in Zuul learn more and get involved?

Coming from OpenStack, I think meeting the community at Summits or on IRC is a good start. But Zuul needs better visibility. It is a powerful tool but the information online is limited.

Are there specific features that drew you to Zuul?

Scalability! And also ensuring that every commit merged into the repository is clean and can’t be broken.

What would you request from the Zuul upstream community?

Better integration with Gerrit 3, new Nodepool features and providers, full HA, and more visibility on the Internet.

Are you a Zuul user? Please take a few moments to fill out the Zuul User Survey to provide feedback and information around your deployment. All information is confidential to the OpenStack Foundation unless you designate that it can be public.

Cover image courtesy of Guillaume Chenuet.

Each community member can rate the nominees once by September 28 at 11:59 p.m. Pacific Daylight Time.

The post 2020 Superuser Awards Nominee: Leboncoin appeared first on Superuser.

by Superuser at September 22, 2020 10:32 PM

2020 Superuser Awards Nominee: LINE

It’s time for the community to help determine the winner of the 2020 Open Infrastructure Summit Superuser Awards. The Superuser Editorial Advisory Board will review the nominees and determine the finalists and overall winner after the community has had a chance to review and rate nominees.

Now, it’s your turn.

LINE is one of eight nominees for the Superuser Awards. Review the nomination criteria below, check out the other nominees and rate them before the deadline September 28 at 11:59 p.m. Pacific Daylight Time.

Rate them here!

Who is the nominee?

LINE

How has open infrastructure transformed the organization’s business? 

First of all, our open infrastructure dramatically reduces the time to deliver a new LINE service to end users. The delivery time of infrastructure for a new service decreased from one week to 10 minutes.

The operational cost of infrastructure has decreased. After building our infrastructure, the infrastructure team can manage all services’ infrastructure through a centralized management system. App developers don’t need to care about infrastructure operation, availability, etc., so they can focus on delivering value to end users.

The standardized and open API, e.g. OpenStack API, helps our global offices and engineers. The API helps communication between the app team and infra team, and also enables both the app team and infra team to build more advanced applications and infrastructure on top of it.

How has the organization participated in or contributed to an open source project?

Our open infrastructure project started four years ago when we were trying to fix bugs we hit in the OSS community, especially in terms of scaling. For OpenStack, we shared some issues we hit in RabbitMQ scaling and operation, along with their fixes. We also joined the Large-scale SIG to discuss scaling issues of OpenStack clusters. As part of the Large-scale SIG, one of our activities was to develop and launch the oslo.metrics project, which makes OpenStack Oslo messaging layer metrics visible to OpenStack admins, and another was to participate in the OpenDev event as a Program Committee member and session moderator.

In the Kubernetes and related OSS community, we reported bugs and pushed patches upstream. We hit issues in the data plane software when we scaled our cloud. We also contributed to fixing the network scaling issue in FRR community.

What open source technologies does the organization use in its open infrastructure environment?

To build our open infrastructure: OpenStack, Kubernetes, Rancher, Ceph, Kafka, Knative, elasticsearch, MySQL, RabbitMQ, GlusterFS, Jenkins, DroneCI, HAProxy, Ansible, Redis, RabbitMQ, FRR, Nginx, PowerDNS, dnsmasq, libvirt, Linux.

Our open infrastructure services to app developers: OpenStack basic functionalities, Kubernetes, Kafka, Redis, MySQL, elasticsearch, LB, Ceph.

What is the scale of your open infrastructure environment?

We have 50,000+ physical servers, including baremetal servers and hypervisors, across four regions. The number of VM instances is 67,000+ in total, and the largest region has 31,000+ VM instances. 80% of new instance creation is now done via OpenStack and virtualized.

We also manage 350+ Kubernetes clusters with 5,400+ nodes. The amount of data managed in our Ceph clusters is 17 PB across three regions.

The potential number of end users of the infrastructure is 180+ million users globally.

What kind of operational challenges have you overcome during your experience with open infrastructure? 

We use OpenStack Keystone as a unified authorization system in our infrastructure and introduced OpenStack’s “Request-id” concept to non-OpenStack OSS management.

Our infrastructure serves lots of managed services like Kubernetes clusters, MySQL and Elasticsearch in addition to VMs. All the services run as a microservice architecture (MSA), with Keystone as the identity service. By applying the Keystone concept to non-OpenStack services, it’s really easy to integrate all services into our infrastructure, and app developers can operate all of our open infrastructure with a Keystone token.

The purpose of the “Request-id” concept is to track users’ requests among microservices. Adopting it makes it easier for infra operators to investigate the problem when a user request fails and investigation is required.

How is this team innovating with open infrastructure?

  • Some core components, e.g. OpenStack Keystone, are deployed in geographically different areas for DR purposes.
  • Some business and security policies are integrated with our open infrastructure at the API layer.
  • A unified infra back-office GUI was developed to manage all our infrastructure services.
  • Standardized infra control node operations are realized by deploying the nodes onto a shared Kubernetes cluster.
  • A CLOS network architecture was introduced in order to scale and to reduce operational cost.
  • The SRv6 network mechanism was introduced to handle multi-tenant networking for some special projects.

Each community member can rate the nominees once by September 28 at 11:59 p.m. Pacific Daylight Time.

The post 2020 Superuser Awards Nominee: LINE appeared first on Superuser.

by Superuser at September 22, 2020 10:32 PM

2020 Superuser Awards Nominee: SK Telecom 5GX Cloud Labs

It’s time for the community to help determine the winner of the 2020 Open Infrastructure Summit Superuser Awards. The Superuser Editorial Advisory Board will review the nominees and determine the finalists and overall winner after the community has had a chance to review and rate nominees.

Now, it’s your turn.

SK Telecom is one of eight nominees for the Superuser Awards. Review the nomination criteria below, check out the other nominees and rate them before the deadline September 28 at 11:59 p.m. Pacific Daylight Time.

Rate them here!

Who is the nominee?

SK Telecom 5GX Cloud Labs

The core team is the Container Platform Dev Team from 5GX Cloud Labs, SK Telecom. The team has 10 members who develop a cloud native infrastructure platform called TACO (SKT All Container Orchestrator). We have been working like a skunkworks project within the company, quickly developing an innovative way of managing cloud infrastructure like cloud native apps, then productizing it in many important domains: Telco NFV, VDI, IPTV services, Big Data, etc. Importantly, we worked together with AT&T and others to launch the Airship project, and have been active advocates for the project. Most of us are also core members of the OpenStack, Ceph, and Kubernetes communities, really leading the community effort here in Korea.

How has open infrastructure transformed the organization’s business? 

Open Infrastructure provides standardized API sets and common software stacks so that SKT can flexibly work with partners, reducing TCO while accelerating 5G innovations. However, in-house development working closely with open communities was not usual for SKT; my team introduced and embraced it, which drove cultural and organizational changes that accelerated both open source adoption and open source contribution (http://thubgate.sktelecom.com/). SKT also launched a “private/hybrid cloud” product for the B2B market in 2020 to generate new revenue streams. In addition, the joint venture SKT created with SBG (Sinclair Broadcast Group) in the USA was an important success story for global expansion, and the reputation SKT gets from open community activity played an important role.

How has the organization participated in or contributed to an open source project?

SKT is a longtime sponsor of Open Infrastructure Days Korea and a supporter of the community. The company also encourages its employees to participate in and contribute to the regional and global community. In 2017, my team started to work with AT&T to set up containerized OpenStack development activity. Since then, both parties have collaborated to launch openstack-helm and Airship. We have presented at several Open Infrastructure Summits and PTGs, and participated in code development and reviews for openstack-helm and Airship. SKT is now opening up its software code to contribute back to the community. Most of the team members are long-time contributors to OpenStack (since 2010), Ceph and Kubernetes, and the team includes founding members of the OpenStack, Ceph, and Kubernetes Korea communities.

What open source technologies does the organization use in its open infrastructure environment?

The important ones are OpenStack (Nova, Cinder, Glance, Keystone, Ironic, Manila, Neutron, Heat, Horizon, Kolla, openstack-helm) and its infrastructure components (mariadb, rabbitmq, memcached, etc), Airship, Ceph, and Kubernetes. In addition, Docker, Kubernetes CAPO (Cluster-API Provider OpenStack), Metal3, EFK, Prometheus and various exporters, Grafana, Lens, Ansible, Jenkins, MariaDB, PostgreSQL and more form the entire lifecycle of our solution. Most of them are containerized. What’s more, leveraging all the mentioned open source technologies, we have an open source project called HANU (https://github.com/openinfradev), composed of two main sub-projects: tacoplay and decapod.

What is the scale of your open infrastructure environment?

We run many different open infrastructure clusters based on TACO (SKT All Container Orchestrator, a containerized, declarative, cloud infrastructure lifecycle manager fully leveraging Kubernetes and Airship). They are not huge in scale, but each serves a very important role in the company. First, some NFVs in the telco network are running on TACO-based OpenStack. Second, SKT’s private cloud is also based on TACO-based OpenStack. Third, our B2B VDI solution is based on TACO technology. Fourth, BTV (SK’s IPTV offering) services and apps are running on top of TACO-based Kubernetes (five clusters); each of these clusters deals with 1~6k TPS of requests from IPTV set-top boxes all around Korea (SK’s IPTV has around 5.5 million subscriptions). Lastly, a cloud-based ATSC 3.0 broadcasting/media system is on the way.

What kind of operational challenges have you overcome during your experience with open infrastructure? 

Difficulties in operating OpenStack (install, partial patches, upgrade, etc.) have always been a big problem. As a result, a company often ends up with many different versions of OpenStack, creating operational overhead. In addition, since there are lots of loosely coupled modules in OpenStack, it has been a real challenge to take care of each of them in that situation. My team’s mission was to solve these problems. Luckily, there were like-minded community members with very similar goals (Airship!). What’s more, by treating all infrastructure like cloud native apps, we are able to provide operators the capability to remotely deploy and manage the entire lifecycle of cloud infrastructure and add-on tools/services. This helps in operating distributed, multiple clouds for both OpenStack and k8s.

How is this team innovating with open infrastructure?

– Developed a containerized OpenStack on Kubernetes solution (TACO) based on openstack-helm and Airship. We continuously extend it toward more generic “declarative cloud infrastructure lifecycle management tools”.

– Developed a GitOps, declarative continuous deployment tool (DECAPOD). The Airship v2 design document, discussion recordings/notes, and prototype were a great help.

– Federated LMA covering multiple cloud infrastructures (both OpenStack and k8s), and an admin console UI, all based on open source. In particular, we actively leverage the Kubernetes operator framework to control the various LMA tools.

– Unified management between virtual machines and containers. ONOS is a key open technology for SDN.

– Using the above to build Cloud for Telco, Media, IT, and Broadcasting System (ATSC 3.0)

Each community member can rate the nominees once by September 28 at 11:59 p.m. Pacific Daylight Time.

The post 2020 Superuser Awards Nominee: SK Telecom 5GX Cloud Labs appeared first on Superuser.

by Superuser at September 22, 2020 10:32 PM

2020 Superuser Awards Nominee: China Mobile

It’s time for the community to help determine the winner of the 2020 Open Infrastructure Summit Superuser Awards. The Superuser Editorial Advisory Board will review the nominees and determine the finalists and overall winner after the community has had a chance to review and rate nominees.

Now, it’s your turn.

China Mobile is one of eight nominees for the Superuser Awards. Review the nomination criteria below, check out the other nominees and rate them before the deadline September 28 at 11:59 p.m. Pacific Daylight Time.

Rate them here!

Who is the nominee?

China Mobile

How has open infrastructure transformed the organization’s business? 

Cross-vendor integration has become one of the biggest challenges for telco clouds. Our team built up automation tools and a CI/CD pipeline to deal with the challenge.

  • Fast cross-vendor interoperability: Set up CI/CD pipelines connecting with vendor labs to provide iterative testing. Vendor software can be integrated continuously more than 10 times per week.
  • Improved delivery efficiency: We developed an automation tool, named ‘AUTO’, to support OpenStack cloud delivery across the country. Only 20 minutes are needed for configuration and 80 minutes for an overall check per pod. The IaC solution enables fast copying across regions.
  • Enhanced quality: Full-scale quality checks with over 15,000 issues found and solved. The configuration fault rate dropped from 30% (due to manual operation) to zero.

How has the organization participated in or contributed to an open source project?

We have been contributing to OpenStack since 2013. So far, we have given one keynote and nine sessions at OpenStack and Open Infrastructure Summits, and have served on the Programming Committee six times. We are also active contributors to the Telco and Edge Computing Working Group. We have been founding members and active contributors in OPNFV since 2014, serving as OPNFV TSC members and PTLs for multiple projects. We have been founding members of CNTT since 2019, deeply involved in RI and RC activities, and serving as CNTT Gov co-lead and RI and RC WSL. Since 2013, we have presented two keynote speeches and 25 sessions at LF events, making more than 2,000 code contributions. We are contributing the general pod description model and hardware validation framework we use across the country, and are also planning to open source our CI/CD flow and interfaces.

What open source technologies does the organization use in its open infrastructure environment?

All of our network clouds are based on OpenStack and KVM. In the meantime, we have developed an automation platform based on Ansible, Django, FastAPI, Traefik, Postgres, Redis, Celery, pgAdmin, Nginx, Docker, and Docker Compose. Our CI/CD pipeline is based on Jenkins and Docker. Other open source technologies we use include, but are not limited to, Kubernetes, CentOS, Ironic, Git, DPDK, OVS, ODL, Allure, Bootstrap, pytest/unittest, Robot Framework, and Ceph.

What is the scale of your open infrastructure environment?

China Mobile’s network cloud includes more than 60,000 physical servers and 1,440,000 cores so far, all based on OpenStack and KVM. These servers are distributed in eight regions across the country and support core network services for more than 800 million users across China. The above-mentioned self-developed AUTO platform has been used in every region across the country. So far, we have tested 68 resource pools, covering more than 60,000 servers, 11,000 switches, and more than 500,000 network connections in a CI/CD manner. The CI/CD pipeline in China Mobile’s lab is based on Jenkins and other open source CI/CD tools. It now supports continuous deployment and test iteration for four vendors, covering more than 500 test cases each time.

What kind of operational challenges have you overcome during your experience with open infrastructure? 

  • Network scale: China Mobile has one of the world’s biggest OpenStack clouds. We always keep this in mind when designing our automation platform. We use a distributed structure to improve the efficiency of automatic configuration, testing, and firmware upgrades, and apply a portable hardware and software integration device to simplify AUTO deployment and remote operation.
  • Multi-vendor: More than 20 vendors are included in our network cloud. It is our team’s mission to build up the cloud by integrating the hardware and software products from these vendors. OpenStack has helped us a lot by acting as the de facto standard. However, we still face challenges when integrating the overall NFV architecture, as it also includes other components like distributed storage, the SDN controller, VNFM, and MANO.

How is this team innovating with open infrastructure?

To solve the integration challenge, a CI/CD pipeline was designed to provide continuous testing, integration, and delivery for a multi-vendor cloud. With this, version updates from vendors automatically invoke continuous deployment and testing in our lab. Once problems are fully revealed and solved, the new version can be automatically delivered on site. The team has made the following innovations:

  • Designed a common language and data template for multi-vendor integration, and built IaC tools to automatically generate data matrices.
  • Innovated a framework that reduces the workload of adapting multi-vendor products to only filling in a configuration file.
  • Designed CI modules to fit changing integration scenarios, making sure the whole pipeline can cover every scenario that can happen for NFV.

Each community member can rate the nominees once by September 28 at 11:59 p.m. Pacific Daylight Time.

The post 2020 Superuser Awards Nominee: China Mobile appeared first on Superuser.

by Superuser at September 22, 2020 10:31 PM

2020 Superuser Awards Nominee: Adobe Platform Infrastructure Team

It’s time for the community to help determine the winner of the 2020 Open Infrastructure Summit Superuser Awards. The Superuser Editorial Advisory Board will review the nominees and determine the finalists and overall winner after the community has had a chance to review and rate nominees.

Now, it’s your turn.

The Adobe Platform Infrastructure Team is one of eight nominees for the Superuser Awards. Review the nomination criteria below, check out the other nominees and rate them before the deadline September 28 at 11:59 p.m. Pacific Daylight Time.

Who is the nominee?

Adobe Platform Infrastructure Team

How has open infrastructure transformed the organization’s business? 

At Adobe, we increasingly depend on open source software. Open source sits at the heart of our most innovative products. For example, over half the underlying code of Adobe Experience Manager comes from open source. The product’s core, and its most critical infrastructure parts, consist of open source code from Apache projects we’re active in.

Adobe IT is also in the vanguard of open source transformation. We consciously decided five years ago to transition self-managed public clouds to OpenStack to increase flexibility and scalability for thousands of engineers, to build and maintain integrated CI/CD pipelines and regression test farms. A recent upgrade to implement infrastructure as code provided a boost to DevOps culture and forced a paradigmatic shift in the infrastructure management.

How has the organization participated in or contributed to an open source project?

In May 2019, Adobe ranked 14th among the largest corporate open source contributors (per GitHub data). We make concerted efforts to free our employees to participate in open source development and streamline the approval process for employees to contribute code.

Adobe is committed to open infrastructure and has been actively involved in related communities, including OpenStack since 2013 and Kubernetes since 2019.

  • Open Infrastructure user committee – former member
  • OpenStack Summit: Presented at almost every summit since 2013
  • OpenStack Silicon Valley
  • KubeCon + CloudNativeCon
  • Meetups: Hosted and presented at many local OpenStack/Kubernetes groups
  • OSF White Paper: Adding Speed and Agility to Virtualized Infrastructure with OpenStack – contributor

What open source technologies does the organization use in its open infrastructure environment?

OpenStack, Kubernetes, Mesosphere, Hadoop, Kafka, and Hubble are key open source infrastructure platforms across Adobe.

On a local level, their development and production applications are built to leverage a multitude of open source technologies, including, but not limited to: Chef, Ansible, Salt, Terraform, ELK, Docker, Jenkins, Grafana, InfluxDB, Kibana, and Git.

What is the scale of your open infrastructure environment?

Adobe IT OpenStack has five clusters spread across three locations in North America and Asia. Of these clusters, three are production. Over the last five years it grew 1000% and presently hosts 13,000+ VMs on 500+ physical hypervisors. The underlying Ceph infrastructure of 3.5 PB actively serves 200,000+ IOPS on a regular basis. Besides OpenStack, our Hadoop and Kubernetes implementations grew exponentially in the last few years and now account for thousands of nodes.

What kind of operational challenges have you overcome during your experience with open infrastructure? 

During our first years of OpenStack, lifecycle management was painful with large, forklift upgrades that involved high risk and costly downtime. As a result, we did not update our infrastructure frequently and couldn’t benefit from the latest open innovation. Thus, we upgraded to a Salt and Reclass based platform to implement principles of Infrastructure as Code, enabling continuous updates and automated upgrades with no downtime. Now we are able to rapidly leverage the latest innovations in open infrastructure.

How is this team innovating with open infrastructure?

With open infrastructure, Adobe IT is able to accelerate QA and testing cycles by automatically spinning up all of the various operating systems to be tested simultaneously, reducing the time to test and push to production. Time is a great competitive advantage for Adobe’s continuing success in the marketplace. Any time we can save by handling processes and procedures more efficiently tremendously benefits both Adobe and our customers.

Each community member can rate the nominees once by September 28 at 11:59 p.m. Pacific Daylight Time.

The post 2020 Superuser Awards Nominee: Adobe Platform Infrastructure Team appeared first on Superuser.

by Superuser at September 22, 2020 10:30 PM

2020 Superuser Awards Nominee: Workday Private Cloud Team

It’s time for the community to help determine the winner of the 2020 Open Infrastructure Summit Superuser Awards. The Superuser Editorial Advisory Board will review the nominees and determine the finalists and overall winner after the community has had a chance to review and rate nominees.

Now, it’s your turn.

The Workday Private Cloud (WPC) Team is one of eight nominees for the Superuser Awards. Review the nomination criteria below, check out the other nominees and rate them before the deadline September 28 at 11:59 p.m. Pacific Daylight Time.

Rate them here!

Who is the nominee?

Workday Private Cloud (WPC) Team

How has open infrastructure transformed the organization’s business? 

  • The adoption of open infrastructure allowed Workday to become more agile and gave developers a faster time to market.
  • Migrating applications running on bare metal to open infrastructure allowed Workday to apply critical security OS updates in a matter of days, where it used to take months. Workday’s growth and agile upgrade model required a very scalable and API-driven infrastructure management framework.
  • With open infrastructure and a good CI/CD process, Workday is able to create and delete over 30,000 VMs within a maintenance window of less than 45 minutes. The success rate for these operations is over 99%.
  • Lastly, the open infrastructure platform accelerated the deployment of Kubernetes clusters in Workday data centers. The scalability and reliability of the platform is helping Workday meet its SLAs.

How has the organization participated in or contributed to an open source project?

Workday has been actively involved in open infrastructure projects by participating in all the Open Infrastructure Summits since the inception of its private cloud team. The team has presented Workday’s stories on scalability, deployment, performance, and operational challenges in the past six OpenStack Summits.

Workday engineering recently added support for encryption at rest on Ceph. It contributed to Chef cookbooks used for deploying open infrastructure, submitted bug fixes, and participated in code reviews.

Workday has also actively participated in several operators events and meetups. In 2018, Workday organized several open infrastructure meetup events in the East Bay Area.

What open source technologies does the organization use in its open infrastructure environment?

The organization heavily relies on open source technologies. From the open infrastructure environment, we are currently using Keystone, Nova, Heat, Glance, Neutron, Kolla, and Ceph.

What is the scale of your open infrastructure environment?

WPC is currently running 43 open infrastructure clusters across five different data centers in the U.S. and Europe, with 422,000 cores, 30,000 virtual machines running in production, and 70 Kubernetes clusters.

What kind of operational challenges have you overcome during your experience with open infrastructure? 

  • We have overcome several performance, scaling, and operational challenges. Workday’s application deployment and upgrade model puts a significant load on the open infrastructure controllers.
  • We worked with the community to improve concurrent VM boot time by contributing several features and code changes to Nova. These changes were presented in detail at the Berlin Summit in 2018.
  • Managing deployments with thousands of servers is challenging with a very small operations team. Our team built a lot of observability tools that allow the team to monitor events and identify performance bottlenecks promptly.
  • To make our deployments scalable, we iteratively changed our architecture and adopted a design that is horizontally scalable.

How is this team innovating with open infrastructure?

Workday is innovating its data center architecture. Some of the biggest architectural changes are in network and storage solutions.
These changes are driven by business needs to scale Workday’s data centers to hundreds of thousands of servers.

The Workday Private Cloud (WPC) team is working on solutions that make its open infrastructure platform more scalable, manageable, and easily upgradable. One use case requires us to support Border Gateway Protocol (BGP) with Neutron, a capability that is not yet present in the latest release; the team is working on a blueprint for it.

Each community member can rate the nominees once by September 28 at 11:59 p.m. Pacific Daylight Time.

The post 2020 Superuser Awards Nominee: Workday Private Cloud Team appeared first on Superuser.

by Superuser at September 22, 2020 10:30 PM

September 21, 2020

OpenStack Blog

10 Years of OpenStack – SeongSoo Cho at NHN / OpenStack Korea User Group

Happy 10 years of OpenStack! Millions of cores, 100,000 community members, 10 years of you. Storytelling is one of the most powerful means to influence, teach, and inspire the people around us. To celebrate OpenStack’s 10th anniversary, we are spotlighting stories from the individuals in various roles from the community who have helped to make... Read more »

by Sunny at September 21, 2020 03:00 PM

September 14, 2020

OpenStack Blog

10 Years of OpenStack – Mohammed AbuAisha at Radix Technologies

Happy 10 years of OpenStack! Millions of cores, 100,000 community members, 10 years of you. Storytelling is one of the most powerful means to influence, teach, and inspire the people around us. To celebrate OpenStack’s 10th anniversary, we are spotlighting stories from the individuals in various roles from the community who have helped to make... Read more »

by Sunny at September 14, 2020 03:00 PM

September 09, 2020

OpenStack Superuser

Open Infrastructure Summit 2020 Schedule is Now Live!

The schedule for the 2020 Open Infrastructure Summit, released today, features keynotes and sessions from users like Volvo Cars Corporation, Workday, Société Générale and Ant Group. The event, held virtually for the first time, takes place October 19-23 and includes more than 100 sessions around infrastructure use cases like cloud computing, edge computing, hardware enablement, and security. 

Thousands of attendees are expected to participate, representing 30+ open source communities and more than 100 countries. Keynotes begin at 10am CT on Monday, October 19 and again on Tuesday, October 20. 

Sessions for the Summit focus on the most important open infrastructure issues and challenges facing global organizations: 

  • AI / Machine Learning, HPC 
  • Container Infrastructure
  • CI/CD
  • 5G
  • NFV and Edge Computing
  • Cloud Computing: Public, Private and Hybrid 
  • Security

View the full Summit schedule here!

A critical feature of Open Infrastructure Summit sessions is the collaboration among numerous open source communities, including Airship, Ansible, Ceph, Docker, Kata Containers, Kubernetes, ONAP, OpenStack, Open vSwitch, OPNFV, StarlingX, Zuul and many more. Speaking sessions at the Summit are led by users from global enterprises and research institutions building and operating open infrastructure at scale: 

  • Volvo Cars Corporation will reveal how it successfully uses Zuul for continuous integration in a range of different software components, and how Zuul is a game changer that helps them avoid merging broken code. As its team is now working with autonomous driving software in an NVIDIA central car computer, Volvo will share why it relies on cross repo dependencies and the speculative merge feature in their gates, dramatically reducing queues.
  • Ant Group, the leading peer-to-peer payments processor in China, will talk about how they had thousands of tasks scheduled and running in Kata Containers. Some of those tasks are sensitive to resources and response time, so they will present why other isolation methods, such as scheduling isolation and LLC isolation, are used in combination with Kata Containers. 
  • GE Digital will present the tools, describe the migration procedure, and highlight the biggest challenges in upgrading from OpenStack Newton to Queens with minimal downtime. 
  • Learn how Société Générale, one of the top three French banks with a net banking income of 24.7 billion euros, is leveraging Kolla-Ansible using Neutron routed provider network feature. They are also using the new capability in Glance to import an image into multiple stores in order to provide cloud services in multiple availability zones using a single OpenStack deployment.
  • China Mobile will showcase the automated hardware integration system it built for its NFV cloud project and explain how 50,000 servers have been implemented with plug-and-play. 
  • The European Weather Cloud, a joint project between ECMWF and EUMETSAT, is creating an on-site private cloud infrastructure on which Member and Co-operating States (~34 states) are able to create virtual resources on demand to gain access to the Centre’s NWP (Numerical Weather Prediction) products and infrastructure (HPC) in a timely and configurable fashion. The ECMWF’s infrastructure is based 100% on open source software: OpenStack (Ussuri) and Ceph (Nautilus). 
  • Workday is back at the Summit, this time with more than 400k cores in production and a story about its OpenStack footprint with 7K hypervisors distributed over 25 clusters. These numbers are projected to grow to over 10K hypervisors by the end of 2020.
  • China Tower has a million generator rooms covering China, making it especially suitable for customers with large-scale resource needs. China Tower is starting with the CDN business, using Intel x86 servers as hardware and StarlingX to isolate different CDN vendors, providing virtual resources to CDN vendors and conveniently managing edge cloud resources from the central cloud.

Now what?

Register for your free virtual Summit and meet the users, developers, and vendors who are building and operating open infrastructure on October 19-23! 

Thank you to our Summit Headline, Premier and Exhibitor sponsors: Huawei, Cisco, InMotion Hosting, Trilio and ZTE. Event sponsors gain visibility with a wide array of open source infrastructure developers, operators and decision makers. Download the Open Infrastructure Summit sponsor prospectus for more information.

Questions? Reach out to summit@openstack.org

Get involved

Follow the #OpenInfraSummit hashtag on Twitter and let us know on Facebook that you’re going to the Summit!

The post Open Infrastructure Summit 2020 Schedule is Now Live! appeared first on Superuser.

by Superuser at September 09, 2020 03:00 PM

September 07, 2020

OpenStack Blog

10 Years of OpenStack – Julia Kreger at Red Hat

Storytelling is one of the most powerful means to influence, teach, and inspire the people around us. To celebrate OpenStack’s 10th anniversary, we are spotlighting stories from the individuals in various roles from the community who have helped to make OpenStack and the global Open Infrastructure community successful.  Here, we’re talking to Julia Kreger from... Read more »

by Sunny at September 07, 2020 03:00 PM

September 04, 2020

John Likes OpenStack

My tox cheat sheet

Install tox on centos8 undercloud deployed by tripleo-lab

curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
python3 get-pip.py
pip install tox

Render changes to tripleo docs:

cd /home/stack/tripleo-docs
tox -e deploy-guide

Check syntax errors before wasting CI time

tox -e linters
tox -e pep8

Run a specific unit test

cd /home/stack/tripleo-common
tox -e py36 -- tripleo_common.tests.test_inventory.TestInventory.test_get_roles_by_service

cd /home/stack/tripleo-ansible
tox -e py36 -- tripleo_ansible.tests.modules.test_derive_hci_parameters.TestTripleoDeriveHciParameters

by Unknown (noreply@blogger.com) at September 04, 2020 06:31 PM

September 03, 2020

StackHPC Team Blog

Monasca on Kayobe, tips and tricks

Here at StackHPC we've used and experimented with Monasca in a variety of ways, contributing upstream wherever possible.

For the benefit of those using or considering either Monasca or Kayobe we thought we'd share some of our tips for deploying and configuring it.

This tutorial will follow on from the Kayobe a-universe-from-nothing tutorial on Train to demonstrate how to deploy and customise Monasca with Kolla-Ansible.

Assuming you've got a Kayobe environment (see our helpful universe-from-nothing blog post if you haven't already) you're only a few steps away from having a deployed Monasca stack. Here's how.

Before we begin

From your designated Ansible control host, source the Kayobe virtualenv, kayobe-env and admin credentials files. Assuming the virtualenv and kayobe-config locations are the same as in the a-universe-from-nothing tutorial:

$ source ~/kayobe-venv/bin/activate
$ cd ~/kayobe/config/src/kayobe-config/
$ source kayobe-env
$ source etc/kolla/admin-openrc.sh

Any reference to a filesystem path from this point in the guide will be relative to the kayobe-config directory above.

Optional

Optionally, enable Kayobe shell completion:

$ source <(kayobe complete)

Containers

First you'll need Kolla containers; these can either be pulled from Docker Hub or built using Kayobe. Kolla containers come in source and binary varieties depending on how they were built, and this is reflected in their image names. Note that not every component supports both build types: Monasca is only available as a source build. In practice, this means we'll need to tell Kolla which container images to build (unless pulling from Docker Hub) and Kolla-Ansible which images to deploy.

Pulling from Docker Hub

If you've followed a universe-from-nothing build, the following script can be used to pull the relevant containers from the Docker Hub Kolla repositories and push them to the seed:

#!/bin/bash
set -e

tag=${1:-train}
images="kolla/centos-binary-zookeeper
kolla/centos-binary-kafka
kolla/centos-binary-storm
kolla/centos-binary-logstash
kolla/centos-binary-kibana
kolla/centos-binary-elasticsearch
kolla/centos-binary-influxdb
kolla/centos-source-monasca-api
kolla/centos-source-monasca-notification
kolla/centos-source-monasca-persister
kolla/centos-source-monasca-agent
kolla/centos-source-monasca-thresh
kolla/centos-source-monasca-grafana"
registry=192.168.33.5:4000

for image in $images; do
    ssh stack@192.168.33.5 sudo docker pull $image:$tag
    ssh stack@192.168.33.5 sudo docker tag $image:$tag $registry/$image:$tag
    ssh stack@192.168.33.5 sudo docker push $registry/$image:$tag
done

Building using Kayobe

Building your own containers is the recommended approach for production OpenStack and is required if customising the Kolla Dockerfiles. The following Kayobe commands can be used to build Monasca and related containers:

$ kayobe overcloud container image build kafka influxdb kibana elasticsearch zookeeper storm logstash --push
$ kayobe overcloud container image build monasca -e kolla_install_type=source --push

The --push argument will push these containers to the Docker registry on the seed node once built.

Configuring Kayobe

StackHPC usually recommends a cluster of three separate nodes for the monitoring infrastructure, but with sufficient resources available it is possible to configure the controllers as monitoring nodes. For separate monitoring nodes, see here for an example of adding another node type.

If instead you are running monitoring services on controllers then add the following to etc/kayobe/inventory/groups:

[monitoring:children]
# Add controllers to monitoring group
controllers

Configuring Kolla-Ansible

Add the following to the contents of etc/kayobe/kolla/globals.yml:

# Roles which grant read/write access to Monasca APIs
monasca_default_authorized_roles:
- admin
- monasca-user

# Roles which grant write access to Monasca APIs
monasca_agent_authorized_roles:
- monasca-agent

# Project name to send control plane logs and metrics to
monasca_control_plane_project: monasca_control_plane

This configures Kolla-Ansible with some sane defaults for user and agent roles and finally names the OpenStack project for metrics as monasca_control_plane.
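Note that this assumes Monasca itself is switched on in your Kolla feature flags. If your existing kayobe-config does not already do so, and your Kayobe release exposes the flag, something along these lines in etc/kayobe/kolla.yml would enable it:

kolla_enable_monasca: yes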

Configuring Monasca

StackHPC makes regular use of the Slack notification plugin for alerts. To demonstrate how this works we'll enable and customise this feature. Customising Monasca requires creating configuration under directories that do not yet exist, so first create both the Kolla config Monasca directory and a subdirectory for alarm notification templates:

$ mkdir -p etc/kayobe/kolla/config/monasca/notification_templates

Monasca-Notification configuration

Populate the monasca-notification container's configuration at etc/kayobe/kolla/config/monasca/notification.conf to enable Slack webhooks and set the notification template:

[notification_types]
enabled = slack,webhook

[slack_notifier]
message_template = "/etc/monasca/slack_template.j2"
timeout = 5
ca_certs = "/etc/ssl/certs/ca-bundle.crt"
insecure = False

[webhook_notifier]
timeout = 5

Slack webhook notification template

Custom Slack notification templates should be placed in etc/kayobe/kolla/config/monasca/notification_templates/slack_template.j2. If you've followed the a-universe-from-nothing tutorial then the following Jinja will work as is:

{% raw %}{% set base_url = "http://{% endraw %}{{ aio_vip_address }}{% raw %}:3001/plugins/monasca-app/page/alarms" -%}
Alarm: `{{ alarm_name }}`
{%- if metrics[0].dimensions.hostname is defined -%}
{% set hosts = metrics|map(attribute='dimensions.hostname')|unique|list %} on host(s): `{{ hosts|join(', ') }}` moved to <{{ base_url }}?dimensions=hostname:{{ hosts|join('|') }}|status>: `{{ state }}`
{%- else %} moved to <{{ base_url }}|status>: `{{ state }}`
{%- endif %}.{% endraw %}

If you've prepared your own deployment then {{ aio_vip_address }} will need to be replaced with the address of an accessible VIP interface as defined in etc/kayobe/networks.yml.
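For example, with a hypothetical internal VIP of 192.168.33.2, the first line of the template (keeping the rest unchanged) would become:

{% raw %}{% set base_url = "http://192.168.33.2:3001/plugins/monasca-app/page/alarms" -%}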

Astute Jinja practitioners may notice that the notification template is wrapped inside {% raw %} tags except for the VIP address: this allows Kayobe to insert a variable not visible at the time Kolla-Ansible templates the file.

Adding Dashboards & Datasources

Monasca-Grafana will need to be configured with the monasca-api address as a metric source. Note that Elasticsearch can also be configured as a datasource to visualise log metrics. Optionally, custom dashboards can also be defined in the same file, etc/kayobe/grafana.yml:

# Path to git repo containing Grafana dashboards. Eg.
# https://github.com/stackhpc/grafana-reference-dashboards.git
grafana_monitoring_node_dashboard_repo: "https://github.com/stackhpc/grafana-reference-dashboards.git"

# Dashboard repo version. Optional, defaults to 'HEAD'.
grafana_monitoring_node_dashboard_repo_version: "stable/train"

# The path, relative to the grafana_monitoring_node_dashboard_repo_checkout_path
# containing the dashboards. Eg. /prometheus/control_plane
grafana_monitoring_node_dashboard_repo_path: "/monasca/control_plane"

# A dict of datasources to configure. See the stackhpc.grafana-conf role
# for all supported datasources.
grafana_datasources:
  monasca_api:
    port: 8070
    host: "{{ aio_vip_address }}"
  elasticsearch:
    port: 9200
    host: "{{ aio_vip_address }}"
    project_id: "{{ monasca_control_plane_project_id | default('') }}"

Pulling containers to the overcloud

Once the configuration is in place, it is recommended to prepare for the next step by pulling the new containers from the seed registry to the relevant overcloud nodes:

$ kayobe overcloud container image pull

This also serves to check all required containers are available.

Deploying Monasca

Deploying Monasca and friends using Kayobe can take a considerable length of time due to the number of checks Kayobe and Kolla-Ansible both perform. If you are familiar with Kolla-Ansible, you can skip some of these tasks with the --kolla-tags argument:

$ kayobe overcloud service deploy --kolla-tags monasca,elasticsearch,influxdb,mariadb,kafka,kibana,grafana,storm,kafka,zookeeper,haproxy,common

The above command will deploy only Monasca and related services. A word of caution, however: limiting tasks in this fashion can have unexpected consequences for inexperienced users and honestly doesn't save much time. If in doubt about which tags are required, run a full deploy with:

$ kayobe overcloud service deploy

Now would be a good point to grab a cup of tea.

Run in a production environment, this command shouldn't cause any disruption to tenant services (on sufficient hardware) but HAProxy will restart, potentially interrupting connections to the API for a brief period.

Adding Dashboards & Datasources

Assuming the deployment completed successfully, additional tasks are still required to configure Grafana with the datasources and dashboards defined in etc/kayobe/grafana.yml:

$ kayobe overcloud post configure --tags grafana

Testing

You should now be able to navigate to Grafana and Kibana, found by default on ports 3001 & 5601.
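As a quick smoke test (substituting your own VIP or hostname for <vip>), you can check that both respond over HTTP:

$ curl -sSf -o /dev/null http://<vip>:3001 && echo "Grafana is up"
$ curl -sSf -o /dev/null http://<vip>:5601 && echo "Kibana is up"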

To start using the Monasca CLI, install it from PyPI and configure the relevant roles to authenticate in the Monasca project. Create and activate a fresh venv for the purpose:

$ deactivate
$ python3 -m venv ~/monasca-venv
$ source ~/monasca-venv/bin/activate
$ pip install python-openstackclient
$ pip install python-monascaclient
$ source etc/kolla/admin-openrc.sh

Optionally enable shell completion:

$ source <(openstack complete)
$ source <(monasca complete)

Add the admin user to the monasca_control_plane project (and double check):

$ openstack role add --user admin --project monasca_control_plane admin
$ openstack role assignment list --names --project monasca_control_plane

Switch to the monasca_control_plane project and view available metric names:

$ export OS_PROJECT_NAME=monasca_control_plane
$ unset OS_TENANT_NAME
$ monasca metric-name-list
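As a further sanity check of the pipeline, you can push an arbitrary test metric and confirm it appears (the metric name here is made up for illustration):

$ monasca metric-create test.cli_check 1 --dimensions hostname=$(hostname)
$ monasca metric-list --name test.cli_check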

Alarms!

Don't forget to install the Slack Incoming webhooks integration in order to make use of Monasca alerts. Once that is installed and configured in a channel, you'll be provided with a webhook URL - since this is a private URL it should be secured before being added to kayobe-config (for more information see the Kayobe documentation on secrets).
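One straightforward way to do that, assuming you keep secrets in an Ansible Vault encrypted etc/kayobe/secrets.yml as suggested by the Kayobe documentation, is:

$ ansible-vault edit etc/kayobe/secrets.yml
# then add a line such as:
# secrets_monasca_slack_webhook: "https://hooks.slack.com/services/..."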

Deploying Alarms from a custom playbook

Alarms and notification definitions can be created using the monasca CLI, but in keeping with the configuration-as-code approach thus far we'd recommend our ansible role for the task - it contains a reasonably sane set of alarms for monitoring both overcloud nodes and OpenStack services.

With some additional configuration, the role can be installed and used by Kayobe.

First create the directory and provide symlinks to Kayobe Ansible as per the documentation:

$ mkdir -p etc/kayobe/ansible
$ cd etc/kayobe/ansible
$ ln -s ../../../../kayobe/ansible/filter_plugins/ filter_plugins
$ ln -s ../../../../kayobe/ansible/group_vars/ group_vars
$ ln -s ../../../../kayobe/ansible/test_plugins/ test_plugins
$ cd -

Then create etc/kayobe/ansible/requirements.yml to specify the role:

---
- src: stackhpc.monasca_default_alarms
  version: 1.3.0

Here is an example playbook, etc/kayobe/ansible/monasca_alarms.yml, that deploys only the system-level alerts (CPU, disk, memory usage). It assumes you've created a variable for your Slack webhook called secrets_monasca_slack_webhook and that the Monasca CLI virtualenv is in ~/monasca-venv:

- name: Create Monasca notification method and alarms
  hosts: localhost
  gather_facts: yes
  vars:
    keystone_url: "http://{{ aio_vip_address }}:5000/v3"
    keystone_project: "monasca_control_plane"
    monasca_endpoint_interface: ["internal"]
    notification_address: "{{ secrets_monasca_slack_webhook }}"
    notification_name: "Default Slack Notification"
    notification_type: "SLACK"
    monasca_client_virtualenv_dir: "~/monasca-venv"
    virtualenv_become: "no"
    skip_tasks: ["misc", "openstack", "monasca", "ceph"]
  roles:
    - {role: stackhpc.monasca_default_alarms, tags: [alarms]}

The Ansible galaxy role can be installed using Kayobe with:

$ kayobe control host bootstrap

And the playbook invoking it can be executed with:

$ kayobe playbook run ${KAYOBE_CONFIG_PATH}/ansible/monasca_alarms.yml

by Isaac Prior at September 03, 2020 03:30 PM

Fleio Blog

Fleio 2020.09.0 Beta: Angular staff completed, clusters changes, tweaks on process clients cron and more

Today, the 3rd of September 2020, we have released v2020.09.0 (beta). This beta version includes the complete Angular frontend for staff, changes to clusters and cluster templates, tweaks to the process-clients cron and more. Angular staff journey: We are happy to announce that with the latest Fleio release we have completed the new staff frontend. In the […]

by Marian Chelmus at September 03, 2020 01:53 PM

August 31, 2020

OpenStack Blog

10 Years of OpenStack – Shane Wang at Intel

Storytelling is one of the most powerful means to influence, teach, and inspire the people around us. To celebrate OpenStack’s 10th anniversary, we are spotlighting stories from the individuals in various roles from the community who have helped to make OpenStack and the global Open Infrastructure community successful.  Here, we’re talking to Shane Wang from... Read more »

by Sunny at August 31, 2020 03:00 PM

August 29, 2020

Ghanshyam Mann

Recap of OpenStack Victoria Virtual PTG, 2020

Due to the COVID-19 pandemic, the OpenStack PTG for the Victoria design discussions was held virtually. This was the first-ever virtual PTG for OpenStack. Overall it was a successful event, nicely organized by the OSF. You can find the overall high-level recap of the PTG here. This blog covers the QA, TC and Nova-Oslo cross-project sessions I attended.

All Projects Etherpad for details: http://ptg.openstack.org/etherpads.html

 Technical Committee

Etherpad: https://etherpad.opendev.org/p/tc-victoria-ptg

  Ussuri Retrospective

We discussed the things we finished, like the Python 2 drop, adding the ideas repository and defining the process for dropping projects from OpenStack governance. Next, we discussed:

  • PTL-less projects, which took a lot of TC time to appoint leaders for.
  • Making TC office hours more effective. One option is to change back to regular meetings; Naser will be proposing a vote for that.
  • How to connect projects and the TC more closely. We decided to continue with the liaison program and regular check-ins with PTLs about project health.

  Make TC members on-boarding more smooth

When new TC members are elected, there are various on-boarding tasks for them: for example, how actively they need to participate in TC activities, what to review, which meetings and other events to attend, etc. We do not have those steps clearly documented. When I was elected to the TC, Doug (the TC chair at the time) sent us an email to help with those on-boarding things.
We decided to document those steps so that they will be helpful for each new member joining the TC.

Action Item:

  • Document how to propose changes for review (tags to use, etc)
  • Document the on-boarding information like they have for PTLs, etc.  Collect up the tribal knowledge. – diablo_rojo

  TC/UC merge

This was an ongoing thing to fix. The User Committee has lacked potential candidate nominations in the past couple of elections. The Technical Committee decided to merge both committees into a single governance team. ttx will coordinate this with the UC and propose the merged structure.
ML thread: http://lists.openstack.org/pipermail/openstack-discuss/2020-May/014736.html

Action Item:

  • ttx to sync with the UC to line things up

  Help Wanted List

We will continue this list and revise it every year. The TC has also given this list to the Board of Directors to get help from the organization side. There are many areas on the 2020 help wanted list: https://governance.openstack.org/tc/reference/upstream-investment-opportunities/2020/index.html

  TC position on OSF Foundation member community contributions

OpenStack has faced declining contributions for many years. Many projects are affected; the TC presented the issue to the Board of Directors at the Berlin Summit and worked on a few things after their feedback, but there has been no concrete outcome or help on contribution issues.
In this PTG, the TC discussed next steps and proposed some ideas for the BoD, such as:

  • Enforce or remove the minimum contribution level
  • Give gold members the chance to have increased visibility (perhaps giving them some of the platinum member advantages) if they supplement their monetary contributions with contributor contributions.
    There was no conclusive outcome on this topic yet; Naser (the TC chair), who is also on the BoD, will bring this to a BoD meeting.

Action Item:

  • Mohammed will take it forward to the board and see what, if any, feedback we get.  Will try to summarize the discussion as much as possible.

  OpenStack user-facing APIs in the wild (i.e. OpenStackClient)

This has been a long-pending item from the OpenStack usage point of view. OpenStackClient is a standard and consistent client for OpenStack APIs, but the migration is not finished in many projects. For example, Glance is one project that does not want to migrate to OSC due to resource issues.
Artem tried to propose this as a community goal in the Train cycle but it was not selected due to some resistance. Everyone agreed to continue this effort and try to finish it soon. As the next step, we will form a pop-up team in the Victoria cycle and try to finish more projects before we declare this a community-wide goal. Below are the next steps on this topic:

Step 1. Form the pop-up team.
Step 2. The pop-up team starts the conversation about why this is important and digs up issues.
Step 3. Document the process (maybe using Nova as an example).

  Monitoring in OpenStack: Ceilometer + Telemetry + Gnocchi state

Gnocchi is in a separate repo and not well maintained, and Ceilometer depends on Gnocchi in some ways. This has been discussed many times in the past. One proposal was to move away from Ceilometer and remove the dependence on Gnocchi. There is no conclusion on this topic yet, and other ongoing ideas are:

  • Ceilosca? Merging the Ceilometer and Monasca teams?
  • Using oslo.metrics as an interface above all the tools we have

  Cross-project work

Pop-up teams retrospective: We currently have two active pop-up teams, policy and image encryption. The policy team has not made very fast progress yet; I have now finished the Nova policy work and will be able to concentrate more on the other projects. Image encryption progress has also been slow.

  • Reducing community goals per cycle: There is no hard written rule of having two community-wide goals per cycle, but we have always been selecting two as the minimum in each cycle. The TC discussed this and decided to document that there is no minimum number of goals, and that the goal can even be skipped for a cycle.
    Action Item: Clearly outline the documentation for goal proposal and selection, and document that we don’t have to have 2 or 3 goals – gmann
    – https://review.opendev.org/#/c/739150/
  • Victoria goal: We selected one community-wide goal, the migration of legacy jobs to native Zuul v3. We also discussed adding “migrating the CI/CD to Ubuntu Focal”.
    – https://governance.openstack.org/tc/goals/selected/victoria/index.html
  • W cycle goal discussion kick-off: As per the new goal schedule, we need to start the W cycle goal selection in the V cycle so that we can finalize the list of community goals when W cycle development starts.
    Action Items: The TC needs one or two people to drive W goal selection based on the timeline in the governance repo – njohnston & mugsie

   Detecting unmaintained projects early

A few projects become unmaintained during or at the start of a cycle, but we only notice them during the PTL election when there are no PTL candidates for those projects. Congress and Tricircle are the two recent examples. The TC discussed re-enabling the health-check for projects, but the liaisons already perform the same activity; it is best practice for TC liaisons to check twice per cycle. The release and QA teams can also flag inactive projects and repos. The TC will continue assigning project liaisons for the Victoria cycle too.

  PTL role in today’s OpenStack / Leaderless projects

Every cycle there are several projects with no PTL candidates, and in the Victoria cycle the numbers were large. The Technical Committee discussed new options for replacing the PTL role with decentralized responsibility via liaisons for different activities: for example, a release liaison, an infra liaison, a TC liaison, etc.

The TC was mixed on the new proposal; many TC members think it just renames the PTL without actually changing anything. In the current model, a PTL can already delegate duties to anyone.

The second option was to allow projects to try a multiple-maintainer model while others keep continuing with the PTL model.

Summary: It doesn’t appear that anyone is opposed to allowing teams to experiment with having multiple maintainers rather than a PTL.
This needs to be documented, perhaps in the reference.yml file.

Action Item:

  • Resolution for how we want to handle optionally splitting PTL role (summarize discussion)- njohnston & evrardjp

   Reducing systems and friction to drive change

OpenStack built up a lot of process to match its past level of activity and number of contributors. OpenStack now has fewer contributors and less activity, so to make things faster we need to reduce or drop some of the process around various activities.

The TC discussed the problems we face, to list them and solve them one by one. Single-maintainer teams like Requirements are one of the key things to solve; there were discussions about defragmenting OpenStack, meaning merging related teams into one, for example folding Requirements into Oslo. Below is the list of problems in this area:

  • TC separate from UC (solution in progress)
  • Stable releases being approved by a separate team (solution in progress)
  • Making repository creation faster (especially for established project teams)
  • Create a process blueprint for project team mergers
  • Requirements Team being a one-person hero 🙂
  • Stable Team
  • Consolidate the agent experience
  • Figure out how to improve project <–> openstack client/sdk interaction.

  Discuss tag “tc:approved-release”, should we deprecate/remove it?

This came up in the Manila PTG. Goutam pinged me on the TC channel about adopting the TC tags in Manila, and while checking this tag we found that it was introduced in the old model of OpenStack, when we had the incubated vs. integrated project concept. The tag was a reference for the BoD and interop team to know how mature a project is and whether it follows the release model, so that they could consider including that project in the interop certification program.

We concluded that we should remove it and notify the BoD/Interop group to refer to project.yaml for the list of OpenStack released projects.

Action Item:

  • Proposed the removal of the ‘tc:approved-release’ tag and indicate that projects in OpenStack Repos are TC approved -gmann

  OpenStack 2.0: Kubernetes-native

This is a new tag idea from Zane – https://review.opendev.org/#/c/736369/

“A common starting point for an OpenStack cloud that can be used to deploy
Kubernetes clusters on virtual machines in multiple tenants, and provides all
of the services that Kubernetes expects from a cloud.” This was not discussed much, as it came at the end of the PTG, but the idea was to kick off the discussion and start collecting feedback on the review.

 Quality Assurance

Etherpad: https://etherpad.opendev.org/p/qa-victoria-ptg

  Ussuri Retrospective

We discussed the good things we did in the Ussuri cycle and what to improve. Bug triage was one of the key things and made good progress; credit goes to kopecmartin. We also have a few new cores in QA: yoctozepto in DevStack and kopecmartin in Tempest.

During the Python 2 drop we faced a lot of issues keeping the stable branch testing stable. We were able to fix those issues in time and keep the gate healthy.

A few things needing more improvement are Keystone system scope testing and Patrole maintenance. We will keep prioritizing those in the Victoria cycle.

Action items:

  • Open bug discussions need to be done in PTG
  • keep bug triage
  • QA Office hour Time (if we have the time to discuss)
  • AGREED: move to 13:00UTC

  Make tempest scenario manager a stable interface

We need to find common manager methods among plugins and define them in Tempest. Plugins should reuse the code from Tempest then and drop any duplicate methods from their repositories.
We discussed a few ideas:

  1. Audit all the Tempest plugins and add all repeated methods to the scenario manager
  2. Find the methods that are actually *used* by the plugins
  3. Audit whether any plugins are still using the Tempest scenario manager
  4. Audit all the Tempest plugins and make the methods consistent in their parameters
  5. Clean up the existing methods, since a single class is populated with a lot of methods
  6. Break methods up to have a single scope as much as possible
  7. Add more detailed docstrings; this should help us understand the code, methods and classes

Action items:

  • Gmann, kopecmartin to create all the audit tasks on etherpad
  • Sonia list all plugins using the scenario manager copy under ‘Audit’ section.

  Gates optimization by a better test scheduling

Tempest now has a --worker-file parameter that is passed to stestr so that we can schedule the test execution over different workers in a balanced way. The idea here is to try distributing the test execution across gate jobs, but at the same time we need to think about parallel execution of the scenario tests, which were made serial due to an SSH issue. We need to try this on the tempest-parallel job first and see how it works.
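As a rough sketch (the exact layout should be checked against the stestr documentation), a worker file is a YAML list of worker entries, each holding a list of test regexes, and is handed to tempest run:

# workers.yaml (illustrative grouping)
- worker:
  - tempest.api
- worker:
  - tempest.scenario

$ tempest run --worker-file workers.yaml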

Action Items:

  • arxcruz to add this in tempest-full-parallel
  • arxcruz after that make tempest-full-parallel voting and rename it to tempest-next

  Tempest cleanup

Over the last few months, a tempest-cleanup Ansible role has been developed which will help us use (and test) the cleanup tool in the gates. Lots of improvements have also been made which optimized the tool and made it more efficient, new services were added to the scope of the cleanup, and of course some bug fixes were made as well. We discussed a few improvements as next steps:
Verify whether all leaked resources are cleaned up properly or not, for example by verifying the dry-run data.
Add a flag so that cleanup can start failing the job if there is any leaked resource, so that we can fix the tests causing the resource leak. If not failing the job, capture the data somewhere.
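For reference, the tool itself is driven from the Tempest CLI, roughly along these lines:

$ tempest cleanup --init-saved-state   # record pre-existing resources before the test run
$ tempest cleanup --dry-run            # report leaked resources without deleting anything
$ tempest cleanup                      # delete resources not present in the saved state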

Action items:

  • kopecmartin to send ML about this cleanup improvement we finished and usage.+ add plugins extension also in ML
  • kopecmartin to do L139 and L141 (Idea 1. and 2.)
  • gmann and masayukig can help plugins extension.

  Feature Freeze idea for new tests in Tempest and Patrole and other QA projects

We want to introduce a feature freeze for a few of the key repos under QA, like DevStack, Tempest, Grenade and Patrole. We agreed to call the feature freeze in the R-3 week of that cycle's release. Feature freeze in those projects means we will not be accepting new test cases or enhancements at the end of the cycle, to avoid regressions during the cycle release.

Action item:

  • gmann to add the doc for this.

  Cinder backends specific features testing in case of multi backends

There is a feature flag option for backend-specific feature tests, but the multi-backend case is different and the feature flag would not work. With multiple backends, some backends may implement the feature and some may not, so we cannot simply say the feature is not present in that environment.

Action item:

  • Let's add a per-feature backend config option, like encryption_enable_backend
  • If there are too many such config options then we need some other way to test both cases.
  • Add a new test class to cover the backend hint in the volume_type.

  How to handle the tox.ini constraint for each Tempest new tag

Tempest is branchless and releases tags per cycle, but tox.ini has the master upper-constraints file hard-coded as the default constraint. So an older Tempest tag's tox.ini (which still points at the master constraints) might not be compatible. We need to pin the constraints in tox.ini when we release a new tag.
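As an illustration (not the exact Tempest tox.ini), pinning amounts to switching the constraints URL in the tox.ini deps from the master file to the matching release when the tag is cut:

deps =
    -c https://releases.openstack.org/constraints/upper/ussuri
    -r{toxinidir}/requirements.txt
    -r{toxinidir}/test-requirements.txt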

Action item:

  • Document the process for when to merge the tox updates and what all things to do, for example, devstack changes etc.

  Migrate hacking checks from diff. projects to hacking itself

We want to make sure we add the most common and important pep8/flake8 checks to hacking itself: https://etherpad.opendev.org/p/hacking

Action item:

  • Comment on the etherpad and discuss them in the office hour

  Description for testcases as docstrings

This is ongoing work to add the test docstring: https://blueprints.launchpad.net/openstack/?searchtext=testcase-description
As further steps, we discussed publishing those docstrings with the auto-generation option in the Tempest docs.

  Victoria Priority & Planning

We discussed the Victoria priority and here is the list: https://etherpad.opendev.org/p/qa-victoria-priority

 

  Nova

Etherpad: https://etherpad.opendev.org/p/nova-victoria-ptg

  Ussuri Retrospective

We discussed what went well and what to improve next. A few things went well in the Ussuri cycle:

  • After RC1 things went fairly smooth.
  • We recruited two new cores (+ new stable cores \o/)
  • Policy work done finally, whoop! +1 (this was a lot of code)
  • nova-network is dead (along with nova-console, nova-consoleauth, nova-dhcpbridge, …)
  • We went Python 3-only at last.
    On the improvement side, more reviews are needed to get changes merged faster in Nova, and the number of cores still does not match the project's ongoing workload.

  [nova][oslo]: Policy migration to handle scopes and roles

There was one bug reported where, when users generate the JSON-format policy file from the Oslo tool, the policy file contains all the new default rules without the deprecated values: https://bugs.launchpad.net/nova/+bug/1875418
We discussed possible issues operators can face while adopting the new policy. Migrating to the new policy should be smooth, with a consistent way to use the policy file. A policy file in JSON format is an issue because you cannot comment out the default rules. We should provide a better way to use the policy file.

The policy file should only contain override rules, not the rules with default values. Now that we have policy in code, any rule not present in the policy file will be taken from the defaults in code; if there are no rules to override, it is fine not to pass a policy file at all. Below are the next steps and work items we need to finish in the Victoria cycle:

  • Warning if default rules in file
  • Upgrade check will be a good place too
  • Deprecation on the tool for JSON generation and then remove it in the next cycle
  • Warning on policy file being passed as JSON to Oslo
  • Change the config option policy_file default value from policy.json to policy.yaml
  • Convert the JSON file to YAML with the default rules commented out but keeping the overridden rules uncommented.

In summary, the first step will be to deprecate the JSON format and provide migration steps to the YAML format. We also need to add upgrade checks for the policy file format change and document it clearly in the project-side documentation.
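For instance, oslo.policy already ships a generator that can emit a YAML file with every default rule present but commented out, which operators can then selectively uncomment to override (exact invocation may differ by release):

$ oslopolicy-sample-generator --namespace nova --output-file policy.yaml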

  [nova][oslo]: Add a /healthcheck URL

This is about adding a healthcheck endpoint to Nova to check whether the service is running or ready to use; for example, a load balancer can call this endpoint to learn the status. There are multiple things to cover to say whether the service is healthy or not, such as whether the DB, MQ, services and cells are able to communicate. We cannot cover or report the complete health of a node, which is more in scope for the client side than for Nova.

PoC: https://review.opendev.org/#/c/731396/
Details for flow and example response are in https://etherpad.opendev.org/p/nova-healthchecks

ML: http://lists.openstack.org/pipermail/openstack-discuss/2020-May/015088.html

We discussed and agreed on below things to do as part of healthchecks:

  • Do the data collection via a cache, as Dan mentioned in the review (731396); healthchecks will return that cached info to unauthenticated users without talking to any DB/MQ (basically no processing of data)
  • API worker checks are fine to do, and system-level checks can live on the client side or external to Nova, as they involve scanning all the nodes, etc.
  • It will be implemented as a Nova healthcheck plugin, and exactly what will be added to it will be discussed in later Nova PTG sessions.
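For reference, the generic healthcheck middleware that already exists in oslo.middleware is typically wired into a WSGI paste pipeline roughly like this (a sketch only; the Nova-specific plugin behaviour discussed above is still being designed):

[filter:healthcheck]
paste.filter_factory = oslo_middleware:Healthcheck.factory
backends = disable_by_file
disable_by_file_path = /etc/nova/healthcheck_disable

Once added at the front of the relevant pipeline, it can be probed with, for example, curl http://<api-host>:8774/healthcheck.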

 

by Ghanshyam Mann at August 29, 2020 10:36 PM

August 24, 2020

OpenStack Blog

10 Years of OpenStack – Alex Xu at Intel

Storytelling is one of the most powerful means to influence, teach, and inspire the people around us. To celebrate OpenStack’s 10th anniversary, we are spotlighting stories from the individuals in various roles from the community who have helped to make OpenStack and the global Open Infrastructure community successful.  Here, we’re talking to Alex Xu from... Read more »

by Sunny at August 24, 2020 03:00 PM

Opensource.com

Happy 10th anniversary, OpenStack!

OpenStack has transformed the open source industry since it launched 10 years ago. It was an endeavor to bring greater choice in cloud solutions by combining NASA's Nova with Rackspace's Swift object storage and has since grown into a strong base for open infrastructure.

by Sunny Cai at August 24, 2020 07:00 AM

August 20, 2020

OpenStack Superuser

Inside open infrastructure: The latest from the OpenStack Foundation

Spotlight on: OpenDev Virtual Event Series

It was bittersweet to wrap up the last OpenDev in the virtual event series – OpenDev: Containers in Production last week. While it was not the same experience as in-person OpenDev events, the community made the best of the current situation. 

In the past three months, we’ve had over 1,400 attendees from more than 100 countries who came together to discuss Large Scale Usage of Open Infrastructure, Hardware Automation and Containers in Production. Thank you to all the community members who have worked together virtually to collaborate without boundaries.

This event was not possible without the support of the OSF Platinum and Gold members who are committed to the Open Infrastructure community’s success. Also, thank you to all the Programming Committee members and discussion moderators who helped to make these discussions possible!

Just because the virtual OpenDev is over doesn’t mean you can’t still experience it! If you would like to watch the previous virtual OpenDev events recordings or catch up with the discussions on the etherpad notes, you can find them here:

OpenDev: Large Scale Usage of Open Infrastructure

OpenDev: Hardware Automation

OpenDev: Containers in Production

Let’s keep talking about Open Infrastructure at the Open Infrastructure Summit on October 19-23! Summit registration is open and you can register at no cost

The following week, the Project Teams Gathering will be virtual! The event will be held from Monday, October 26 to Friday, October 30. The event is open to all OSF projects, and teams are currently signing up for their time slots. The schedule will be posted in the upcoming weeks. Register today

OpenStack Foundation news

Open Infrastructure Summit, October 19-23, 2020

Project Teams Gathering (PTG), October 26-30, 2020

Airship: Elevate your infrastructure

  • Congratulations to the 2020 – 2021 Airship Working Committee!
    • James Gu
    • Kostiantyn Kalynovskyi
    • Matt McEuen
    • Sreejith Punnapuzha
    • Drew Walters 
  • Check out the Airship Case Study segment of the newly released Baremetal White Paper and read about how Airship is using Metal3 with Ironic for provisioning.
  • In case you haven’t heard, Airship 2.0 has reached Alpha status! Architects Alan Meadows and Rodolfo Pacheco released a new blog post summarizing the lessons learned on the road to Alpha, and how those lessons have impacted the technical blog series. Check it out!

Kata Containers: The speed of containers, the security of VMs

  • Thank you to everyone who has attended the backlog review meetings and those who reviewed their issues offline. 
    • We cut down the current bucket from 111 issues to 66 issues. Please put some time to review the remaining issues so we can close this bucket by next week.
  • Check out the demo: Envoy HTTP compression with Intel QuickAssist Technology (QAT) in a Kubernetes cluster at KubeCon EU
    • Eric Adams from Intel delivered an advanced Kata Container use case using an Intel QAT to accelerate http compression offloads in Kubernetes. Watch the demo for free at the virtual Intel booth.
  • Want to contribute a Kata pull request (PR)? Learn more about how to get involved in Kata

OpenStack: Open source software for creating private and public clouds

  • The OpenStack community continues to make progress towards the Victoria release in October. We have now passed the Victoria-2 development milestone, with feature freeze coming up on September 10. Proposed release schedules for the next development cycle, Wallaby, have also been posted.
  • Two new SIGs (Special Interest Groups) have been formed. The Hardware Vendor SIG provides a place for hardware vendors to collaborate on integrating advanced hardware functionalities into OpenStack. The Cloud Research SIG aims to bridge the gap between academic research in Cloud Computing and OpenStack projects.
  • The OpenStack map was recently refreshed to better represent the current landscape of components that make up OpenStack. You can download the latest version (v. 20200701) to integrate in your presentations.
  • Are you ready to take your cloud skills to another level? The updated Certified OpenStack Administrator (COA) exam can help you with that! Check out the OpenStack COA exam and become a Certified OpenStack Administrator.

StarlingX: A fully featured cloud for the distributed edge

  • The StarlingX 4.0 release is now available! Check out highlights about the new features in a recent blog post, browse the code online or download the ISO image to deploy and try out the latest version of the platform.
  • The next TSC and Project and Technical Lead elections are coming up in the fall. Stay tuned for updates and check out the elections web page for further information!

Zuul: Stop merging broken code

  • Pin your Zuul and Nodepool installations to version 3 or deploy with Zookeeper TLS connections to accommodate upcoming changes to Zuul and Nodepool. Find more details on the mailing list.
  • Zuul and Nodepool will be available in Fedora 33. Find out more on the mailing list.

Check out these Open Infrastructure Community Events!

For more information about these events, please contact denise@openstack.org

Questions / feedback / contribute

This newsletter is written and edited by the OSF staff to highlight open infrastructure communities. We want to hear from you! If you have feedback, news or stories that you want to share, reach us through community@openstack.org . To receive the newsletter, sign up here.

 

The post Inside open infrastructure: The latest from the OpenStack Foundation appeared first on Superuser.

by Sunny Cai at August 20, 2020 01:00 PM

August 19, 2020

The Official Rackspace Blog

Cloud computing is a lifeline for businesses during COVID-19


 

The pandemic has revealed a new reality: the cloud is vital for keeping businesses running. From new remote workforces to virtual meetings, businesses have had to make critical adjustments to survive — including adopting and using more cloud services.

COVID-19 has served as a de facto catalyst for proving the value and flexibility of cloud computing, according to Rackspace Technology CTO of Solutions and host of the Cloud Talk podcast, Jeff DeVerter. “We’ve been adopting and supplying this technology to a certain segment of businesses for years, but now the rest of the business world is realizing they can access cloud capabilities to solve their problems. Almost everyone has been impacted in some way — even organizations heavily invested in traditional technologies. They are telling us that the cloud looks much more attractive now.”

From just over 20 years ago, when we only had dialup internet access, no streaming services and no smartphones, the world has changed dramatically — and even more so since the outbreak of COVID-19. “It’s been amazing to watch the growth of the cloud, with the jumps and acceleration year after year, and see how it’s impacted our lives,” said Jeff. “We should consider ourselves extremely lucky that we have these technologies to help us handle this situation now, get through it and continue working.”

As much as things have changed already, Jeff believes this is just the beginning. “Going forward, many more organizations will view the cloud as a strategic tool. I don’t think we’ll ever retreat from that perspective now. It will be the new normal or influence what the new normal will look like.”

Hear more from Jeff as he sits down with our Cloudspotting podcast hosts Alex Galbraith and Sai Iyer. In this recent podcast episode, they explore what it would have been like had the crisis occurred before the advent of the cloud, as well as how cloud computing can benefit businesses going forward. Jeff brings his decade-long experience of working with cloud-based services and technologies and advising companies on making cloud transformation decisions to the conversation.

Join Jeff, Alex and Sai as they discuss insights on the past, present and future of the cloud, including:

  • A time pre-internet and pre-smartphone and its limitations
  • The impact of cloud technology on organizations as a result of COVID-19
  • The progression of cloud adoption, from infrastructure-as-a-service to SaaS
  • What comes next, including the transformation of data to the cloud
  • How scalability will become a strategic advantage in an uncertain future
  • How more organizations will look to the cloud and ask: how can we perform better, cheaper and faster?

 

Listen to the full episode of the Cloudspotting podcast: https://cloudspotting.fireside.fm/24

by nellmarie.colman at August 19, 2020 01:12 AM

August 18, 2020

Daniel Bengtsson

Presentation of ensure-tox role.

Introduction

The ensure-tox role is simple: it's an Ansible role for Zuul that checks whether tox is installed. It looks for tox, and if it can't find it, it installs it via pip into a virtual environment for the current user. It can be very useful, particularly together with the ensure_global_symlinks variable.

Example with a bug fix

Recently, with a colleague, Bernard Cafarelli, we fixed the upstream CI for oslo-cookiecutter. It is a project that makes it easy to create a new Oslo project. The problem came from a test script that tried to run the tox command in the newly created project; the tox command was not visible in the virtualenv created for the new project. I could not reproduce the bug locally and couldn't immediately identify the problem, because I had tox installed on my machine. In such cases, ensure-tox comes in handy, together with the ensure_global_symlinks variable, which creates a symlink in the /usr/local/bin path. It is needed in this case because the script expects to find tox in a more standard location. The ensure-tox role was already in use by the parent job, so we just had to add the variable to fix the bug.

Zuul jobs

The oslo-cookiecutter project uses zuul-jobs, a project that contains a set of Zuul jobs and Ansible roles suitable for use by any Zuul system. This allows any project to reuse existing jobs and roles rather than reinventing the wheel. In our case, oslo-cookiecutter uses the tox job from the zuul-jobs project. The tox job uses the ensure-tox role, as can be seen here. The tox playbook first runs the ensure-tox role. As explained previously, ensure-tox makes sure that tox is installed; if it is not, the role installs it in a virtualenv using the ensure-pip-virtualenv role. In our case, we had to use the ensure_global_symlinks variable to make the tox command accessible in a directory on the $PATH, which allowed us to correct our problem. The zuul-jobs project contains many roles and playbooks; I will write an article soon to introduce Zuul and zuul-jobs.
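As a hypothetical sketch of that kind of fix (the job name is illustrative, not the actual oslo-cookiecutter change), a child of the standard tox job can request the symlink behaviour through job variables:

- job:
    name: oslo-cookiecutter-functional
    parent: tox
    vars:
      # ask ensure-tox to symlink the tox command into /usr/local/bin
      ensure_global_symlinks: true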

Conclusion

In conclusion, ensure-tox is very useful for making sure that a job has tox installed, and for creating a symlink on the host system for scripts that need it. This is fairly specific to OpenStack, because few projects outside it use Zuul. But if you are using Zuul for a Python project, I recommend using zuul-jobs and ensure-tox as well, so you can benefit from the many jobs, playbooks and Ansible roles already written for Zuul.

by Damani at August 18, 2020 12:45 PM

August 17, 2020

OpenStack Blog

10 Years of OpenStack – Fu Qiao at China Mobile

Storytelling is one of the most powerful means to influence, teach, and inspire the people around us. To celebrate OpenStack’s 10th anniversary, we are spotlighting stories from the individuals in various roles from the community who have helped to make OpenStack and the global Open Infrastructure community successful.  Here, we’re talking to Fu Qiao from... Read more »

by Sunny at August 17, 2020 01:00 PM

August 13, 2020

OpenStack Superuser

OpenDev: Containers in Production, that’s a wrap!

It’s bittersweet to wrap up the third OpenDev in the virtual event series – OpenDev: Containers in Production this week. While it was not the same experience as our traditional in-person OpenDev events, the community made the best of the current situation. This week, we had over 700 people from 470 companies from more than 70 countries who came together to discuss containers in production. Thank you to all the community members who have worked together virtually to collaborate without boundaries. 

700+ registrations from 70+ countries

This event would not have been possible without the support of the OSF Platinum and Gold members who are committed to the Open Infrastructure community’s success. Also, thank you to the Programming Committee members: Gergely Csatari, Kendall Nelson, Lingxian Kong and Qihui Zhao. You helped to make these discussions possible!

If you didn’t attend—or if you did and want a replay—we have collected the discussions and next steps you may have missed.

Day 1:

Allison Price, OpenStack Foundation Senior Marketing Manager, and Ashlee Fergerson, OpenStack Foundation Community & Events Coordinator, kicked off the first day of the event by explaining why the OSF chose Containers in Production as one of the OpenDev topics. 451 Research Market Monitor anticipated that the total market revenue of application containers will reach $4.3 billion by 2022. When combined with the OpenStack market, the total market revenue will reach $12 billion by 2023.

There is not only a huge demand for containers in production, but that demand also introduces choices among different technologies and implementation strategies, so users face a variety of integration challenges. The most recent OpenStack User Survey asked what container or platform-as-a-service (PaaS) tools users are using to manage their applications, and the results show that users combine a range of open source components in their environments. As Allison said, “it’s not just one size fits all. Every OpenStack user chooses something else to manage their applications based on what makes sense for their own environment.”

2019 OpenStack User Survey. Find these metrics and more at openstack.org/analytics

Before the discussion began, Xu Wang and Bin Liu from Ant Group gave a demo on monitoring container usage, featuring Kata Containers, Kubernetes, Prometheus and Grafana. They answered questions from the attendees, such as whether there is a continuous data stream from the shim to the Kata agent to collect the metrics, and what the size of the virtual machines (VMs) is with Kata Containers. If you missed the demo or would like to rewatch it, you can watch the recording here.

Part One: Using OpenStack + Containers Together: OpenStack + Other Container Projects (not k8s)

Part one of OpenDev’s topic on using OpenStack + containers together was moderated by Spyros Trigazis, CERN computer engineer. This discussion was mainly focused on running containers in environments without Kubernetes. Spyros kicked off the discussion by asking the attendees to share the kinds of images they consume and how they build images. He explained how CERN uses Zuul, an open source CI project supported by the OpenStack Foundation, and OpenStack Magnum to build their images. Gergely Csatari, Nokia Senior Specialist, followed up with questions about which base images CERN uses and how Spyros’ team started building them from scratch. The discussion continued with active participants from various companies on topics such as how to use OpenStack and containers together and the requirements users need. Watch the full session recording and catch up with the discussion on the etherpad.

Part Two: Using OpenStack + Containers Together: OpenStack Projects that Interface with k8s (also related projects like Metal3 and Kata Containers)

Part two of the topic on using OpenStack + containers together was moderated by John Garbutt, StackHPC principal engineer. This discussion was mainly focused on running containers on OpenStack with Kubernetes or with other open source projects. John kicked off the discussion by asking attendees to take polls on where they are with OpenStack and Kubernetes today and what they use to create Kubernetes clusters on OpenStack. The poll results showed that the majority of attendees are already using OpenStack and Kubernetes, and they want to find out more to solve the challenges. In addition, the most popular tools that people use to create Kubernetes clusters on OpenStack are OpenStack Magnum, Rancher, and kubespray. Later, the discussion moved to topics on disposable clusters and long-lived clusters and how they change the way people use OpenStack and Kubernetes together. Overall, there were a lot of integration challenges and implementation strategies that the users shared in this discussion session.

If you are interested in further discussion of performance and accelerators when Kubernetes and OpenStack work together, make sure to join the OpenStack SIGs to continue the conversation. See the discussion notes here and watch the full discussion recording here.

SIG Cloud Provider + Provider OpenStack (openstack w/ k8s)

The moderators, Anusha Ramineni & Christoph Glaubitz, NEC technical specialist & iNNOVO Cloud GmbH cloud architect, kicked off the discussion by giving a brief introduction to the SIG Cloud Provider and Provider-OpenStack project. This discussion touched on application management on Kubernetes, how Kubernetes deploys on top of OpenStack and how people can use OpenStack to run on top of Kubernetes. Later, the discussion transitioned to how software running in the cluster can help to leverage OpenStack by creating load balancers and more. See the discussion notes here and watch the full discussion recording here.

Day 2:

Telco + Network Function

As the moderator for this session, Gergely Csatari, Nokia senior specialist, kicked off the discussion with a question: Why do telecom workloads need additions to Kubernetes to run in production? The discussion touched on topics such as what requirements telecom equipment has, how these requirements lead to the need for different Kubernetes extensions and what kind of network API telcos should have. If you are interested in continuing the discussion on a networking API for Kubernetes, please add your name on line 98 of the etherpad. If you would like to continue the discussion on agreeing on workload separation, please add your name to line 72 here. Watch this discussion recording here.

Security + Isolation

Xu Wang, Ant Group senior staff engineer, led the discussion on security and isolation. This discussion was mainly on container runtimes and management of Kubernetes clusters on top of OpenStack to provide tenant isolation. Users from various companies shared their experiences on the ways to improve the security of containers and protect the host from malicious container apps. Later in the discussion, attendees also shared their best practices and guidance for securing containers. If you are interested in the security Guidance for Telco Network Functions, it is maintained here. If you have missed this discussion or want a replay, check out the discussion recording and read the discussion notes on the etherpad.

Next steps:

Just because the Virtual OpenDev is over doesn’t mean you can’t still experience it! If you would like to watch the previous virtual OpenDev event recordings, you can find them here: Large Scale Usage of Open Infrastructure and Hardware Automation.

Don’t forget to join the open infrastructure community at the next virtual event: Open Infrastructure Summit on October 19-23. The registration is open, and you can register at no cost! 

The following week, the Project Teams Gathering will be virtual! The event will be held from Monday, October 26 to Friday, October 30. The event is open to all OSF projects, and teams are currently signing up for their time slots. The schedule will be posted in the upcoming weeks. Register today! 

The post OpenDev: Containers in Production, that’s a wrap! appeared first on Superuser.

by Sunny Cai at August 13, 2020 05:06 PM

August 12, 2020

OpenStack Superuser

Contributor Q&A: A college student’s experience contributing to the OpenStack Ussuri release

Participate. It’s a simple word, but has a lot of meaning packed in.

At a recent virtual OpenDev event, Jonathan Bryce, executive director of the OpenStack Foundation (OSF), told attendees this is one of the most critical things to do once you join an open source community. He encouraged participants to think broadly and remember that collaboration is built by people who are actively engaged and participating by sharing their knowledge and experiences so they can learn from each other.

This is exactly what some North Dakota State University students recently did for the OpenStack Ussuri release.

North Dakota State University has a capstone class every spring that matches small groups of students with open source projects. This spring was the second time OpenStack has participated in this program. Last year, the students were involved in StoryBoard, specifically enhancing the migration scripts that the community uses to move people off of Launchpad to StoryBoard.

This year, the OpenStack community had four students helping to add support for configuring TLS ciphers and protocols on Octavia load balancers. Mentored by Michael Johnson and Adam Harwell throughout the semester, the students learned not only about Octavia, but also about OpenStack and open source more broadly. They had real world experience communicating with a global open source community, responding to comments on patches they pushed, and managing their time to accomplish what they set out to before the end of their semester. With their hard work, they landed TLS cipher and protocol support before the end of the Ussuri release, and it became a large part of the promotion and marketing around the release.

I interviewed each of the participants to learn how they got started and what their takeaways were. Below are responses from Dawson Coleman, a recent graduate of North Dakota State University.

What was the hardest part about getting started?
I thought about just writing “DevStack” here and moving on to the next question. While that would obviously be hyperbolic, struggling with DevStack was the defining theme of my start with OpenStack development.

The hardest part for me was hitting some of the less-than-obvious pitfalls. For example: there’s currently a bug in neutron where, if the system is restarted (a virtual machine in my case), all the network interfaces break. The fun part of this is that unless you explicitly check all your network interfaces and know what to look for, you would have no idea this happened, and in my case I didn’t notice that anything had gone wrong until load balancer creation failed inexplicably.

From a learning/onboarding standpoint, it’s not particularly hard to deal with an issue like this: just don’t shut down your VM. Getting to the point where you’re aware of that fact, however, isn’t always a pleasant process. It’s those little things that you might have to learn the hard way that I think were the hardest part about starting out.

What could have made the getting started process easier?
While it would be easy for me to sit here and talk about bugs and say “make it not do that”, I recognize that maintaining something like DevStack isn’t simple and spending time on maintenance isn’t free. I think something that’s easy to miss as a junior dev is that in some cases it’s actually more efficient from a productivity standpoint to just leave quirks as they are, even if the workarounds aren’t always convenient, since all the serious users of your tool will eventually learn how to deal with it.

That being said, I think it would be cool if there was a “common problems” section or something equivalent mentioned in the getting started guide for DevStack, even if it was just links to bug reports or something of a similar nature.

Of course, I couldn’t mention my struggles with DevStack without also mentioning all the great help I received in the #openstack-lbaas IRC channel. At first I was really worried about interrupting others with my questions or being too much of a noob, but I found that everyone I interacted with was very understanding and helpful. I think if I had realized this earlier I could have solved some of my issues even faster.

What was the biggest benefit you got out of your involvement? (hard or soft skills, connections, etc)
All things considered, I think the biggest benefit was all the different aspects of the development process I got to experience and being able to see how all the different facets of that process come together at the end of the day. This wasn’t some throwaway demo for a class presentation, this is a real product that will be run on production servers. To have the opportunity to participate in working on a project like this and see how all the parts fit together was something truly special, and it’s definitely one of the highlights of my college experience. If I had to pick out some highlights, though, these are what they would be:

  • Outside of one tiny commit to the Skia graphics library, working on Octavia was my first experience participating in any open source project. Octavia was a great introduction to participating in open source development.
  • Python is everywhere nowadays. I wasn’t particularly well versed in the language coming into the project, but Octavia gave me a taste of a production Python codebase, being able to work on multiple areas such as REST APIs, database management, and unit testing.
  • Through the process of being introduced to OpenStack’s workflow I also got the opportunity to work with some new tools like IRC chat and Gerrit code review. Experience with these tools is applicable to tons of projects.
  • The opportunity to participate in code reviews and have technical discussions was a great experience.
  • Even though they play a comparatively minor role, I found all the various tools that were used to automate processes like documentation and release note generation to be fascinating.

What advice do you have for students who want to get started with open source?
I think the biggest key to success in joining any project would be to start with a willingness to learn, and I would emphasize that a willingness to learn is just as important when it comes to a project’s development process as the code itself. Being considerate, respectful of peoples’ time, and willing to communicate goes a long way. If you run into roadblocks as a newcomer, you definitely want to make sure to do your due diligence in solving problems yourself (making sure to read relevant documentation and resources), but asking questions can help the project as well as yourself. If you run into issues that the project hasn’t documented yet, you might have an opportunity to help document bugs.

Stepping into a new group of people can be intimidating at first, but as long as you make an honest effort to coexist with others it’s hard to go wrong. As with introducing yourself to any new group of people, there will always be that element of uncertainty, but I think it’s a risk that’s worth taking, and one that looks way scarier than it actually is. The OpenStack community I have found to be nothing but friendly and helpful. One phrase that I was told repeatedly during the onboarding process was “don’t be shy”, and I think that sums it all up pretty well.

What made you interested in OpenStack as a project, as opposed to some of the other options?
I was actually introduced to OpenStack when I worked at NDSU’s research computing department, although at the time we were only using it to manage VMs. From there I had a general idea of what OpenStack was, and after seeing just how widely it was actually used I thought working on OpenStack would be a great opportunity. I’ve also found cloud infrastructure to be an interesting field, and I figured it’s a growing market.

The thing that was unique about the OpenStack Octavia project was the open-source aspect. I was aware of this going in, but I think I really only appreciated the benefits of it in hindsight. Almost all of the other choices were proprietary projects that would belong to their respective companies afterwards. Now, don’t get me wrong–some of those companies were working on some neat stuff! However, at the same time, it’s doubtful that the students who worked on those projects will ever see their code again.

The transparency enabled by open source, on the other hand, opens up a lot more opportunities for me to revisit my contributions after the fact. If I ever want to show explicit examples of what I worked on, I have that. My commits are even linked to my GitHub profile! And if for some reason I eventually wanted to reuse that code for something else, I could. In that sense, even though the code belongs to OpenStack, I can still have some feeling of “ownership” in that I still have a lot of accessibility to the code under the Apache license. Now, even if the code I wrote most likely doesn’t have realistic applications outside of Octavia, it’s cool to be able to have something to show for your work after everything is all said and done.

What did you like most about your involvement with the OpenStack Octavia team? What did you contribute to the Ussuri release?
What contributed the most to making my involvement enjoyable was the atmosphere in the community. We were fortunate to be able to work closely with current Octavia PTL Michael Johnson throughout the entire process, and the weekly video meetings we had as a team were pivotal in answering our questions and keeping us on track towards our goals. I think we owe a lot of what we achieved to him as well as the rest of the Octavia community.

My role on the NDSU Octavia project was doing a majority of the back-end development. In Ussuri, I implemented TLS cipher selection in Octavia. This is a big step forward, as before you could create a TLS load balancer but there was no standard way of even knowing what ciphers would be accepted. In Ussuri this has been fleshed out into its own configuration field, so now it’s straightforward for operators to know and specify exactly what ciphers their load balancers will use.

Due to time constraints, we were only able to get cipher configuration for inbound connections done for Ussuri. This is only one part of the new set of TLS configuration options, which will include configuration of both TLS ciphers and versions for incoming connections and back-end re-encryption, as well as a cipher blacklist and a minimum TLS version option. These features have been merged and will be available as part of the Victoria release.
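
For readers who want to see what this looks like in practice, here is a minimal sketch using the OpenStack SDK. It assumes a clouds.yaml entry named “mycloud” and an SDK release recent enough to pass the Ussuri-era tls_ciphers listener field through to the Octavia API; the names, IDs and cipher string are all illustrative, not taken from the interview.

```python
# Minimal sketch: create a TERMINATED_HTTPS listener with an explicit cipher list.
# Assumes a clouds.yaml entry named "mycloud" and an openstacksdk release recent
# enough to accept the Ussuri-era `tls_ciphers` listener field (an assumption).
import openstack

conn = openstack.connect(cloud="mycloud")

lb = conn.load_balancer.create_load_balancer(
    name="web-lb",
    vip_subnet_id="SUBNET_ID",  # placeholder subnet UUID
)

listener = conn.load_balancer.create_listener(
    name="web-tls",
    loadbalancer_id=lb.id,
    protocol="TERMINATED_HTTPS",
    protocol_port=443,
    default_tls_container_ref="BARBICAN_SECRET_REF",  # placeholder certificate ref
    # Operators can now state exactly which ciphers the listener will accept:
    tls_ciphers="ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384",
)
print(listener.id)
```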

Cover Photo Source // cc

The post Contributor Q&A: A college student’s experience contributing to the OpenStack Ussuri release appeared first on Superuser.

by Kendall Nelson at August 12, 2020 02:00 PM

Women of Open Infrastructure: Meet Amy Marrich on the OpenStack Foundation Board

This post is part of the Women of Open Infrastructure series, spotlighting the women in various roles in the community who have helped make Open Infrastructure successful. With each post, we learn more about each woman’s involvement in the community and how they see the future of Open Infrastructure taking shape. If you’re interested in being featured or would like to nominate someone to tell their story, please email editor@openstack.org.

This time we’re talking to Amy Marrich on the OpenStack Foundation board. She tells Superuser about her role in the community and her advice for anyone who is considering a career in open source.

What’s your role (roles) in the Open Infrastructure community?

OSF Individual Board member, Chair for OSF Diversity and Inclusion Working Group, and Core for OpenStack-Ansible

What obstacles do you think women face when getting involved in the Open Infrastructure community?

I think women who are further along in their careers are more willing to speak up, while those just starting out may find things intimidating.

Why do you think it’s important for women to get involved with open source?

Women often look at and think of things differently, and the more viewpoints we have the better Open Source becomes.

Efforts have been made to get women involved in open source, what are some initiatives that have worked and why?

Mentoring programs such as GSoC and Outreachy have been successful because they are more structured and better supported than purely internal efforts. I think the fact that they are supported by external resources and that interns are paid helps create more of a ‘work’ environment and also creates an ally to help the interns get started.

Open source moves very quickly. How do you stay on top of things and what resources have been important for you during this process?

I think it’s important to be on the mailing lists as well as on IRC with a bouncer; being able to read the backlog helps you catch up on anything missed. The mailing lists provide valuable information and conversations. In addition, following people on Twitter gives you a wider view of different subjects and different projects.

What advice would you give to a woman considering a career in open source? What do you wish you had known?

Don’t be afraid to speak up and ask questions. Find a mentor and an ally who you can ask questions but will also support you if you do speak out and someone tries to speak over you.

The post Women of Open Infrastructure: Meet Amy Marrich on the OpenStack Foundation Board appeared first on Superuser.

by Superuser at August 12, 2020 01:00 PM

August 11, 2020

OpenStack Superuser

Where are they now? Superuser Awards winner: VEXXHOST

VEXXHOST continues to embody what a Superuser is—its team continues to contribute upstream, participate on the OpenStack Technical Committee, and grow its footprint of open source projects like OpenStack, Kubernetes, and Zuul. It’s only been a year since the team won a Superuser Award (after a record three nominations!), but its commitment and growth continue to impress the community.

What has changed in your OpenStack environment since you won the Superuser Awards?

Since winning the Superuser Award, we’ve upgraded to newer OpenStack releases, increased our footprint in data centers all around the world and increased the number of managed private clouds we run, both hosted and on-premises. We’ve also helped more organizations get OpenStack into their environments, as well as helped other organizations upgrade theirs.

What is the current size of VEXXHOST’s OpenStack environment?

We have a public cloud spanning over two regions, as well as numerous private clouds that we’ve deployed and managed all over the world. Overall, we can say that we’ve started managing an aggregate of over 100,000 cores.

What version of OpenStack is VEXXHOST running?

OpenStack Train

What open source technologies does your team integrate with OpenStack?

It’s not a secret that we’re huge open source advocates. A lot of the components of our infrastructure and the tools that we use are open source.

We deploy:

  • OpenStack using Ansible and Kubernetes
  • Ceph for storage
  • Zuul for CI
  • Prometheus and AlertManager for monitoring.

We also offer Kubernetes solutions and integrate those using OpenStack Magnum and Kuryr.

What workloads are you running on OpenStack?

We run a public cloud, as well as multiple private clouds, which are used by clients who run all sorts of workloads.

How is your team currently contributing back to the OpenStack project? Is your team contributing to any other projects supported by the OpenStack Foundation (Airship, Kata Containers, StarlingX, Zuul)?

We’re excited to say that we do give back to the community! We contribute upstream whenever possible. We do so for OpenStack as well as other projects like OpenStack-Ansible and Zuul. Additionally, many of our team members are cores in different projects. We also contribute back in resources. We’re infrastructure donors to the OpenStack Foundation as well as being supporters of Kata Containers.

What kind of challenges has your team overcome using OpenStack?

OpenStack has allowed us to offer a complete solution with an exhaustive list of services to accommodate clients in so many different industries. With OpenStack being around for so long, we’ve seen it morph, and continue to change and adjust, to be an answer to the needs of companies at different scales, with different requirements, in this ever-changing industry.

Stay tuned for more updates from previous Superuser Award winners!

 

The post Where are they now? Superuser Awards winner: VEXXHOST appeared first on Superuser.

by Ashlee Ferguson at August 11, 2020 01:00 PM

August 10, 2020

OpenStack Superuser

For Blizzard Entertainment, it’s “game over” on scaling complexity

Blizzard Entertainment is a California-based software company focused on creating and developing game entertainment for customers in the Americas, Europe, Asia, and Australia. In late June, two of the company’s engineering leaders, Colin Cashin and Erik Andersson, talked to the open source community about their cloud strategy and scaling challenges at the OpenDev virtual event focused on large scale usage of open infrastructure.

Blizzard employs a multi cloud strategy. The company uses public clouds from Amazon, Google, and Alibaba, and since 2016 also owns and operates an extensive global private cloud platform built on OpenStack. Blizzard currently has 12,000 compute nodes on OpenStack distributed globally, and even managed to upgrade five releases in one jump last year to start using Rocky. The team at Blizzard is also dedicated to contributing upstream.

All in all, Blizzard values simplicity over complexity, and has made consistent efforts to combat complexity by addressing four major scaling challenges.

Scaling challenge #1:
The first scaling challenge that Blizzard faced was Nova scheduling with NUMA pinning. NUMA pinning ensures that guest virtual machines are not scheduled across NUMA zones on dual-socket compute nodes, thereby avoiding the penalty of traversing oversubscribed bus interconnects on the CPU. For high-performance game server workloads, this is the difference between a great and a not-so-great player experience. In 2016, they made the decision to implement NUMA pinning during scheduling to prevent these issues, ahead of the launch of Overwatch, Blizzard’s first team-based first-person shooter (FPS). At scale, this decision caused a lot of pain. NUMA scheduling is expensive and requires calls back to the Nova database, impacting the turnaround time of this process. During particularly large deployments, they regularly ran into race conditions where scheduling failed, and ultimately addressed this issue with configuration tuning to increase the target pool for the scheduler from 1 to 20 compute nodes. Another side effect of NUMA pinning was broken live migrations, a hindrance that is now fixed in Train’s Nova release.

The takeaway: For large environments, NUMA pinning should be implemented in a tested and controlled manner; note that live migration with NUMA pinning is fixed in newer releases.
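
As a rough illustration (not Blizzard’s actual configuration), NUMA pinning in Nova is typically requested through flavor extra specs. The sketch below assumes a clouds.yaml entry named “mycloud” and a reasonably recent openstacksdk that exposes create_flavor_extra_specs; the flavor name and sizes are made up.

```python
# Illustrative sketch: a flavor whose guests get dedicated, NUMA-pinned CPUs.
# The clouds.yaml entry, flavor name and sizes are placeholders, and
# create_flavor_extra_specs assumes a reasonably recent openstacksdk release.
import openstack

conn = openstack.connect(cloud="mycloud")

flavor = conn.compute.create_flavor(
    name="game.numa.8", ram=16384, vcpus=8, disk=40,
)

# hw:cpu_policy=dedicated pins guest vCPUs to host cores, and hw:numa_nodes=1
# keeps the guest on a single NUMA node, avoiding the cross-socket interconnect
# penalty described above.
conn.compute.create_flavor_extra_specs(flavor, {
    "hw:cpu_policy": "dedicated",
    "hw:numa_nodes": "1",
})
```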

Scaling challenge #2:
Next, Cashin and Andersson discussed scaling RabbitMQ. RabbitMQ is a tool that acts as a messaging broker between OpenStack components, but in Blizzard’s case it has proven to be easily overwhelmed. Recovering services when something went wrong (e.g. large scale network events in a datacenter) appeared to be their biggest challenge at scale, and this wasn’t something that could be overlooked. To tackle this, Blizzard tuned RabbitMQ configurations to introduce extended connection timeouts with a variance to allow for slower but more graceful recovery. Additional tuning was applied to Rabbit queues to make sure that only critical queues were replicated across clusters and that queues could not grow exponentially during these events.

The takeaway: RabbitMQ needs tuning unique to your environment.
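
The talk doesn’t spell out Blizzard’s exact settings, but as one hedged illustration of the second kind of tuning, the RabbitMQ management HTTP API can apply a mirroring policy only to queues whose names match a pattern, so that less critical queues are not replicated. The host, credentials, queue-name pattern and mirror count below are all placeholders.

```python
# Illustrative only: apply a RabbitMQ HA policy to a subset of queues via the
# management HTTP API, so that only "critical" queues are mirrored.
# Host, credentials, queue-name pattern and mirror count are all placeholders.
import requests

RABBIT_MGMT = "http://rabbit.example.com:15672"
AUTH = ("admin", "secret")  # placeholder credentials

policy = {
    "pattern": "^critical\\.",     # only queues matching this name pattern
    "apply-to": "queues",
    "priority": 1,
    "definition": {
        "ha-mode": "exactly",      # mirror to a fixed number of nodes...
        "ha-params": 2,            # ...rather than to every node in the cluster
    },
}

# %2F is the URL-encoded default vhost "/"
resp = requests.put(
    f"{RABBIT_MGMT}/api/policies/%2F/critical-queues-ha",
    json=policy,
    auth=AUTH,
)
resp.raise_for_status()
print("policy applied:", resp.status_code)
```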

Scaling challenge #3:
Neutron scaling proved to be the third biggest hurdle for Blizzard. Blizzard experienced several protracted operational incidents due to having certain OpenStack services colocated on the same controller hosts. The Blizzard team fixed this in 2019, when they decided to scale horizontally by migrating Neutron RPC workers to virtual machines. Moving to VMs also solved the shared fate of the API and worker pools. Additionally, there was the issue of overwhelming the control plane when metadata services proxied huge amounts of data at scale. After much research and conversation with the community, Andersson was able to extend the interval to 4-5 minutes, reducing load on the control plane by up to 75% during normal operations.

The takeaway: Neutron configuration and deployment should be carefully considered as the scale of your cloud grows.

Scaling challenge #4:
Lastly, the concern of compute fleet maintenance had been an issue for Blizzard for quite some time. As their private cloud went into production at scale, there was an internal drive to migrate more workloads into cloud from bare metal. In many cases, this meant that migrations took place before applications were truly cloud aware. Over time, this severely impacted Blizzard’s ability to maintain the fleet. Upgrades involved lots of toil and did not scale. Over the past 15 months, Blizzard’s software team has built a new product, Cloud Automated Maintenance, that enables automated draining and upgrading of the fleet. The product uses Ironic underneath to orchestrate bare metal and a public cloud style notification system, all automated by lifecycle signaling.

The takeaway: Onboard tenants to OpenStack with strict expectations set about migration capabilities, particularly for less ephemeral workloads. Also, have the processes and/or system in place to manage fleet maintenance before entering production.

Going forward, Blizzard will continue to pinpoint and tackle challenges to eliminate complexity at scale as much as they can. If you’re interested in finding solutions to these challenges, Blizzard is hiring at their offices in Irvine, CA.

The post For Blizzard Entertainment, it’s “game over” on scaling complexity appeared first on Superuser.

by Ella Cathey at August 10, 2020 12:00 PM

August 06, 2020

OpenStack Superuser

Verizon’s Optimum Performance Relies on Owning the Testing Process

Verizon’s cloud platform, built on an OpenStack infrastructure distributed across 32 global nodes, is used by the networking team to provide network services to commercial customers. Because these applications are commercial products such as firewalls, SD-WAN and routing functions provided by third-party companies, not owned by Verizon, they sometimes had odd interactions with the underlying infrastructure and with each other.

Issue: Uneven vendor application performance

The product engineering team realized that the applications owned by Verizon’s partners were each configured differently, and that how each was configured had a significant effect on how it behaved in the environment.

SDN vendor performance variance became a pressing issue that significantly affected throughput in the field. For example, it was discovered that in many cases, when encryption was turned on, throughput was reduced by half. With traffic moving through multiple systems, it became difficult to determine the cause (or in some cases, causes) of problems and determine the fixes needed. Dramatic variation in vendors’ ability to take full advantage of virtualized applications and infrastructure, and to optimize those applications for OpenStack, became a major challenge.

Solution: Create platform and processes to address issues

Verizon tackled this issue of inconsistency by building a production engineering lab with full testing capabilities. This lab environment, used for product development, production support, and troubleshooting customer configurations, gives a clear and efficient feedback loop that is useful for informing product managers, sales teams and customers with real world results. For instance, when a customer decides to run voice traffic through a firewall (not a common configuration), with the lab Verizon can access and analyze all the different nuances of that configuration. The lab is also used to work closely with vendors to optimize their virtualized applications. It supports the capacity to test both data center environments and edge devices.

As a consequence of developing the production engineering lab, Verizon now has the ability to insist on thorough and consistent testing of each vendor’s application. Verizon is able to take customers’ production traffic and run it through the lab, making it possible to reproduce customers’ issues in the lab environment. Through verifying each application, testing it for performance based on factors like encryption, and making full performance testing on all integrated service chains automated and mandatory, Verizon is able to provide a much higher level of value to its customers and prevent potentially unpleasant surprises.

User Story: Financial Services firm with high security and performance requirements

Customers look to Verizon with high expectations, and Verizon makes sure to work with each of their vendors to provide the testing and support that they need most to meet their SLAs.

One of Verizon’s customers began to experience problems with low bandwidth and application microfreezing. This was a big problem for their security application. After some testing it soon became obvious that this behavior was common to all security applications running on the virtualized environment. Immediately, the team at Verizon began to make changes to how the VMs were stood up in the environment, without needing to change any aspects of the underlying hosted infrastructure itself.

Because the results of this case study affected nearly every single one of Verizon’s vendor applications, particularly where customers had latency-sensitive deployments and transports larger than 100 Mbps, the company has now developed new standards to support customer configurations. All VMs for future applications are now automatically configured to be pinned to resources to avoid resource contention, vendors are mandated to support SR-IOV networking deployment, and customers are cautioned about throughput behavior if they choose to turn on traffic encryption.

Customers want reliable performance, but it is not uncommon for them to put unexpected demands on their services. By building a full lab and testing center, Verizon was able to test customer configurations and troubleshoot issues down to the individual feature level. As a result, Verizon, as a large operator, now has even more capacity to facilitate cooperation with all of its vendors—ensuring the integrated service chains perform as expected, solving issues uncovered during development or production, and quickly addressing any customer-related issues.

The 2020 OpenDev event series was held virtually. The photo is from OpenDev 2017, which was focused on edge computing and where Beth Cohen also presented a session.

The post Verizon’s Optimum Performance Relies on Owning the Testing Process appeared first on Superuser.

by Ella Cathey at August 06, 2020 02:00 PM

August 05, 2020

The Official Rackspace Blog

Seven common misconceptions about FedRAMP ATO


 

Cloud service providers (CSPs) excel at building and delivering technologies that help solve their customers’ biggest challenges. It’s what they’re best at. CSPs are not, however, typically well-versed in comprehensive federal security and compliance standards and the hundreds of requirements involved.

Yet, to sell their solutions to the U.S. Federal Government, CSPs must first achieve a FedRAMP Authority to Operate (ATO), demonstrating they meet these standards.

The FedRAMP ATO certification process can be daunting, expensive and time-consuming for CSPs. And to make matters worse, CSPs often approach the process with misconceptions that can become significant barriers.

Through our experience helping businesses achieve their FedRAMP ATO over the years, we’ve identified seven misconceptions that occur most frequently. By sharing these with you, we hope you can avoid making the same mistakes and have a more successful journey toward your own FedRAMP ATO.

 

Misconception #1: I do/don’t need to be FedRAMP compliant.

Depending on which services you provide, you may be required to be FedRAMP compliant (in the case of selling SaaS), even if you are not actively seeking a government contract. In other cases, you may be seeking compliance when it’s not actually needed (e.g., you aren’t a cloud service). Do you know your situation?

 

Misconception #2: You can get FedRAMP-ready on your own.

Unfortunately, there’s not an itemized list of best practices that you can check off as you move down the path to authorization. FedRAMP ATO is a formal government designation that must be implemented, assessed by a third-party and validated by the government.

There are timelines to meet, schedules to build and testing to coordinate. Some processes can run in parallel, while others must happen in sequence. Documentation must be managed properly so that there are easy-to-follow paper trails. Any delay will cost you money.

And don’t forget, you also have your own business to run at the same time, with finite IT resources that might be at risk of being stretched thin.

 

Misconception #3: Once you become authorized, you are authorized forever.

It would be nice if, after all your hard work to get authorized, you simply stayed that way. Unfortunately, this is not the case. You must get reauthorized every year, usually at a cost of around $1 million per provider, per year. You must also continuously monitor and document security and governance requirements to maintain your FedRAMP ATO.

 

Misconception #4: JAB authorization is better than an agency authorization.

While a Joint Authorization Board (JAB) Provisional ATO (P-ATO) may streamline some things, an agency ATO is just as effective. In addition, an agency ATO is typically faster and cheaper to achieve, as you get to skip the FedRAMP Ready step.

 

Misconception #5: You must use a 3PAO for advisory services.

Many third-party assessment organizations (3PAOs) pitch costly (and often unnecessary) consulting services up front that can put you “behind the eight ball” financially. It’s better if you can establish the requirements your system meets and plan which actions your team must take to address vulnerabilities before you engage a 3PAO.

 

Misconception #6: Federal agencies are reluctant to sponsor a FedRAMP authorization

With all of the regulations and rules around the FedRAMP ATO process, it’s easy to think that federal agencies are reluctant to sponsor FedRAMP authorizations. Thankfully, this couldn’t be further from the truth. The federal government realizes that the intrinsic benefits of the cloud (e.g., remote access, scalability, collaboration efficiency) help it achieve its mission to deliver services to the public, and agencies are always looking to sponsor new CSPs.

 

Misconception #7: Attaining a FedRAMP ATO is straightforward.

Attaining a FedRAMP ATO is an arduous process. You must meet more than 300 requirements, as outlined in 1,200+ pages of documentation. With an average investment of $2.25M to get authorized, you’ll want to make sure you’re investing your time and money properly. Thankfully, there is a shortcut of sorts via inheritable security controls, which can minimize the number of controls your company must complete in-house, saving you time and money.

 

Streamline your FedRAMP ATO journey

With Rackspace Technology, you can leverage the power of inheritable security controls and be FedRAMP ATO authorized in as little as four months. Rackspace Government Cloud became the first JAB-authorized platform-as-a-service, back in 2015. Since then, we’ve helped over a dozen CSPs obtain their FedRAMP ATO. And we can help you, too.

If you’d like to take a deeper dive, I invite you to attend our upcoming interactive workshop, where you’ll learn first-hand from subject matter experts who live and breathe FedRAMP — including an authorized CSP, a compliance ISV and a 3PAO. You’ll also learn how to manage FedRAMP security and governance requirements and get your government cloud solutions to market faster. Topics we’ll cover include:

  • Achieving FedRAMP ATO three times faster while saving 70% on monthly operational costs
  • Reducing advisory, engineering and audit costs to free up time and resources for innovation
  • Automating security governance and documentation to ace the assessment
  • Attaining always-on, scalable and secure infrastructure and accessing managed capabilities and tools when you need them — whether your cloud is private, public or hybrid.

 


by nellmarie.colman at August 05, 2020 04:45 PM

OpenStack Superuser

Where are they now? Superuser Awards winner: NTT Group

We’re spotlighting previous Superuser winners who are on the front lines deploying OpenStack in their organizations to drive business success. These users are taking risks, contributing back to the community and working to secure the success of their organization in today’s software-defined economy.

Nippon Telegraph and Telephone (NTT Group)  has various OpenStack deployments in production including large scale public cloud, private cloud for internal services and NFV infrastructure. The following are its major OpenStack deployments.

Note: Abbreviations in brackets are used to describe each deployment throughout this document:

  • [Com] NTT Communications provides a large-scale public cloud service for enterprises, called ECL 2.0.
  • [Data] NTT Data has two OpenStack deployments: one is for internal application developers and the other is to host its enterprise customers.
  • [Resonant] NTT Resonant provides various web based services, including web portal service called “goo,” and has a large internal cloud to host its services.
  • [TX] NTT Technocross provides OpenStack support service and has their internal cloud for developers.
  • [R&D] NTT Software Innovation Center operates internal R&D Cloud, “XFarm,” for researchers and developers.

What has changed in your OpenStack environment since you won the Superuser Awards?

[Com] We have scaled out to accommodate our customers. We also stabilized our system by fixing bugs to improve our service quality, and we overcame the challenge of upgrading OpenStack versions in production.

[Data] We have deployed two large scale OpenStack clusters: one is for internal app development and the other is for external enterprise customers. We have been running these two services since 2017 and are still expanding their scale.

[Resonant] We have doubled our virtual machine (VM) numbers since 2015. Since last year, we have been working on upgrading our environment due to hardware end-of-life. We will be migrating to OpenStack Queens and introducing all-flash storage.

[TX] We have upgraded our OpenStack version, and the deployment was automated using Ansible.

[R&D] In 2015 we had two clusters with 15 nodes, mainly for evaluation purposes. Now we have three clusters in production with 22 nodes hosting R&D workloads.

What is the current size of your OpenStack environment?

[Com] More than 3,000 compute nodes, 30,000 VMs and 80,000 virtual cores in production.

[Data] Approx. 80 compute nodes / 7,000 VMs / 21,000 virtual cores for internal app development. Approximately 20 compute nodes / 600 VMs / 2,000 virtual cores for external enterprise customers.

[Resonant] Our new deployment has about 60 compute nodes hosting around 3,500 VMs.

[TX] About 20 compute nodes.

[R&D] 22 Compute nodes, 1550 VMs, 8700 vCPUs

What version of OpenStack are you running?

[Com] Mitaka and Queens.

[Data] Newton and Queens.

[Resonant] Our new deployment runs on Queens

[TX] Mitaka and Queens

[R&D] Mitaka and Queens

What open source technologies does your team integrate with OpenStack?

[Com] Ansible, Chef, Docker, Contrail (Tungsten Fabric), Fluentd, InfluxDB, Grafana, Telegraf, Kibana, Elasticsearch, Filebeat, Metricbeat, HAProxy, RabbitMQ, Percona

[Data] Ansible, Docker, Grafana, Prometheus, Kibana, Elasticsearch, Metricbeat, Hinemos, HAproxy , Pacemaker, Corosync, MariaDB and RabbitMQ.

[Resonant] Puppet, Ansible, Docker, Docker Compose, Kibana, Elasticsearch, Fluentd, HAProxy, Keepalived, Cloud Foundry.

[TX] Ansible, HAProxy, RabbitMQ, Percona, Kubernetes, Docker, GitLab

[R&D] Sheepdog, Ceph

What workloads are you running on OpenStack?

[Com] More than 1,000 enterprise customers including web application, call center, video conference, data processing, NFVs and IoTs.

[Data] More than 1,000 application development projects are using our internal OpenStack environment for developing and testing their apps. The projects’ target industries cover almost all sectors, such as public, BFSI, manufacturing, healthcare, retail, etc.

For the external customers, OpenStack is mainly used as a platform for various applications in BFSI customers.

[Resonant] 80+ web services with more than 1 billion page views/month

[TX] Development software and OSS (e.g. K8s, Docker)

[R&D] Our cluster is an IaaS service for R&D employees to conduct research and development, and a variety of workloads are running on it.

How big is your OpenStack team?

[Com] About 30 members for OpenStack-related development and tier-3 engineering.

[Data] We have 10-20 OpenStack engineers.

[Resonant] About 8 OpenStack operators and hundreds of engineers developing applications on OpenStack.

[TX] About 100 engineers including our business partners.

[R&D] About 10 engineers including our business partners are operating OpenStack cluster. We also have around 10 developers contributing to OpenStack including 3 PTLs.

How is your team currently contributing back to the OpenStack project?

[Com] Reporting bug tickets and requests for engineering (RFEs), mainly in the Masakari project, and participating in community events to present our experience.

[Data] We’re mainly using Red Hat OpenStack so reporting bugs/challenges to Red Hat, which directly contributes to the community. Also, we have about 10 engineers contributing to the OpenStack project (code review, commits, etc.).

[Resonant] Reporting bugs and sending patches through our business partners.

[TX] Reporting bug tickets.

[R&D] Contributing in various OpenStack projects including Tacker, Nova/Placement, Swift and Masakari. We also report bugs and upstream patches.

Also, NTT Group is actively contributing to regional OpenStack community activities, including organizing meetups and OpenStack Days events, and running Slack groups for operators.

What kind of challenges has your team overcome using OpenStack?

[Com] We increased our capability and agility to integrate our infrastructure with our customers’ systems through OpenStack APIs.

[Data] We have successfully reduced the cost of infrastructure for many application development projects by consolidating their platforms into one. Also, we have successfully provided agility and flexibility to BFSI customers for their digital transformation.

[Resonant] We have managed to reduce server procurement cost by having our infrastructure team conduct collective procurement and assign resources appropriately, instead of having each development team purchase servers separately.

[TX] Compute resources for developers can be prepared on an on-demand basis.

[R&D] We managed to reduce server cost by having our internal IaaS and, at the same time, we were able to advance our capabilities in operating an open source cloud, which we share with other group companies.

 

The post Where are they now? Superuser Awards winner: NTT Group appeared first on Superuser.

by Superuser at August 05, 2020 01:00 PM

August 03, 2020

OpenStack Blog

10 Years of OpenStack – Gary Kevorkian at Cisco

Storytelling is one of the most powerful means to influence, teach, and inspire the people around us. To celebrate OpenStack’s 10th anniversary, we are spotlighting stories from the individuals in various roles from the community who have helped to make OpenStack and the global Open Infrastructure community successful.  Here, we’re talking to Gary Kevorkian from... Read more »

by Sunny at August 03, 2020 03:00 PM

The Official Rackspace Blog

Build a career path for your technical individual contributors — here’s how


 

Not everyone wants to manage people, but many companies don’t provide alternate pathways to progress a career in tech, leaving technical individual contributors (ICs) stuck. “This is a dilemma if you’ve spent years building skills to become a tech expert,” says Rackspace Technology Principal Architect — Product Architecture Nicholas Garratt, who sits on the Rackspace Technology Technical Career Track (TCT) board. “If your only option is to manage other resources, skills atrophy. You become less competitive through moving away from the skills that brought you to a company in the first place. You lack time to do what you once enjoyed.”

Garratt explains experts often want to mentor people, help companies transform and be public advocates — “what they see as the good parts of a leadership role” — but without being estranged from hands-on skills. This is why some large tech companies develop TCTs. These initiatives identify and nurture technical ICs, providing them with a path to a technical leadership role that’s the equivalent of executive-level leadership within an organization. They drive the business forward, but don’t have to manage people nor leave the technical work they love behind. “It’s about empowering and promoting them to help them become leaders — without a move to management,” says Garratt.

But what benefits does this bring to your organization? And can this model work for companies with smaller, more specialized technical teams and less opportunity for growth?

 

Why highly skilled, technical ICs are vital to every organization

Rackspace Technology has over 6000 staff — and yet only around 50 are TCT members. The program is fastidious in its selection process, choosing only the best technical people who’ve reached the pinnacle of their traditional tracks. These professionals have great technical skills and leadership potential, and a desire to do more. As Garratt notes, smaller organizations have a more limited pool of resources to draw from, and tend toward simpler organizational structures. “Developing highly specialized technical IC roles is something that’s not going to be as practically possible for them,” he says, adding that it’s therefore hardly surprising TCTs are far less common in smaller companies.

However, David Porter, Principal Engineer at Rackspace Technology — and another TCT board member — believes similar structures nonetheless have the potential to bring key benefits to smaller organizations. “Retention is the most obvious,” he says. “If your company has fewer than 100 people, you might still have the original engineers. So you’ll want to keep them to train people up and share critical knowledge.”

For medium-sized businesses, Porter suggests fostering technical leaders is more about retaining talent to get “maximum velocity for change” and to involve them in mentoring and evangelism. “In short, try to keep the people who represent the business.” Garratt adds morale is a factor: “When people leave, that’s a poor signal. Oust a respected, trusted individual through a lack of care and attention and that can cause lasting damage to subsequent recruitment efforts.” By contrast, technical ICs in leadership roles strengthen organizations by being seen as “positions to aspire to — that other tech resources look up to for leadership and guidance.”

 

How to identify technical leaders and help them thrive

Creating a technical career track is easier said than done, though, and the specifics will vary depending on your organization’s size and the composition and disposition of your teams. But four fundamental tips should help get you started.

 

1. Identify your top technical resources

Spotting technical ICs with leadership potential might not come naturally if your company has centered on traditional promotion routes, or doesn’t have a history of identifying such individuals. Garratt says they must be “discovered organically.” Within every tech team is that person others turn to, who can provide answers, opinion and thought-provoking conversation. “Don’t force anything. Their status must be organically earned.”

 

2. Get buy-in

According to Porter, it’s vital any program you pursue “goes all the way to the top.” He warns against delegating programs like the TCT, because relevant policy can’t be made if people’s hands are tied. Also, get buy-in from the participants themselves. That might seem obvious, but some companies miss this step. “Have a dialog with them,” says Porter. “Find out what they want to do, and don’t force people into roles they’ve no interest in.”

 

3. Create a plan

Porter says you must offer people more than a pay increase and a fancy title they might just use to get a job elsewhere: “There needs to be a plan — a set of roles that describe what your technical leaders need to do.” Individuals can measure their efforts accordingly, and the company can monitor their progress and promote automatically based on key milestones. “Ensure you have a clear path, to retain people and keep improving them until they reach the highest possible level,” he adds.

 

4. Be flexible and balanced

Garratt reasons that many people who manage tech resources lack direct experience in managing the career paths of technical ICs, and so shifts in thinking might be required. An important one is to “balance their schedule, leaving time to accommodate ad-hoc requests and investigate things.” But he says it’s counter-productive to impose a rigid structure on how and when to engage, and that the technical ICs must be flexible, too, since they’ll have many conversations that aren’t core to their role.

 

Maximize value from people for success

Whether you enact a full TCT program or put in a more modest system to define, develop and empower your technical ICs, the result will add value to your company. As Porter notes, too often the reasoning behind TCTs is about good people “not wanting to be managers,” but he asks: “What if a company loses a person’s value by making them a manager? What if someone would otherwise have been a more effective contributor at accomplishing the business’s goals?”

For medium-sized and smaller companies alike, this is critical. When resources are stretched, make optimum use of them. You want best-of-breed talent, and so must empower your technical leaders and ensure they don’t get stuck in a rut — or leave — because you offer them no other choices. And as Garratt says: “Just because someone doesn’t go down a path to directorship that doesn’t mean they lack valuable insight into improving an organization.”

Failure to appropriately engage with your distinguished technical ICs can lower morale and job satisfaction, potentially driving attrition in a challenging technical skills market. Engage and encourage your high performers who, as Porter says, “manage nothing more than doing the best possible work for your customers,” and you’ll move faster, transform your organization and have the best possible advocates for your business — inside and out.

Our TCT Rackers help us move fast and help our customers to move fast too. Check out our customer stories to hear how our top technical staff can make their expertise work for you.

 


by nellmarie.colman at August 03, 2020 12:18 PM

OpenStack Superuser

Contributing to Open Infrastructure: Everybody Wins, So Let’s Get Started

During the 10th Anniversary celebration of OpenStack last month (what a great huge milestone for the community!), the OpenStack Foundation celebrated some remarkable achievements made during the last decade, not the least of which was the number of code contributions made by the global OpenStack community:

  • 500,000+ changes merged
  • 8,000+ individual developers authoring those changes
  • Every day, 900 changes are proposed and 18,000 tests are run to evaluate them

In fact, based on code contributions, OpenStack ranks as one of the three most active open source projects in the world, along with the Linux kernel and Chromium (the upstream code base for the Chrome browser).

Those metrics are pretty impressive, but they don’t tell the whole story. For starters, they don’t include the growing number of code contributions to other open source projects supported by the OSF, like Airship, Kata Containers, StarlingX, and Zuul. Nor do those metrics include all the other forms of contributions—beyond code—that are made by the community in support of open infrastructure.

It’s important to remember that ‘contribution’ entails more than technical contributions. There are many ways to make valuable contributions to the open infrastructure community: sponsoring and hosting events, serving on working groups and committees, leading meetups, presenting at conferences, contributing to mentoring and diversity efforts, donating to scholarships, voting in elections, serving on the board, just to name a few. Each of these contributions plays a critical role in helping our community thrive and make progress.

City Network comes to mind as a company that contributes extensively in non-technical ways. City Network is the organization behind OpenStack Days Nordic and leads the public cloud group within OpenStack. Johan Christenson, the founder and CEO of City Network, is a member of the OpenStack Foundation board of directors and has volunteered to spearhead an outreach effort to encourage more upstream contributions.

“City Network has been a non-technical contributor to the community for many years, but recently we have been making a concerted effort to become more involved as a technical contributor as well, making some modest code contributions to Stein, Train and Ussuri,” said Johan. “We have been inspired by those who have shown that if you contribute technically, you learn the code and become a better operator. Plus, we believe companies like ours who are making money by using OpenStack in a serious way should be contributing more ourselves.”

Johan also described a collaboration City Network has formed with VEXXHOST.
“One of the companies that we admire for its technical contributions to the community is VEXXHOST,” explained Johan. “Mohammed Naser and his team have been ‘teaching us the ropes,’ helping us to get started on the technical side.”

So, I reached out to Mohammed, and I asked him to let me share some best practices with the broader community as well. Here’s a bit of our conversation:

So, Mohammed, how long have VEXXHOST employees been contributing upstream?

Mohammed Naser: The first time some of our code merged was in 2011, so we’ve been contributing to OpenStack for a long time.

What is your organizational philosophy about contributing upstream to OpenStack?

MN: We don’t believe in keeping “fixes” in our internal team notes. We believe in fixing it upstream. I consider OpenStack to be a core part of our business, so contributing not only helps make OpenStack better for others but helps secure our future as well.

What impact has contributing upstream had on your team’s experience actually using and deploying the software?

MN: Our “upstream-only-first” approach pays dividends for us, because we’re essentially getting free code review from the people who are most knowledgeable about OpenStack. That’s true business value. The reviewer can give suggestions or give assurance that you’re doing the right thing. Or the reviewer might actually find a better solution and explain why and how we should be doing it differently. If you’re making changes on your own, there’s no way you’ll get a cleaner solution. You don’t have to hire someone to evaluate the code. That’s kind of how open source works.

Also, by being heavily involved in upstream, we have a bigger familiarity with the code. We can fix bugs and identify issues more easily, because we can reach out to the community and ask how to fix something. Conversely, by being in close relationship with upstream, we don’t waste time isolated on an island building features that aren’t really needed or won’t be accepted—features that we’ll then end up having to support and maintain by ourselves. Community means we can get valuable feedback before wasting time and resources on coding no one besides us finds valuable.

I can give you many examples of indirect value, too. For instance, there’s a kind of “Contributor’s Karma”—you develop a close relationship with upstream, and they keep you in their thoughts and reach out to you as an operator to get your input. A lot of developers on these projects reach out to us and ask for advice. “Will it make your life easier or harder? Are we doing the right thing here? Can we make it better?” You definitely have a voice that’s heard, even if you are not the one writing the code. It’s a win-win for everyone because we’re building better software by working with each other.

What are some of the biggest obstacles to contributing upstream?

MN: On a corporate level, I think the biggest obstacle is when the management of a company doesn’t prioritize upstream work. Developers tend to want to contribute, but they can find it difficult to participate when management doesn’t prioritize or encourage it.

On an individual level, it can be intimidating to push up code, especially when you are new to the community, because you’re worried about how people might perceive you or judge you on your code. I truly don’t see that happen much in OpenStack, but I’ve seen that kind of reticence among young developers in general.

What advice would you give to organizations who use OpenStack and who are not currently contributing?

MN: My advice is to have some internal policy that is strict on the fact that you won’t run any local patches of OpenStack. That’s Step 0. If you commit to that, you’ll become an upstream company. You have no choice but to always ship things upstream and get it merged in order to put it in production. If you’re running a fork with local patches, your code is not going to get any performance improvements. It’s going to sit stale and start “bit rotting” with time. By contributing everything upstream, you are no longer maintaining any technical debt, you’ve got a whole community evolving and improving your code, and you’re ready to run on release day.

What’s a good way to get started?

MN: Here’s a practical suggestion: Pick one day every month or two, and call it the day where we all go upstream. Just find a bug and fix it, almost like a hack-a-thon, then everyone shares what they did upstream at the end of the day. You dedicate a whole day to this task, incentivize it, and, this way, no one feels like they are wasting their employer’s time by going upstream.

You might also start by unrolling all of the different local patches you’re maintaining in OpenStack. Document the work-arounds and submit bug fix reports. Start getting rid of your technical debt by pushing that upstream. If you’re a few releases behind, you might discover that some of those bugs are already fixed.

And, finally, if you just use OpenStack as is and it does everything you need it to, you can contribute by sponsoring or funding another company who is doing upstream work. That’s also a fair way to do it.

My thanks to Mohammed for sharing his thoughts. I agree with Johan that VEXXHOST is a great role model for contributing upstream, and we need more companies to follow suit.

I’m particularly pleased that Johan is willing to be the standard bearer for a concerted effort to get more folks involved.

“We have a huge community that relies on our collective efforts,” Johan says. “So, here’s the message we need to get out: ‘If you are a developer, please contribute. If you’re a DevOps manager or CxO, instill upstream contributions in your corporate culture.’

“And to everyone in the Open Infrastructure community, I’d appreciate your help in spreading the word about the value of contributing. Let me know if you have ideas, questions, or would like to share your story about how contributing upstream has benefitted you and/or your organization.”

You can reach Johan at johan.christenson@citynetwork.eu. I hope you’ll reach out to him today.

The post Contributing to Open Infrastructure: Everybody Wins, So Let’s Get Started appeared first on Superuser.

by Mark Collier at August 03, 2020 11:00 AM

July 28, 2020

OpenStack Superuser

OpenDev Hardware Automation Recap and Next Steps

Last week, the OpenStack Foundation (OSF) held its second virtual OpenDev event. We’ve been amazed by how many people have joined us online to collaborate across different time zones. This OpenDev event focused on Hardware Automation, including topics around the hardware provisioning lifecycle for bare metal, bare metal infrastructure, networking and network security. Attendees shared various perspectives on the challenges, asked questions about how to improve, and identified next steps that the community can collaborate on collectively to ease these operator challenges.

This virtual event drew over 400 participants located in more than 60 countries and representing 200+ companies, who spent three days sharing their knowledge and discussing their experience of building and operating software that automates their hardware in cloud environments.

OpenDev brought developers and operators together to collaborate across boundaries.

Thanks to the OSF platinum and gold members who are committed to the Open Infrastructure community’s success. Also, thank you to the Programming Committee members: Mark Collier, Keith Berger, James Penick, Julia Kreger, Mohammed Naser. You helped to make these discussions possible!

If you didn’t tune in, or if you did and want a replay, below is a snapshot of the conversations that took place, but I want to encourage you to check out the event recordings as well as the discussion etherpads found in the OpenDev event’s schedule to join the discussion.

Day 1: 

Mark Collier, OpenStack Foundation COO, kicked off the first day by explaining why the OSF chose Hardware Automation as one of the OpenDev topics. According to Help Net Security, the global data center networking market will reach $40.9 billion USD by 2025. Among our users, we’ve been seeing more complex hardware coming into the data centers such as ARM, GPUs for AI/machine learning and FPGAs that people have to manage.

OpenDev: Hardware Automation created an online space for community members to share their best practices and collaborate without boundaries. As more and more open source communities, such as Ironic, MAAS, Tinkerbell and metal3, start to grow and solve these challenges, there is a huge demand for hardware automation. Ironic now has more code merged per day than ever before in its history – showing that people want to work together on these problems more than ever! If you are interested in knowing more about OpenStack bare metal and how Ironic allows users to manage bare metal infrastructure, check out the latest white paper from the OpenStack Bare Metal SIG, “Building the Future on Bare Metal, How Ironic Delivers Abstraction and Automation using Open Source Infrastructure”.
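To give a concrete flavor of that kind of automation, here is a minimal, illustrative sketch using the openstacksdk Python library (assumed to be installed, with a placeholder cloud entry named “mycloud” in clouds.yaml) that lists the bare metal nodes Ironic is managing along with their provision and power states:

```python
import openstack

# "mycloud" is a placeholder; it should match a cloud defined in your clouds.yaml.
conn = openstack.connect(cloud="mycloud")

# Walk the bare metal nodes registered with Ironic and print their current states.
for node in conn.baremetal.nodes():
    print(f"{node.name or node.id}: "
          f"provision_state={node.provision_state}, power_state={node.power_state}")
```

The same SDK exposes calls for moving nodes through their provision states, which is the kind of hook that higher-level lifecycle automation can build on.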


Part One: Hardware Provisioning Lifecycle for Bare Metal

It’s common for users, regardless of their scale, to have systems for IT Asset Management (ITAM), Datacenter Infrastructure Management (DCIM), IP Address Management (IPAM) and a Configuration Management Database (CMDB). Part One of OpenDev’s topic on the hardware provisioning lifecycle for bare metal was moderated by Mohammed Naser, VEXXHOST CEO and OpenStack Technical Committee (TC) chair. Mohammed kicked off the discussion by asking everyone to share how they organize data infrastructure and IP addresses inside their organization, what they wish they could do better and why they have not switched to automation. Later, community members from Verizon Media, China Mobile, CERN, SchwarzIT, VEXXHOST, and Red Hat shared their various experiences on vendor selection, intake and deployment to the facility floor.

At the end of this discussion, participants signed up to collaborate further after the event in areas such as a set of SOPs/documents on managing keys in a Trusted Platform Module (TPM), creating a matrix of firmware installation processes per vendor and platform, and building a common database of how to upgrade firmware automatically. If you are interested in discussing these topics and collaborating with fellow operators, please sign up here, line 205 & line 217.

Part Two: Hardware Provisioning Lifecycle for Bare Metal

Part Two was moderated by James Penick, Verizon Media Architect Director. James continued the discussion on hardware provisioning lifecycle for bare metal, BIOS/firmware automation, how to keep the hardware secure, and how to detect attacks. The topics of this discussion included day-to-day consumption of hardware and power & thermal optimization automation. If you are interested in continuing the discussion on power affinity/grouping/weighting after the event, make sure to sign up on the etherpad, line 245.

As can be expected, this session included a lot of discussion about end-to-end hardware provisioning lifecycle for bare metal / cradle to grave for hypervisors. Check out the full discussion notes on OpenDev: Hardware Automation Day 1 etherpad, and watch the day 1 discussion recording.

Day 2: 

Part One: Bare Metal Infrastructure

Mohammed Naser returned as moderator and opened up the discussion on bare metal infrastructure by asking attendees for their own definition of “hyperconverged,” to make sure everyone was on the same page. Arne Wiebalck, CERN Computing Engineer, gave two use cases considered “hyperconverged”: massive storage systems developed in-house across thousands of servers, and combining Ceph with each cell to achieve low-latency IO for the VMs. Community members from China Mobile shared a use case on how different types of services can be converged within one type of hardware in the edge scenario.

Later, community members dived into discussions about autoscaled bare metal for cloud infrastructure and servers for serverless workloads. If you are interested in forming a working group to look at some of these use cases and models, sign up here, line 206.

Part Two: Bare Metal Infrastructure

After a short break, Julia Kreger from Red Hat moderated the second half of the discussion on the topic of consuming bare metal infrastructure to provision cloud-based workloads. Attendees from various companies gave use cases for turning ‘unused’ bare metal into cloud infrastructure orchestration. If you are interested in continuing the discussion on requirements regarding preemptable/bare metal workloads, please sign up here, line 246.

After the discussions on managing hardware using open standards such as Redfish and IPMI, it was apparent that many people are using both to interact with their hardware. Questions such as why users care which protocol to use and what sort of issues people are encountering prompted attendees to share their experiences on how to make the job easier. Check out the full discussion notes on the OpenDev: Hardware Automation Day 2 etherpad, and watch the day 2 discussion recording.
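For readers who have not poked at Redfish directly, the sketch below shows roughly what that interaction looks like. It assumes Python with the requests library, a reachable BMC at a placeholder address, and placeholder basic-auth credentials; it simply walks the standard /redfish/v1/Systems collection and prints each system’s power state.

```python
import requests

# Placeholder BMC endpoint and credentials; substitute your own.
BMC = "https://bmc.example.com"

session = requests.Session()
session.auth = ("admin", "password")
session.verify = False  # many BMCs ship self-signed certs; verify properly in production

# The Systems collection is part of the standard Redfish data model.
systems = session.get(f"{BMC}/redfish/v1/Systems", timeout=10)
systems.raise_for_status()

for member in systems.json().get("Members", []):
    system = session.get(f"{BMC}{member['@odata.id']}", timeout=10).json()
    print(f"{system.get('Name')}: PowerState={system.get('PowerState')}")
```

An equivalent IPMI check typically shells out to ipmitool instead, which is part of why the “which protocol?” question keeps coming up.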

Day 3: 

Networking and Network Security

Under the umbrella of Hardware Automation, there is a wide variety of technologies, approaches, and solutions to networking. Open Infrastructure allows us to embrace these differences and leverage our common ground. This discussion about networking was moderated by Mark Goddard, StackHPC Cloud Engineer, with active speakers from China Mobile, Ericsson Software Technology, Verizon and more. The first half of the discussion was around network architectures and network automation. After a short break, in continuing with the theme of hardware automation, the attendees dug deeper into network security and how it relates to hardware and automation.

The discussion about network security, moderated by Ian Jolliffe, Wind River Vice President of Research and Development, explored questions such as how we are operationalizing the developer workflow to ensure network security in a DevOps world. Attendees shared their processes around automated firewall management as well as the security change management and tooling to do batch configuration or continuous configuration management of firewalls. Check out the full discussion notes on the OpenDev: Hardware Automation Day 3 etherpad, and watch the day 3 discussion recording.

Wrap Up:

To wrap up, check out the etherpad that includes the OpenDev event feedback and follow up activities from the OpenStack Bare Metal SIG. We encourage you to continue the discussion at the Bare Metal SIG or sign up on the discussion etherpads in the coming weeks after the event. 


Next Steps:

The goal with the OpenDev events is to extend this week’s learnings into future work and collaboration, so Jonathan Bryce and the event moderators wrapped up the event to discuss next steps. These include:

Upcoming OpenStack Foundation (OSF) Events:

OpenDev:

Open Infrastructure Summit:

  • The annual Open Infrastructure Summit is going to be virtual (and free!). Register now and join the global community at the virtual Open Infrastructure Summit on October 19-23, directly from your browser!
  • The Call For Presentations (CFP) is open now! The CFP deadline is August 4 at 11:59pm PT, so start drafting your presentations and panels around Open Infrastructure use cases like AI/Machine Learning, CI/CD, Container Infrastructure, Edge Computing and of course, Public, Private and Hybrid Clouds.

The post OpenDev Hardware Automation Recap and Next Steps appeared first on Superuser.

by Sunny Cai at July 28, 2020 07:00 PM

StackHPC Team Blog

Kayobe & Kolla - sane OpenStack deployment

Coming up at the London and Manchester (virtual) OpenInfra meetup on Thursday, July 30th, 6pm-9pm UK time (17:00-20:00 UTC): Mark Goddard, Kolla PTL and StackHPC team member, will be talking on "Kayobe & Kolla - sane OpenStack deployment".

Mark at RCUK Cloud Workshop 2019

In this talk Mark will introduce Kayobe, the latest addition to the OpenStack Kolla project. Learn how Kayobe uses Bifrost to support bare metal provisioning, and extends Kolla Ansible to offer an end-to-end cloud deployment tool.

Mark will be joined by:

  • Ildikó Vancsa and Gergely Csatari, who will present on Edge ecosystem, use cases and architectures.
  • Belmiro Moreira, who will present on 7 years of CERN Cloud - From 0 to 300k cores.

Add to your calendar.

If you're interested in finding out more about OpenStack and Kayobe, check out our OpenStack HIIT training courses.

Get in touch

If you would like to get in touch we would love to hear from you. Reach out to us via Twitter or directly via our contact page.

by Stig Telfer at July 28, 2020 09:00 AM

July 27, 2020

OpenStack Superuser

Submission Tips for the Virtual Open Infrastructure Summit

The world of Open Infrastructure is broad. At the Open Infrastructure Summits, presentations cover 30+ open source projects, diverse use cases like 5G, container orchestration, private cloud architectures, and CI/CD across a large set of industries. This is my first Open Infrastructure Summit, so I know how daunting it can be to look at the set of topics and wonder where to begin.

If you haven’t heard, the upcoming Summit hosted by the OpenStack Foundation (OSF) will be virtual. The great news is that this now opens up the opportunity to speak (and attend!) for folks who have never been able to travel to an in person Summit—like me!

To help folks who are submitting sessions for the first time—or those returning who need a fresh idea—we have asked the Summit Programming Committee to share specific topics within their Tracks on what kind of presentations they want to be submitted.

The deadline to submit a session is Tuesday, August 4 at 11:59pm PT—see what this means in your time zone.

5G, NFV & Edge Computing

  • Bare metal provisioning and pre-deployment validation are a couple of edge topics that need attention.
  • Real-world use cases and stories about how 5G and Edge Computing are being deployed and changing how people are approaching communications, industrial or health care.
  • Where the edge can be taken, provide clarity on definitions, and where the future is.
  • 5G, NFV, Edge – Impact of 5G Rel16, ETSI NFV4, cloud-native micro data centers efforts on:
    • Managed Container Infrastructure Object (MCIO) Pods
    • Interfaces IFA040 and APIs Changes for MANO Orchestrator & VIM framework including OpenStack CNTT Rx1s and Kubernetes Rx2s.
    • Changes to VNF, VNFC, CP and TOSCA/Helm templates/charts for VNFD, VDU, CPD schemas, & CRDs. Updating Config, FCAPS on Deployments, DaemonSets, Clusters, Namespaces starting Bare Metal hosts, nodes, Devices like GPU, FPGA, NVMe, and other heterogeneous components.
    • Open Infrastructure role in 5G core to Edge and Edge to Nomadic to Mobile Gateways and Consumer & Enterprise for 5G, IoT, and Edge use cases.
    • Performance optimization for latency-sensitive, bandwidth sensitive and compute-intensive network functions including service function chaining, slicing, mapping, placements.
    • Edge platform automation and scheduling for parallel life cycle management along with local loop monitoring and controls.
    • Data-driven architectures, design, and enabled through hybrid VNF and Container workloads.
    • Effect of merchant Silicon for NPU, GPU, DSPs in programmable solutions on in-line accelerations, and user/kernel offloads for latency optimizations.
    • Language (py3.8, golang 1.4, DPC++, Java18, SYCL) Compilers and Dynamic Builds and Multi-CI and Mid-Stream integrations+Operations for both Upstream codes and Downstream deployments for Edge, NFV and 5G stacks.
    • Impact of pandemic & glocalization of the supply chain on Open technology for different Edge, NFV & 5G markets.
  • Edge computing Stories
  • Typical use cases for different industries
    • telco
    • manufacturing
    • financial banking
    • others
  • How users use software/solutions about edge computing?
    • StarlingX
    • KubeEdge
    • K3s
    • Akraino
    • others
  • What are the biggest challenges for building an edge cloud?
    • security
    • networking
    • small footprint
  • The relationship between 5G and edge computing
  • How to build a mobile edge computing (MEC) platform for telco to support 5G
  • Edge application innovations
  • How to optimize the edge cloud to meet the requirement, such as low latency and high bandwidth?
  • AI in edge computing environments
  • Edge computing solution based on OpenStack, NFV issues when big scalable deployment, container-based VNF application running on OpenStack system.

Programming Committee: 

  • David Paterson, Sr. Principal Software Engineer, Dell Technologies
  • Ian Jolliffe, VP R&D, Wind River
  • Prakash Ramchandran, Technical Staff, Dell
  • Shuquan Huang, Technical Director, 99Cloud
  • Xiaoguang Zhang, Project Manager, China Mobile

Artificial Intelligence (AI), Machine Learning & High Performance Computing (HPC)

  • Submissions that address (but are not limited to) the following scenarios on an OpenStack cloud:
    • Implementing a test bench or frameworks for running HPC  workloads for scientific research.
    • Benchmark various machine learning algorithms, which show performance metrics on various use cases.
    • How to prevent/eliminate biases in AI/machine learning predictions.
    • Applying AI/machine learning algorithms in the healthcare domain such as predicting cancer for early treatment.
    • Using the machine learning technique to prevent financial frauds
    • Using machine learning algorithms such as NLP and supervised learning to improve the software development process and quality.
    • Implementing AI/machine learning technique to maintain social distances or related preventive mechanisms in fighting COVID-19.
  • The challenges of expanding and scaling a cloud while maintaining availability.
  • Maximizing the performance of workloads on the cloud
  • The use of GPUs and FPGAs in the cloud.
  • Innovative uses of machine learning both in and on the infrastructure, as well as basic topics that introduce users to AI and machine learning’s particular infrastructure quirks.

Programming Committee:

  • Armstrong Foundjem, Researcher, École Polytechnique Montréal
  • Alexander Dibbo, Cloud Architect, Science and Technology Facilities Council (UKRI)
  • Hector Augusto Garcia-Baleon
  • Nick Chase, Head of Technical Content, Mirantis

CI/CD

  • CI/CD innovation to support scenarios, including telco, edge, public and private cloud.
  • Major technical development to address challenges in modern CI/CD
  • CI/CD for cross-vendor/cross-community/cross-toolchain scenarios
  • How reducing test duration can help to improve developer velocity on OpenStack itself.
  • What consumers of OpenStack are doing with OpenStack for their CI.
  • Sharing the knowledge from their CI/CD downstream or adjacent upstream among different communities.
  • How we can speed up execution time and, more importantly, improve stability so that we keep the development process smooth, less prone to breakage, and backed by quality verification.

Programming Committee: 

  • Chris MacNaughton, Software Engineer, Canonical
  • Ghanshyam Mann, Cloud Consultant, NEC
  • Qiao Fu, Technical Manager, China Mobile

Container Infrastructure

  • Next-generation container orchestration – a world post-Kubernetes…
  • Securing containers
  • Serverless
  • Innovative technologies and practices in the container and cloud-native area, including container runtimes, image building and delivery improvements, container networking, and container practice in large organizations, public cloud services, or edge computing environments.
  • How the telecommunication industry is evolving into cloud-native.
  • Gap analysis between current implementation and telco requirements on container infrastructure/cloud-native platform.

Programming Committee:

  • Matt Jarvis, Director of Community, D2iQ
  • Qihui Zhao, Project Manager, China Mobile
  • Xu Wang, Senior Staff Engineer, Ant Financial Services Group

Getting Started

  • How to get started in contributing, whether code or docs. Getting involved from the operations side. Diversity and inclusion talks about getting started and being a part of the community.
  • Talks that break down complex components of OpenStack
  • Talks that introduce users/developers to how to adapt and contribute.
  • First-time speakers showing their own experiences adopting OpenStack, the challenges and their tips on addressing them
  • Developers from the community presenting on how new users can contribute upstream
  • Speakers from OpenStack governance talking about how to get involved and how to seek help (communication channels, etc.)
  • A talk showing a user’s use-case of migrating from commercial platforms generically to OpenStack
  • Help more people get to know OpenStack and the Four Opens.

Programming Committee: 

  • Amy Marrich, Principal Technical Marketing Manager, Red Hat
  • Mohamed Elsakhawy, Operational Lead / System Administrator III, Compute Canada / SHARCNET
  • Zhiqiang Yu, Open Source Program Manager, China Mobile Research Institute

Hands-on Workshops

  • Material that is engaging and in an environment that promotes questions and discussions.
  • “Taste of tech” – hands-on sessions designed to allow the experience of new and emerging tech without the intricacies of installation and deployment.
  • Advanced sessions – troubleshooting, scaling, CI/CD demonstrations, maybe even sessions on application modernization and migration, i.e., the monolithic-to-container model.

Programming Committee: 

  • Keith Berger, Senior Software Engineer, SUSE
  • Mark Korondi
  • Russell Tweed, Infrastructure Architect, Red Hat

Open Development

Here is what previous Planning Committees looked for:

  • Content showcasing the power of the “Four Opens”
  • What is open development
  • Why do we need open development
  • How does open development work
  • What is the relationship between open development and Conway’s law
  • How community success relates to successful development
  • How to participate in open development in modern open-source software projects

Programming Committee: 

  • Meghan Heisler, AT&T

Private & Hybrid Cloud

Here is what previous Planning Committees looked for:

  • Private cloud and hybrid cloud implementation success stories, from small to large-scale (20 to 1,000 nodes), especially for industries with stringent requirements such as financial services and government
  • Different methods to deploy OpenStack;
    • How to scale and what, if any, are the bottlenecks?
    • Large deployment references
    • Maintaining cloud deployments
    • User stories, devs updates
    • New approaches for services integration
  • Cross-community collaboration
  • Presentations from actual superusers (providing user experience, testing and upstream at the same time)
  • OpenStack operational challenges

Programming Committee: 

  • Belmiro Moreira, Cloud Architect, CERN
  • Narinder Gupta, Lead Partner Solutions, Canonical
  • Kenneth Tan, Executive, Sardina Systems
  • Danny Abukalam, Solutions Architect, SoftIron

Public Cloud

  • Content for operators of public clouds as well as end user-driven content.
  • For operators, it’s super interesting to hear how others have solved things like billing, image life cycle management, monitoring, auto-healing and other operational concerns that become super important for keeping track of your cloud(s).
  • Talks on how to utilize OpenStack in a great way: what tools are out there (Terraform, for example), how to use them, and so on.
  • Use case driven talks on how other end users are designing their infrastructure, how they automate that, how they leverage Kubernetes in OpenStack etc.
  • Adaptation of OpenStack clouds with other frameworks like Kubernetes, cloud resource selling to end customers, global cloud initiative.
  • Adoption of containers with ease of use
  • The future of OpenStack and public cloud co-existence
  • Multi-cloud and ease of application portability
  • Domain-specific computing
  • Case studies on adoption and knowledge sharing
  • Cloud in the emerging world and how others can help them catch up

Programming Committee: 

  • Frank Kloeker, Deutsche Telekom AG
  • Krishna Kumar, Cloud Architect Sr., Accenture
  • Tobias Rydberg, Senior Developer, City Network

Security

  • Security content providing practical strategies to mitigate cloud security vulnerabilities and strengthen security posture for cloud users.

Programming Committee: 

  • Ashish Kurmi, Senior Cloud Security Engineer II, Uber Technologies

The post Submission Tips for the Virtual Open Infrastructure Summit appeared first on Superuser.

by Helena Spease at July 27, 2020 11:09 PM

OpenStack Blog

10 Years of OpenStack – Alan Clark at SUSE

Storytelling is one of the most powerful means to influence, teach, and inspire the people around us. To celebrate OpenStack’s 10th anniversary, we are spotlighting stories from the individuals in various roles from the community who have helped to make OpenStack and the global Open Infrastructure community successful.  Here, we’re talking to Alan Clark from... Read more »

by Sunny at July 27, 2020 03:00 PM

July 26, 2020

The Official Rackspace Blog

Creating great customer experiences, in a new challenging era


It’s always critical to treat your customers with the respect they deserve. But in these challenging times, you can respond in ways that really set your business apart. With some effort, you can improve your customer experience and show your customers, in tangible ways, that you’re committed to achieving the best results possible for them.

Your business has something to gain from improving your customer experience as well. In fact, 36% of companies that excel in customer experience report they exceed their top business goal by a significant margin. By comparison, only 12% of mainstream companies achieve that outcome.

Through more than a decade of working with customers as a system and network administrator and now as a technical onboarding manager, I’ve identified five keys to creating a great customer experience. You can start today to put them into practice, for yourself and your teams:

 

1. Listen to your customers

This is where the “we’re in it together” sentiment comes to life. You have to really listen to your customers and get to know their needs. Come alongside and learn about their struggles and the goals they want to achieve. Remember that you’re not in this for yourself; you’re in it for them. Work closely with your customers to make sure they’re on track to achieve their desired outcomes.

 

2. Build a great team

Every day, surround yourself with people who can come together to reach common goals. I’m thankful that every person I work with — from the brilliant engineers and architects, to the sales team, project managers and customer success managers — is passionate about delivering the best customer experience possible. From jumping on calls at 5 a.m. or 11 p.m. or helping on weekends to make sure we’re doing right by our customers — it’s that team spirit that can keep everyone moving forward and making a real difference.

 

3. Stay one step ahead

As the first point of contact with customers, technical onboarding managers and project managers need, more than ever, to stay one step ahead. Be aware of possible issues that might come up — and when something does, bring it to everyone’s attention early so you can present alternatives quickly.

 

4. Save your customers money

Customers are especially sensitive to budget changes right now. This is a challenge we must overcome by keeping projects on track and on time, and by setting the appropriate expectations with stakeholders. Transparency is key here. Always look for solutions that are free-of-charge or that will incur the absolute minimum price increase possible for your customer.

 

5. Weigh the real cost

Especially in this difficult economic climate, try to be flexible with those customers who are struggling financially. It’s usually worth it to lose a little short-term revenue in order to keep your customer from going elsewhere and losing their business altogether. By listening to customers and giving them the help they need, your actions will speak louder than words.

At Rackspace, we’ve been focused on creating great customer experiences for over 20 years. We even coined it Fanatical Experience™. Every day, we combine our obsession for customer success with our passion for technology. Get to know us.  

 


by nellmarie.colman at July 26, 2020 08:52 PM

What is a solution architect? Get to know these problem-solving enthusiasts.


 

Solution architects (SAs) play a major role in solving today’s most-complex modern business challenges. In the last few years, the rise in demand for SAs has exploded, fueled by a growing number of transformation projects and, more recently, business continuity and optimization projects. They are now a familiar figure in many organizations, providing a technical mind with a broad skill set to help organizations identify the best path forward with technology.

According to Gartner, SAs combine guidance from different architecture viewpoints — including business, information and technical — to find solutions. You can think of a SA like a translator who quickly interprets certain information and turns that into a technical solution.

SAs devote their lives to understanding your business challenges and needs, but how much do you know about them? Isn’t it time we all got to know the people behind the tech, and see them in a different light, without the documentation and whiteboard pens?

 

Hear directly from solution architects

In our latest CloudSpotting podcast episode, “A day in the life of a solution architect,” we sit down with SAs from Rackspace Technology EMEA. Listen in to discover what it means to be a SA, what makes them tick and what attracts them to extremely complex business challenges.

CloudSpotting hosts Alex Galbraith and Sai Iyer, who are both UK-based SAs themselves, are joined by SAs Simon Roberts (UK), Sashka Ninchovska (Netherlands) and Markus Schmid (Germany). Tune in to learn:

  • What exactly is a SA
  • What motivates a SA and makes them seek out chaos
  • The challenges of being a SA, such as partial information, restricted budgets and conflicting requirements
  • What value they add in the chain of turning an idea into a finished solution
  • The value of peer reviewing and drawing from the village well of knowledge
  • How to get into this field of work and what a career path might look like

 

Solution architects are problem solvers

Sai explains the different hats of the role during a project, “You need to be not just an architect, but a [subject-matter expert] and a technology leader of sorts, as you need to lead the solution implementation with multiple architects.”

Simon explains that the consultative element can sometimes feel odd if you don’t know some of the specifics, but you need to apply logic. “Part of it is applying logic, and if you are thrown something where you don’t know the particular product or service you need to know how to manage that, which can feel like tapdancing when you don’t know what the tune is. You also need to show humility and let customers know when you have exhausted all the information you have, and you need to gather more information.”

 

How to succeed as a solution architect

Alex gives his perspective of what he thinks helps you succeed in this role. “You can have all the technical nous in the world, but the best architects are those people you want to emulate because of how they work, run meetings and ask questions. You pick up these skills through shadowing people and those human connections.”

Simon delves deeper to explain what drives SAs and attracts them to extremely complex business challenges. “The dopamine hit of getting the solution right is something that you want to go back and experience again, and you seek out larger, more complex and higher value problems to solve. That’s why we do it.”

 

Listen to the episode: https://cloudspotting.fireside.fm/25

by nellmarie.colman at July 26, 2020 08:41 PM

July 23, 2020

OpenStack Superuser

Inside open infrastructure: The latest from the OpenStack Foundation

Spotlight on: 10 years of OpenStack

Millions of cores, 100,000 community members, 10 years of you. Powering the world’s infrastructure, now and in the future. OpenStack, one of the top three most active open source projects, marks its 10th anniversary this month. Thank you for the 10 incredible years of contribution and support.


Vietnam Open Infrastructure User Group celebrating 10 years

Launched in 2010 as a joint project between Rackspace and NASA, OpenStack started as two open source projects to orchestrate virtual machines and object storage. It has evolved into one of the three most active open source projects in the world.

Supported by a global community of over 105,000 individuals, the OpenStack project has accomplished several big milestones:

  • 21 on-time releases, from “Austin” to “Ussuri”
  • 451 Research projects a $7.7 billion USD OpenStack market by 2023, citing the most growth in Asia (36%), Latin America (27%), Europe (22%), and North America (17%).
  • From two projects in 2010 to 42 projects in 2020
  • Over 10 million cores in production
  • 500,000+ changes merged
  • 8,000+ individual developers authoring those changes
  • Every day, 900 changes are proposed and 18,000 tests are run to evaluate them

The community hosted a 10 Years of OpenStack virtual birthday celebration today at 8 am PT (1500 UTC). Watch the birthday celebration recording here.

Check out the 10 years of OpenStack blog and celebrate the incredible 10 years with us!

OpenStack Foundation news

Airship: Elevate your infrastructure

  • Congratulations to the 2020 Airship committee members elected last month:
    • Alex Bailey
    • Alexander Hughes
    • Alexey Odinokov
    • Jan-Erik Mangs
    • John (JT) Williams
  • The Technical Committee is excited to announce the 2020-2021 Working Committee election! This election will follow the same two-week cycle that the Technical Committee election followed last month.
    • Nomination Period: 20-26 July
    • Voting Period: 27 July-02 August
  • As the community grows and new contributors want to get involved, we hope to have a consistent definition to help familiarize them with the project. Contribute to the Airship glossary by defining and adding terms that we can use to educate future contributors

Kata Containers: The speed of containers, the security of VMs

  • The community is planning to set up a meeting to review and clean the Kata open issues backlog. The idea is then to add additional labels to the cleaned backlog for identifying the high priority ones being actively worked on and track them closely. We can then help focus on completing those issues and unblocking them if needed. Here is a doodle poll with 5 proposed time slots for next week: 
  • We have just released the Kata Containers 2.0.0-alpha2, 1.12.0-alpha0, 1.11.2, and 1.10.6 releases. The 1.10.6 and 1.11.2 stable releases include the latest stable fix backports and users are encouraged to upgrade.
  • Check out The Road to Kata Containers 2.0 on The New Stack!
  • As the community grows and new contributors want to get involved, we hope to have a consistent definition to help familiarize them with the project. Contribute to the Kata Container glossary by defining and adding terms that we can use to educate future contributors.

OpenStack: Open source software for creating private and public clouds

  • The Victoria development cycle is now well underway. With more than 10,000 changes already merged, we are past the victoria-1 milestone, headed toward the victoria-2 milestone on July 30, for a final release planned October 14, 2020.
  • Starting August 1st, the entire OpenStack community will be represented by a single elected body, including developers, operators, and end-users of the OpenStack software, and inclusive of all types of contributions to the project. The User Committee will no longer operate as a separate entity from the Technical Committee. Read the updated UC charter for more details.
  • As the community grows and new contributors want to get involved, we hope to have a consistent definition to help familiarize them with the project. Take a look at the current OpenStack glossary and add missing terms or definitions to the etherpad that we can use to educate future contributors.

StarlingX: A fully featured cloud for the distributed edge

  • The StarlingX community is now in the final testing and bug fixing phase of the 4.0 release cycle. The release is planned to come out in a few weeks.
  • As the community grows and new contributors want to get involved, we hope to have a consistent definition to help familiarize them with the project. Take a look at the current StarlingX glossary and add missing terms or definitions to the etherpad that we can use to educate future contributors.

Zuul: Stop merging broken code

  • We’re looking for folks to help draft a Wikipedia entry about Zuul, so if you’re interested please lend a hand.
  • As the community grows and new contributors want to get involved, we hope to have a consistent definition to help familiarize them with the project. Take a look at the current Zuul glossary and add missing terms or definitions to the etherpad that we can use to educate future contributors.

Check out these Open Infrastructure Community Events!

For more information about these events, please contact denise@openstack.org

Questions / feedback / contribute

This newsletter is written and edited by the OSF staff to highlight open infrastructure communities. We want to hear from you! If you have feedback, news or stories that you want to share, reach us through community@openstack.org. To receive the newsletter, sign up here.

The post Inside open infrastructure: The latest from the OpenStack Foundation appeared first on Superuser.

by Helena Spease at July 23, 2020 06:00 PM

July 21, 2020

OpenStack Superuser

Thank You to the Last Decade, Hello to the Next!

Millions of cores, 100,000 community members, 10 years of you. Powering the world’s infrastructure, now and in the future. Thank you for the 10 incredible years of contribution and support.

Wow, 10 years ago today, OpenStack was first launched. Many amazing tech milestones happened in 2010. Steve Jobs launched the first iPad. Sprint announced its first 4G phone. Facebook reached 500 million users. For us perhaps, one of the most important milestones was the birth of the OpenStack project in 2010.

In real time, the pace of change in the tech industry often feels glacial, but looking at things over a 10-year span, a lot of stark differences have emerged since 2010. So before you put your AirPods back in, fire up Fortnite and watch a new show on Disney+, let’s take a look at how OpenStack has transformed the open source industry in the past 10 years.

The Decade Challenge—OpenStack Edition

What began as an endeavor to bring greater choice in cloud solutions to users, combining Nova for compute from NASA with Swift for object storage from Rackspace, has since grown into a strong base for open infrastructure. In 2010, three years before the OpenStack Summit Hong Kong, “the cloud” was barely a thing. Having a standardized, open source platform for public and private clouds was a dream. Now OpenStack is not only the de facto standard, but also a massive market that critical industries rely on.

Looking back to OpenStack in 2010, we were ecstatic to celebrate our first year of growth from a couple dozen developers to nearly 250 unique contributors in the Austin release (the first OpenStack release). Fast forward to 2020: OpenStack ranks among the top three most active open source projects in the world, and is the most widely deployed open source cloud infrastructure software. Developers from around the world work together on a six-month release cycle with developmental milestones. 10 years in, OpenStack has had:

  • 21 on-time releases in 10 years, from “Austin” to “Ussuri”
  • 451 Research projects a $7.7 billion USD OpenStack market by 2023, citing the most growth in Asia (36%), Latin America (27%), Europe (22%), and North America (17%).
  • From two projects in 2010 to 42 projects in 2020
  • Over 10 million cores in production
  • 500,000+ changes merged
  • 8,000+ individual developers authoring those changes

Every day, 900 changes are proposed and 18,000 tests are run to evaluate them.

Last year was a busy year for the open infrastructure community. The community merged 58,000 code changes to support the output of more open source infrastructure software. In the same year, with 100,000 members and millions of visitors to the OpenStack website (check out the new homepage that we just launched, yay!), the community has made significant progress toward OpenStack’s projected $7.7 billion worldwide commercial market and a combined OpenStack and container market of over $12 billion by 2023.

The achievements of the global community in the past decade have been phenomenal. We are honored to work with many enthusiastic and talented individuals from all around the world to achieve the same goal, and also work together to build and operate open infrastructure.

Top 10 Favorites of OpenStack

There are so many milestones to celebrate in the past 10 years of OpenStack with our community. Here we have gathered some of the top 10 favorites of OpenStack from the community members around the world:

#ForTheLoveOfOpen

Throughout OpenStack’s history, we have been committed to creating an open culture that invites diverse contributions. Looking back 10 years, we didn’t have enough experience to define what “openness” meant to us. In 2010, we described open source with the freedom to innovate, consume and redefine. Even though we didn’t have our “perfect” definition of “open” and “open source”, we drafted three commitments to our community in 2010:

COMMITMENT #1: We are producing truly open source software.

COMMITMENT #2: We are committed to an open design process.

COMMITMENT #3: All development will be done in the open.

OpenStack was started with the belief that a community of equals, working together in an open collaboration, would produce better software, more aligned to the needs of its users and more widely adopted. It was therefore started from day 0 as an open collaboration model that includes as many individuals and organizations as possible, on a level playing field, with everyone invited to design open infrastructure software. It was from these conditions that “The Four Opens” were born:

  • Open Source
  • Open Design
  • Open Development
  • Open Community

After 10 years, the Four Opens have proved very resilient, consistently managing to capture the OpenStack way of doing upstream open source development. They are instrumental in the success, the quality and the visibility of the OpenStack software.

Collaboration without Boundaries

Stackers also collaborate without boundaries. Truly representative of cross-project collaboration, this Open Infrastructure umbrella now encompasses components that can be used to address existing and emerging use cases across data center and edge. Today’s applications span enterprise, artificial intelligence, machine learning, 5G networking and more. Adoption ranges from retail, financial services, academia and telecom to manufacturing, public cloud and transportation.

There are more examples of community collaboration than we can count. Here is a list of the OpenStack cross-community collaboration and integration highlights from the past year:

Where Are They Now 

Storytelling is one of the most powerful means to influence, teach, and inspire the people around us. To celebrate OpenStack’s 10th anniversary, we are spotlighting stories from the individuals in various roles from the community who have helped to make OpenStack and the global Open Infrastructure community successful. Check out the stories from Tim Bell, Yumeng Bao and Prakash Ramchandran on how they got started with OpenStack and their favorite memories from the last 10 years of OpenStack.

Stay tuned for more content from the community members on the OpenStack stories throughout the year!

Thank You

None of it would be possible without the consistent growth of the OpenStack community. The momentum of growth is not slowing down. Today, we are not only celebrating our community’s achievement for the past 10 years, but also looking forward to the continuous prosperity of the community in the next 10 years.

Thanks to the OpenStack Foundation platinum members, gold members and corporate sponsors who are committed to the Open Infrastructure community’s success. Check out the blogs from China Mobile, Ericsson and Red Hat on what 10 years of OpenStack meant to them and how they have contributed to OpenStack in the past decade.

Check out the video from Red Hat about 10 years of OpenStack:

Stay tuned for more content from the ecosystem companies on what 10 years of OpenStack means to them throughout this year!

Looking Forward

“When the OpenStack project launched in 2010, few people imagined use cases for cloud as diverse as edge, containers, AI and machine learning,” said Jonathan Bryce. It’s crazy to see how far we’ve come over the past 10 years. Let’s remember how we got here — and realize that many of our milestones today may well be a fuzzy memory come 2030. Let’s build on the experience of the last years to continue to grow our software communities. Here’s to another exciting 10 years together as a community.

Get Involved

Follow the #10YearsofOpenStack hashtag on Twitter and Facebook and share your favorite memories of OpenStack with us!

Follow us on

Twitter: twitter.com/OpenStack

Facebook: facebook.com/OpenStack

WeChat ID: OpenStack

The post Thank You to the Last Decade, Hello to the Next! appeared first on Superuser.

by Sunny Cai at July 21, 2020 03:00 PM

VEXXHOST Inc.

How To Choose An Open Source Project

In the world of software development, there is a progressive movement better known as the open source community. The open design and development concept helps everyone through a multitude of available open source projects. Consequently, users don’t have to recreate projects that have already been developed by another community member.

With a mix of developers contributing to open source projects, there is a variety of experience and expertise clubbed together. Therefore, projects that have been tinkered with many times become more secure and robust. However, not all open source projects are created equal.

These projects vary in terms of security and agility. Thus, choosing the correct one for your use case can get tricky. To help you work out this problem, you can focus on the following key areas:

Users

User statistics are a great indicator of how good an open source project is. If a large number of people are using the project, it is usually a good sign. Other indicators are the number of downloads, reviews, comments, contributors, forks, etc.

The important thing is to deep dive into who the users are. Is the development team using their own project? That is a good sign for you to pick it up.
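If the project is hosted (or mirrored) on GitHub, several of the signals mentioned above (stars, forks, open issues and recent activity) can be checked programmatically. The sketch below is purely illustrative; it assumes Python with the requests library and uses openstack/ironic as a stand-in for whichever repository you are evaluating.

```python
import requests

# Replace with the repository you are evaluating; "openstack/ironic" is only an example.
repo = "openstack/ironic"

resp = requests.get(f"https://api.github.com/repos/{repo}", timeout=10)
resp.raise_for_status()
data = resp.json()

# A few quick health signals: popularity, forks, open issues and recency of activity.
print("Stars:      ", data["stargazers_count"])
print("Forks:      ", data["forks_count"])
print("Open issues:", data["open_issues_count"])
print("Last push:  ", data["pushed_at"])
```

Raw numbers are only a starting point, though; the builder activity and documentation discussed below matter just as much.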

Builders

Are the contributors participating avidly? Pushing changes to open source projects regularly is essential. If the developer team or other community members are not actively involved in updating and improving the project, then it is not worth choosing.

The stakeholder activity associated with an open source project is an excellent indicator of its relevance. So keep an eye out for that!
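
For projects hosted on GitHub, several of the signals above can be checked programmatically. The following is a minimal sketch, assuming a GitHub-hosted project and using a placeholder repository name; it pulls stars, forks, open issues, the date of the last push, and a sample of the contributor list from the public GitHub REST API (unauthenticated requests are rate-limited).

    import requests

    # Placeholder repository for illustration only; substitute the project you are evaluating.
    REPO = "example-org/example-project"
    API = "https://api.github.com/repos/" + REPO

    def project_health(repo_url):
        """Collect a few coarse health signals for a GitHub-hosted project."""
        repo_resp = requests.get(repo_url, timeout=10)
        repo_resp.raise_for_status()
        repo = repo_resp.json()

        # Sample up to 100 contributors; enough to tell a one-person project from a broad one.
        contrib_resp = requests.get(repo_url + "/contributors",
                                    params={"per_page": 100}, timeout=10)
        contrib_resp.raise_for_status()

        return {
            "stars": repo.get("stargazers_count"),
            "forks": repo.get("forks_count"),
            "open_issues": repo.get("open_issues_count"),
            "last_push": repo.get("pushed_at"),  # recent pushes suggest active builders
            "contributors_sampled": len(contrib_resp.json()),
        }

    if __name__ == "__main__":
        for name, value in project_health(API).items():
            print(name + ":", value)

No single number is decisive here; the point is to compare candidate projects on the same handful of signals.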

Documentation

The open source project you want to adopt or implement should ship regular release updates. Changelogs show the upgrades and improvements that came into play with every change, and project lifecycle activity and documentation help determine how secure and up to date the code is.

Moreover, the documentation must be well written and easy to understand for the project to qualify as an adequate choice. Readable code is simpler to build upon, secure, and fix.
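
Release cadence is another signal that lends itself to a quick check. The sketch below, again assuming a GitHub-hosted project and a placeholder repository name, lists the most recent GitHub releases with their publication dates; projects that publish only tags rather than releases would need the /tags endpoint instead.

    import requests

    # Placeholder repository for illustration only; substitute the project under evaluation.
    REPO = "example-org/example-project"

    def recent_releases(repo, count=5):
        """Return (tag, published date) pairs for the latest GitHub releases of a repository."""
        resp = requests.get("https://api.github.com/repos/" + repo + "/releases",
                            params={"per_page": count}, timeout=10)
        resp.raise_for_status()
        return [(release["tag_name"], release["published_at"]) for release in resp.json()]

    if __name__ == "__main__":
        for tag, published in recent_releases(REPO):
            print(tag, "released", published)

A long gap since the last release, or a changelog that stopped moving, is worth investigating before you commit.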

Compatibility

Amid your hunt, do not lose track of your goals. The project in question must be in line with the goals you are trying to achieve. To determine compatibility between the two, be mindful of the technologies in use, the licensing, and the programming languages.

Using an Open Source Project

It is not necessary to find an exact match for your use case; that is where the flexibility of open source comes in. You can still use the project with modifications of your own.

You are responsible for your copy of the project. Moreover, do not hesitate to improve on it and contribute those improvements upstream. The value of the open source community lies in giving back, so maintaining the health of the project you use is crucial.

VEXXHOST and Open Source

VEXXHOST has long been involved with the open source community and is an active member of the OpenStack Foundation. Among the plethora of open source projects available out there, we provide Managed Zuul and Kubernetes Enablement as solution offerings.

Learn more about our history with OpenStack and our involvement with the community here.

Would you like to know more about Zuul? Then download our white paper and get reading!

How to Up Your DevOps Game with Project Gating: Zuul – A CI/CD Gating Tool

The post How To Choose An Open Source Project appeared first on VEXXHOST.

by Samridhi Sharma at July 21, 2020 01:42 PM

July 20, 2020

The Official Rackspace Blog

Inside the Innovator’s Mind: A Conversation with Inho Hwang


Editor’s note:

Technology is evolving quickly, and to succeed in this industry, you must keep up. But how do you shift from just keeping up to excelling and innovating?

Find out in our new series, “Inside the Innovator’s Mind.” Here, we’ll interview some of today’s leading technical thinkers and doers to learn where they find their motivation, how they stay ahead of the technology curve, and how they approach problem solving — all while keeping an eye on innovation.

For this segment of Inside the Innovator’s Mind, we interviewed Inho Hwang, a senior solutions architect with Rackspace Technology in Singapore. Read on to learn what inspires him to innovate and what he likes most about working with customers.

 

Professional insight

How do you find time to innovate?

As a solutions architect, I am often given requests from customers that push me to be innovative. For me, innovation happens almost spontaneously and can be triggered by almost anything. For example, while chatting with a colleague, I may get an idea that could be adopted into the solution that I am building or even researching at home during my free time. There isn’t a specific time set aside. Instead, I believe innovation comes naturally.

 

How do you keep up with new technologies?

Keeping up with new technology is a requirement for being a successful solutions architect. I need to be able to provide guidance and advice to customers as an expert in multiple technologies. Reading and studying are tasks that I do whenever I have free time, both during business and non-business hours.

In today's ever-changing IT landscape, there are vast domains of technologies to learn and understand, so listening to customers’ needs helps narrow down and prioritize for me which technologies to upskill on. My current focus areas include containerization and DevOps — priorities that several of my customers have.

 

Who or what inspires you?

Doing my job inspires me! The level of satisfaction I get when architecting the best solutions for customers, participating closely during implementations and seeing my solutions go live successfully provide the biggest inspirational and motivational experiences for me. I get to see and learn so much from both internal and external members of the project, sharing ideas on new ways of doing things. And I get to see different perspectives from different individuals.

 

What is your approach to solving big problems?

I usually find a place that provides me with absolute silence, often the study room in my house. The peace of mind that I get from a noise-free environment helps me think and focus on the particular problem. I will often have multiple scenarios in my head with different solutions to tackle the issue, and the silence allows me to think through the variables and choose the best possible solution.

 

How do you manage failure?

Nobody is perfect, and we all encounter failures often in our lives. We learn the most from our mistakes and failures, but it does not mean that one can take chances without good preparation beforehand. As long as I feel confident that I have done my best to prepare, then I would proceed to the next step even if it means risking failure — because I know I will learn something from it. And I’ll also share the experience as an opportunity to educate others on my team.

 

Getting to know you

What did you want to be when you grew up?

I was always fascinated with computer hardware and software since I was young, so it was natural that I pursued a diploma and degree in IT with the hope of landing a job in the IT industry.

 

What do you do now?

Thankfully, I am working in the frontier of technology, and I am able to provide solutions based on a wealth of experience accumulated from multiple companies. My work allows me to continue learning as new technologies continue to evolve.

 

Is it what you imagined?

Honestly, during my academic years, I imagined myself becoming a software developer. Recently I have been seeing more and more technologies and services based on serverless technology, and I see that coding knowledge is becoming even more valued. I see myself venturing into that space again, which is also part of my current role.

 

A day in the life

Do you have a morning routine at work? What is it?

I can’t start my day without a cup of coffee to crank up my engine, which is followed by looking through my emails to help prioritize tasks for the day.

 

What types of demands do you encounter?

Generally, my main demands are requests from the sales team for me to attend meetings with customers. It’s during these meetings that I gather requirements and propose solutions.

My other demands would be technical consultation requests from our current customers, especially when they have new projects and are looking for guidance on the best solutions.

And finally, I might be responding to a request for proposal, a request for quotation or a request for information from any one of our larger enterprise customers.

 

Which roles/people do you interact with the most? How important is this interaction?

Internally, I work closely with salespeople, technical account managers and technical operations managers solving technical queries on our existing customer accounts. I also work with our partners like AWS, Azure and Google Cloud. Specifically, I collaborate with their various departments, such as employees from infrastructure, applications, security, risk and compliance, and marketing, to meet their individual needs.

 

What do you like about working with customers?

I enjoy understanding customers’ needs in different industries, and also learning about their pain points and challenges, and then using my knowledge to solve these, while also alleviating any concerns they may have.

 

What’s the most challenging part of your day?

The most challenging part of my day is when there are multiple high priority requests for proposals coming in at the same time, and having to instantaneously prioritize and complete them in the given timeframe.

 

What’s the highlight of your day?

For me, the highlight of my day is when I sense I’ve provided a ‘wow’ moment to customers during an initial pre-sales engagement, which leaves them with a good impression and opens the door for the next conversation.

 


by nellmarie.colman at July 20, 2020 05:18 PM

VEXXHOST Inc.

Looking Back At OpenStack’s Ten Year Journey

As OpenStack turns 10, VEXXHOST is here to honor the presence of the community by revisiting some fun and impactful milestones over the years.

OpenStack’s journey began in 2010 – shoutout to Rackspace and Anso Labs for converging their efforts ten years ago, which formed the base of OpenStack. If it weren’t for their grit and passion to open source the code, we wouldn’t be where we are today.


Well, VEXXHOST wasn’t too late in joining in on the fun. With OpenStack’s second release, Bexar, which came out nine years ago, VEXXHOST adopted OpenStack as the base for our cloud infrastructure services. It was definitely a special year for us, and we have grown alongside OpenStack ever since!

Two years into OpenStack, the OpenStack Foundation was formed. In an effort to empower, protect, and add more shared resources to OpenStack, a vibrant open source community came into place eight years ago. Being a member of the Foundation has been such an honour and pleasure for the VEXXHOST team.


Neutron, OpenStack’s networking-as-a-service project, took a front seat among OpenStack projects seven years ago. It feels like just yesterday, but with the new releases and upgrades made available by OpenStack, Neutron has become an essential part of OpenStack cloud infrastructure.

OpenStack’s path and presence were soon being felt worldwide. Six years ago, the OpenStack Summit took place in Paris, notable as the inaugural event for the Superuser Awards ceremony. It would be hard not to flex about VEXXHOST winning the same award in 2019 at the Denver Summit.


The global network of OpenStack gained traction, and the community, along with the Foundation staff, made great strides toward cloud interoperability. The mission began five years ago, and our team is proud to have supported the OpenStack community in driving adoption by launching the first interoperability testing program for OpenStack Powered products, including public clouds and distributions.

Let’s not forget the first OpenStack Days, from four years ago, which took place in New York. It was the first of many OpenStack Days that brought the community closer together, from Canada and Australia to Tokyo, to name a few of the many places where OpenStack is now deployed.


Three years ago, the OpenStack Foundation spread its wings and embraced strategic focus areas like container infrastructure, CI/CD and edge computing. The Foundation kick-started the initiative with pilot projects in these areas that are now better known as Kata Containers, Zuul and the Edge Computing Group.

Soon after, OpenStack came out with its 18th release, Rocky. This was a significant milestone for the VEXXHOST team, as we were running the release on the day of the launch itself, two years ago. We have maintained our momentum by doing the same for OpenStack’s 19th and 20th releases, too!


No one is a stranger to the role Ironic, the bare metal project, plays in OpenStack deployments. It was one year ago that the OpenStack Foundation launched the OpenStack Bare Metal Program to bring more attention to its usage in an OpenStack cloud and encourage collaboration.

The past ten years have been instrumental in making the OpenStack community what it is today. Every stakeholder of OpenStack has walked the path of collaborative development, and we are all extremely excited to be here today to celebrate OpenStack’s big day. With immense gratitude, the VEXXHOST team would like to wish OpenStack a very HAPPY BIRTHDAY!

Like what you’re reading?
Deep dive into a hands-on ebook about how you can build a successful infrastructure from the ground up!

The post Looking Back At OpenStack’s Ten Year Journey appeared first on VEXXHOST.

by Samridhi Sharma at July 20, 2020 04:13 PM

The Official Rackspace Blog

Rackspace Technology and Telarus expand global partnership in EMEA


The Rackspace Technology Partner Program helps all types of businesses — from advisors, digital agencies, app designers and developers, to MSPs and VARs — empower their customers to embrace technology and deliver the future.

In January, we announced that we would be expanding this mission globally in 2020. After several months of scaling and serving our regions with program enhancements, we’re proud to announce an important partnership acceleration.

Telarus, the largest privately held technology services distributor (master agent) in the U.S., will be leveraging Rackspace Technology in the UK and EMEA. This means Telarus partners in these regions will now be able to leverage Rackspace Technology expertise and award-winning leadership across cloud optimization, cloud security, cloud native enablement and data modernization.

“Telarus is very excited about the expansion of our partnership with Rackspace Technology into the UK and EMEA markets to provide access to their products, services and excellent support for their customers.”

Koby Phillips
Vice President of Business Development – Cloud, Telarus

 

Through a dynamic agent-partner community, Telarus sources data, voice, cloud and managed services, with a robust portfolio of over 250 leading service providers. They are best known for their home-grown software pricing tools and mobile apps, with industry-leading support for cybersecurity, SD-WAN, cloud, mobility, contact center and ILEC specialty practices.

“Rackspace Technology has seen huge growth in the U.S. with the channel model, and we are excited to help Telarus launch the channel model in EMEA.  We are already seeing the interest in the partner ecosystem, as they recognize how partnering with Telarus and Rackspace Technology will allow them to bring more value helping their customers with multicloud.”

Michael Stephens
Agent Channel Chief, Rackspace Technology

 

Rackspace Technology and Telarus UK are hosting a Virtual Sales Immersion event on August 18 to educate our partners about the multicloud market, how Rackspace Technology helps customers achieve their business goals and how to engage in conversations with customers. If you’re a Rackspace Technology or Telarus partner, please reach out to Vicki Patten at Vicki.Patten@rackspace.com to register for this live, online event.

 


by nellmarie.colman at July 20, 2020 01:10 AM

July 17, 2020

VEXXHOST Inc.

10 Years Of OpenStack – With OpenStack!

OpenStack completed a decade around the sun, and we couldn’t be more thrilled to have walked alongside the vibrant community for nine long years. So, we are taking this trip down memory lane of how it all began for us and how far VEXXHOST has come with OpenStack!

The open source platform was created to provide public and private cloud services to enterprises of all sizes. The Four Opens (Open Source, Open Community, Open Development and Open Design) are the foundation stones of the OpenStack Foundation. It was these fundamentals that drove us to become a part of the community in 2011, and VEXXHOST has been an avid contributor and user of OpenStack ever since. We went on to become both an infrastructure donor and a corporate member of the OpenStack Foundation.

OpenStack Services & Solutions

With our involvement in the community, we got started with public and private cloud services. We deliver consistent performance throughout our OpenStack public cloud. With no vendor lock-in hassles, users can make use of 13 OpenStack projects ranging from networking and block storage to identity management, container orchestration and so much more! Through our Private Cloud service, we offer a customizable experience that can be tailored to fit any business need. We provision bare metal, virtual machines and Kubernetes all in one cloud environment.

As VEXXHOST captured interest among users, we expanded our OpenStack-related solutions with consulting and upgrade offerings. Moreover, we have had the chance to venture beyond the OpenStack software itself and explore other projects under the OpenStack Foundation, such as Zuul. The CI/CD tool has been tested against our cloud and is part of our solution offerings as well.

OpenStack has genuinely formed a well-knit community that is always working to improve the product and its ecosystem. That in itself is something worth celebrating.

Right from the second release, Bexar, our journey began as an OpenStack-based IaaS provider. We are now on OpenStack’s twentieth release, Ussuri, and VEXXHOST has been among the first to offer it, both as part of our OpenStack Upgrades solution and as part of our Private Cloud service.

Events and Interactions

VEXXHOST at OpenStack events

OpenStack has helped us get closer to our users and the community through events, summits and meetups. We have received some astounding exposure as members of the community. You can say we have travelled the world together, from Boston to Sydney, Berlin to Shanghai! It was because of OpenStack that we had the chance to speak at CERN during an OpenStack Days event. Moreover, by winning the Superuser Award in 2019, we received that same recognition from the community.

OpenStack itself has evolved plenty since 2010. The vast innovations in OpenStack have enabled great flexibility in cloud computing, and OpenStack is the leading open source option for public cloud environments. We are also members of the OpenStack Passport Program, allowing us to provide resources at scale to those who wish to actively test open source projects.

Even for private cloud environments, OpenStack has proven to be highly customizable. Be it bare metal, virtual machines or containers, your OpenStack environment can harness the power of all three at once! Furthermore, if you are looking for more affordable alternatives, OpenStack also supports hyper-converged infrastructure.

With the power of OpenStack, VEXXHOST has gone from local to global. We have gained the trust of enterprise clients all over the world. It is our pleasure to be a part of OpenStack when they cross this milestone. The entire VEXXHOST team congratulates OpenStack and its community on their big day, and we express our gratitude for being a part of it!

Like what you’re reading?
Deep dive into a step-by-step guide about how you can bulletproof your cloud strategy with OpenStack!

The post 10 Years Of OpenStack – With OpenStack! appeared first on VEXXHOST.

by Samridhi Sharma at July 17, 2020 05:22 PM

July 16, 2020

Ed Leafe

Day 52: Happy 10th Birthday, OpenStack!

I just saw this announcement from the OpenStack Foundation about OpenStack’s 10th birthday! Yes, 10 years ago this week was the first OpenStack Summit, in Austin, TX, with the public announcement the following week at O’Reilly OSCON. Yet most people don’t know that I played a very critical role in the beginning! OpenStack began as … Continue reading "Day 52: Happy 10th Birthday, OpenStack!"

by ed at July 16, 2020 08:45 PM

OpenStack Blog

Thank You to the Last Decade, Hello to the Next!

Wow, 10 years. Many amazing tech milestones happened in 2010. Steve Jobs launched the first iPad. Sprint announced its first 4G phone. Facebook reached 500 million users. For us perhaps, one of the most important milestones was the birth of the OpenStack project in 2010.  In real time, the pace of change in the tech... Read more »

by Sunny at July 16, 2020 02:00 PM

Favorite OpenStack Swag—10 Years of OpenStack

There are so many milestones to celebrate in the past 10 years of OpenStack with the community. Here we have gathered community members’ favorite OpenStack swag from around the world.

by Sunny at July 16, 2020 11:00 AM

First OpenStack Event—10 Years of OpenStack

There are so many milestones to celebrate in the past 10 years of OpenStack with the community. Here we gathered the first OpenStack events from the community members around the world.

by Sunny at July 16, 2020 11:00 AM

Most Unusual Place That an OpenStack Deployment Has Been Built—10 Years of OpenStack

There are so many milestones to celebrate in the past 10 years of OpenStack with the community. Here we have gathered the most unusual places where community members around the world have seen or heard of an OpenStack deployment being built.

by Sunny at July 16, 2020 11:00 AM

Favorite Cities You’ve Visited for OpenStack—10 Years of OpenStack

There are so many milestones to celebrate in the past 10 years of OpenStack with the community. Here we have gathered community members’ favorite cities that they have visited for OpenStack in the past 10 years. ... Read more »

by Sunny at July 16, 2020 11:00 AM

Predictions of OpenStack—10 Years of OpenStack

There are so many milestones to celebrate in the past 10 years of OpenStack with the community. Here we have gathered community members’ predictions on OpenStack in the next 10 years.

by Sunny at July 16, 2020 11:00 AM

First Contribution—10 Years of OpenStack

There are so many milestones to celebrate in the past 10 years of OpenStack with the community. Here we gathered the first contribution from the community members around the world.

by Sunny at July 16, 2020 11:00 AM

Most Memorable OpenStack Moments—10 Years of OpenStack

There are so many milestones to celebrate in the past 10 years of OpenStack with the community. Here we have gathered the most memorable OpenStack moments from the community members around the world.

by Sunny at July 16, 2020 11:00 AM

About

Planet OpenStack is a collection of thoughts from the developers and other key players of the OpenStack projects. If you are working on OpenStack technology, you should add your OpenStack blog.

Subscriptions

Last updated:
September 27, 2020 11:21 PM
All times are UTC.

Powered by:
Planet