July 02, 2020

OpenStack Superuser

Takeaways and next steps from OpenDev: Large Scale Usage of Open Infrastructure

Event organizers and participants experience a variety of emotions during the lifecycle of an event: excitement, anticipation, stress, and sadness when it’s over. OpenDev: Large Scale Usage of Open Infrastructure was no exception. To be honest, baby Yoda put it best.

This week, the OpenStack Foundation (OSF) held its third OpenDev conference. This iteration focused on scaling challenges for open infrastructure across a variety of topics including upgrades, high availability (HA) networking, bare metal, and more. Jonathan Bryce, OSF executive director, kicked off the event explaining how OpenDev events are an example of how the global Open Infrastructure community collaborates without boundaries. He encouraged participants to think broadly and remember that collaboration is built by people who are actively engaged and participating by sharing their knowledge and experiences so they can learn from each other.

This virtual event drew participants from over 70 countries who spent three days asking questions, sharing scaling challenges, and explaining implementations that worked in their local environments. Each moderated session combined various perspectives on the challenges, questions about how to improve, and next steps that the community can collectively collaborate on to ease these operator challenges.

Thank you to the OpenDev: Large-scale Usage of Open Infrastructure Software programming committee members: Beth Cohen, Roman Gorshunov, Belmiro Moreira, Masahito Muroi, and Allison Price. You helped to make these discussions possible!

So, what happened? Below is a snapshot of the conversations that took place, but I want to encourage you to check out the event recordings as well as the discussion etherpads found in the OpenDev event’s schedule to join the discussion.

Jump to the User Stories recap
Jump to the Upgrades recap
Jump to the Tools recap
Jump to the HA Networking Solutions recap
Jump to the Software Stack recap
Jump to the Bare Metal recap
Jump to the Edge Computing recap
Jump to the OpenDev event’s next steps


User Stories

Ahead of the week’s discussions, speakers from Blizzard Entertainment, OpenInfra Labs, and Verizon shared their own scaling challenges via short user story presentations on Monday morning, followed by audience questions moderated by Mark Collier, OSF COO.

Colin Cashin, Blizzard Entertainment senior director of cloud engineering, and Erik Andersson, Blizzard Entertainment technical lead, senior cloud engineer, discussed four key scaling challenges that affect their 12,000 node OpenStack deployment that spans multiple regions. Their challenges were: Nova scheduling w/ NUMA pinning, scaling RabbitMQ (a frequent challenge repeated throughout the week), scaling Neutron, and compute fleet maintenance. The Blizzard team received a flurry of questions about their specific challenges and their multi-cloud approach (OpenStack, Amazon, Google, and Alibaba). They also used the opportunity to share that they’re hiring engineers to help manage their massive OpenStack footprint, so if you’re interested, check out their openings.

Michael Daitzman provided an overview of OpenInfra Labs, the newest OSF pilot project, which is a collaboration among several research institutions including Boston University, Harvard University, MIT, Northeastern University, and the University of Massachusetts. He covered some challenges of integrating different open source technologies into a single environment, including specific problem areas like monitoring and what happens when your Ceph clusters aren’t properly backed up (TL;DR: you lose all your data).

Beth Cohen, OpenDev veteran and moderator of the HA networking solutions discussion, presented an updated Verizon case study centered around some of their challenges. Her team is seeing that application configuration (written by partners) has a significant effect on how the applications behave in the environment (e.g. with encryption turned on, throughput is halved). With traffic going through all of these systems, it can be hard to identify the source of a problem. In order to isolate and reproduce issues, they have built a full production engineering lab with full testing capabilities that is also used by their development team, ultimately providing a really nice feedback loop.


Upgrades

When OpenStack operators are asked about the number one challenge with their deployments, upgrades are often at the top of the list. This discussion, moderated by Belmiro Moreira, cloud architect at CERN, explored the different setbacks around upgrades, including why operators find them so daunting. Top reasons shared included upgrading across versions, the amount of resources needed, the feeling of “once you’re behind, you’re behind,” and API downtime. Some suggestions included the recent pre-upgrade checks that the OpenStack TC set as a goal in the Stein cycle in 2019, as well as fast forward upgrades.

As can be expected, this session included a lot of discussion about specific upgrade scenarios. One of the next steps that Jonathan mentioned in the event wrap-up was the opportunity to use the OpenDev event as a launching point for some upgrade documentation. If you’re an operator who has successfully upgraded your OpenStack environment and would like to collaborate on a set of tips, please sign up here.


Tools

Moderated by John Garbutt, principal engineer at StackHPC, the Tools discussion covered a few topics: different ways to measure scale, the considerations around what you’re trying to scale and how that affects tooling, and finding and removing bottlenecks, which is naturally where a lot of the conversation centered. Shared challenges included RabbitMQ flatlining, legacy Keystone UUID tokens, and losing access to Cinder volumes.

The next step for this topic was largely centered around sharing tooling, which included the idea of revisiting the OsOps initiative that was started a few years ago so that operators could share tooling with each other and collaborate more closely.


HA Networking Solutions

What does HA mean from a network perspective? Are you concerned with networking connectivity failures in your environment, and if so, what are you doing about them? Do you have a diverse network? Beth Cohen, cloud product technologist at Verizon, started the discussion with these questions to establish what networking high availability means in this context and to gauge how important HA is in the participants’ deployments.

Later, the participants dove into discussions around the different factors that apply to scaling, where to place failure domains, and the most cost-effective way to do so. Should we pay for a lot of redundant and expensive hardware, or handle failure at the rack level and trust the software to work around it? The interactive discussion continues; if you are interested in adding your thoughts to the etherpad, please add them here.


Software Stack

The Software Stack discussion, moderated by Masahito Muroi, senior software engineer at LINE Corp., centered around what the open source mix will be over time and how to plan for deployment infrastructure software and the resources that the end users will want. A poll during this session helped show that networking and deployment were the two biggest challenges that participants faced.


Bare Metal

James Penick, architect director at Verizon Media, moderated an active discussion around bare metal, including multiple opportunities where he activated his own team to implement learnings back in their own environment. The conversation included use cases for bare metal and where participants see those use cases evolving to, including scenarios like 5G, the tooling required around bare metal management, and lifecycle management. With only an hour available for discussion, participants were only able to scratch the surface on the bare metal conversation, but luckily, the next OpenDev event (Hardware Automation, July 20-22) will be centered around this topic, including an entire day on lifecycle management.


Edge Computing

The last session of the event was around the edge computing use case and was moderated by Shuquan Huang, technical director at 99cloud. The discussion kicked off with use cases that included farming, high performance computing (HPC), 5G, facial recognition, and an impressive China Tower edge computing use case that plans to leverage StarlingX across 2 million antenna towers in China. The different architectural requirements for the varying edge use cases were a popular topic, which was unsurprising as participants had earlier identified networking as their number one pain point in the Software Stack discussion. This was a good opportunity to promote the OSF Edge Computing Group’s latest whitepaper, Edge Computing: Next Steps in Architecture, Design, and Testing.


Next Steps

The goal with the OpenDev events is to extend this week’s learnings into future work and collaboration, so Jonathan Bryce and Thierry Carrez, OSF VP of Engineering, wrapped up the event to discuss next steps. These include:

  • Join the OpenStack Large Scale SIG to continue sharing challenges and solutions around scaling
  • Take the OpenStack User Survey to share feedback with the upstream community
  • Collaborate on documentation around common pain points, like upgrades
  • Revive OsOps Tools
  • Join the OpenDev: Hardware Automation event to continue discussing bare metal use cases
  • Check out OpenInfra Labs and get involved in their mission to refine a model for fully integrated clouds supporting bare metal, VMs, and container orchestration.

Let’s keep talking about open infrastructure! Check out the next two OpenDev events:

The post Takeaways and next steps from OpenDev: Large Scale Usage of Open Infrastructure appeared first on Superuser.

by Allison Price and Sunny Cai at July 02, 2020 01:00 PM

July 01, 2020

OpenStack Superuser

Turning your challenges to opportunities in Nokia NESC OpenStack

Starting with just a handful of servers 10 years ago, Nokia Enterprise and Services Cloud (NESC) has grown into one of the world’s largest OpenStack private clouds, providing 484,000 virtual computing cores, 40PB of storage, and sustained usage in excess of 230 million active core hours per month. NESC is distributed globally across three continents in six Tier 3 data centers and is used by Nokia for hosting mission-critical research and development workloads and customer-facing business applications. NESC provides standard Infrastructure-as-a-Service components strengthened with ISO 27001 certification.

Key to the success of NESC has been constant evolution, empowering our Nokia R&D and applications teams to reliably and rapidly build the virtual environments via self-service and automation. Every day in NESC there are over 200,000 new virtual server instance starts and close to 100,000 API calls per minute during peak hours, a challenge for any cloud.

Fortunately NESC has a great developer team who focus not on telling users what is not possible, but instead on figuring out how. This has driven the team to develop and excel in some key areas like:

Monitoring, logging and metrics analytics: The NESC Logstash solution collects around 2 million log lines per minute, which are then indexed and enriched in real time to maximize the value of the data. In addition, NESC continuously collects huge volumes of metrics that are used not only for looking at current status but also, by applying machine learning, for finding anomalies and performance degradations and even making predictions of future outcomes based on trends.

Operation Automation: “The best sysadmin is a lazy one.” They don’t want to fix the same things over and over again. Anything that can be automated in NESC is, from upgrades to database clean-ups and other resource leaks.

Master data: The key to successful and reliable automation is reliable, programmable inventory data. You need to know with certainty which services are running on which assets so you can automate them. NESC tried commercial enterprise auto-discovery tools and other methods for many years, but none of them were reliable enough or able to keep dependencies and context consistent. Today, we have developed and implemented our own auto-discovery solution for inventory and configuration management as a single source of truth.

Security: With NESC software development tools & processes and the overall service being ISO 27001 certified, the security journey has been filled with automated controls and dedicated toolsets that are mostly open source based.  

Today NESC is stronger and better as a result, a dependable service within Nokia. The ability to embrace challenges forced the team to learn faster and adopt new ideas and services like GeoDNS, tiered storage solutions and NESC’s latest fast-growing Kubernetes-as-a-Service offering. NESC’s DNA of fast development cycles, failing sooner rather than later, and extensive automation has pushed us this far.

The next chapter for NESC is to offer it as a managed service to Nokia customers worldwide, either as an on-premises or off-premises solution.

The post Turning your challenges to opportunities in Nokia NESC OpenStack appeared first on Superuser.

by Yilmaz Karayilan at July 01, 2020 01:00 PM

June 27, 2020

Lee Yarwood

Openstack Nova Victoria Cycle Gerrit Dashboards

As in previous cycles, I’ve updated some of the Nova-specific dashboards available within the excellent gerrit-dash-creator project and started using them prior to dropping offline on paternity leave. I’d really like to see more use of these dashboards within Nova to help focus our limited review bandwidth on active and mergeable changes, so if you do have any ideas please fire off reviews and add me in!

June 27, 2020 11:51 PM

June 26, 2020

John Likes OpenStack

Running tripleo-ansible molecule locally for dummies

I've had to re-teach myself how to do this so I'm writing my own notes.

Prerequisites:

  1. Get a working undercloud (perhaps from tripleo-lab)
  2. git clone https://git.openstack.org/openstack/tripleo-ansible.git ; cd tripleo-ansible
  3. Determine the test name: ls roles

Once you have your environment ready run a test with the name from step 3.


./scripts/run-local-test tripleo_derived_parameters
Some tests in CI are configured to use `--skip-tags`. You can do this for your local tests too by setting the appropriate environment variables. For example:

export TRIPLEO_JOB_ANSIBLE_ARGS="--skip-tags run_ceph_ansible,run_uuid_ansible"
./scripts/run-local-test tripleo_ceph_run_ansible

This last tip should get added to the docs.

by Unknown (noreply@blogger.com) at June 26, 2020 06:39 PM

June 22, 2020

OpenStack Superuser

OSF Edge Computing Group defines architectures, open source components, and testing activities for massively distributed systems

The OpenStack Foundation (OSF) Edge Computing Group is excited to announce we published our latest white paper, a result of collaboration among fellow open infrastructure operators and vendors! With deployments of IoT devices and the arrival of 5G networks, edge computing has rapidly gained popularity over the past few years. While the popularity has rapidly increased, there are still countless debates about the definition of related terms and the right business models, architectures and technologies required to satisfy the seemingly endless number of emerging use cases of this novel way of deploying applications over distributed networks.

In our previous white paper, the OSF Edge Computing Group defined cloud edge computing as resources and functionality delivered to the end users by extending the capabilities of traditional data centers out to the edge, either by connecting each individual edge node directly back to a central cloud or several regional data centers, or in some cases connected to each other in a mesh. From a bird’s eye view, most of those edge solutions look loosely like interconnected spider webs of varying sizes and complexity.

Our second white paper, Edge Computing: Next Steps in Architecture, Design and Testing, delivers the specific ways open source communities are shaping the future of edge computing by collecting use cases, identifying technology requirements and contributing architectures, open source components and considerations for testing activities.

This white paper also highlights the OSF Edge Computing Group’s work to more precisely define and test the validity of various edge reference architectures. To help with understanding the challenges, there are use cases from a variety of industry segments, demonstrating how the new paradigms for deploying and distributing cloud resources can use reference architecture models that satisfy these requirements. 

Check out the latest edge computing white paper here!

The post OSF Edge Computing Group defines architectures, open source components, and testing activities for massively distributed systems appeared first on Superuser.

by Ildiko Vancsa at June 22, 2020 01:00 PM

Ghanshyam Mann

Submit your first OpenStack patch in 3 steps.

If you are new to the OpenStack community and want to start contributing, this document can help you get going quickly. OpenStack does not use GitHub pull requests; instead, it uses Gerrit as its code collaboration tool. There is also some account setup required for using the Gerrit system. This guide will quickly help you set up those accounts and walk through the minimal steps.

  Step 1: Set up accounts

To get started, first set up the required accounts.

  • Setup OpenStack Foundation Account.

    • Go to the OpenStack Foundation sign up page.
    • Under individual members, click the Foundation Member button.
    • A few tips for filling in the form:
      • Use the same e-mail address at every step of the registration procedure.
      • Add your affiliation information if you want; otherwise you are contributing as an ‘Individual Contributor’.
    • After you submit the application, you will get an email once it is approved.
  • Setup Your Task Tracker Account

    • Go to https://login.launchpad.net/.
    • If you don’t have an Ubuntu One account, click “I don’t have an Ubuntu One account”.
      • Use the same email address that was used during the OpenStack Foundation account setup.
    • Fill in all the information and click ‘Create Account’.

  • Install git:

    • Mac OS
      • Go to the Git download page and click Mac OS X.
      • The downloaded file should be a dmg in your downloads folder. Open that dmg file and follow the instructions on screen.
      • If you use the package manager Homebrew, open a terminal and type:
        brew install git
    • Linux
      • For distributions like Debian, Ubuntu, or Mint open a terminal and type:
            sudo apt install git
      • For distributions like RedHat, Fedora 21 or earlier, or CentOS open a terminal and type:
            sudo yum install git
      • For Fedora 22 or later open a terminal and type:
            sudo dnf install git
      • For SUSE distributions open a terminal and type:
            sudo zypper in git
    • Configure Git
            git config --global user.name "Firstname Lastname"
            git config --global user.email "your_email@youremail.com"
      • Use the same email address that was used during the OpenStack Foundation account setup.
  • Setup Your Gerrit Account

    • Visit OpenStack’s Gerrit page and click the sign in link.
    • You will be prompted to select a username. Choose and type your username carefully. Once it is set, you cannot change the username.
    • From here on out when you sign in to Gerrit, you’ll be prompted to enter your Launchpad login info. This is because Gerrit uses it as an OpenID single sign on.
    • Sign the Individual Contributor License Agreement. If you want to contribute on behalf of a company, then ask your company to sign the Company CLA.
      • In Gerrit’s settings click the “New Contributor Agreement” link and sign the agreement.

  • Setup SSH Keys

    • In order to push patches to Gerrit we need a way to identify ourselves, and an SSH key is one way to authenticate. You can submit patches from any machine, but you need to add that machine’s SSH key to your Gerrit account.
    • Generate SSH Key Pairs
         ssh-keygen -t rsa
    • Copy Public Key
        cat ~/.ssh/id_rsa.pub
    • Add Public Key Into Gerrit

  • Install the git-review tool

    • pip install git-review
    • git config --global gitreview.username <username>
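
With the accounts and keys set up, you can optionally verify that Gerrit accepts your SSH key before moving on. This is a minimal sanity check, assuming OpenStack’s Gerrit host review.opendev.org and its standard SSH port; replace <username> with the Gerrit username you chose above:

    ssh -p 29418 <username>@review.opendev.org

If the key is accepted, Gerrit greets you and closes the connection rather than reporting a permission error.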

  Step 2: Push your change

  • Clone the repository.

    • Clone the repo to which you want to push changes:
        git clone https://opendev.org/openstack/<PROJECT_NAME>

      Here you can find all the OpenStack projects and the repos under them.

  • Create your local branch to do the changes

    git checkout -b <branch_name>
  • Make your changes in the code and run all required unit or functional tests.

    • To check the files that have been updated in your branch:
        git status
  • Add your changes to the branch

    git add -A

  • Write the commit message (a sample commit message is sketched after this list)

    git commit
    or
    git commit --amend (if you are amending the previously written commit message)
  • Submit your changes

    git review
  • Tracking your Changes

    • You can track the submitted changes at Code Review. After logging in, click on ‘My’ then ‘Changes’ and there you can see all your changes.
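
For reference, here is a rough sketch of the kind of commit message OpenStack reviewers expect: a short summary line, a blank line, then a body explaining why the change is needed. The bug number and Change-Id shown below are purely illustrative; the Change-Id line is appended automatically by the commit hook that git-review installs (running git review -s once in your clone sets this up):

    Fix typo in compute API example

    The parameter name used in the example request body did not match
    the field described in the API reference. Update the example so the
    documentation stays consistent.

    Closes-Bug: #1234567
    Change-Id: I0123456789abcdef0123456789abcdef01234567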

  Step 3: Practice the above steps using the Sandbox Project

To make sure everything is set up correctly, or to understand the workflow without trying it on actual projects, you can practice in a sandbox using the How to Use the Sandbox Projects Guide. The Sandbox project is just for practice, so do not hesitate or worry about breaking anything.

For more details on contributing to the OpenStack community, please refer to The OpenStack Contributor Guide or ping me on IRC (#openstack-dev or #openstack-upstream-institute channels); I am available under the ‘gmann’ nickname.

by Ghanshyam Mann at June 22, 2020 12:53 AM

June 18, 2020

OpenStack Superuser

Inside open infrastructure: The latest from the OpenStack Foundation

Spotlight on: Project Teams Gathering (PTG) Recap

It has been a week since the first virtual Project Teams Gathering (PTG)! While it was not the same experience as our traditional in-person PTGs, the community made the best of the current situation. We’ve been amazed by how many people have joined us online this year to collaborate on OSF-supported projects. We not only had the highest attendance and gender diversity in this PTG, but also had 20 more countries represented than the Denver PTG last year. Thank you to all the community members who have worked together virtually in this unprecedented time to collaborate without boundaries.

PTG Participation by Day*
*Attendees’ attendance might be counted twice in the same day if they are participating in multiple sessions.

If you didn’t attend—or if you did and want a replay—check out the PTG spotlight on Superuser where we have collected the project announcements, community updates, and discussions you may have missed.

OpenStack Foundation news

  • The OSF Edge Computing Group published its second white paper, Edge Computing: Next Steps in Architecture, Design and Testing.
  • There will be two community meetings next week to cover OSF-supported project updates and event updates. One is on Thursday, June 25 at 1500 UTC and the other is Friday, June 26 at 0200 UTC. You can find the dial-in information here.
  • The OSF has entered into a partnership with ETSI. The partnership will help strengthen collaboration between standardization and open source activities, including the area of edge computing.
  • The OSF Board of Directors approved the 2020.06 RefStack guidelines.
  • OpenDev: Large-scale Usage of Open Infrastructure Software, the first of three virtual OpenDev events, kicks off June 29. Register here.

Airship: Elevate your infrastructure

  • Nominations for the Airship Technical Committee are now open, and will remain open until EOD June 21. More information here.
  • Airship 2.0’s alpha milestone was completed in May. Stay updated on progress toward the beta through the blog.
  • Running or evaluating Airship? The User Survey is available here. 

Kata Containers: The speed of containers, the security of VMs

  • We published two stable releases at the end of last week – 1.11.1 and 1.10.5.
    • Among other bug fixes, these releases include security fixes for CVE-2020-2023 and CVE-2020-2026.
    • Kata Security Advisory for the above CVEs describing impact and mitigation has been published here.
  • All Kata versions prior to 1.11.1 and 1.10.5 are impacted. It is recommended to upgrade to the latest stable releases. The security fixes have been pushed to master as well.
  • We tagged Kata Containers 2.0.0-alpha1 release last week. This release uses the rust agent as default and makes the switch to ttrpc from grpc as the communication protocol. We have also consolidated the agent and runtime repositories and moved them to kata-containers/kata-containers repository for better maintenance and release management.
  • Now you can find the release information for 2.0.0-alpha1 here, and you can also find the features that we are planning for 2.0 here.

OpenStack: Open source software for creating private and public clouds

  • The OpenStack community had a great virtual Project Teams Gathering. While we missed seeing each other in person, it was a productive event, putting the Victoria development cycle to a good start. You can still access all the etherpads for the event, and find summaries posted on the discuss mailing-list.
  • Speaking of the Victoria cycle, the Technical Committee finally selected two community-wide goals around CI/CD for this cycle: switch legacy Zuul jobs to native, and migrate jobs to new Ubuntu 20.04 LTS. For more details, check out Ghanshyam Mann’s post on the mailing-list.
  • Hot on the heels of the recent Ussuri release, Thomas Goirand announced the availability of packages for Debian sid, as well as the buster-ussuri.debian.net repositories. Peter Matulis announced the availability of the 20.05 OpenStack charms release, introducing support for OpenStack Ussuri on Ubuntu 18.04 LTS and 20.04 LTS. Amy Marrich announced the general availability of the RDO build for OpenStack Ussuri for RPM-based distributions, CentOS Linux and Red Hat Enterprise Linux.
  • Following discussions at the PTG, the Technical Committee and the User Committee have started to move forward with merging into a single governance body for the OpenStack open source project, representing all contributors (developers, operators, and end users of the software). As a first step, the separate user-committee mailing-list was discontinued and future discussions will be held on the openstack-discuss mailing-list.

StarlingX: A fully featured cloud for the distributed edge

  • StarlingX is now a confirmed top-level Open Infrastructure project supported by the OpenStack Foundation.
  • The community is currently working on the 4.0 version of the platform that they are planning to release in July.
  • You can check out this blog post to find out more about the community’s achievements since the project’s launch and their plans for the next release.

Zuul: Stop merging broken code

  • Zuul 3.19.0 and Nodepool 3.13.0 are the last planned series 3 releases, incorporating a new Ansible 2.9 default, branch guessing for tags, ability to pause mergers, TLS encryption for Zookeeper connections, a new “serial” pipeline manager, a timezone selector for the dashboard, and more; work is underway for version 4 which sets the stage for distributed schedulers, stateful restarts, and high availability across all services.
  • Zuul: A Wazo Platform Case Study
    • Learn why Wazo Platform, An open source software programmable telecommunication platform, leverages Zuul’s cross-repository dependencies for its repositories.
  • Zuul: A T-Systems Case Study
    • Learn why global IT company T-Systems leverages Zuul’s ability to easily test workflows.

Check out these Open Infrastructure Community Events!

For more information about these events, please contact denise@openstack.org

Questions / feedback / contribute

This newsletter is written and edited by the OSF staff to highlight open infrastructure communities. We want to hear from you! If you have feedback, news or stories that you want to share, reach us through community@openstack.org . To receive the newsletter, sign up here.

The post Inside open infrastructure: The latest from the OpenStack Foundation appeared first on Superuser.

by Helena Spease at June 18, 2020 04:00 PM

StackHPC Team Blog

Software RAID support in OpenStack Ironic

OpenStack Ironic operates in a curious world. Each release of Ironic introduces ever more inventive implementations of the abstractions of virtualisation. However, bare metal is wrapped up in hardware-defined concrete: devices and configurations that have no equivalent in software-defined cloud. To exist, Ironic must provide pure abstractions, but to succeed it must also offer real-world circumventions.

For decades the conventional role of an HPC system administrator has included deploying bare metal machines, sometimes at large scale. Automation becomes essential beyond trivial numbers of systems to ensure repeatability, scalability and efficiency. Thus far, that automation has evolved in domain-specific ways, loaded with simplifying assumptions that enable large-scale infrastructure to be provisioned and managed from a minimal service. Ironic is the first framework to define the provisioning of bare metal infrastructure in the paradigm of cloud.

So much for the theory: working with hardware has always been a little hairy, never as predictable or reliable as expected. Software-defined infrastructure, the method underpinning the modern mantra of agility, accelerates the interactions with hardware services by orders of magnitude. Ironic strives to deliver results in the face of unreliability (minimising the need to ask someone in the data centre to whack a machine with a large stick).

HPC Infrastructure for Seismic Analysis

As a leader in the seismic processing industry, ION Geophysical maintains a hyperscale production HPC infrastructure, and operates a phased procurement model that results in several generations of hardware being active within the production environment at any time. Field failures and replacements add further divergence. Providing a consistent software environment across multiple hardware configurations can be a challenge.

ION is migrating on-premise HPC infrastructure into an OpenStack private cloud. The OpenStack infrastructure is deployed and configured using Kayobe, a project that integrates Ironic (for hardware deployment) and Kolla-Ansible (for OpenStack deployment), all within an Ansible framework. Ansible provides a consistent interface to everything, from the physical layer to the application workloads themselves.

This journey began with some older-generation HPE SL230 compute nodes and a transfer of control to OpenStack management. Each node has two HDDs. To meet the workload requirements these are provisioned as two RAID volumes - one mirrored (for the OS) and one striped (for scratch space for the workloads).

Each node also has a hardware RAID controller, and standard practice in Ironic would be to make use of this. However, after analysing the hardware it was found that:

  • The hardware RAID controller needed to be enabled via the BIOS, but the BIOS administration tool failed on many nodes because the 'personality board' had failed, preventing the tool from retrieving the server model number.
  • The RAID controller required a proprietary kernel driver which was not available for recent CentOS releases. The driver was not just required for administering the controller, but for mounting the RAID volumes.

Taking these and other factors into account, it was decided that the hardware RAID controller was unusable. Thankfully, Ironic developed a software-based alternative.

Provisioning to Software RAID

Linux servers are often deployed with their root filesystem on a mirrored RAID-1 volume. This requirement exemplifies the inherent tensions within the Ironic project. The abstractions of virtualisation demand that the guest OS is treated like a black box, but the software RAID implementation is Linux-specific. However, not supporting Linux software RAID would be a limitation for the primary use case. Without losing Ironic's generalised capability, the guest OS “black box” becomes a white box in exceptional cases such as this. Recent work led by CERN has contributed software RAID support to the Ironic Train release.

The CERN team have documented the software RAID support on their tech blog.

In its initial implementation, the software RAID capability is constrained. A bare metal node is assigned a persistent software RAID configuration, applied whenever a node is cleaned and used for all instance deployments. Prior work involving the StackHPC team to develop instance-driven RAID configurations is not yet available for software RAID. However, the current driver implementation provides exactly the right amount of functionality for Kayobe's cloud infrastructure deployment.

The Method

RAID configuration in Ironic is described in greater detail in the Ironic Admin Guide. A higher-level overview is presented here.

Software RAID with UEFI boot is not supported until the Ussuri release, where it can be used in conjunction with a rootfs UUID hint stored as image metadata in a service such as Glance. For Bifrost users this means that legacy BIOS boot mode is the only choice, ruling out secure boot and NVMe devices for now.

In this case the task was to provision a large number of compute nodes with OpenStack Train, each with two physical spinning disks and configured for legacy BIOS boot mode. These were provisioned according to the OpenStack documentation with some background provided by the CERN blog article. Two RAID devices were specified in the RAID configuration set on each node; the first for the operating system, and the second for use by Nova as scratch space for VMs.

{
  "logical_disks": [
    {
      "raid_level": "1",
      "size_gb"   : 100,
      "controller": "software"
    },
    {
      "raid_level": "0",
      "size_gb"   : "800",
      "controller": "software"
    }
  ]
}

Note that although you can use all remaining space when creating a logical disk by setting size_gb to MAX, you may wish to leave a little spare to ensure that a failed disk can be rebuilt if it is replaced by a model with marginally different capacity.

The RAID configuration was then applied with the following cleaning steps as detailed in the OpenStack documentation:

[{
  "interface": "raid",
  "step": "delete_configuration"
 },
 {
  "interface": "deploy",
  "step": "erase_devices_metadata"
 },
 {
  "interface": "raid",
  "step": "create_configuration"
 }]

A RAID-1 device was selected for the OS so that the hypervisor would remain functional in the event of a single disk failure. RAID-0 was used for the scratch space to take advantage of the performance benefit and additional storage space offered by this configuration. It should be noted that this configuration is specific to the intended use case, and may not be optimal for all deployments.
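
For completeness, here is a rough sketch of how a target RAID configuration and cleaning steps like the ones above can be applied with the bare metal CLI. The file names are illustrative, and the exact workflow in a Kayobe/Bifrost deployment may differ; the Ironic admin guide remains the authoritative reference:

# Register the desired RAID layout (the JSON shown above) on the node
openstack baremetal node set <node> --target-raid-config raid-config.json

# Manual cleaning runs against nodes in the "manageable" state
openstack baremetal node manage <node>
openstack baremetal node clean <node> --clean-steps clean-steps.json
openstack baremetal node provide <node>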

As noted in the CERN blog article, the mdadm package was installed into the Ironic Python Agent (IPA) ramdisk for the purpose of configuring the RAID array during cleaning. mdadm was also installed into the deploy image to support the installation of the grub2 bootloader onto the physical disks for the purposes of loading the operating system from either disk should one fail. Finally, mdadm was added to the deploy image ramdisk, so that when the node booted from disk, it could pivot into the root filesystem. Although we would generally use Disk Image Builder, a simple trick for the last step is to use virt-customize:

virt-customize -a deployment_image.qcow2 --run-command 'dracut --regenerate-all -fv --mdadmconf --fstab --add=mdraid --add-driver="raid1 raid0"'

Open Source, Open Development

As an open source project, Ironic depends on a thriving user base contributing back to the project. Our experiences covered new ground: hardware not used before by the software RAID driver. Inevitably, new problems are found.

The first observation was that configuration of the RAID devices during cleaning would fail on about 25% of the nodes from a sample of 56. The nodes which failed logged the following message:

mdadm: super1.x cannot open /dev/sdXY: Device or resource busy

where X was either a or b and Y either 1 or 2, denoting the physical disk and partition number respectively. These nodes had previously been deployed with software RAID, either by Ironic or by other means.

Inspection of the kernel logs showed that in all cases, the device marked as busy had been ejected from the array by the kernel:

md: kicking non-fresh sdXY from array!

The device which had been ejected, which may or may not have been synchronised, appeared in /proc/mdstat as part of a RAID-1 array. The other drive, having been erased, was missing from the output. It was concluded that the ejected device had bypassed the cleaning steps designed to remove all previous configuration, and had later resurrected itself, thereby preventing the formation of the array during the create_configuration cleaning step.

For cleaning to succeed, a manual workaround of stopping this RAID-1 device and zeroing signatures in the superblocks was applied:

mdadm --zero-superblock /dev/sdXY
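
Expanded slightly, the workaround amounts to stopping the stale array and then clearing each member partition's superblock. The md device name below is illustrative and varies from node to node (e.g. /dev/md126 or /dev/md127):

mdadm --stop /dev/md127
mdadm --zero-superblock /dev/sdXY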

Removal of all pre-existing state greatly increased the reliability of software RAID device creation by Ironic. The remaining question was why some servers exhibited this issue and others did not. Further inspection showed that although many of the disks were old, there were no reported SMART failures, the disks passed self tests and although generally close, had not exceeded their mean time before failure (MTBF). No signs of failure were reported by the kernel in addition to the removal of a device from the array. Actively seeking errors, for example by running tools such as badblocks to exercise the entire disk media, showed that only a very small number of disks had issues. Benchmarking, burn-in and anomaly detection may have identified those devices sooner.

Further research may help us identify whether the disks that exhibit this behaviour are at fault in any other way. An additional line of investigation could be to increase thresholds such as retries and timeouts for the drives in the kernel. For now the details are noted in a bug report.

The second issue observed occurred when the nodes booted from the RAID-1 device. These nodes, running IPA and deploy images based on Centos 7.7.1908 with kernel version 3.10.0-1062, would show degraded RAID-1 arrays, with the same message seen during failed cleaning cycles:

md: kicking non-fresh sdXY from array!

A workaround for this issue was developed by running a Kayobe custom playbook against the nodes to add sdXY back into the array. In all cases the ejected device was observed to resync with the RAID device. The state of the RAID arrays is monitored using OpenStack Monasca, ingesting data from a recent release candidate of Prometheus Node Exporter containing some enhancements around MD/RAID monitoring. Software RAID status can be visualised using a simple dashboard:

Monasca MD/RAID Grafana dashboard using data scraped from Prometheus node exporter.

The plot in the top left shows the percentage of blocks synchronised on each RAID device. A single RAID-1 array can be seen recovering after a device was forcibly failed and added back to simulate the failure and replacement of a disk. Unfortunately it is not yet possible to differentiate between the RAID-0 and RAID-1 devices on each node since Ironic does not support the name field for software RAID. The names for the RAID-0 and RAID-1 arrays therefore alternate randomly between md126 and md127. Top right: The simulated failed device is visible within seconds. This is a good metric to generate an alert from. Bottom left: The device is marked as recovering whilst the array rebuilds. Bottom right: No manual re-sync was initiated. The device is seen as recovering by MD/RAID and does not show up in this figure.

The root cause of these two issues is not yet identified, but they are likely to be connected, and relate to an interaction between these disks and the kernel MD/RAID code.

Open Source, Open Community

Software that interacts with hardware soon builds up an extensive "case law" of exceptions and workarounds. Open projects like Ironic survive and indeed thrive when users become contributors. Equivalent projects that do not draw on community contribution have ultimately fallen short.

The original contribution made by the team at CERN (and others in the OpenStack community) enabled StackHPC and ION Geophysical to deploy infrastructure for seismic processing in an optimal way. Whilst in this case we would have liked to have gone further with our own contributions, we hope that by sharing our experience we can inspire other users to get involved with the project.

Get in touch

If you would like to get in touch we would love to hear from you. Reach out to us via Twitter or directly via our contact page.

by Stig Telfer at June 18, 2020 12:00 PM

OpenStack Superuser

Where are they now? Superuser Awards: City Network

Local European financial services companies rely on infrastructure that adheres to local regulatory requirements, often partnering with OpenStack cloud provider, City Network. A Gold Member of OSF, City Network provides public and private cloud environments, receiving the Superuser Award specifically for their research and development, professional services, education and engineering teams.

Keep reading to see how they’ve evolved since winning the Superuser Award at the Berlin Summit in 2018.

What has changed in your OpenStack environment since you won the Superuser Awards?

A lot! First, we are adamant about upgrading, so currently most locations run Train, but a few are still on older versions. Ideally, before summer, all of our locations and clouds will run Train. That in itself of course allows for more functionality and yet again improved stability.

In addition, we have looked to start engaging other projects we have not run before. So before going to Train, we activated Magnum as well as Barbican. Now with Train, we are also activating Designate and Manila. Manila is something many customers have wanted for a long time.

We also had a soft launch of our new cloud management system, written in React. It adds a bit of functionality that our public cloud requires.

What is the current size of City Network’s OpenStack environment?

We currently run well over 10,000 VMs in eight different locations from New York to Tokyo. Our focus is the European enterprise where many times regulatory challenges put a little extra work around how each workload is handled.

What version of OpenStack is City Network running?

Most locations are running Train, but we do have a few installations on Rocky as well. Those on Rocky will go to Train before summer.

What open source technologies does your team integrate with OpenStack?

As a whole, there are about 20 open source projects involved in making it all come together. OpenStack is of course the foundation for creating a stack that can be fully automated. Kubernetes is, no doubt, something we also make sure our customers can run to orchestrate their containers, and Magnum is a part of that container strategy. As with most container workloads, VMs are at the heart of those containers and we run them similarly. We are also looking into Kata Containers and are curious about the other OSF projects.

What workloads are you running on OpenStack?

With 1,200 customers in our public cloud, we would say just about any and all types of workloads – from more traditional LAMP installations to modern container workloads with the latest CI/CD implementations pushing code at a fierce rate. As a company we focus on enterprise customers where anything from standard applications to the customer meeting is built out in our public cloud. In our Compliant Cloud, we run workloads mostly related to higher-level regulatory challenges, from banks to security companies. In Europe many of the leading digital ID companies run their workloads with City Network. So overall there is a very broad set of workloads.

How big is your OpenStack team?

We are about 25 people working with and around OpenStack.

How is your team currently contributing back to the OpenStack project? Is your team contributing to any other projects supported by the OpenStack Foundation (Airship, Kata Containers, StarlingX, Zuul)?

Many years ago we started by contributing in ways other than just technical – from hosting OpenStack Days to leading the public cloud group to being on the board and helping out in various committees. Another way was to become a Gold Member and thus support the OSF financially as well. Part of this decision was that we felt we were not the programmers who could always jump in and add code. However, this is changing, and we have for some time contributed smaller additions like reviews and some code when we find aspects that need correction.

At this point we have a more aggressive vision of how we want to contribute technically and all around. We truly believe that we will become better operators if we know the code, as well as the people around the code. The more we engage the better we will become. So we have started to assign people in various parts of our engineering team that will start to contribute code. We just added 20% to Designate, and there are a few people that will work on OpenStack-Ansible for instance. From there we expect to get deeper into other projects as well.

What kind of challenges has your team overcome using OpenStack?

We simply would not be the company we are. By allowing our customers to fully automate the stack with the many projects of OpenStack we offer, we are a huge part of their digital transformation. We are on a journey made possible with OpenStack and of course a number of other open source projects. We do not think anybody can build what OpenStack offers unless you are a hyper scaler. For us, open source and the four opens of the OpenStack Foundation have formed us and how we like to see open infrastructure evolve.

That said, it has not always been easy. OpenStack was difficult to operate a few years back, and is still pretty complex. However, it has made huge strides towards being better in all ways, including operational aspects. Today, a majority of our large-scale upgrades go by without incidents and no downtime. Five years ago that was not always the case. It also allows for a ton more functionality, truly giving our customers all they need. Combining it with Kubernetes, there are next to no workloads we cannot take on today in a dynamic way, allowing for that platform of innovation our customers require, whether a bank or a gaming company.

Another challenge is knowledge. We continue to educate internally but also love to see how the OpenStack Foundation is helping spread the word and trying to engage in educating more people. This will continue to be something we would love to see more of.

OpenStack continues to evolve at a rapid rate and just about all serious challenges are today dealt with and overcome. OpenStack has proven to work well with the most critical workloads and with significant volume. We are delighted to not only have OpenStack as our foundation but to also have joined one of the largest and fun open source communities in the world to work with.

 

Stay tuned for more updates from previous Superuser Award winners!

 

Cover Image: City Network

The post Where are they now? Superuser Awards: City Network appeared first on Superuser.

by Ashlee Ferguson at June 18, 2020 11:00 AM

Ghanshyam Mann

OpenStack Ussuri is Python3-Only: Upgrade Impact

    A brief history of Python2 -> Python3:

Python 2.0 was officially released in 2000; OpenStack was founded in 2010 and adopted Python 2 as its base language. The Python Software Foundation realized that big improvements needed to be made to Python to keep users from having to perform tasks in backward or difficult ways.

We released Python 2.0 in 2000. We realized a few years later that we needed
to make big changes to improve Python. So in 2006, we started Python 3.0.
Many people did not upgrade, and we did not want to hurt them. So, for many
years, we have kept improving and publishing both Python 2 and Python 3.

In 2015, the Python Software Foundation made a very clear announcement, across multiple platforms, that users should migrate to Python 3 because Python 2 would be discontinued. The initial sunset date was later extended to 2020.

We have decided that January 1, 2020, was the day that we sunset Python 2.
That means that we will not improve it anymore after that day, even if
someone finds a security problem in it. You should upgrade to Python 3
as soon as you can.

    OpenStack Starting Support of Python 3:

With the announcement of the sunset of Python 2, it became very clear that OpenStack also could not support Python 2 for much longer. Because it would have been impossible to fix any security bugs on Python 2, it was better for OpenStack to drop its support completely and instead concentrate on Python 3.

OpenStack’s support of Python 3 started in 2013, and many developers contributed towards the enormous task of transitioning the software. After much hard work from the community, running OpenStack under Python 3 by default became a community goal in the Stein cycle (September 2018). Community goals are the way the community achieves common changes across OpenStack. “OpenStack runs under Python 3 as default” was a great effort and included a lot of hard work by many developers. Doug Hellmann was one of the key developers, coordinating with other developers and projects to finish this goal.

    OpenStack Train (Oct 2019): Python3 by default:

In the OpenStack Train release (October 2019), OpenStack was tested on Python 3 by default. This meant that you could upgrade your cloud to a Python 3 environment with full confidence. OpenStack Train was released with well-tested Python 3 support, but it also still supported Python 2.7. At the same time, we kept testing the latest Python 3 version, and the OpenStack Technical Committee (TC) started defining the tested runtimes for each cycle. OpenStack will target Python 3.8 in the Victoria development cycle.

    OpenStack Ussuri (May 2020): Python3-Only: Dropped the support of Python2:

With the Ussuri cycle, OpenStack dropped all support of Python 2. All the projects have completed updating their CI jobs to work under Python 3. This achievement allows the software to remove all Python 2 testing as well as the configuration that goes along with it.

Very early in the Ussuri cycle, we started planning for the drop of Python 2.7 support. Dropping Python 2.7 was not an easy task when many projects depend on each other and also share integrated CI/CD. For example, if Nova drops Python 2.7 support and becomes Python 3 only, it can break Cinder and many other projects’ CI/CD. We prepared a schedule and divided the work into three phases, dropping support from services first, then libraries and testing tools.

    Phase 1 (start of Ussuri -> Ussuri-1 milestone): OpenStack services start dropping Python 2.7 support.

    Phase 2 (milestone-1 -> milestone-2): Common libraries and testing tooling drop Python 2.7 support.

    Phase 3 (at milestone-2): Final audit.

Even so, a few things got broken in the initial work. So, we made DevStack Python 3 by default, which really helped move things forward. In phase 2, when I started making Tempest and other testing tools python3-only, a lot of stable branch testing for Python 2.7 started breaking. That was expected, because Tempest and many other testing tools are branchless, meaning the master version is used for testing both the current and older releases of OpenStack, so all Python 2.7 testing jobs were using the Tempest master version. Finally, capping the Tempest version on older branches or installing Tempest in a Python 3 venv made all stable branch and master testing green.

Just a couple of weeks before the Ussuri release, we completed this work and made OpenStack python3-only, with an updated wiki page. Two projects, Swift and Storlets, are going to keep supporting Python 2.7 for another one or two cycles.

    What “OpenStack is Python3-Only” means for Users/Upgrades:

If your existing cloud is on a Python 3 environment, then you do not need to worry at all. If it is on Python 2.7 and you are upgrading to Ussuri, then you need to check that your environment has Python 3.6 or a higher version available. From the Ussuri release onwards, OpenStack works on Python 3.6 or higher only. For example, if you want to install the Nova Ussuri version, it will give an error if Python 3.6 or higher is not available. This is enforced via metadata (“python-requires = >=3.6”) in the setup configuration file. Below is an example of how the setup config file looks from the Ussuri release onwards:

python-requires = >=3.6

classifier =
    Environment :: OpenStack
    Intended Audience :: Information Technology
    Intended Audience :: System Administrators
    License :: OSI Approved :: Apache Software License
    Operating System :: POSIX :: Linux
    Programming Language :: Python
    Programming Language :: Python :: 3
    Programming Language :: Python :: 3.6
    Programming Language :: Python :: 3.7
    Programming Language :: Python :: 3 :: Only
    Programming Language :: Python :: Implementation :: CPython
If you are using a distribution that does not have Python 3.6 or higher available, then you need to upgrade your distro first. There is no workaround or any compatible way to keep running OpenStack on Python 2.7. We have sunset Python 2.7 support from Ussuri onwards, and the only way forward is to upgrade your Python version as well. A few common questions about the Python upgrade are covered in the FAQ section below.

    FAQ:

Q1: Is Python 2 to Python 3 upgrade being tested in Upstream CI/CD?

Answer: Not directly, but it is tested indirectly. We did not set up grenade testing (upstream upgrade testing) for a py2 setup upgrading to a py3 setup. However, previous OpenStack releases like Stein and Train were tested on both Python versions. This means that the OpenStack code was working and well tested on both Python versions before it became python3-only, which ensures that upgrading py2->py3 for OpenStack has been covered indirectly. If you are upgrading OpenStack from Stein or Train to Ussuri, then there should not be any issues.

Q2: How are the backport changes from Ussuri onwards to old stable branches going to be python2.7 compatible?

Answer: We still run the Python 2.7 jobs up to stable/train testing, so any backport from Ussuri or later (which is tested on Python 3 only) will be backported to Train or older stable branches with testing on Python 2.7 as well. If anything breaks on Python 2.7, it will be fixed before backporting. That way we keep Python 2.7 support for all stable branches before Ussuri.

Q3: Will testing frameworks like Tempest which are branchless (using the master version for older release testing) keep working for Python 2.7 as well?

Answer: No. We have released the last Python 2.7-compatible versions of Tempest and the other branchless deliverables. Branchless means that the tool’s master version is used to test the current or older OpenStack releases. For example, Tempest 23.0.0 can be used as the Python 2.7-supported version, while Tempest 24.0.0 or master is Python 3 only. But there is a way to keep testing older Python 2.7 clouds as well (until you upgrade your cloud and want Tempest master to test it): you can run Tempest on a Python 3 node or in a Python 3 virtual env and keep using the master version for testing a Python 2.7 cloud. Tempest does not need to be installed on the same system as the other OpenStack services, as long as the APIs are accessible from the separate testing node or virtual environment where Tempest is running.
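
A minimal sketch of that approach, assuming a Python 3 testing host with network access to the cloud APIs (the workspace name is illustrative, and tempest.conf configuration is omitted):

    # Create an isolated Python 3 environment and install the latest Tempest
    python3 -m venv ~/tempest-venv
    source ~/tempest-venv/bin/activate
    pip install tempest

    # Initialise a workspace for the cloud under test and run the API tests
    tempest init cloud-under-test
    cd cloud-under-test
    tempest run --regex tempest.api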

For any other questions, feel free to ping on the #openstack-dev IRC channel.

by Ghanshyam Mann at June 18, 2020 01:56 AM

June 17, 2020

OpenStack Superuser

Project Teams Gathering (PTG) Recap

It has been a week since the first virtual Project Teams Gathering (PTG)! While it was not the same experience as our traditional in-person PTGs, the community made the best of the current situation. We’ve been amazed by how many people have joined us online this year to collaborate across different time zones on the OSF projects. We not only had the highest attendance and gender diversity in this PTG, but also had 20 more countries represented than the Denver PTG last year. Thank you to all the community members who have worked together virtually in this unprecedented time to collaborate without boundaries. 

PTG Participation by Day *

If you didn’t attend—or if you did and want a replay—we have collected the project announcements, community updates, and discussions you may have missed.

Airship:

The Airship community has participated in the June Virtual PTG and made progress on secrets, deployment configurations, and AirshipUI. The community saw higher participation than usually seen in person, and good cross-team collaboration with 3 other groups: StarlingX, Ironic, and the Edge Working Group. You can see the Airship project PTG agenda and recordings here.

Kata Containers:

The Kata Containers community had its first PTG event on June 2-3, 2020, including contributors and users from what felt like all of the timezones. The sun did not set on the discussion! 

Peng Tao from Ant Financial and Eric Ernst from Ampere facilitated several sessions covering a range of topics. While it didn’t involve as much hands-on-hacking as in past meetups, the community was able to take the time for more in depth discussions, and demonstrations of ongoing work. Check out the Kata Containers PTG update here.

OpenStack:

The OpenStack community had a great virtual PTG, and several teams have posted summaries on the mailing list. Kendall Nelson from the OpenStack Technical Committee has summarized the Victoria vPTG discussions and posted the TC update, Victoria vPTG Summary of Conversations and Action Items, on the OpenStack blog. If there is a particular action item you are interested in taking, please reply to the mailing list thread.

StarlingX:

The StarlingX community participated in the first virtual PTG, held online on June 1-5. The community spent almost ten hours discussing current technical challenges and new proposals as well as community-related topics. Check out the StarlingX PTG recap and learn about the discussions that happened at the event.

In addition, you might also be interested in checking out the summary of the OSF Edge Computing Group sessions at the event.

*Attendees’ attendance might be counted twice in the same day if they are participating in multiple sessions

The post Project Teams Gathering (PTG) Recap appeared first on Superuser.

by Sunny Cai at June 17, 2020 08:00 PM

OSF Edge Computing Group PTG Overview

The first virtual Project Teams Gathering (PTG) was held on June 1-5 in all time zones around the globe providing the opportunity to anyone to join including contributors of the OSF Edge Computing Group (ECG).

The group had three sessions during the first three days of the week where we spent a total of seven hours to discuss topics relevant to edge computing use cases, technologies, and architectures. To take advantage of the virtual format, we also invited adjacent communities to participate like CNTT, OPNFV and the Kubernetes edge groups.

We started the sessions with an introduction to the work the Edge Computing Group has been doing to define reference models and architectures that satisfy the requirements of most edge computing use cases and prepare for some common error cases. The main discussion point was the level of autonomy an edge site requires, which, among other things, determines what functionality remains available if network connectivity to the central data center is lost. The two identified models are the Centralized Control Plane and the Distributed Control Plane.

The ECG has defined reference architectures to realize the above models with OpenStack services and has started testing activities to verify and validate functionality. The purpose of the sessions at the PTG was to gather feedback about the models, to improve the reference architectures by adding new components, and to discuss options for running all types of workloads at the edge.

We touched on TripleO’s Distributed Compute Node (DCN) architecture, which is an example of the Centralized Control Plane model. Our discussions circled around the challenges of edge deployments, such as latency: “100ms is a killer for distributed systems over WAN”; nodes getting out of sync can be a big issue. We also talked about improvements, like Ceph being available since the OpenStack Train release (compared to only ephemeral storage before that) and an increased number of edge nodes running compute services and workloads.

We spent a longer amount of time discussing the Distributed Control Plane model, which was of interest to the CNTT community as well, so we went into the details of how to implement this option. Some of the meeting participants have already been deploying OpenStack on edge sites, which requires shrinking the footprint to fit the limited hardware resources that are a common constraint of edge use cases. When all the controller services run at the edge, resource usage can be a challenging factor, but it is not an unsolved problem. Another popular option discussed was the federated approach that is supported by components such as the OpenStack Identity Service (Keystone).

As an example of the distributed model, we had a short discussion about StarlingX and some of the design decisions that have shaped the project’s architecture. StarlingX integrates well-known open source components such as OpenStack, Kubernetes, and Ceph into one platform, along with services for software and hardware management developed by the community. During the PTG session, we discussed the Distributed Cloud feature in more detail to understand how StarlingX manages the edge sites, which can have full autonomy in case of network failures while still being managed centrally. Discussion topics included understanding what is synchronized and shared between the nodes to ensure smooth operation in different scenarios, and functionality essential for edge such as zero touch provisioning.

StarlingX runs the majority of the platform services in containers and also makes it possible to have edge sites with only container workloads in the architecture. The mention of containers led the discussion towards better understanding the requirements for container orchestration tools such as Kubernetes in edge infrastructure. We talked a bit about concepts such as Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Container as a Service (CaaS) and how the lines between these have started to disappear recently. The focus was on requirements for Kubernetes from a CaaS viewpoint, while we also took a look at how it impacts the reference architectures. We need to understand storage and networking configurations as well as the handling of data crucial to running containers, such as quotas and certificates.

During the PTG sessions the participants also took the opportunity to talk about storage, an area the ECG has not yet had the chance to look into. We concentrated on object storage this time, as block storage strategies are a bit more straightforward. We agreed that the primary usage of object storage is to provide a service for applications, but it is useful for the platform too, for example sharing images as a backend to the OpenStack Image service (Glance). We had participants in the meeting from the OpenStack Object Storage (Swift) team to identify use cases and requirements for this component to take into account during the design and development process. The main use case we discussed was Content Delivery Networks (CDN), while online backup and gaming can also be considered. On the design side, we started to discuss architectural considerations and the effects of factors such as latency on the components of Swift.

As the PTG is a great opportunity to sync with project teams, we had joint sessions with the OpenStack Network Connectivity as a Service (Neutron) team and the OpenStack Accelerator (Cyborg) team. To also cover cross-community discussions, we had a session with the Airship project as well as KubeEdge from the CNCF community.

One of the ongoing discussions with Neutron is the handling of network segment ranges and making sure they are defined and handled properly in the distributed environment in both architectural models. There is a feature request for Neutron that is already approved and has the potential to get priority during the Victoria release cycle. The first step is to put together a proof of concept deployment to test the planned configurations and changes; the Neutron team and ECG contributors will work on testing as a joint effort. A further relevant project is Neutron Interconnection, which is mainly API definitions at this point, as an effort to provide interconnection between OpenStack deployments and regions over WAN. Further networking-related topics included functionality such as Precision Time Protocol (PTP), which the StarlingX community is already working on, along with Time Sensitive Networking (TSN).

The next cross-project session was the sync with the Cyborg team. This session drew a lot of interest, as the ability to use hardware accelerators is crucial for many edge use cases. During the session we focused on learning about the current activities within the project, such as the implementation and next steps for the new v2 API. We also touched on device driver integration. Cyborg concentrates on the ability to program the acceleration devices made available to applications and will not include low-level device driver integration in the upstream code. The Cyborg team is working with the Neutron, OpenStack Placement and OpenStack Compute (Nova) teams to ensure smooth integration and full functionality in these areas.

During the sync sessions we also focused on relevant topics such as lifecycle management of the platform services. One of the main challenges is handling upgrades throughout the whole edge infrastructure, which can be a big headache in massively distributed systems. In some use cases downtime is almost unacceptable, which means the edge site needs enough resources to keep services running while an upgrade is being performed. When that is not feasible, we need to identify processes that minimize the time the services are unavailable.

In connection with this theme we talked to the Airship community, as the project’s main mission is to address deployment and lifecycle management of software such as OpenStack and Kubernetes, and it can therefore be an option for addressing the aforementioned challenges. Airship is utilizing more and more components from the CNCF landscape, as its design uses containers heavily for flexibility. For in-place upgrades, Airship will use Cluster API and the concept of dual box deployment at edge sites, which would ensure that there is always a replica of each service to provide availability during an upgrade.

Our last session was with the KubeEdge project, which focuses on the usage of Kubernetes for edge computing use cases. It is built on top of Kubernetes with extensions such as application management, resource management and device management for IoT devices. Its main focus areas are IoT, CDN and Multi-access Edge Computing (MEC) scenarios. The project releases every three months and has an increased focus on IoT protocols. Its architectural model follows the Centralized Control Plane, running only worker nodes at the edge. As we had limited time during the PTG, we agreed to follow up on the collaboration after the event and work together on providing better solutions for edge computing use cases.

After the PTG we have a lot of action items to follow up on: evolving the architecture models, and sharing our learnings with projects and adjacent communities so they can use them when working on their own architecture solutions. We will keep looking closer into the object storage use cases and requirements to add them to our reference models, and we are also working on setting up discussions with further Kubernetes groups such as the Kubernetes Edge and IoT group. We will also follow up on relevant implementation work and start testing activities when the software components are ready.

As a project highly relevant for edge, you might also be interested in checking out the StarlingX virtual PTG summary to learn about the community’s current activities and plans for the upcoming releases.

If you are interested in participating in the OSF Edge Computing Group, sign up to our mailing list and check out our weekly calls. You can find more information about the group’s activities on our wiki page. For a more cohesive summary of our work read the new white paper the group just published!

The post OSF Edge Computing Group PTG Overview appeared first on Superuser.

by Ildiko Vancsa at June 17, 2020 01:00 PM

June 16, 2020

OpenStack Superuser

Zuul: A Wazo Platform Case Study

Wazo Platform is an open source, programmable telecommunication platform that lets you pick and choose the components you need to build your infrastructure. Most of Wazo Platform’s 229 git repositories on GitHub use Zuul, which allows cross-repository dependencies in pull requests at the source code level, the Debian packaging level and the container level. The build nodes are VMs from their private OpenStack and containers hosted in an AWS CM instance.

We got talking with Frederic Lepied to learn why Wazo Platform chose Zuul, an open source CI tool, and how they use it with GitHub and OpenStack.

How did your organization get started with Zuul?

We had people coming into the organization with a lot of expertise in OpenStack so it was a natural move for us.

Describe how you’re using it:

Zuul is used with GitHub on most of our 229 git repositories at https://github.com/wazo-platform. The Zuul instance is hosted on AWS and is public at https://zuul.wazo.community/zuul/t/local/status. The nodes are VMs from our private OpenStack and containers hosted in AWS CM instance.

What is your current scale?

The scale is small in terms of nodes (around 10) and big in terms of repositories (229).

What benefits has your organization seen from using Zuul?

Zuul allows us to have cross repository dependencies in pull requests at the source code level, the Debian packaging level, and at the container level.

It also allows us to reuse job definitions among all the repositories of the same type, which is very helpful.

What have the challenges been (and how have you solved them)?

Operations are difficult and we got a lot of help from the Software Factory team.

What are your future plans with Zuul?

We plan to continue to use Zuul for more tasks.

Are there specific features that drew you to Zuul?

Definitely the cross-repository dependencies.

 

The post Zuul: A Wazo Platform Case Study appeared first on Superuser.

by Helena Spease at June 16, 2020 01:00 PM

June 15, 2020

VEXXHOST Inc.

4 Benefits of Going Serverless

Serverless architecture is forecast to gain massive traction in the years to come; according to researchers, 20% of global enterprises will operate on serverless computing by 2020. So what is serverless, that it garners such interest? Serverless is a cloud service utilized without having to worry about an operating system and underlying infrastructure. The definition of serverless architecture is itself a benefit.

Serverless, also known as Function-as-a-Service (FaaS), does use servers, both physical and virtual. But since developers do not interact with them, they have the illusion of operating without servers. Serverless therefore does not mean the elimination of servers from distributed applications.

Why Serverless

In terms of cloud service adoption, serverless leads among all other cloud services. Serverless architecture is suitable for those building lightweight applications who are concerned about time to market. Let’s have a look at some of the known advantages of going serverless.

Operation Cost

The number one reported benefit of going serverless is the reduction in operating cost. Serverless is the alternative to buying racks and racks of servers to operate a full-fledged cloud. When opting for a complete cloud environment, you tend to pay for more than the resources you actually use; with serverless, you pay per unit of resources consumed.

Scalability

A traditional cloud deployment is sized for maximum usage, and the scalability of such a setup is not entirely customizable. With serverless, resources are added according to current usage and scaled to exactly what your organization needs, so no idle cloud resources are running in the background.

Business Value

Aren’t we always looking to increase productivity? Whether in terms of the tools you use or the people who run those tools for you, efficiency is vital. With serverless architecture, your developers can do what they do best: code! The infrastructure layer is taken care of by the provider while you focus on your business’ core operations. The technology is best utilized for its agility, allowing you to test and make changes in applications with a quicker turnaround.

Multi-Platform Integration

It is noteworthy that serverless technology is increasingly becoming a part of hybrid platforms. Traditional cloud deployments like a public or private cloud do provide a lot more features when building applications. But serverless is for the in-between. Decision-makers are transitioning towards a multi-platform world, making use of PaaS, containers, and serverless to maximize productivity.

OpenStack and Serverless Architecture

If you did not already know, OpenStack too supports serverless architecture. The OpenStack project Qinling provides a platform for serverless functionality. Additionally, Qinling supports container orchestration platforms and function package storage backends.

Whether serverless architecture is for you depends on your use case. There are multiple guides out there to help you get started on OpenStack serverless architecture. But with over nine years of OpenStack experience, VEXXHOST is here to walk you through everything you need. Our OpenStack Consulting service can help you determine whether you should reap the benefits of serverless or go the traditional way.

Would you like to know about how you can get Virtual Machines, Bare Metal and Kubernetes in one environment? Download our white paper and get reading!  

Virtual Machines, Bare Metal and Kubernetes: All in One Environment!

The post 4 Benefits of Going Serverless appeared first on VEXXHOST.

by Samridhi Sharma at June 15, 2020 05:28 PM

Mirantis

The ultimate guide to Kubernetes

Here at Mirantis we're committed to making things easy for you to get your work done, so we've decided to put together this guide to Kubernetes.

by Nick Chase at June 15, 2020 04:47 PM

Galera Cluster by Codership

Improved security audit features in Galera Cluster for MySQL 5.7.30, and an updated 5.6.48

Codership is pleased to announce a new Generally Available (GA) release of the multi-master Galera Cluster for MySQL 5.6 and 5.7, consisting of MySQL-wsrep 5.6.48 (release notes, download) and MySQL-wsrep 5.7.30 (release notes, download) with a new Galera Replication library 3.30 (release notes, download), implementing wsrep API version 25. This release incorporates all changes to MySQL 5.6.48 and 5.7.30 respectively, making it a MySQL High Availability solution.

A highlight of this release is that with MySQL 5.7.30, you will now have access to using the Percona audit log plugin, which will help with monitoring and logging connection and query activity that has been performed on specific servers. This implementation is provided as an alternative to the MySQL Enterprise Audit Log Plugin.

The Galera Replication library 3.30 has an enhancement to ensure that upon GCache recovery, all available space is reclaimed in the ring buffer. Error handling during frequent cluster configuration changes is also improved.

MySQL-wsrep 5.6.48 is an updated rebase on the 5.6.48 release, but also includes improvements around crash recovery: when binary logs are enabled, recovery is more consistent. SSL initialization has seen improvements, and error handling of cluster-wide conflicts has been improved when the cluster itself is acting as an asynchronous secondary to a MySQL primary.

MySQL-wsrep 5.7.30 is an updated rebase to the 5.7.30 release, and in addition to what is present in 5.6.48, there is also the audit log plugin as mentioned above. One important note is that for your Galera Cluster, ensure that InnoDB tablespaces are kept within the data directory (if kept outside, they may not be copied over during SST).

Across the board, there is now also support and packages for CentOS 8 and RHEL 8.

You can get the latest release of Galera Cluster from http://www.galeracluster.com. There are package repositories for Debian, Ubuntu, CentOS, RHEL, OpenSUSE and SLES. The latest versions are also available via the FreeBSD Ports Collection.

 

by Sakari Keskitalo at June 15, 2020 10:45 AM

June 12, 2020

OpenStack @ NetApp

My Name is Ussuri!

The OpenStack Ussuri release is now available! NetApp is proud to have contributed to the development of Cinder and Manila. The Ussuri release includes the following new features and feature enhancements supported by NetApp’s ONTAP and SolidFire storage platforms for your OpenStack deployment: ability to create shares from snapshots in aggregates and controllers other than the source share using Manila; Manila now efficiently creates shares from snapshots […]

The post My Name is Ussuri! appeared first on thePub.

by Carlos Da Silva at June 12, 2020 07:03 PM

Fleio Blog

Fleio 2020.06: Beta release, more angular news, scheduled backups with celery beat and more

Fleio 2020.06.1 was released. The latest version was published on 2020-06-17 and it’s a stable release. Beta release: starting with the 2020.06 release we are changing how we release Fleio. The first version (2020.06.0) will be a beta and will not be pushed to the public repository. Packages will be available only […]

by Marian Chelmus at June 12, 2020 06:13 AM

June 11, 2020

OpenStack Superuser

Women of Open Infrastructure: Meet Victoria Martinez de la Cruz from the OpenStack Manila Project

This post is part of the Women of Open Infrastructure series spotlighting the women in various roles in the community who have helped make Open Infrastructure successful. With each post, we learn more about each woman’s involvement in the community and how they see the future of Open Infrastructure taking shape. If you are interested in being featured or would like to nominate someone to tell their story, please email editor@openstack.org.

This time we’re talking to Victoria Martinez de la Cruz from the OpenStack Manila project. She tells Superuser about how the Open Infrastructure community has helped her to grow and excel at her role in the community.

What’s your role (roles) in the Open Infrastructure community?

Currently, I’m mainly focused on contributing to feature development, code enhancements and bug fixing for the Shared Filesystems as a Service project for OpenStack (Manila). I also enjoy mentoring, so I’ve been helping with the mentoring efforts for the Outreachy program.

What obstacles do you think women face when getting involved in the Open Infrastructure community?   

There are two factors that I think lead us to this situation: on one hand, I believe that people in tech tend to have the idea that some tasks are harder than others. For example, some people think that web development is easier than driver development, something I believe is a mistake. Historically, everything related to infrastructure administration has been considered a hard branch of computing. On the other hand, we know that there is a tendency for women to aspire to perfection and try to know everything before applying for a position or taking the lead on a task. And the harder the task is considered to be, the fewer women there are. Both things lead to the fact that not many women feel they can contribute to the projects we have in the OpenInfra community. I do believe there are women with the expertise and potential that we need; it’s just a matter of reaching them and giving them opportunities. Some of this effort is being made by many companies in the community and also by other inclusion initiatives such as Outreachy. We could do more though, for sure.

Why do you think it’s important for women to get involved with open source?

It gives them exposure, something that I think is critical in this market: exposure to real-world use cases to learn from, exposure to other people from different backgrounds, cultures and levels of expertise to work with, and exposure for them to become well known in the field, to expand their networks, and find which is the next step in their careers that challenges them and takes them to the next level.

Efforts have been made to get women involved in open source, what are some initiatives that have worked and why?

Yes, there have been many efforts, which makes me very proud of my community. One of the initiatives that has worked out very well is Outreachy, I think because it helped underrepresented individuals with no experience in open source take their first steps contributing. Guided by mentors, the interns learned how open source processes work and how they could contribute. They also gained experience with the tools and technologies we use. Several good contributions have been made by our interns, since they bring fresh perspectives and new ideas. Some of them continue to be involved with the community because they could connect with hiring companies and take a full-time role. This is something I’d love to see more of. There are not many remote entry-level positions around for our interns to apply to, and I think that is a big loss, because we invest in people with high potential and then we let them go.

Open source moves very quickly. How do you stay on top of things and what resources have been important for you during this process?

It’s definitely one of the biggest challenges we face. I believe that having an open and sharing community is key. It’s easier to keep up with things by sharing curated information between the collaborators than going ahead and reading all the material that is around. Material is good, but we are constantly being overloaded with details and stuff, which I think is good when you have the time or you actually want to focus on that specific topic. Otherwise, a summary by your peers is all that you need. That has been my strategy this last couple of years: I get updates from the community (either over IRC, reading the mailing list or watching presentations at conferences) and if I need to get a better understanding of something, I just go ahead and dig into the sea of articles and blog posts.

What advice would you give to a woman considering a career in open source? What do you wish you had known?

DO IT! No hesitation. I’m very grateful to everything that open source, and especially the Open Infrastructure community, has given to me: my colleagues, the technical experience I gained, my personal and professional growth. All experiences are different, for sure, but you need to go ahead and try. And the OpenInfra community? 10/10 would recommend to a friend.

I wish I had known how to organize my time better. I struggled a bit with that in the first couple of years. The amount of things happening at the same time was too much for me, and the context change used to kill my productivity. Plus, working remotely is not so straightforward as some people might think. Nowadays, taking the advice from several people from the community, I learned how to organize myself better and end my days feeling good about what I have accomplished.

The post Women of Open Infrastructure: Meet Victoria Martinez de la Cruz from the OpenStack Manila Project appeared first on Superuser.

by Superuser at June 11, 2020 02:00 PM

June 10, 2020

Ben Nemec

Oslo Virtual PTG for Victoria

The Oslo team held its second virtual PTG this week. We had a number of good discussions and even ran slightly over the 2 hours we scheduled, so I think it was a successful event. The first hour was mostly topics relating to Oslo itself, while the second hour was set aside for some cross-project discussions with the Nova team. Read on for details of both hours.

oslo.metrics

Thierry gave us a quick update on the status of oslo.metrics. Currently we have all the necessary infrastructure in place to add oslo.metrics, so we are just waiting on a code drop. Once we have code, we'll get it imported and review the API with the Oslo team to ensure that it fits with our general design standards.

However, just creating the library is not the end of the story. Once the library is released, we'll need to add the integration code for services like oslo.messaging. Once that is done it should be possible for deployers to start benefiting from the functionality provided by oslo.metrics.

Update! The above represents what was discussed at the PTG, but since then it has come to our attention that the oslo.metrics code was proposed. So in fact we are ready to start making progress. If this project is of interest to you please review the changes and propose patches.

oslo.cache

There were a couple of things we discussed about oslo.cache because we've been doing a fair amount of work in that library to improve security and migrate to better supported client libraries. The first topic was functional testing for the drivers we have. Some functional testing has already been added, but there are a number of drivers still without functional coverage. It would be nice to add similar tests for those, so if anyone is interested in working on that please let us know.

We've also been looking to change memcached client libraries for a couple of cycles now. Unfortunately this is not trivial, so the current plan is to add the new library as a completely different driver and then deprecate the old driver. That way we just need a migration path, not complete compatibility between the old and new libraries. There was a question about what to do with the dead host behavior from the existing memcache_pool driver, but the outcome of the discussion for now was to continue with the non-pooled driver and leave the pool version for later.

Retrospective

The Good
First, we added a new core, Sean McGinnis. Big thanks to Sean for all his work on Oslo over the past couple of cycles!

The oslo-coresec team was updated. Prior to this cycle it had gotten quite out of date, to the point where I was the only active Oslo contributor still on it. That's not ideal since this is the group which handles private security bugs, so keeping the list current is important. We now have a solid group of Oslo cores to look at any such bugs that may come in now.

We had also held our first virtual PTG after the Shanghai event, and that was counted in the success column. With so many Oslo contributors being part-time on the project it's likely we'll want to continue these.

A couple of our cores were able to meet in person at FOSDEM. Remember meeting in person? Me neither. ;-)
The Bad
We missed completing the project contributor documentation community goal. This goal was more difficult for Oslo than for many other projects because we have so many repositories under our governance. By the end of the cycle we did come up with a plan to complete the goal and have made good progress on it.
Proposed Changes
We discussed assigning a driver for community goals in the future. One of the problems with the contributor docs goal was that everyone assumed someone else would take care of it. Having a person specifically assigned should help with that.

In addition, at least one of the community goals proposed for the Victoria cycle would not require explicit completion by Oslo. It involves users of oslo.rootwrap migrating to oslo.privsep, which would only require the Oslo team to assist other projects, and hopefully work to improve the oslo.privsep docs based on people's migration experiences. Otherwise, Oslo isn't a consumer of either library so there is no migration needed.

Another proposed community goal is to make CI jobs Zuul v3 native, and I believe that Oslo is already done with that for the most part. I know we've migrated a few of our one-off jobs over the past couple of years since Zuul v3 came out, so we should be in good shape there too.

Policy Cross-Project with Nova

After the Nova Ussuri release, some deployers reported problems with the new policy rules, despite the use of the oslo.policy deprecation mechanism that is designed to prevent breakage on upgrade. It turned out that the problem was that they were using the sample policy generator tool to create JSON policy files. The problem with this is that JSON doesn't support comments, so when you create a sample file in that format it overrides all of the policy-in-code defaults. When that happens, the deprecation mechanism breaks because we don't mess with custom policies specified by the deployer. This is one of the reasons we don't recommend populating the policy file with default policies.

However, even though we've recommended YAML for policy files since policy-in-code happened, we never changed the default filename in oslo.policy from policy.json. This naturally can lead to deployers using JSON formatted files, even though all of the other oslo.policy tools now default to YAML. One of the main reasons we've never changed the default is that it is tricky to do without opening potential security holes. Policy has a huge impact on the security of a cloud and there isn't a great option for migrating the default filename.

The solution we came up with is documented in an Oslo spec. You can read up on the full details there, but the TL;DR is that we are going to coordinate with all of the consumers of oslo.policy to add an upgrade check that warns deployers if a JSON-formatted policy file is found. In addition to release notes, this should give deployers ample warning about the coming change. oslo.policy itself will also log a warning if it detects that JSON is in use after JSON support has been deprecated. As part of this deprecation work, oslo.policy will need to provide a tool to migrate existing JSON policies to YAML, preferably with the ability to detect default policy rules and comment them out in the YAML version.
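To make the idea concrete, here is a minimal sketch (not the actual oslo.policy tool, which was still being designed at the time of writing) of the kind of JSON-to-YAML conversion the spec describes, assuming PyYAML is available; a real tool would additionally detect rules that match the in-code defaults and comment them out:

import json

import yaml  # PyYAML

# Load the deployer's JSON policy overrides and re-emit them as YAML.
with open("policy.json") as src:
    rules = json.load(src)

with open("policy.yaml", "w") as dst:
    yaml.safe_dump(rules, dst, default_flow_style=False)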

Deprecating and eventually removing JSON policy file support should allow us to deprecate policies in the future without worrying about the situation we ran into this cycle. YAML sample files won't override any rules by default so we'll be able to sanely detect when default rules are in use. There was some talk of proposing this as a community goal given the broad cross-project nature of the work, but we'll probably wait and see how the initial effort goes.

Healthcheck Cross-Project with Nova

Another longstanding topic that has recently come up is a standard healthcheck endpoint for OpenStack services. In the process of enabling the existing healthcheck middleware there was some question of how the healthchecks should work. Currently it's a very simple check: if the API process is running, it returns success. There is also an option to suppress the healthcheck based on the existence of a file, which allows a deployer to signal a load balancer that the API will be going down for maintenance; a minimal sketch of that wiring follows below.
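As an illustration only (assuming oslo.middleware's Healthcheck accepts a paste-style options dict and that the option names below match the release in use), the current behavior amounts to something like:

from oslo_middleware import healthcheck


def api_app(environ, start_response):
    # Stand-in for the real service's WSGI application.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"ok"]


# /healthcheck returns success while the API process is up, and reports
# unhealthy once the operator creates the file below to drain traffic.
app = healthcheck.Healthcheck(api_app, {
    "backends": "disable_by_file",
    "disable_by_file_path": "/var/run/api-maintenance",
})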

However, there is obviously a lot more that goes into a given service's health. We've been discussing how to make the healthcheck more comprehensive since at least the Dublin PTG, but so far no one has been able to commit the time to make any of these plans happen. At the Denver PTG about a year ago we agreed that the first step was to enable the healthcheck middleware by default in all services. Some progress has been made on that front, but when the change was proposed to Nova, they asked a number of questions related to the future improvements.

We revisited some of those questions at this PTG and came up with a plan to move forward that everyone seemed happy with. One concern was that we don't want to trigger resource-intensive healthchecks on unauthenticated calls to an API. In the original discussions the plan was to have healthchecks running in the background, and then the API call would just return the latest results of the async checks. A small modification to that was made in this discussion. Instead of having explicit async processes to gather this data, it will be collected on regular authenticated API calls. In this way, regularly used functionality will be healthchecked more frequently, whereas less used areas of the service will not. In addition, only authenticated users will be able to trigger potentially resource intensive healthchecks.

Each project will be responsible for implementing these checks. Since each project has a different architecture only they can say what constitutes "healthy" for their service. It's possible we could provide some common code for things like messaging and database that are used in many services, but it's likely that many projects will also need some custom checks.

I think that covers the major outcomes of this discussion, but we have no notes from this session so if I forgot something let me know. ;-)

oslo.limit

There was quite a bit of discussion during the Keystone PTG sessions about oslo.limit and unified limits in general. There are a number of pieces of work underway for this already. Hierarchical quota support is proposed to oslo.limit and a POC for Nova to consume it is also available. The Glance team has expressed interest in using oslo.limit to add quotas to that service, and their team has already started to contribute patches to oslo.limit (such as supporting configuration by service name and region). This is terrific news! That work also prompted some discussion of how to handle the separate configuration needed for keystoneauth and oslo.limit itself.

There was quite a bit of other discussion, some of which doesn't involve oslo.limit, some of which does. We need to define a way to export limits from one project and import them into Keystone. This will probably be done in the [project]-manage commands and won't involve Oslo.

Some refinement of the usage callback may be in order too. I don't know that we came to any definite conclusions, but encouraging more projects to use Placement was discussed, although some projects are hesitant to do that due to the complexity of using Placement. In addition, there was discussion of passing a context object to the usage callback, but it wasn't entirely clear whether that would work for all projects or if it was necessary.
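For context, here is a rough sketch of how a consuming service wires the usage callback today (names and signatures reflect my reading of the current oslo.limit work and may change as it evolves):

from oslo_limit import limit


def count_usage(project_id, resource_names):
    # The service's own counting logic; a real implementation would query
    # its database (or Placement) for each requested resource name.
    return {name: 0 for name in resource_names}


enforcer = limit.Enforcer(count_usage)
# Raises if adding two more instances would exceed the project's
# registered limit in Keystone's unified limits.
enforcer.enforce("project-uuid", {"instances": 2})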

Finally, the topic of caching came up. Since there can be quite a few quota calls in a busy cloud, caching may be needed to avoid significant performance hits. It's something we've deferred making any decisions on in the past because it wasn't clear how badly it would be needed or exactly how caching should work for limits. We continued to push this decision off until we have unified limits implemented and can gather performance information.

That should cover my recollection of the limits discussion. For the raw notes from the PTG, see the Keystone PTG Etherpad, under Unified Limits.

That's All Folks!

The past few months have been quite...interesting. Everyone is doing the best they can with a tough situation, and this all-virtual PTG is yet another example of that. Huge thanks to all of the organizers and attendees for making it a productive event in spite of the challenges.

I hope this has been useful. If you have any comments or questions feel free to contact me in the usual ways.

by bnemec at June 10, 2020 05:09 PM

Slawek Kaplonski

My summary of the OpenStack Virtual_PTG July 2020

Retrospective: Among the good things, the team mentioned that the migration of the networking-ovn driver into core Neutron went well. Our CI stability also improved in the last cycle. Another good thing was that we implemented all of the community goals required in this cycle, and we even migrated almost all jobs to Zuul v3 syntax already. Not so good was the progress on some important blueprints, like adoption of the new engine facade. Also mentioned was the level of activity in the stadium projects and in neutron-lib.

June 10, 2020 01:08 PM

OpenStack Superuser

Where are they now? Superuser Awards winner: China Mobile

We’re spotlighting previous Superuser winners who are on the front lines deploying OpenStack in their organizations to drive business success. These users are taking risks, contributing back to the community and working to secure the success of their organization in today’s software-defined economy.

China Mobile won the Superuser Award at the OpenStack Summit in Barcelona. As the world’s largest mobile phone operator with various OpenStack deployments, they are now not only building up an NFV network based on OpenStack among the world’s leading operators but also expanding the NFV platform to support 5G Core in the near future. The following are their major OpenStack deployments.

What has changed in your OpenStack environment since you won the Superuser Awards?

We have carried out in-depth practice of OpenStack as the cloud infrastructure, especially in our Network Cloud, to build China Mobile’s NFV/SDN network. We believe it has directly promoted the maturity of the industry and of NFV/SDN solutions based on OpenStack, especially in a multi-vendor environment, through our NFV/SDN pilot and NovoNet experimental network. China Mobile has now entered the second year of NFV/SDN network construction, which began in 2019. The network is due to go into commercial operation in the second half of 2020.

What is the current size of your OpenStack environment?

The scale of China Mobile’s NFV/SDN network cloud is huge, spanning eight regions across the whole country.

We believe China Mobile is now building up the biggest NFV network based on OpenStack among the world’s leading operators; the total number of servers in the network is now more than 50,000. Each OpenStack instance has to manage 500 to 1,500 servers. The virtual network functions (VNFs) running on top of the virtualization include IP Multimedia Subsystem (IMS), Evolved Packet Core (EPC), value-added services and more. We will also expand the NFV platform to support 5G Core in the near future.

What version of OpenStack are you running?

Mitaka+ (with some enhancements from the Pike/Queens releases absorbed)

What open source technologies does your team integrate with OpenStack?

Our team mainly researches NFV system integration. We have now built a CI/CD process to carry out automated software integration and testing. Different virtual infrastructure managers (VIMs, i.e. OpenStack-based platforms), NFV orchestrators (NFVO) and VNFs are automatically deployed and tested through a unified platform. Currently, the CI/CD platform uses Docker technology.

What workloads are you running on OpenStack?

  • 4G service, including 18 core network elements and service platforms, such as virtualized IMS, EPC, intelligent network and Short Message Service (SMS)/Multimedia Messaging Service (MMS) platform etc.
  • 5G service.

How big is your OpenStack team?

It’s difficult to give exact numbers, but we have different teams working on OpenStack: one works on the additional telco requirements on top of the open source version, another tests the vendors’ products and makes sure their commercial OpenStack software can support telco communication services, and after deployment another team works continuously on the operational aspects of OpenStack.

How is your team currently contributing back to the OpenStack project?

At present, in terms of the Network Cloud, China Mobile mainly acts as an OpenStack user. With the emergence of edge computing, we are also closely following and contributing to other OpenStack projects, such as StarlingX.

What kind of challenges has your team overcome using OpenStack?

The feasibility of OpenStack carrying telecommunication services has been fully verified. At present, in terms of network operation and management, OpenStack still needs to be enhanced and improved to achieve five-nines HA. In addition, more work needs to be done in hyper-scale scenarios to achieve high performance and high reliability of OpenStack.

 

The post Where are they now? Superuser Awards winner: China Mobile appeared first on Superuser.

by Superuser at June 10, 2020 07:00 AM

Stephen Finucane

What Is Nova?

This talk was delivered to a number of Red Hat interns at the start of their internship and served as a brief, high-level overview of the OpenStack Compute (nova) project.

June 10, 2020 12:00 AM

June 09, 2020

Galera Cluster by Codership

Galera Cluster 4 for MySQL 8 Release Webinar recording is now available

The much anticipated release of Galera Cluster 4 for MySQL 8 is now Generally Available. Please join Codership, the developers of Galera Cluster, and learn how we improve MySQL High Availability with the new features in Galera Cluster 4, and how you can benefit from using them. We will also give you an idea of the Galera 4 short term road map, as well as an overview of Galera 4 in MariaDB, MySQL and Percona.

Learn about how you can load data faster with streaming replication, how to use the new system tables in the mysql database, how your application can benefit from the new synchronization functions, and how Galera Cluster is now so much more robust in handling a bad network for Geo-distributed Multi-master MySQL.

WATCH THE RECORDING 

 

The slides for the webinar can be found here

WEBINAR SLIDES

by Sakari Keskitalo at June 09, 2020 11:32 AM

June 08, 2020

OpenStack Superuser

Zuul: A T-Systems Case Study

Open Telekom Cloud is a major OpenStack-powered public cloud in Europe. It is operated for the Deutsche Telekom Group by its subsidiary T-Systems International GmbH.

Artem Goncharov, Open Telekom Cloud architect, shares why Open Telekom Cloud chose Zuul, the open source CI tool, and how they use it with GitHub and OpenStack.

How did your organization get started with Zuul

We started using Zuul for the development of OpenStack client components like SDKs, CLIs, and other internal operational software components. After we managed to get some changes merged into Zuul, we deployed it productively as our continuous integration system. Today it is our CI system for the development of all the open source tooling we offer to our clients. Furthermore, Zuul is currently used for monitoring the quality of our platform services: we periodically execute a set of tests, which also includes permanently monitoring our RefStack compliance.

Beyond our own projects, we are preparing Zuul as an internal service for other departments inside Deutsche Telekom in the future. We run Zuul on our own public cloud, the Open Telekom Cloud, and also spawn the VMs there. We are all-in on OpenStack!

Describe how you’re using Zuul

Currently, we have Zuul working on a public domain interacting with GitHub. Although the CI workflow with Gerrit is very powerful, we observed that some users struggle with its complexity. We therefore decided to stay with GitHub to allow more people in our community to participate in the development of our projects. Nodepool spins up the virtual machines for the jobs using its OpenStack driver.

What is your current scale?

We have a five-node ZooKeeper cluster, plus one scheduler, one nodepool-builder, and one nodepool-launcher. At present two Zuul executors satisfy our needs. We have about ten projects managed by Zuul but plan to increase this number to 50 soon. On average we do 50 builds a day.

What benefits has your organization seen from using Zuul?

We are now prepared for growth. Today, our projects are clearly laid out in size and complexity, but we expect the complexity to grow. Therefore we are relieved to have gating in place ensuring all software is tested and consistent all the time. That allows us to scale the number of projects we cover.

Second, we have better control over where and how the build and test processes take place. Since we are testing real-life cloud scenarios, there are credentials for and access to actual cloud resources involved. With Zuul and Nodepool we have better control over these virtual machines and the stored data.

Last, but not least we have a rather complex integration and deployment workflow. It is not just software that we build and package, but we also create other artifacts like documentation, PyPI packages, and a lot more that requires extra steps. We like the flexibility of having Ansible playbooks defining those workflows.

What have the challenges been (and how have you solved them)?

It is important for us to test all aspects of our public cloud. This functional testing obviously includes logging into domains, creating resources, and dealing with all aspects of credentials. Since this setup is connected to GitHub and thus indirectly accessible to the public, we felt a bit uneasy about running the Zuul setup on the same platform where we conduct the actual tests and builds. Eventually, we segregated those scopes by means of several dedicated OpenStack domains that only Zuul has API access to. So in the worst case, should credentials ever leak, we just have to clean up and reset one of our test domains, while the Zuul infrastructure itself remains unaffected. For that we use the “project cleanup” feature of the OpenStack SDK, to which we also contributed; a minimal sketch of the call follows below.
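For illustration, a minimal sketch of that cleanup call as we understand the openstacksdk API (the cloud name is a placeholder, and the exact keyword arguments may vary between SDK releases):

import queue

import openstack

# Connect to the dedicated test domain (cloud name is illustrative).
conn = openstack.connect(cloud="zuul-test-domain")

# Dry-run first to see what would be deleted, then re-run with
# dry_run=False to actually reset the domain after a suspected leak.
status = queue.Queue()
conn.project_cleanup(dry_run=True, status_queue=status, wait_timeout=300)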

We also found that functional tests or RefStack verification runs often leave a lot of debris behind that is not cleaned up by the test code, sometimes because of failing API calls in OpenStack itself. We leverage “project cleanup” to mitigate this behavior as well.

Zuul also publishes a lot of information in log files to publicly readable Swift containers. Our security teams complained about that, even though most of the information is harmless. In some cases, we patched Zuul or its jobs so this data does not accumulate in the first place.

Both for operational and security reasons, we’d like to containerize all workloads as much as possible. Zuul comes with a set of Docker containers. Unfortunately, especially the Nodepool-builder needs a lot of privileges, which is hard to implement with plain old Docker. Our approach is to leverage Podman as an alternative for that.

What are your future plans with Zuul?

The Gerrit code review system implements a sophisticated role model, which enables users to do code reviews, promote revisions, or to authorize the eventual merges. It is a real challenge to implement these access control features just with GitHub. As a workaround for the time being we use “/merge” comments on the pull requests.

Even though Zuul’s prime directive is to automate, sometimes it’s nice to be able to manually intervene. Unfortunately, there’s currently not really a UI for administrative tasks like re-building some artifacts. That would come in handy to migrate even more Jenkins jobs.

The operation of Zuul is complex and we currently don’t have a dedicated ops team. We decrease the operational effort by implementing Ansible playbooks for it, but this is an ongoing effort.

We are working on transforming Zuul into an internal offering for other Deutsche Telekom subsidiaries and projects, so they also start using it. We’re also very interested in enabling Kubernetes and OpenShift to act as an operations platform for Zuul. Here the challenge is inherited from the multi-cloud issues that come with high availability requirements.

Are there specific Zuul features that drew you to Zuul?

Zuul fuels the development of OpenStack, which is a remarkable job and a considerable responsibility. We are impressed by how scalable and flexible it is and have even adapted its architecture to internal projects. We’re confident that there is more to come.

The post Zuul: A T-Systems Case Study appeared first on Superuser.

by Helena Spease at June 08, 2020 02:00 PM

June 06, 2020

Ed Leafe

Day 12: Communities and Survivorship Bias

Communities, especially Open Source communities, tend to form some form of governance once they grow beyond a certain size. The actual size isn’t as important as the relationship among the members: when everyone knows everyone else, there’s really no need for governance. But when individuals come from different companies, or who otherwise may have different … Continue reading "Day 12: Communities and Survivorship Bias"

by ed at June 06, 2020 06:39 PM

June 04, 2020

Corey Bryant

OpenStack Ussuri for Ubuntu 20.04 and 18.04 LTS

The Ubuntu OpenStack team at Canonical is pleased to announce the general availability of OpenStack Ussuri on Ubuntu 20.04 LTS and on Ubuntu 18.04 LTS via the Ubuntu Cloud Archive. Details of the Ussuri release can be found at:  https://www.openstack.org/software/ussuri

To get access to the Ubuntu Ussuri packages:

Ubuntu 20.04 LTS

OpenStack Ussuri is available by default for installation on Ubuntu 20.04.

Ubuntu 18.04 LTS

The Ubuntu Cloud Archive pocket for OpenStack Ussuri can be enabled on Ubuntu 18.04 by running the following command:

sudo add-apt-repository cloud-archive:ussuri

The Ubuntu Cloud Archive for Ussuri includes updates for:

aodh, barbican, ceilometer, ceph octopus (15.2.1), cinder, designate, designate-dashboard, dpdk (19.11.1), glance, gnocchi, heat, heat-dashboard, horizon, ironic, keystone, libvirt (6.0.0), magnum, manila, manila-ui, mistral, murano, murano-dashboard, networking-arista, networking-bagpipe, networking-bgpvpn, networking-hyperv, networking-l2gw, networking-mlnx, networking-odl, networking-sfc, neutron, neutron-dynamic-routing, neutron-fwaas, neutron-fwaas-dashboard, neutron-vpnaas, nova, octavia, octavia-dashboard, openstack-trove, trove-dashboard, openvswitch (2.13.0), ovn (20.03.0), ovn-octavia-provider, panko, placement, qemu (4.2), sahara, sahara-dashboard, senlin, swift, trove-dashboard, vmware-nsx, watcher, watcher-dashboard, and zaqar.

For a full list of packages and versions, please refer to:

http://reqorts.qa.ubuntu.com/reports/ubuntu-server/cloud-archive/ussuri_versions.html

Branch package builds

If you would like to try out the latest updates to branches, we deliver continuously integrated packages on each upstream commit via the following PPA’s:

sudo add-apt-repository ppa:openstack-ubuntu-testing/mitaka
sudo add-apt-repository ppa:openstack-ubuntu-testing/queens
sudo add-apt-repository ppa:openstack-ubuntu-testing/rocky
sudo add-apt-repository ppa:openstack-ubuntu-testing/stein
sudo add-apt-repository ppa:openstack-ubuntu-testing/train
sudo add-apt-repository ppa:openstack-ubuntu-testing/ussuri

Reporting bugs

If you have any issues please report bugs using the ‘ubuntu-bug’ tool to ensure that bugs get logged in the right place in Launchpad:

sudo ubuntu-bug nova-conductor

Thank you to everyone who contributed to OpenStack Ussuri. Enjoy and see you in Victoria!

Corey

(on behalf of the Ubuntu OpenStack Engineering team)

by coreycb at June 04, 2020 02:00 PM

May 29, 2020

VEXXHOST Inc.

Exciting Features of OpenStack’s 21st Release: Ussuri

OpenStack has pioneered the concept of open infrastructure since 2010. It achieves this with new releases two times a year, giving users the best of services and experience. With over 24,000 code changes by over 1,000 developers from across over 50 countries and 188 organizations, the 21st OpenStack release, Ussuri, is here.

VEXXHOST couldn’t be more excited to be among the contributors of OpenStack Ussuri. Working with the community is always a pleasure. It is even more so when our efforts realize in the form of new features and improvements.

Ussuri updates bring about changes in two significant areas, the core infrastructure layer and security and encryption, along with plenty of other exciting features serving their use cases.

Reliability of the Core Infrastructure Layer

Nova – the compute service – added support for cold migration and resize across cells. Live cross-cell migration is not supported yet, but the magic of an open source community is that contributions can be made for it. Another big addition is the new API policy work, which introduces new default roles with scope_type capabilities. This change enhances security and manageability by allowing richer access control at both the system and project level.

Ironic – the bare metal provisioning service – added support for a hardware retirement workflow to enable automation of hardware decommissioning in managed clouds.

Kuryr – the bridge between container networking and OpenStack networking – added support for IPv6 and improved network policy support.

Security and Encryption Enhancement

Octavia – the load balancing service – now allows you to specify the Transport Layer Security (TLS) ciphers acceptable for listeners and pools. This feature lets load balancers enforce security compliance requirements. Another awaited update that became part of Ussuri is support for deployment in specific availability zones, allowing load balancing capabilities to be deployed to edge environments. An interesting thing that took place during these contributions was the mentorship of college students to familiarize them with OpenStack. The learning-by-doing experience is very conducive to growing minds!
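
As a rough illustration of the ciphers feature, assuming the Ussuri-era python-octaviaclient, which exposes the new field as a --tls-ciphers option taking an OpenSSL-style cipher string (the names and secret reference below are placeholders):

openstack loadbalancer listener create \
  --name https-listener \
  --protocol TERMINATED_HTTPS --protocol-port 443 \
  --default-tls-container-ref $SECRET_REF \
  --tls-ciphers 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384' \
  my-load-balancer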

Neutron – the networking service – brings several security improvements with this release. They range from support for stateless security groups to Role Based Access Control (RBAC) for address scopes and subnet pools.

Other Important Features

The series of advancements in the new update continues and here are some other essential features that are a part of the 21st release.

Cinder added support for Glance multistore and for image data colocation when uploading a volume to the image service. The latest release also includes some new backend drivers. Work on volume-local-cache has started and will continue in the next release, Victoria.

Swift adds a new system namespace for the service, a versioning API, and S3 versioning.

Improvements in Glance let you decompress images, import a single image into or copy existing images across multiple stores, and delete images from a single store.

Keystone's additional features improve the user experience: you can be given concrete role assignments without relying on the mapping API, which benefits you most when using federated authentication.

A brand new feature for creating shares from snapshots across storage pools has been made available in Manila.

Kolla is the containerized deployment service of OpenStack. This project has added initial support for TLS encryption of backend API services, providing end-to-end encryption of API traffic.

Magnum has added support in two areas: first, Kubernetes version upgrades; second, the ability to upgrade the operating system of a Kubernetes cluster, including master and worker nodes.
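
As a hedged illustration of what such a rolling upgrade looks like from the command line, assuming the Magnum CLI plugin shipped around Ussuri and placeholder cluster and template names:

openstack coe cluster upgrade my-k8s-cluster new-cluster-template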

OpenStack Ussuri and VEXXHOST

“The extensive list of new features shows just how active the OpenStack community is and the VEXXHOST team is excited to be a part of such a progressive upstream community. As is expected of us, we are bringing the new release to our old and new clients as part of our OpenStack Upgrade Solution, and we hope for all OpenStack users to make the most of it”, said Mohammed Naser, CEO of VEXXHOST.

Come and upgrade to OpenStack Ussuri with us. VEXXHOST is here to guide you and consult with you in your OpenStack deployments every step of the way. Get a seamless experience while transitioning from an older release as our engineers will do the heavy lifting for you. Are you looking to make the most of the benefits that come with every release? Get in touch with our experts for more information on how we can help you get started on your OpenStack Upgrade journey.

Would you like to know more about OpenStack Cloud? Download our white paper and get reading!

Conquer the Competition with OpenStack Cloud

The post Exciting Features of OpenStack’s 21st Release: Ussuri appeared first on VEXXHOST.

by Samridhi Sharma at May 29, 2020 06:12 PM

Thomas Goirand

A quick look into Storcli packaging horror

So, Megacli is to be replaced by Storcli, both being proprietary tools for configuring RAID cards from LSI.

So I went to download what’s provided by Lenovo, available here:
https://support.lenovo.com/fr/en/downloads/ds041827

It’s very annoying, because they force users to download a .zip file containing a deb file, instead of providing a Debian repository. Well, OK, at least there’s a deb file in there. Let’s have a look at it with my favorite tool before installing (i.e.: let’s run Lintian).
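
For reference, the inspection step is roughly the following (the archive and package file names are hypothetical, since Lenovo's download naming may differ):

unzip storcli-download.zip
lintian storcli_*.deb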
Then it’s a horror story. Not only is the packaging obviously wrong – the package ships everything in /opt, the binaries are statically linked and embed copies of libm and ncurses, and the package is marked arch: all instead of arch: amd64 (in fact, the package contains both i386 and amd64 binaries…) – but there are also some really wrong things going on:

E: storcli: arch-independent-package-contains-binary-or-object opt/MegaRAID/storcli/storcli
E: storcli: embedded-library opt/MegaRAID/storcli/storcli: libm
E: storcli: embedded-library opt/MegaRAID/storcli/storcli: ncurses
E: storcli: statically-linked-binary opt/MegaRAID/storcli/storcli
E: storcli: arch-independent-package-contains-binary-or-object opt/MegaRAID/storcli/storcli64
E: storcli: embedded-library opt/MegaRAID/storcli/storcli64: libm
E: storcli: embedded-library … use --no-tag-display-limit to see all (or pipe to a file/program)
E: storcli: statically-linked-binary opt/MegaRAID/storcli/storcli64
E: storcli: changelog-file-missing-in-native-package
E: storcli: control-file-has-bad-permissions postinst 0775 != 0755
E: storcli: control-file-has-bad-owner postinst asif/asif != root/root
E: storcli: control-file-has-bad-permissions preinst 0775 != 0755
E: storcli: control-file-has-bad-owner preinst asif/asif != root/root
E: storcli: no-copyright-file
E: storcli: extended-description-is-empty
W: storcli: essential-no-not-needed
W: storcli: unknown-section storcli
E: storcli: depends-on-essential-package-without-using-version depends: bash
E: storcli: wrong-file-owner-uid-or-gid opt/ 1000/1000
W: storcli: non-standard-dir-perm opt/ 0775 != 0755
E: storcli: wrong-file-owner-uid-or-gid opt/MegaRAID/ 1000/1000
E: storcli: dir-or-file-in-opt opt/MegaRAID/
W: storcli: non-standard-dir-perm opt/MegaRAID/ 0775 != 0755
E: storcli: wrong-file-owner-uid-or-gid opt/MegaRAID/storcli/ 1000/1000
E: storcli: dir-or-file-in-opt opt/MegaRAID/storcli/
W: storcli: non-standard-dir-perm opt/MegaRAID/storcli/ 0775 != 0755
E: storcli: wrong-file-owner-uid-or-gid … use --no-tag-display-limit to see all (or pipe to a file/program)
E: storcli: dir-or-file-in-opt opt/MegaRAID/storcli/storcli
E: storcli: dir-or-file-in-opt … use --no-tag-display-limit to see all (or pipe to a file/program)

Some of the above are grave security problems, like the wrong Unix modes on folders and the preinst/postinst scripts being owned by a non-root user.
I always wonder why this type of tool needs to be proprietary. They clearly don’t know how to get packaging right, so they’d better just provide the source code and let us (the Debian community) do the work for them. I don’t think there’s any secret they are keeping by hiding how to configure the cards, so it’s not in the vendor’s interest to keep everything closed. Or maybe they are just hiding really bad code in there that they are ashamed to share? Either way, they’d be better off providing no package at all than this pile of dirt (and I’m trying to stay polite here…).

by Goirand Thomas at May 29, 2020 10:56 AM

May 28, 2020

OpenStack Superuser

Annual Superuser Awards Open!

Nominations are open for the annual Superuser Awards. The deadline is September 4. Nominees will choose the category for their organization depending on its Open Infrastructure use case:

  • AI / Machine Learning
  • Containers
  • CI/CD
  • Edge Computing
  • Data Center

This year, we will be recognizing award recipients by use case category.

All nominees will be reviewed by the community, and the Superuser editorial advisors will determine the winners. The nominees and winners will be announced in October by the OpenStack Foundation and the previous winner, Baidu.

Open Infrastructure provides resources to developers and users by integrating various open source components. The benefits are obvious, whether that infrastructure is in a private or a public context: the absence of lock-in, the power of interoperability opening up new possibilities, and the ability to look under the hood, tinker with and improve the software, and contribute back your changes.

The Superuser Awards recognize teams using Open Infrastructure to meaningfully improve business and differentiate in a competitive industry, while also contributing back to the open source communities.  They aim to cover the same mix of open technologies as our publication, namely OpenStack, Kubernetes, Kata Containers, Airship, StarlingX, Ceph, Cloud Foundry, OVS, OpenContrail, Open Switch, Zuul, OPNFV and more.

Teams of all sizes are encouraged to apply. If you fit the bill, or know a team that does, we encourage you to submit a nomination here.

After the community has reviewed all nominees, the Superuser editorial advisors will select winning organization(s).

When evaluating a winner for the Superuser Awards, advisors take into account the unique nature of use case(s), as well as integrations and applications of a particular team. Questions include how this team innovates with open infrastructure, for example working with container technology, NFV, and other unique workloads.

Additional selection criteria include how the workload has transformed the company’s business, including quantitative and qualitative results of performance as well as community impact in terms of code contributions, feedback and knowledge sharing.

Winners will be recognized in a ceremony presented by the OpenStack Foundation and the previous winner, Baidu. Submissions are open now until September 4, 2020. You’re invited to nominate your team or someone you’ve worked with, too.

Launched at the Paris Summit in 2014, the community has continued to award users who show how open infrastructure is making a difference and providing strategic value in their organization. Past winners include AT&T, CERN, City Network, Comcast, NTT Group, Tencent TStack, and VEXXHOST.

Wonder what these organizations are doing with open infrastructure now? Superuser reached out to previous Award recipients to find out. We’ll be posting them for the next couple of weeks as a part of our “Where are they now?” series, leading up to our celebration of 10 years of OpenStack in July.

For more information about the Superuser Awards, please visit http://superuser.openstack.org/awards.

The post Annual Superuser Awards Open! appeared first on Superuser.

by Helena Spease at May 28, 2020 01:00 PM

RDO

RDO Ussuri Released

The RDO community is pleased to announce the general availability of the RDO build for OpenStack Ussuri for RPM-based distributions, CentOS Linux and Red Hat Enterprise Linux. RDO is suitable for building private, public, and hybrid clouds. Ussuri is the 21st release from the OpenStack project, which is the work of more than 1,000 contributors from around the world.

The release is already available on the CentOS mirror network at http://mirror.centos.org/centos/8/cloud/x86_64/openstack-ussuri/.

The RDO community project curates, packages, builds, tests and maintains a complete OpenStack component set for RHEL and CentOS Linux and is a member of the CentOS Cloud Infrastructure SIG. The Cloud Infrastructure SIG focuses on delivering a great user experience for CentOS Linux users looking to build and maintain their own on-premise, public or hybrid clouds.

All work on RDO and on the downstream release, Red Hat OpenStack Platform, is 100% open source, with all code changes going upstream first.

PLEASE NOTE: At this time, RDO Ussuri provides packages for CentOS 8 only. Please use the previous release, Train, for CentOS 7 and Python 2.7.

Interesting things in the Ussuri release include:
  • Within the Ironic project, a bare metal service that is capable of managing and provisioning physical machines in a security-aware and fault-tolerant manner, UEFI and device selection is now available for Software RAID.
  • The Kolla project, the containerised deployment of OpenStack used to provide production-ready containers and deployment tools for operating OpenStack clouds, streamlined the configuration of external Ceph integration, making it easy to go from a Ceph-Ansible-deployed Ceph cluster to enabling it in OpenStack.
Other improvements include:
  • Support for IPv6 is available within the Kuryr project, the bridge between container framework networking models and OpenStack networking abstractions.
  • Other highlights of the broader upstream OpenStack project may be read via https://releases.openstack.org/ussuri/highlights.html.
  • A new Neutron driver, networking-omnipath, has been included in the RDO distribution, enabling the Omni-Path switching fabric in OpenStack clouds.
  • The OVN Neutron driver has been merged into the main Neutron repository from networking-ovn.
Contributors
During the Ussuri cycle, we saw the following new RDO contributors:
  • Amol Kahat 
  • Artom Lifshitz 
  • Bhagyashri Shewale 
  • Brian Haley 
  • Dan Pawlik 
  • Dmitry Tantsur 
  • Dougal Matthews 
  • Eyal 
  • Harald Jensås 
  • Kevin Carter 
  • Lance Albertson 
  • Martin Schuppert 
  • Mathieu Bultel 
  • Matthias Runge 
  • Miguel Garcia 
  • Riccardo Pittau 
  • Sagi Shnaidman 
  • Sandeep Yadav 
  • SurajP 
  • Toure Dunnon 

Welcome to all of you and Thank You So Much for participating!

But we wouldn’t want to overlook anyone. A super massive Thank You to all 54 contributors who participated in producing this release. This list includes commits to rdo-packages and rdo-infra repositories:

  • Adam Kimball 
  • Alan Bishop 
  • Alan Pevec 
  • Alex Schultz 
  • Alfredo Moralejo 
  • Amol Kahat 
  • Artom Lifshitz 
  • Arx Cruz 
  • Bhagyashri Shewale 
  • Brian Haley 
  • Cédric Jeanneret 
  • Chandan Kumar
  • Dan Pawlik
  • David Moreau Simard 
  • Dmitry Tantsur 
  • Dougal Matthews 
  • Emilien Macchi 
  • Eric Harney 
  • Eyal 
  • Fabien Boucher 
  • Gabriele Cerami 
  • Gael Chamoulaud 
  • Giulio Fidente 
  • Harald Jensås 
  • Jakub Libosvar 
  • Javier Peña 
  • Joel Capitao 
  • Jon Schlueter 
  • Kevin Carter 
  • Lance Albertson 
  • Lee Yarwood 
  • Marc Dequènes (Duck) 
  • Marios Andreou 
  • Martin Mágr 
  • Martin Schuppert 
  • Mathieu Bultel 
  • Matthias Runge 
  • Miguel Garcia 
  • Mike Turek 
  • Nicolas Hicher 
  • Rafael Folco 
  • Riccardo Pittau 
  • Ronelle Landy 
  • Sagi Shnaidman 
  • Sandeep Yadav 
  • Soniya Vyas
  • Sorin Sbarnea 
  • SurajP 
  • Toure Dunnon 
  • Tristan de Cacqueray 
  • Victoria Martinez de la Cruz 
  • Wes Hayutin 
  • Yatin Karel
  • Zoltan Caplovic
The Next Release Cycle
At the end of one release, focus shifts immediately to the next, Victoria, which has an estimated GA the week of 12-16 October 2020. The full schedule is available at https://releases.openstack.org/victoria/schedule.html.

Twice during each release cycle, RDO hosts official Test Days shortly after the first and third milestones; therefore, the upcoming test days are 25-26 June 2020 for Milestone One and 17-18 September 2020 for Milestone Three.

Get Started
There are three ways to get started with RDO.

To spin up a proof of concept cloud, quickly, and on limited hardware, try an All-In-One Packstack installation. You can run RDO on a single node to get a feel for how it works.
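
As a rough sketch of that path on a fresh CentOS 8 node (assuming the RDO release package is named centos-release-openstack-ussuri, following the pattern of previous releases; adjust to your environment):

sudo dnf install -y centos-release-openstack-ussuri
sudo dnf update -y
sudo dnf install -y openstack-packstack
sudo packstack --allinone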

For a production deployment of RDO, use the TripleO Quickstart and you’ll be running a production cloud in short order.

Finally, for those that don’t have any hardware or physical resources, there’s the OpenStack Global Passport Program. This is a collaborative effort between OpenStack public cloud providers to let you experience the freedom, performance and interoperability of open source infrastructure. You can quickly and easily gain access to OpenStack infrastructure via trial programs from participating OpenStack public cloud providers around the world.

Get Help
The RDO Project participates in a Q&A service at https://ask.openstack.org. We also have our users@lists.rdoproject.org list for RDO-specific users and operators. For more developer-oriented content we recommend joining the dev@lists.rdoproject.org mailing list. Remember to post a brief introduction about yourself and your RDO story. The mailing list archives are all available at https://mail.rdoproject.org. You can also find extensive documentation on RDOproject.org.

The #rdo channel on Freenode IRC is also an excellent place to find and give help.

We also welcome comments and requests on the CentOS devel mailing list and the CentOS and TripleO IRC channels (#centos, #centos-devel, and #tripleo on irc.freenode.net), however we have a more focused audience within the RDO venues.

Get Involved
To get involved in the OpenStack RPM packaging effort, check out the RDO contribute pages, peruse the CentOS Cloud SIG page, and inhale the RDO packaging documentation.

Join us in #rdo and #tripleo on the Freenode IRC network and follow us on Twitter @RDOCommunity. You can also find us on Facebook and YouTube.

by Iury Gregory Melo Ferreira at May 28, 2020 08:49 AM

Michael Still

Introducing Shaken Fist

The first public commit to what would become OpenStack Nova was made ten years ago today — at Thu May 27 23:05:26 2010 PDT to be exact. So first off, happy tenth birthday to Nova!

A lot has happened in that time — OpenStack has gone from being two separate Open Source projects to a whole ecosystem, developers have come and gone (and passed away), and OpenStack has weathered the cloud wars of the last decade. OpenStack survived its early growth phase by deliberately offering a “big tent” to the community and associated vendors, with an expansive definition of what should be included. This has resulted in most developers being associated with a corporate sponsor, and hence the decrease in the number of developers today as corporate interest wanes — OpenStack has never been great at attracting or retaining hobbyist contributors.

My personal involvement with OpenStack started in November 2011, so while I missed the very early days I was around for a lot and made many of the mistakes that I now see in OpenStack.

What do I see as mistakes in OpenStack in hindsight? Well, embracing vendors who later lose interest has been painful, and has increased the complexity of the code base significantly. Nova itself is now nearly 400,000 lines of code, and that’s after splitting off many of the original features of Nova such as block storage and networking. Additionally, a lot of our initial assumptions are no longer true — for example in many cases we had to write code to implement things, where there are now good libraries available from third parties.

That’s not to say that OpenStack is without value — I am a daily user of OpenStack to this day, and use at least three OpenStack public clouds at the moment. That said, OpenStack is a complicated beast with a lot of legacy that makes it hard to maintain and slow to change.

For at least six months I’ve felt the desire for a simpler cloud orchestration layer — both for my own personal uses, and also as a test bed for ideas for what a smaller, simpler cloud might look like. My personal use case involves a relatively small environment which echos what we now think of as edge compute — less than 10 RU of machines with a minimum of orchestration and management overhead.

At the time that I was thinking about these things, the Australian bushfires and COVID-19 came along, and presented me with a lot more spare time than I had expected to have. While I’m still blessed to be employed, all of my social activities have been cancelled, so I find myself at home at a loose end on weekends and evenings at lot more than before.

Thus Shaken Fist was born — named for a Simpson’s meme, Shaken Fist is a deliberately small and highly opinionated cloud implementation aimed at working well in small deployments such as homes, labs, edge compute locations, deployed systems, and so forth.

I’ve taken a bit of trouble with each feature in Shaken Fist to think through the simplest and highest value way of doing it. For example, instances always get a config drive and there is no metadata server. There is also only one supported type of virtual networking, and one supported hypervisor. That said, this means Shaken Fist is less than 5,000 lines of code, and small enough that new things can be implemented very quickly by a single middle aged developer.

Shaken Fist definitely has feature gaps — API authentication and scheduling are the most obvious at the moment — but I have plans to fill those when the time comes.

I’m not sure if Shaken Fist is useful to others, but you never know. It’s Apache 2.0 licensed, and available on GitHub if you’re interested.

by mikal at May 28, 2020 05:05 AM

Stephen Finucane

Using AMI Images in OpenStack

I recently had to validate some interactions between the OpenStack Image service, glance, and the Compute service, nova. For this, I needed separate kernel and ramdisk images.
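
For context, registering such images with Glance usually follows the pattern sketched below (the file and image names are placeholders, not taken from the post): upload the kernel (aki) and ramdisk (ari) first, then create the machine image (ami) referencing their UUIDs.

openstack image create --disk-format aki --container-format aki --file vmlinuz my-kernel
openstack image create --disk-format ari --container-format ari --file initrd my-ramdisk
openstack image create --disk-format ami --container-format ami --file disk.img \
  --property kernel_id=<kernel-uuid> --property ramdisk_id=<ramdisk-uuid> my-ami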

May 28, 2020 12:00 AM

Emulated Trusted Platform Module (vTPM) in OpenStack 🔐

Work is ongoing in nova to provide support for attaching virtual Trusted Platform Modules (vTPMs) to instances. The below guide demonstrates how you can go about testing this feature for yourself.

May 28, 2020 12:00 AM

May 27, 2020

VEXXHOST Inc.

How Governments Use OpenStack Around The World

Governments all over the world trust in OpenStack for their cloud computing needs. With an OpenStack powered private cloud, governments can benefit from the very best in security, scalability, and agility. Choosing a cloud-based infrastructure allows governments to stay up to date on the latest technology trends, all while saving on costs typically associated with traditional IT. In a world where security threats are increasing, it has never been more important for government bodies to implement additional measures against risk. This includes a robust cloud solution, which enables a government to focus on its core competencies and work for the citizens of its country.

Today we are going to dive into how different governments use OpenStack around the world. Our global tour will begin in the United States of America, take a detour to Australia, and end with the impact of OpenStack on the French government. Let’s take a look at how governments use OpenStack around the world – no passport required!

How Governments Use OpenStack

Governments all over the world have important priorities: keeping their citizens safe, maintaining law and order within their jurisdiction, and solving whatever issues arise on a day-to-day basis. Although the way they govern their countries differs, the following countries have one thing in common: OpenStack.

The United States Of America

In the United States, the National Security Agency (NSA) utilizes the power of OpenStack to keep abreast of the high-security needs of the country. The implementation of OpenStack at the governmental level was such a success that the NSA plans to roll OpenStack out to all 16 agencies that make up the intelligence community in the United States. Considering the emphasis that the United States places on security, it is evident that OpenStack delivers where it matters most.

Moreover, the role of OpenStack in government security for the United States is good for OpenStack users. Why? Because the NSA has such strong security requirements, it developed systems to accommodate these requirements. From securing APIs and guest OSes to developing code to suit their needs, the OpenStack community can experience the security benefits for themselves.

Australia

The government of Australia is also no stranger to OpenStack’s private cloud infrastructure. The Australian government leveraged the simplicity and agility of OpenStack to build a secure cloud platform. Major government departments such as the Digital Transformation Agency, the Department of Health, and the Defence and Australian Intelligence Community all worked to ensure that the implementation of OpenStack was a success. Thanks to OpenStack, Australia was able to deliver high-security digital services and solutions to the areas of government that needed them most.

France

Finally, the French government is no stranger to OpenStack themselves. France’s Interior Ministry has been harnessing the power of OpenStack private clouds to help govern the country. The Ministry IT engineers were taught to use OpenStack best practices. They went on to help in selecting which tools and deployment strategies would be right for France. Many of the applications that France is using right now are in place to push for bureaucratic reform. About 20 different projects are being considered for France’s OpenStack powered cloud and they plan to migrate 150 applications to the cloud in the next three to five years.

Whether you’re a government body, organization or business, no matter what industry or size, VEXXHOST is here to help you get started with a private cloud solution. We’ve been using and contributing to OpenStack since 2011. We know OpenStack inside and out. Contact our team of experts today to learn more.

Would you like to know more about Private Cloud and what it can do for you? Download our white paper and get reading!

Fighting Off Certain Death with OpenStack Private Cloud

The post How Governments Use OpenStack Around The World appeared first on VEXXHOST.

by Samridhi Sharma at May 27, 2020 06:05 PM

OpenStack Superuser

OpenStack Ussuri is Python3-Only: Upgrade Impact

A brief history of Python2 -> Python3:

Python 2.0 was officially released in 2000; OpenStack was founded in 2010 and used Python 2 as its base language from the start. The Python Foundation realized that, in order to prevent users from having to perform tasks in a backward or difficult way, big improvements needed to be made to the software.

We released Python 2.0 in 2000. We realized a few years later that we needed to make big changes to improve Python. So in 2006, we started Python 3.0. Many people did not upgrade, and we did not want to hurt them. So, for many years, we have kept improving and publishing both Python 2 and Python 3.

In 2015, The Python Foundation made a very clear announcement on multiple platforms to migrate to Python 3 and discontinue Python 2. This initial plan was later extended to 2020. 

We have decided that January 1, 2020, was the day that we sunset Python 2. That means that we will not improve it anymore after that day, even if someone finds a security problem in it. You should upgrade to Python 3 as soon as you can.

OpenStack Starting Support of Python 3:

With the announcement of the sunset of Python 2, it became very clear that OpenStack also could not support Python 2 for much longer. Because it would have been impossible to fix any security bugs on Python 2, it was better for OpenStack to drop its support completely and instead concentrate on Python 3.

OpenStack’s support of Python 3 started in 2013, and many developers contributed to the enormous task of transitioning the software. After much hard work from the community, running OpenStack under Python 3 by default became a community goal in the Stein cycle (September 2018). A community goal is a way to achieve common changes across OpenStack. Making OpenStack run under Python 3 by default was a great effort and included a lot of hard work by many developers. Doug Hellmann was one of the key developers and showed coordination and leadership with other developers and projects to finish this goal.

OpenStack Train (Oct 2019): Python3 by default:

In the OpenStack Train release (October 2019), OpenStack was tested on Python 3 by default. This meant that you could upgrade your cloud to a Python 3 environment with full confidence. OpenStack Train was released with well-tested Python 3 support while still supporting Python 2.7. At the same time, we kept testing the latest Python 3 version, and the OpenStack Technical Committee (TC) started defining the testing runtime for each cycle. OpenStack is targeting Python 3.8 in the next development cycle, beginning soon.

OpenStack Ussuri (May 2020): Python3-Only: Dropped the support of Python2:

With the Ussuri cycle, OpenStack dropped all support for Python 2. All the projects have completed updating their CI jobs to work under Python 3. This achievement allows the software to remove all Python 2 testing as well as the configuration that goes along with it.

At the very start of the Ussuri cycle, we began planning for the drop of Python 2.7 support. Dropping Python 2.7 was not an easy task when many projects depend on each other and also integrate CI/CD. For example, if Nova drops Python 2.7 support and becomes Python 3 only, it can break Cinder’s and many other projects’ CI/CD. We prepared a schedule and divided the work into three phases, dropping support from services first, then libraries and testing tools:

    Phase-1 (start of Ussuri -> Ussuri-1 milestone): OpenStack services start dropping Python 2.7 support.
    Phase-2 (milestone-1 -> milestone-2): common libraries and testing tooling.
    Phase-3 (at milestone-2): final audit.

Even so, a few things got broken in the initial work, so we made DevStack Python 3 by default, which really helped move things forward. In phase 2, when I started making Tempest and other testing tools python3-only, a lot of stable branch testing started breaking. That was expected, because Tempest and many other testing tools are branchless, meaning the master version is used for testing both the current and older releases of OpenStack, so all Python 2.7 testing jobs were using the Tempest master version. Finally, capping Tempest on the older branches and fixing its installation in a py3 venv made all stable branch and master testing green.

Just a couple of weeks before the Ussuri release, we completed this work and made OpenStack python3-only, with an updated wiki page. Two projects, Swift and Storlets, are going to keep supporting Python 2.7 for another one or two cycles.

What “OpenStack is Python3-Only” means for Users/Upgrades:

If your existing cloud is on a Python 3 environment, then you do not need to worry at all. If it is on Python 2.7 and you are upgrading to Ussuri, then you need to check that your environment has Python 3.6 or higher available. From the Ussuri release onwards, OpenStack works on Python 3.6 or higher only. For example, if you want to install the Nova Ussuri version, it will give an error if Python 3.6 or higher is not available. This is done via metadata (“python-requires = >=3.6”) in the setup configuration file. Below is how the setup config file looks from the Ussuri release onwards:

python-requires = >=3.6

classifier =
      Environment :: OpenStack
      Intended Audience :: Information Technology
      Intended Audience :: System Administrators
      License :: OSI Approved :: Apache Software License
      Operating System :: POSIX :: Linux
      Programming Language :: Python
      Programming Language :: Python :: 3
      Programming Language :: Python :: 3.6
      Programming Language :: Python :: 3.7
      Programming Language :: Python :: 3 :: Only
      Programming Language :: Python :: Implementation :: CPython

If you are using a distribution that does not have Python 3.6 or higher available, then you need to upgrade your distro first. There is no workaround or compatible way to keep running OpenStack on Python 2.7. We have sunset Python 2.7 support from Ussuri onwards, and the only way forward is to also upgrade your Python version. A few questions about the Python upgrade are covered in the FAQ section below.
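
A minimal pre-upgrade sanity check, purely as an illustrative sketch, is to confirm that the interpreter your services will use meets that floor:

python3 -c 'import sys; assert sys.version_info >= (3, 6)' && echo "Python is new enough for Ussuri"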

FAQ:

Q1: Is Python 2 to Python 3 upgrade being tested in Upstream CI/CD?

Answer: Not directly, but it is being tested indirectly. We did not set up grenade testing (upstream upgrade testing) for a py2 setup upgrading to a py3 setup. However, previous OpenStack releases like Stein and Train were tested on both Python versions, which means the OpenStack code was working and well tested on both versions before it became python3-only. This makes sure that upgrading py2->py3 for OpenStack has been tested indirectly. If you are upgrading OpenStack from Stein or Train to Ussuri, then there should not be any issues.

Q2: How are the backport changes from Ussuri onwards to old stable branches going to be python2.7 compatible?

Answer: We still run the Python 2.7 jobs on stable branches up to Train, so any backport from Ussuri or higher (which is tested on Python 3 only) will be backported to Train or older stable branches with testing on Python 2.7 as well. If anything breaks on Python 2.7, it will be fixed before backporting. That way we keep Python 2.7 support for all stable branches older than Ussuri.

Q3: Will testing frameworks like Tempest which are branchless (using the master version for older release testing) keep working for Python 2.7 as well?

Answer: No. We have released the last Python 2.7-compatible versions of Tempest and other branchless deliverables. Branchless means that the tool’s master version is used to test the current and older OpenStack releases. For example, Tempest 23.0.0 can be used as the Python 2.7 supported version, while Tempest 24.0.0 and master are Python 3 only. But there is a way to keep testing an older Python 2.7 cloud (until you upgrade it and want Tempest master to test it): run Tempest on a Python 3 node or virtual environment and keep using the master version to test the Python 2.7 cloud. Tempest does not need to be installed on the same system as other OpenStack services, as long as the cloud’s APIs are accessible from the separate testing node or virtual environment where Tempest runs.
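
A minimal sketch of that approach, assuming Python 3.6+ on the testing node and placeholder names:

python3 -m venv tempest-env
tempest-env/bin/pip install tempest
tempest-env/bin/tempest init cloud-workspace

(Tempest 23.0.0 remains the last release that can still be installed under Python 2.7.)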

For any other questions, feel free to ping on the #openstack-dev IRC channel.

 

The post OpenStack Ussuri is Python3-Only: Upgrade Impact appeared first on Superuser.

by Ghanshyam Mann at May 27, 2020 01:00 PM

May 26, 2020

Ed Leafe

Writing Again

Today marks 2 months since I was laid off from my job at DataRobot. It was part of a 25% reduction that was made in anticipation of the business slump from the COVID-19 pandemic, and having just been there for 6 months, I was one of the ones let go. I have spent the last … Continue reading "Writing Again"

by ed at May 26, 2020 03:13 PM

CERN Tech Blog

Scaling Ironic with Conductor Groups

CERN introduced OpenStack Ironic for bare metal provisioning as a production service in 2018. Since then, the service has grown to manage more than 5000 physical nodes and is currently used by all IT services still requiring physical machines. This includes storage and database services, but also the infrastructure for compute services. Even the “compute nodes” used by OpenStack Nova are instances deployed via Nova and Ironic (but that will be a different blog post!).

by CERN (techblog-contact@cern.ch) at May 26, 2020 07:00 AM

May 25, 2020

Galera Cluster by Codership

Galera Cluster 4 for MySQL 8 is Generally Available!

Codership is proud to announce the first Generally Available (GA) release of Galera Cluster 4 for MySQL 8, which improves MySQL High Availability a great deal. The current release comes with MySQL 8.0.19 and includes the Galera Replication Library 4.5 with wsrep API version 26. You can download it now (and note that we have packages for various Linux distributions).

Galera 4 and MySQL 8.0.19 have many new features, but here are some of the highlights:

  • Streaming replication to support large transactions by splitting transaction replication into smaller fragments and applying them as they are replicated. You can use this feature to load data faster, as data is written to all nodes simultaneously (or not at all in case of a failure on any single node).
  • Improved foreign key support, as write set certification rules are optimised and there will be a reduction in the number of foreign key related false conflicts in certifications.
  • Group commit is supported and integrated with the native MySQL 8 binary log group commit code. Within the codebase, the commit time concurrency controls were reworked such that the commit monitor is released as soon as the commit has been queued for a group commit. This allows transactions to be committed in groups, while still respecting the sequential commit order.
  • There are new system tables for Galera Cluster that are added to the mysql database: wsrep_cluster, wsrep_cluster_members and wsrep_streaming_log. You can now view cluster membership via system tables.
  • New synchronization functions have been introduced to help applications implement read-your-writes and monotonic-reads consistency guarantees. These functions are: WSREP_LAST_SEEN_GTID(), WSREP_LAST_WRITTEN_GTID() and WSREP_SYNC_WAIT_UPTO_GTID(). A usage sketch follows this list.
  • The resiliency of Galera Cluster against poor network conditions has been improved. Handling of irrecoverable errors due to poor network conditions has also been improved, so that a node will always attempt to leave the cluster gracefully if it is not possible to recover from errors without sacrificing data consistency. This will help your geo-distributed multi-master MySQL clusters tremendously.
  • This release also deprecates the system variables: wsrep_preordered and wsrep_mysql_replication_bundle.
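
As a rough illustration of read-your-writes with the new functions (a sketch with placeholder hosts and tables, not taken from the release notes), the write and the GTID capture happen in one session, and the read on another node waits for that GTID:

mysql -h node1 -e "INSERT INTO mydb.t VALUES (1); SELECT WSREP_LAST_WRITTEN_GTID();"
mysql -h node2 -e "SELECT WSREP_SYNC_WAIT_UPTO_GTID('<gtid-from-node1>'); SELECT * FROM mydb.t;"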

We are pleased to offer packages for CentOS 7, CentOS 8, Ubuntu 18.04, Ubuntu 20.04, Debian 10, OpenSUSE 15, and SUSE Linux Enterprise (SLES) 15 SP1. Installation instructions are similar to previous releases of Galera Cluster.

In addition to the release, we are going to run a webinar to introduce this new release to you. Join us for Galera Cluster 4 for MySQL 8 Release Webinar happening Thursday  June 4 at 9-10 AM PDT or 2-3 PM EEST (yes, we are running two separate webinars for the Americas and European timezones).  

EMEA webinar 4th of June, 2-3 PM EEST  (Eastern European Time)

JOIN THE EMEA WEBINAR 

USA webinar 4th of June, 9-10 AM PDT

JOIN THE USA WEBINAR

by Sakari Keskitalo at May 25, 2020 05:34 AM

May 21, 2020

Stephen Finucane

Why You Can't Schedule to Host NUMA Nodes in Nova?

If I had a euro for every time someone had asked me or someone else working on nova for the ability to schedule an instance to a specific host NUMA node, I might never have to leave the pub (back in halcyon days pre-COVID-19 when pubs were still a thing, that is).

May 21, 2020 12:00 AM

May 20, 2020

OpenStack Superuser

OpenInfra Labs: An Open Infrastructure Collaboration for Research Use Cases

In early March—at what turned out to be one of the last non-virtual technology events held before Coronavirus lockdowns ended in-person conferences—I was fortunate to be among more than 200 attendees who gathered for two days in Boston at the Open Cloud Workshop to discuss the intersection of academic research and cloud computing software. 

The workshop is hosted by the Massachusetts Open Cloud (MOC), a name that will be familiar to those who have attended OpenStack and Open Infrastructure Summits over the past few years. MOC is a consortium of universities in the New England area that share computing resources, data sets and operational practices. MOC equips its members with virtual resource sharing and on-demand user provisioning through high-bandwidth connections, all built upon OpenStack and driven by OpenStack APIs. Collectively, the members of MOC are active contributors to the OpenStack community and have delivered several Summit presentations, including in Atlanta (MOC Overview and Lessons Learned), Boston and Berlin.

Another great outcome of MOC’s involvement in the OpenStack community is a new initiative called OpenInfra Labs. OpenInfra Labs is a community created by and for academic and research cloud operators who are testing open source code in production and publishing complete, reproducible stacks for existing and emerging research workloads. 

The primary objective of OpenInfra Labs is to deliver open source tools to run cloud, container, AI, machine learning and edge workloads repeatedly and predictably. 

OpenInfra Labs focuses on three core activities:

  • Integrated testing of all the components necessary to provide a complete use case
  • Documentation of operational and functional gaps required to run upstream projects in a production environment
  • Shared code repositories for operational tooling and the “glue” code that is often written independently by users

The OpenInfra Labs community was initiated by MOC, the OSF, and Red Hat. It has since welcomed a host of additional core industry partners and contributors who are interested not only in supporting academic research but also in knowledge transfer to help enterprises develop reliable and powerful federated computing resources. 

Learn More about OpenInfra Labs

To learn more, check out the April 28 meeting of the OpenStack Scientific SIG, which featured an introduction to OpenInfra Labs. 

If you are interested in building infrastructure for university or research purposes or represent an ecosystem vendor who would like to contribute to OpenInfra Labs, here are three ways to get involved:

Everyone is invited to engage with the OpenInfra Labs community and contribute your talents and expertise to current activities and community goals. 

The post OpenInfra Labs: An Open Infrastructure Collaboration for Research Use Cases appeared first on Superuser.

by Jeremy Stanley at May 20, 2020 01:00 PM

May 19, 2020

Fleio Blog

Fleio 2020.05: Reseller customization, security groups templates, new angular frontend, docker and more

Fleio 2020.05 is now available! The latest version was published today, 2020-05-19. New reseller customization: with the latest version we have added more customization options to the reseller frontend, including themes support and custom logo support, so that your resellers can actually differentiate themselves from the cloud provider platform. You can […]

by Marian Chelmus at May 19, 2020 09:22 AM

May 17, 2020

CERN Tech Blog

A single cloud image for BIOS/UEFI boot modes on virtual and physical OpenStack instances

“Brace yourselves: upcoming hardware deliveries may come with UEFI-only support.” This announcement from our hardware procurement colleagues a few months ago triggered the OpenStack and Linux teams to look into how to add UEFI support to our cloud images. Up to now, CERN cloud users had been using the very same image for virtual and physical instances and we wanted to keep it that way. This blog post summarises some of the tweaks needed to arrive with an image that can be used to instantiate virtual and physical machines, can boot both of these in BIOS and UEFI mode, and works with Ironic managed software RAID nodes for both BIOS/UEFI boot modes as well.

by CERN (techblog-contact@cern.ch) at May 17, 2020 01:00 PM

May 16, 2020

Doug Hellmann

beagle 0.2.2

beagle is a command line tool for querying a hound code search service such as http://codesearch.openstack.org. What’s new in 0.2.2?
  • fix the reference to undefined function in link formatter
  • Fix issues (contributed by Hervé Beraud)
  • Refactor pipelines (contributed by Hervé Beraud)
  • [doc] refresh oslo examples (contributed by Hervé Beraud)

by doug at May 16, 2020 01:42 PM

May 15, 2020

Galera Cluster by Codership

Installing Galera on Amazon Linux 2 for Geo-distributed Multi-master MySQL

We recently covered Installing Galera Cluster 4 with MySQL 8 on Ubuntu 18.04, the new Galera version for MySQL High Availability. We got a request to see if we would be able to install it on Amazon Linux 2, and the short answer is yes, we are able to deploy Galera Cluster on Amazon Linux 2.

We have even published an Installing a Galera Cluster on AWS guide for geo-distributed MySQL multi-master clustering, which covers how to install a 3-node Galera Cluster on CentOS 7 to achieve disaster recovery. It turns out that Amazon Linux 2 tends to be quite compatible with that article (documentation). Heed the notices about how to configure SELinux, the firewall, as well as the security settings on AWS.

Today we will focus on installing Galera Cluster with MySQL 5.7 on Amazon Linux 2 (yes, the same instructions apply to installing the beta of MySQL 8 & Galera 4).

uname -a
Linux ip-172-30-0-54.ec2.internal 4.14.173-137.229.amzn2.x86_64 #1 SMP Wed Apr 1 18:06:08 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux

[ec2-user@ip-172-30-0-54 ~]$ cat /etc/system-release
Amazon Linux release 2 (Karoo)

[ec2-user@ip-172-30-0-54 ~]$ cat /etc/os-release 
NAME="Amazon Linux"
VERSION="2"
ID="amzn"
ID_LIKE="centos rhel fedora"
VERSION_ID="2"
PRETTY_NAME="Amazon Linux 2"
ANSI_COLOR="0;33"
CPE_NAME="cpe:2.3:o:amazon:amazon_linux:2"
HOME_URL="https://amazonlinux.com/"

So beyond the article above, all you have to do is to ensure that there is a /etc/yum.repos.d/galera.repo file:

[galera]
name = Galera
baseurl = https://releases.galeracluster.com/galera-3/redhat/7/x86_64
gpgkey = https://releases.galeracluster.com/GPG-KEY-galeracluster.com
gpgcheck = 1

[mysql-wsrep]
name = MySQL-wsrep
baseurl =  https://releases.galeracluster.com/mysql-wsrep-5.7/redhat/7/x86_64
gpgkey = https://releases.galeracluster.com/GPG-KEY-galeracluster.com
gpgcheck = 1

And then you install it via: sudo yum install galera-3 mysql-wsrep-5.7

Since this example took the smallest instance just for testing, a simple /etc/my.cnf was used to bootstrap a cluster:

default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
innodb_flush_log_at_trx_commit=0
innodb_buffer_pool_size=128M
binlog_format=ROW

wsrep_on=ON
wsrep_provider=/usr/lib64/galera-3/libgalera_smm.so

wsrep_node_name="g1"
wsrep_node_address="ip"
wsrep_cluster_name="galera4"
wsrep_cluster_address="gcomm://ip1,ip2,ip3"
wsrep_provider_options="gcache.size=128M; gcache.page_size=128M"
wsrep_sst_method=rsync

When you install MySQL 5.7, remember that you need to grab the temporary password from the MySQL log, which you can do via grep password /var/log/mysqld.log. After that, log in and remember to change the root password by doing something like alter user 'root'@'localhost' identified with mysql_native_password by 'rootyes123A!';. Go ahead and run mysqld_bootstrap on the first node, and start MySQL normally on the 2nd and 3rd nodes.
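
Once all three nodes are up, a quick sanity check (a small sketch; wsrep_cluster_size should report 3 if every node has joined):

mysql -u root -p -e "SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';"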

We ran the MySQL test suite, which includes Galera Cluster tests as well, and the tests passed. As a company, Codership considers the Amazon Linux 2 distribution compatible with our CentOS 7 binaries. Don’t forget we also release for Ubuntu, Debian, CentOS, openSUSE, Red Hat Enterprise Linux, and SUSE Linux Enterprise, in addition to FreeBSD. Expect a lot more distributions when MySQL 8 + Galera 4 goes Generally Available (GA).

by Sakari Keskitalo at May 15, 2020 07:27 AM

May 14, 2020

OpenStack Superuser

Inside open infrastructure: The latest from the OpenStack Foundation

Spotlight on: OpenStack Ussuri 

Ussuri, the 21st release of OpenStack, includes improvements in core functionality, automation, cross-cell cold migration, containerized applications, and support for new use cases at multiple levels in the stack.

Thank you to the more than 1,000 contributors from more than 50 countries and 188 organizations that contributed to the OpenStack Ussuri release. With these metrics, OpenStack continues to be one of the top three open source projects in the world in terms of active contributions, along with the Linux kernel and Chromium.

Among the many enhancements contributors delivered in Ussuri, three highlights are:

  1. Ongoing improvements to the reliability of the core infrastructure layer
  2. Enhancements to security and encryption capabilities
  3. Extended versatility to deliver support for new and emerging use cases

This year, we are celebrating 10 years of the OpenStack project. Since the software pioneered the concept of open infrastructure ten years ago, it has rapidly become the open infrastructure-as-a-service standard. Recently, new workload demands like AI, ML, edge, and IoT have given rise to the project’s support for new chip architectures, automation at scale down to the bare metal, and integration with myriad open source components. Intelligent open infrastructure—the integration of open source components that are evolving to meet these demands—creates an infrastructure that is self-monitoring, self-replicating, and delivering a versatile set of use cases.

The Ussuri release reinforces what OpenStack is well known for—namely, rock-solid virtual machine, container, and bare metal performance at massive scale. At the same time, Ussuri delivers security improvements via Octavia and Kolla. And, it supports new and emerging use cases with improvements to projects like Zun and Cyborg. 

Users can now use Nova to launch server instances with accelerators managed by Cyborg.

Learn more about Ussuri features, check out screenshots from different OpenStack projects, and find out who contributed to the 21st OpenStack release at openstack.org/ussuri.

OpenStack Foundation news

Project Teams Gathering (PTG) June 1-5

OpenDev

  • Large-scale Usage of Open Infrastructure Software
    June 29 – July 1, 2020. Register now for free!
  • Hardware Automation
    July 20 – 22, 2020
  • Containers in Production
    August 10 – 12, 2020

Airship: Elevate your infrastructure

  • Airshipctl has completed its 2.0 alpha milestone and is now working towards beta.
  • Airship will be participating in the virtual PTG! View the draft agenda, and make any suggestions by May 23.
  • If you are evaluating or running Airship, share your feedback in the Airship User Survey! Take the chance and provide anonymous feedback back to the community. Take the user survey now.

Kata Containers: The speed of containers, the security of VMs

  • We are happy to announce the new stable release for Kata 1.11.x branch. This is the first official release for 1.11.x and includes many changes compared to 1.10.x. See more details on the 1.11.x release.
    • Take a look at the full release notes for the changes in this release here.
    • We have released a new version of 1.10.x branch: 1.10.4.
  • If you are running Kata Containers, the user survey is your opportunity to provide anonymous feedback to the upstream community, so the developers can better understand Kata Containers environments and software requirements. Take your Kata survey today.

OpenStack: Open source software for creating private and public clouds

  • Ghanshyam Mann announced the completion of the Python 3 transition goal. It’s been a long journey since we started to introduce Python 3 support in 2013! OpenStack components and libraries are now Python3-only (except Swift and Storlets which will continue to support Python 2 in Ussuri).
  • Following the recent elections, the OpenStack Technical Committee selected Mohammed Naser as its chair for the Victoria cycle. It also confirmed the removal of the Congress and Tricircle projects in the Victoria release (scheduled for October 2020), and the merge of the LOCI team into the OpenStack-Helm team, due to commonality of scope.
  • Are you ready to take your cloud skills to another level? The updated Certified OpenStack Administrator (COA) exam can help you with that. Check out the OpenStack COA exam and become a Certified OpenStack Administrator

StarlingX: A fully featured cloud for the distributed edge

  • The nomination period for the upcoming TSC elections is starting next week! For details about the process please see the elections website
  • The StarlingX user survey is live. Take the StarlingX user survey and provide anonymous feedback to the upstream community.

Zuul: Stop merging broken code

  • Zuul’s Github driver now supports reporting results via the Github Checks API. Find out more in the Zuul Github driver docs.
  • Work has begun to support multi architecture docker image builds. Help us improve the system by setting the ‘arch’ parameter on your docker image build jobs.
  • Are you a Zuul user? Fill out the Zuul User Survey to provide feedback and information around your deployment. All information is confidential to the OpenStack Foundation unless you designate that it can be public. Take your Zuul survey today.

Check out these Open Infrastructure Community Events!

For more information about these events, please contact denise@openstack.org

Questions / feedback / contribute

This newsletter is written and edited by the OSF staff to highlight open infrastructure communities. We want to hear from you! If you have feedback, news or stories that you want to share, reach us through community@openstack.org . To receive the newsletter, sign up here.

The post Inside open infrastructure: The latest from the OpenStack Foundation appeared first on Superuser.

by Sunny Cai at May 14, 2020 06:08 PM

May 06, 2020

OpenStack Superuser

Where are they now? Superuser Awards winner: CERN

If you’ve been around the OSF community for any amount of time, chances are you’ve heard the name CERN.

Famous for their Large Hadron Collider, Higgs boson, and antimatter studies, the Geneva-based laboratory has spent decades researching physics and the universe. So what does that have to do with OpenStack? All of that research produces massive amounts of data, thus requiring a substantial amount of infrastructure.

Keep reading to find out how CERN’s OpenStack environment has evolved since they won the first Superuser Award at the OpenStack Summit six years ago.

What has changed in your OpenStack environment since you won the Superuser Awards?

At the OpenStack Summit Paris in 2014, CERN received the first Superuser Award from Guillaume Aubuchon, CTO of Digitalfilm Tree.

Presentation of first Superuser Award at Paris OpenStack Summit.

At the time, CERN’s cloud had been in production for a year with 65,000 cores running Havana providing VMs, images and identity. After six years and 13 upgrades, the CERN cloud now covers 11 OpenStack projects adding containers, bare metal, block, share, workflows, networking and file system storage.

What is the current size of CERN’s OpenStack environment?

Snapshot of CERN’s infrastructure dashboard.

Currently, the CERN cloud is around 300,000 cores across 80 cells with big recent growth in OpenStack Magnum to manage Kubernetes clusters, OpenStack Ironic servers for all the computer center hardware, and Fileshares with CephFS.

What version of OpenStack is CERN running?

We are in the process of upgrading from Stein to Train with most components already running Train. We use the RDO distribution.

What open source technologies does your team integrate with OpenStack?

The list is very long! The aim for the CERN cloud environment was to build a toolchain based on a set of open source projects which could also be used by other labs collaborating with CERN. A few examples are:

Cloud and Containers

Configuration

  • Puppet and Foreman for configuration management
  • Terraform for automated provisioning (including external clouds)

Monitoring

Storage

Identity

Workflows

  • Gitlab for version control, continuous integration
  • Koji for builds
  • Rundeck for automation

What workloads are you running on OpenStack?

Over 90% of the infrastructure in the CERN computer center is managed and provisioned by OpenStack. This includes the physics processing and storage, databases along with the infrastructure for the laboratory administration. The remaining hardware in the computer center is now being enrolled into Ironic to ensure strong resource management, accounting and lifecycle tracking.

How big is your OpenStack team?

The production support team in the CERN IT Department is around seven engineers with further students and fellows contributing to various project enhancements.

How is your team currently contributing back to the OpenStack project? Is your team contributing to any other projects supported by the OpenStack Foundation (Airship, Kata Containers, StarlingX, Zuul)?

CERN has made over 1,000 commits to OpenStack since the implementation started in 2011. The three largest OpenStack projects CERN has contributed to are Magnum, Nova and Keystone. CERN’s experiences have been presented in more than 30 talks at OpenStack Summits as well as regional events such as the Open Infrastructure Days, which have provided an opportunity to share the experiences of running OpenStack at scale and our current focus areas. This included an OpenStack Day at CERN in 2019 covering experiences of OpenStack usage in science, and hosting the Ironic mid-cycle meetup in 2020.

The CERN blog is available at https://techblog.web.cern.ch/techblog/ and local developments are shared at https://github.com/cernops.

CERN has also contributed to governance and project management including an elected OpenStack individual board member, two members of the User Committee and PTL/core roles in Magnum, Keystone and Ironic.

What kind of challenges has your team overcome using OpenStack?

Given the demands of the Large Hadron Collider and the CERN experiments, provisioning more computing capacity without increasing the number of engineers was a challenge to overcome. Working with other members of the open source community in areas such as Container Orchestration-as-a-Service, Nova Cells, Identity Federation and Spot Market functionality has allowed these new features to be developed, reviewed by the community and further enhanced. OpenStack Special Interest Groups such as the Scientific SIG and Large Scale SIG have provided a useful framework for debate, information sharing and common contribution.

A single framework for tracking, authentication and accounting for bare metal, virtual machines, storage and containers has been a major benefit for the CERN IT department. Allowing users to have self-service resources in a few minutes while ensuring that these are clearly allocated (and expired if appropriate) allows the CERN cloud users to focus on the goals of the laboratory rather than how to get the infrastructure they need.

Stay tuned for more updates from previous Superuser Award winners!

 

Cover Image: CERN

The post Where are they now? Superuser Awards winner: CERN appeared first on Superuser.

by Ashlee Ferguson at May 06, 2020 05:40 PM

May 05, 2020

VEXXHOST Inc.

3 KPIs Your Business Needs For Successful Cloud Infrastructure

When it comes to ensuring successful cloud infrastructure, there are certain KPIs that you need to pay close attention to. Key performance indicators such as cost and quality can make a large impact on the bottom line of your business. Tracking your cloud infrastructure’s KPIs can help your business measure its cloud performance while developing new strategies to improve your business overall.

Today we are going to review the KPIs your business needs to track to stay competitive. If your business is looking to make the most of its cloud infrastructure, it is time to consider these three KPIs as indicators of whether your cloud strategy is working. Keep reading to see which ones your business should be looking out for.

#1: Security

The level of security and compliance is a performance indicator that your business cannot afford to ignore. From evaluating the reliability of your access points to maintaining rigorous compliance, it’s important to keep a close eye on security measures. Your cloud infrastructure, whether a public or private cloud solution, needs to ensure that only the right people have access to confidential information. Take the time to note how compliant you are with each of your security requirements. Everything from physical access (if you have an on-premise solution) to GDPR needs to be reviewed regularly to ensure that everything is up to date.

#2: Cost

Cost and ROI are big factors when it comes to successful cloud infrastructure. Your business needs a solution that brings returns, not one that drains financial resources. When it comes to selecting a cloud provider, taking the time to calculate the infrastructure costs from the bottom up can help your business compare providers. After choosing a trusted cloud provider, it is worth measuring how much cloud spend your business is using and how much is being wasted. Track these measurements frequently to ensure that overall waste goes down, not up.


#3: User Experience

In any business strategy, the quality of user experience is a metric that needs to be tracked closely. It is important to keep the user experience at the core of your KPIs for successful cloud infrastructure. Does your cloud infrastructure have frequent downtime? Are you losing data in the disruptions of service? Is your cloud running as optimally as it should be? These are questions that need to be addressed in order to optimize your cloud infrastructure and improve overall user experience.

Getting Started: KPIs Your Business Needs For Successful Cloud Infrastructure

Whether your business is starting its cloud journey or looking to optimize the infrastructure it already has, the experts at VEXXHOST can help your business stay competitive. We are here to help your business refine and improve its key performance indicators and optimize your cloud infrastructure. We’ve been contributing to and using OpenStack since 2011, so it’s safe to say we know OpenStack clouds inside and out. Want to learn more about how we can help? Contact our team of experts today.

Would you like to know more about OpenStack Cloud? Download our white paper and get reading!

Conquer the Competition with OpenStack Cloud

The post 3 KPIs Your Business Needs For Successful Cloud Infrastructure appeared first on VEXXHOST.

by Hind Naser at May 05, 2020 05:42 PM

StackHPC Team Blog

Flatten the Learning Curve with OpenStack HIIT

With the current Coronavirus lockdown affecting many countries (including all the countries in which we work), remote working and videoconferencing have become the only way to be productive.

At StackHPC our flexible and distributed team is already used to working this way with clients. We have gone further, and developed online training for workshops we would normally deliver in person.

OpenStack HIIT: OpenStack in Six Sessions

With a nod to the intensity of OpenStack's infamous learning curve, we've called our new workshop format OpenStack HIIT.

OpenStack HIIT is a remote workshop, delivered by video conference. The workshop is organised into six sessions. Session topics include:

  1. Step-by-step deployment of an OpenStack control plane into a virtualised lab environment.
  2. A deep dive into the control plane to understand how it fits together and how it works.
  3. Operations and Site Reliability Engineering (SRE) principles. Best practices for operating cloud infrastructure.
  4. Monitoring and logging for OpenStack infrastructure and workloads.
  5. Deploying platforms and applications to OpenStack infrastructure.
  6. OpenStack software-defined networking deep dive.
  7. Ceph storage and OpenStack.
  8. Contributing to a self-sustaining open source community.
  9. Deploying Kubernetes using OpenStack Magnum.

Each session is led by a Senior Tech Lead from StackHPC's team. The workshop is designed to be interactive and up to six attendees can be supported.

Because it is remotely delivered, the sessions can be spread out, enabling attendees to read around the subject, practice content learned and prepare ahead for the next session.

The interactive sessions use lab infrastructure provided as part of the workshop. In some circumstances a client's own infrastructure can be used, which gives a client the opportunity to retain the lab environment and to use it between sessions. Additional provision for qualification of a client environment is required in this case.

OpenStack HIIT

Get in touch

If you would like to get in touch we would love to hear from you. Reach out to us via Twitter or directly via our contact page.

by Stig Telfer at May 05, 2020 03:00 PM

Galera Cluster by Codership

Installing Galera Cluster 4 with MySQL 8 on Ubuntu 18.04

Since the beta of Galera Cluster 4 with MySQL 8 has been released, we’ve had people asking questions as to how to install it on Ubuntu 18.04. This blog post will cover just that.

Prerequisites

  • All 3 nodes need to have Ubuntu 18.04 installed
  • Firewall (if enabled) needs to accept connections on 3306, 4444, 4567, 4568 (a default setup has the firewall disabled; see the sketch after this list for opening these ports with ufw)
  • AppArmor disabled (this is as simple as executing: systemctl stop apparmor and systemctl disable apparmor).
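If you do run a firewall, the required ports can be opened with ufw, Ubuntu’s default firewall front end. This is only a sketch and assumes ufw is in use; adapt it to whatever firewall you actually run.

# MySQL client connections
ufw allow 3306/tcp
# State Snapshot Transfer (SST, e.g. rsync)
ufw allow 4444/tcp
# Galera replication traffic (TCP and UDP)
ufw allow 4567/tcp
ufw allow 4567/udp
# Incremental State Transfer (IST)
ufw allow 4568/tcp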

Installation and Configuration

We have good installation documentation as well as a quick guide on how to get this installed in AWS (though that guide is CentOS-centric).

First, you will need to ensure that the Galera Cluster GPG key is installed:

apt-key adv --keyserver keyserver.ubuntu.com --recv BC19DDBA

 

This is followed by editing /etc/apt/sources.list.d/galera.list to have the following lines in the file:

deb https://galeracluster.com/wsrep_8.0.19-26.3-rc/galera-4/ubuntu bionic main

deb https://galeracluster.com/wsrep_8.0.19-26.3-rc/mysql-wsrep-8.0/ubuntu bionic main


You should now run an apt update and then install Galera 4 with MySQL 8:

apt install galera-4 mysql-wsrep-8.0

You will be prompted to enter a root password, since apt/dpkg supports interactivity during installation; please enter a reasonably secure password. You are then asked whether to use the stronger authentication plugin, caching_sha2_password (you are encouraged to pick this).

 

Then you need to edit the /etc/mysql/mysql.conf.d/mysqld.cnf file to add the following lines:

default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
innodb_flush_log_at_trx_commit=0
innodb_buffer_pool_size=128M
binlog_format=ROW
wsrep_on=ON
wsrep_provider=/usr/lib/galera/libgalera_smm.so
wsrep_node_name="g1"
wsrep_node_address="ipaddress"
wsrep_cluster_name="galera4"
wsrep_cluster_address="gcomm://ip1,ip2,ip3"
wsrep_provider_options="gcache.size=128M; gcache.page_size=128M"
wsrep_slave_threads=4
wsrep_sst_method=rsync

Remember that you will need to change wsrep_node_name and wsrep_node_address. The above is a very basic configuration.

Ensure that you have stopped MySQL (systemctl stop mysql). On the first node, execute:

mysqld_bootstrap

You can execute: mysql -u root -p -e "show status like 'wsrep_cluster_size'" and see:

+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| wsrep_cluster_size | 1     |
+--------------------+-------+

Now, when you bring up the second node simply with systemctl start mysql, you can execute the same command above and see that wsrep_cluster_size has increased to 2. Repeat for the third node. You can also test replication by creating a database and table on one node and seeing that the change appears on the others in real time.
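A quick way to exercise that replication is sketched below; the database and table names are throwaway examples.

# on node 1: create a test database and table, then insert a row
mysql -u root -p -e "CREATE DATABASE galeratest;
  CREATE TABLE galeratest.t1 (id INT PRIMARY KEY, msg VARCHAR(64));
  INSERT INTO galeratest.t1 VALUES (1, 'hello from node 1');"

# on node 2 (or 3): the row should already be visible
mysql -u root -p -e "SELECT * FROM galeratest.t1;"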

To find out more, start MySQL and execute show status like 'wsrep%';.

We hope this helps you get started, and we are definitely looking at providing packages for Ubuntu 20.04, which was just released. Look out for more guides on getting started on other Linux distributions.

 

by Sakari Keskitalo at May 05, 2020 02:20 PM

April 30, 2020

OpenStack Superuser

Inside open infrastructure: The latest from the OpenStack Foundation

Spotlight on: Upcoming OSF Virtual Events  

We’re Going Virtual!
Last month, based on the input from the community, the board, and the latest information available from health experts, we announced our decision not to host OpenDev + PTG in Vancouver this June. Instead, we’d like to invite you to join us for the upcoming, first-of-its-kind virtual OSF event series!

OpenDev 

Join us for OpenDev, an ongoing collaborative event series focused on advancing open source software and communities. Participants can expect discussion oriented, interactive sessions exploring challenges, sharing common architectures, and collaborating around potential solutions. Previous OpenDev events include Edge Computing in 2017 and CI/CD in 2018.

The virtual OpenDev event series will consist of three separate events hosted in the upcoming months, each focused on a different open infrastructure topic:

  • Large-scale Usage of Open Infrastructure Software
    June 29 – July 1, 2020. Register now!
  • Hardware Automation
    July 20 – 22, 2020
  • Containers in Production
    August 10 – 12, 2020

If you are interested in the Hardware Automation or Containers events, we’d love your input on the best time block to host the sessions before registration goes live. Please share your preference here: Hardware Automation | Containers in Production.

PTG

The Project Teams Gathering (PTG) is an event where open source upstream contributors (user working groups, development teams, operators, SIGs) gather to collaborate on software requirements and roadmaps. Registration is now open for the virtual PTG, taking place June 1-5.

The event is open to all OSF projects, and teams are currently signing up for their time slots. Find participating teams here; the schedule will be posted on the PTG website in the upcoming weeks. Join us!

Sponsor Shout Out

We also want to thank all of the OpenStack Foundation Platinum, Gold, and Corporate sponsors for their ongoing support that make these virtual events possible and free to attend. We couldn’t do it without you!  

Airship: Elevate your infrastructure

  • Join Airship at the virtual PTG! Stay up to date on meeting plans via the mailing list.
  • Check out the April update on the blog for the latest Airship 2.0 progress, virtual March meeting notes, and more.
  • Interested in learning how to set up a Cluster API development environment? Find step-by-step directions and documentation in this tutorial. This development environment will allow you to deploy virtual nodes as Docker containers in Kind, test out changes to the Cluster API codebase, and gain a better understanding of how Airship works at the component level to deploy Kubernetes clusters (a generic sketch of the bootstrapping pattern follows after this list).
    • Read more about how Airship 2.0 plans to use Cluster API in this blog post.
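For orientation only, the generic Cluster API quick-start pattern on Kind looks roughly like the following. This is not the Airship tutorial itself, and the cluster name and the choice of the Docker infrastructure provider are assumptions.

# hedged sketch: a local management cluster in Kind, then Cluster API initialised with the Docker provider
kind create cluster --name capi-mgmt
clusterctl init --infrastructure docker
kubectl get pods -A    # Cluster API controllers and cert-manager pods should come up in their namespaces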

Kata Containers: The speed of containers, the security of VMs

  • The community has set up an etherpad page for the Kata virtual PTG. Please register your name and time slots if you plan to attend. Also, if you have anything to discuss during the PTG, feel free to add it there as well.
  • Kata Containers Demo: A Container Experience with VM Security
    • Eric Ernst, principal systems software engineer at Ampere, and Bharat Kunwar, software engineer at StackHPC, explain how Kata Containers works, as well as its performance and security advantages. They also describe a use-case scenario and new research.

OpenStack: Open source software for creating private and public clouds

  • The OpenStack community is in the final preparation stage for the ‘Ussuri’ release, scheduled for May 13. Discussion on how to properly celebrate virtually is under way on the openstack-discuss mailing list.
  • The election cycle to designate the stewards for our next development cycle, Victoria, just concluded. The Technical Committee (now a group of 11 people) welcomes two new members: Belmiro Moreira and Kristi Nikolla. They, along with three returning members (Graham Hayes, Mohammed Naser, and Rico Lin), make up the newly elected members of the TC. Also, a huge thank you to Alexandra Settle, Jim Rollenhagen, Thierry Carrez, and Zane Bitter for their past service. PTLs for project teams were also renewed, with 12 new people stepping up.
  • The 2019 User Survey results have been analyzed by the OpenStack Technical Committee. Read the full report for more information.
  • The Kolla team set up a new way to engage with users, and improve communication between Kolla operators and Kolla developers: the Kolla Klub. Interested? Read more information on how to join the next meeting.

StarlingX: A fully featured cloud for the distributed edge

  • The community is preparing for the upcoming virtual PTG to discuss topics like planning for the 5.0 release cycle, testing and cross-project collaboration. Stay tuned for updates as the event is getting closer!
  • The next StarlingX TSC elections are happening in less than a month! Check out the details on the elections web page in case you are interested in running for one of the 4 seats.

Check out these Open Infrastructure Community Events!

For more information about these events, please contact denise@openstack.org

Questions / feedback / contribute

This newsletter is written and edited by the OSF staff to highlight open infrastructure communities. We want to hear from you! If you have feedback, news or stories that you want to share, reach us through community@openstack.org. To receive the newsletter, sign up here.

The post Inside open infrastructure: The latest from the OpenStack Foundation appeared first on Superuser.

by Sunny Cai at April 30, 2020 06:06 PM

StackHPC Team Blog

StackHPC Under African Skies: Kayobe in Cape Town

StackHPC are pleased to announce, along with our partner Linomtha ICT, a new OpenStack system at the Centre for High Performance Computing to support researchers and academics across South Africa. StackHPC worked with Linomtha, Supermicro and Mellanox to jointly engineer the system and support project management. The system is deployed with OpenStack Kayobe, together with a billing system engineered around CloudKitty and Monasca.

The text below can also be found on Linomtha's blog.

LinomthaID logo

The Centre for High Performance Computing (CHPC) is proud to announce a new on-premise cloud infrastructure that has been delivered recently under exceptional circumstances. The delivery of the system is testament to the close collaboration CHPC has with Linomtha ICT (SA) and their strategic technology partners StackHPC Ltd (UK), Supermicro (SA), and Mellanox (IL), and will ensure that the CHPC has a stable environment to continue to deliver on its mandate. The OpenStack Production Cloud Service caters for CHPC scientific users executing, for example, custom workflows, embarrassingly parallel workloads and web hosting. The OpenStack services will also be a road-header for such HPC configurations in the future. It is envisaged that this platform will build both the skills and operational experience for CHPC to develop, provision and operate a national federated OpenStack platform, which will be linked with other countries involved in the Square Kilometre Array (SKA) project.

The cloud infrastructure has been designed so that data can be transported to and from the CHPC by external institutions connected to the NICIS network, as well as those that want to utilise the DIRISA long-term storage.

Linomtha, a majority black-owned company comprising an energetic mix of business people, entrepreneurs and engineers with experience and skills from various fields, together with CHPC, successfully completed the installation of the OpenStack Production Cloud Service project.

Linomtha recognises the important role that ICT can play in terms of economic growth, social inclusion and government efficiency. The key individuals driving Linomtha all have extensive practical experience in the field of ICT, working on large scale government and private sector projects across the country and are recognized as experts, both locally and internationally. Linomtha is a value-added reseller of StackHPC as well as Supermicro, the key technology partners in responding to CHPC's RFP. LinomthaICT's sister company, LinomthaID, provided the Billing/Invoicing portal for the solution through its VOIS platform.

The CHPC previously ran a VMware virtual environment, or cluster (IT-Shop), as an alternative to support scientific projects or applications which were not well suited to the High Performance Computing platform. Projects were mostly hosted on the IT-Shop cluster as web portals, helping these scientific groups share data and knowledge or compute their specific scientific workflows.

The IT-Shop cluster is currently over-provisioned, especially for memory resources, due to the large demand from numerous projects requiring high-spec virtual machines, and it has become an unreliable environment, no longer able to adequately serve users as performance and available capacity have deteriorated over time.

The CHPC OpenStack Production Cloud will provide a sufficient and efficient environment to continue to support these kinds of projects from the IT-Shop. In addition, the CHPC Cloud Solution will offer the following benefits and functionalities which were not met on the current IT-Shop:

  • Self-Service Portal. CHPC Cloud users will now have the ability to deploy applications on demand with limited technical support, promoting rapid and efficient IT service.
  • Metered Service and Resource Monitoring. CHPC will now be able to monitor resource utilization by individual users or projects to prepare billing statements as per our cost-recovery model.
  • Avoid Vendor Lock-In. The OpenStack solution is open source, so CHPC will reduce costs related to proprietary software such as the VMware vSphere solution.
  • Enable Rapid Innovations (DevOps). The CHPC staff can significantly reduce development and testing periods and have more freedom to experiment with new technology, or even do customisation to expand the capabilities of the OpenStack cloud.

The CentOS-based OpenStack cloud is a self-service virtual machine (VM) provisioning portal for CHPC administrators, where common administrative tasks like VM creation, recouping unused resources, and infrastructure maintenance are automated, and capacity analysis, utilization, and end-user costing reports can be generated.

Through this project, CHPC administrators have been exposed to the initial implementation of the OpenStack system and have hands on experience of performing the various required tasks.

Linomtha, together with Supermicro, Mellanox, StackHPC and LinomthaID, have jointly engineered the CSIR OpenStack cloud solution. This solution is built on Supermicro server and storage systems that deliver first-to-market innovation and are optimized for value, performance and efficiency. Supermicro TwinPro servers provide 320 cores / 640 threads (2.50 - 3.90GHz) and over 3TB of DDR4-2933 memory (around 9GB of RAM per core) in just 4U of rack space, connected through Mellanox 100Gb Ethernet networking to Supermicro Ultra and Supermicro Simply Double servers providing a Ceph storage cluster with over 1.5PB (1,500TB) of mechanical disk storage and more than 220TB of flash storage.

OpenStack was deployed with OpenStack Kayobe, a tool largely developed and maintained by StackHPC within the OpenStack Foundation. Kayobe provides for easy management of the deployment process across all compute, storage and networking infrastructure using a high degree of automation through infrastructure as code. Kayobe invokes a containerised Kolla control plane providing for easier upgrades and maintainability. In addition to the infrastructure element, Kayobe also deploys rating, monitoring and logging services providing insight on resources and their use.
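For readers new to Kayobe, its workflow is driven from a small set of CLI commands layered over Ansible and Kolla Ansible. The sequence below is a heavily abbreviated sketch of a typical deployment, not the exact commands used for this system; all environment-specific configuration is omitted.

# abbreviated sketch of a Kayobe-driven deployment
kayobe control host bootstrap        # prepare the Ansible control host
kayobe seed host configure           # configure the seed node
kayobe overcloud inventory discover  # build the overcloud inventory from discovered hosts
kayobe overcloud host configure      # configure OS, networking and storage on overcloud hosts
kayobe overcloud service deploy      # deploy the containerised Kolla control plane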

The integration of the invoicing engine and portal, VOIS, was undertaken by LinomthaID, who extracted the billing information on OpenStack usage provided by CloudKitty and localised and customised the invoicing to CHPC requirements.

To ensure constant and clear communication during the project, the Linomtha project team held daily stand-up calls and weekly progress meetings and utilised tools such as Slack and Google Meet, which allowed for quick turnaround times when addressing queries.

We were impressed with the Slack communication and the shared Google Drive provided for documentation between team members; it made the sharing of thoughts much easier, resulting in problems being solved quickly and collaboratively.

A single point of contact was identified from each stakeholder involved in the project, allowing for communication to flow to the right people and ensuring action items were accomplished and ultimately, meeting the challenging deadline.

One component of the project was training which initially was to take place on-site, but due to the restraints of COVID-19, the team improvised and the training was successfully delivered remotely, over a five-day period. The training was deemed a great success! The training has ensured that the CHPC Administrators have sufficient knowledge and confidence to efficiently manage the environment.

The training was one of the best we've attended; the setup was great, and the trainers' expertise and their quick thinking, or rather well-considered answers, in providing solutions to our questions were impressive. The information gathered and shared is helping us with our OpenStack operations and we can only grow stronger from here with our OpenStack expertise as well.

No project is without challenges and this one was no exception. One of the lessons learnt was that the time between the initial workshop and implementation was too compressed. It did not allow for all team members, including technical resources, to fully understand the finer technical detail of the project and allow them to all contribute.

Despite the challenges encountered during the project, through the professional Linomtha Project Management deployment, milestones were met, the deadline accomplished, quality documentation drafted, successful training delivered and the handover to operations completed within the required deadline and budget.

Get in touch

If you would like to get in touch we would love to hear from you. Reach out to us via Twitter or directly via our contact page.

by John Taylor at April 30, 2020 03:00 PM

April 29, 2020

StackHPC Team Blog

Kata Containers on The New Stack

Our team draws on a broad base of expertise in the technologies used to build the high-performance cloud. Occasionally our research breaks new ground, and we are always thrilled with the opportunity to talk about it.

The New Stack recently approached Bharat from our team to participate in a webinar on Kata Containers. Kata Containers is often pitched with the soundbite "the speed of containers, the security of VMs". Bharat's previous research on IO performance suggested the real picture was more nuanced.

The end result is a great article and webinar (with Eric Ernst from Ampere), which can be read here. Bharat's presentation can be downloaded here (as PDF).

Get in touch

If you would like to get in touch we would love to hear from you. Reach out to us via Twitter or directly via our contact page.

by Bharat Kunwar at April 29, 2020 01:40 PM

April 21, 2020

VEXXHOST Inc.

6 Reasons Why You Should Run Your Containers On OpenStack

Let’s talk about running containers on OpenStack.

In a fiercely competitive market, it’s crucial that businesses keep up to date with trends and innovations within the IT space. The power of an OpenStack-powered cloud allows users to deploy and update applications faster than ever before, keeping pace with ever-increasing demand.

Container technology in OpenStack can offer strategic flexibility and agility when it matters most. Moreover, with the power of containerization, your business will be able to manage applications in a consistent fashion all the while increasing overall efficiency.

Today we are going to review six reasons why containers and OpenStack work well together and why you should be running your containers on OpenStack. Keep reading to learn more.

Reason 1: Provides Measurable Standards

Thanks to the support of the growing OpenStack community of users and developers, OpenStack is able to provide a solid platform for building scalable clouds. By providing measurable standards for cloud platforms, OpenStack can offer flexibility, efficiency, innovation, and savings for all users of its infrastructure.

Reason 2: Improves Overall Security

Some businesses are hesitant to adopt containers due to security concerns. Thankfully, OpenStack can help limit some of these concerns and risks. Through the integration of certain tools for scanning and certification, OpenStack allows for the verification of container content, ensuring that content and containers are safe. OpenStack clouds support both single-tenant and multi-tenant options for private and public clouds respectively, so your business is able to select which cloud best suits its unique security needs. That being said, at VEXXHOST you’ll find virtual machines, bare metal and containers available all in one environment.

Reason 3: Allows Teams To Develop Apps Faster

If your business or enterprise is looking to develop better quality applications with speed, then containers may be able to help. Containers can increase the portability of applications while reducing the overall time it takes to develop them. In addition, highly distributed applications can take advantage of microservice architectures, and containers help deploy those microservices quickly. Containers plus OpenStack is a great way to add speed to your cloud infrastructure.

Reason 4: The OpenStack Community

The OpenStack community has created several projects that support containers. These projects work to support containers and the third-party ecosystems around them, within an OpenStack powered cloud. In more recent developments, OpenStack offers different container-centric management solutions, such as monitoring, multi-tenant security, and isolation.

Reason 5: Software-Defined Infrastructure Services

OpenStack compute, network, storage, tenancy security, and service management are just some of the software-defined infrastructure services that OpenStack offers. This ecosystem provides a plethora of capabilities and choices for developers and users alike. Moreover, containers are able to run within virtual machine sets, aggregating OpenStack compute and other infrastructure resources.

Reason 6: Continuous Standardization

Lastly, OpenStack embraces advanced open standards for container technology. The OpenStack Containers team was created to work and build upon container standards, allowing things like the runC runtime standard from the Open Container Initiative (OCI) to become a reality. From there, OpenStack continues to develop simpler ways for organizations to adopt container technology within their OpenStack-powered cloud.

Run Containers On OpenStack

Is your business considering getting started with OpenStack? Trust the experts at VEXXHOST to help guide you through the process. Contact us today to learn more about our OpenStack-powered private cloud services.

Would you like to know more about OpenStack Cloud? Download our white paper and get reading!

Conquer the Competition with OpenStack Cloud

The post 6 Reasons Why You Should Run Your Containers On OpenStack appeared first on VEXXHOST.

by Hind Naser at April 21, 2020 07:47 PM

April 20, 2020

VEXXHOST Inc.

3 Tips For Easy Cloud Application Development

Application development is not a one-size-fits-all model. There are significant differences between traditional IT department systems and cloud programming tools, and for traditional IT those differences can mean slower processing times, complex integration, and issues that consume time and resources. The best way to take on any challenges surrounding old IT infrastructure and application development is for businesses to be open to adopting new technologies and to face any issues head-on.

Good news: 76% of businesses have some form of their data center infrastructure on the cloud. Better news: In 2015, 52% of enterprises with 1,000 employees or more planned on increasing their cloud spend. This number is only growing with each passing year as cloud computing takes a larger prominence in the IT-sphere.

When it comes to the growth of the cloud, its impact on application development is evident. From changes in outlining design specifications to writing code, the cloud is here to help deliver applications more efficiently. We’re here today to go over some tips and tricks for easy cloud application development. Ready to get learning? Let’s dive straight in.

Tip #1: Address Performance Issues Early On In Cloud Application Development

If you don’t address performance issues early on, they can have a devastating impact on your system development. Prepare your team to work around potential network bottlenecks or latency issues; applications need to be architected to ensure that network resources are always available. Before, applications ran on a handful of computers; now cloud computing allows applications to run across multiple servers and even large data centers. Create your application design with the potential server load and bandwidth in mind to make sure that everything runs smoothly from the start.

Tip #2: Understand Your Impact

The impact of cloud implementation in application development goes much further than your IT department. Your application systems reach from internal departments such as sales or human resources to external parties such as partners or customers. Moreover, with the cloud, your business is able to extend its systems and share data. You need to ensure that your data and all applications are secure, especially when opening up data to users outside your organization. Examine all connected components to ensure that information reaches those who need it and stays inaccessible to those who don’t.

Tip #3: Keep A Close Eye On System Resources

In order to ensure the smoothest experience in cloud application development, users need to be wary of their usage of system resources. There is a dynamic aspect to application development: system configurations are always in flux, and a virtual machine could be used for a test one day and still be running a few days later. With traditional IT systems, such an oversight isn’t the end of the world. In the cloud, even though any single idle resource is not a major expense, these costs add up and can impact the productivity of application development. Moreover, cloud computing gives businesses the benefits of additional flexibility, better agility, and lower costs. Keep a close eye on your system resources to make sure that your development is as efficient as it can be.

Cloud application development is here to help streamline your business processes. Curious to learn more? Contact our team of experts to learn how a public cloud solution can help get your business started with the cloud.

Would you like to know more about OpenStack Cloud? Download our white paper and get reading!

Conquer the Competition with OpenStack Cloud

The post 3 Tips For Easy Cloud Application Development appeared first on VEXXHOST.

by Hind Naser at April 20, 2020 08:15 PM

OpenStack Superuser

Women of Open Infrastructure: Meet Melanie from the OpenStack Nova Project

This post is part of the Women of Open Infrastructure series spotlighting the women in various roles in the community who have helped make Open Infrastructure successful. With each post, we learn more about each woman’s involvement in the community and how they see the future of Open Infrastructure taking shape. If you are interested in being featured or would like to nominate someone to tell their story, please email editor@openstack.org.

This time we’re talking to Melanie Witt from the OpenStack Nova project. She tells Superuser how she became an active contributor in the community and shares her best practices for staying on top of things in the fast-moving open source industry.

What’s your role (roles) in the Open Infrastructure community?

I am a core reviewer in the OpenStack Nova project. I also served as Nova Project Team Lead (PTL) for the Rocky and Stein release cycles.

What obstacles do you think women face when getting involved in the Open Infrastructure community?   

For me, I think the primary obstacle I faced when I was first getting involved in the community was being different than most everyone else. Questions like, “will they accept me?” or “do I belong here?” came to my mind. I think I started off too shy because of this.

The community documentation made it easy for me to get started and before I knew it, I was proposing patches, filing and triaging bugs, and chatting on IRC with members of the community. Everyone was (and still is, eight years later) so welcoming and willing to help me. I love this community and am still so happy I joined. The only thing I would change is I would have started off less shy.

Why do you think it’s important for women to get involved with open source?

I think it’s important for everyone to get involved with open source. Open source software is such a unique model where a large community of contributors works together on software we all share. Each contribution is multiplied not only to a single company’s product or customers but to everyone in the world who uses that same open source software. It’s like “distributed software.” We have so many improvements constantly flowing into the software from different people and organizations that it can get difficult to keep track of all of them (in a good way!).

Open source is so important and impactful that I think everyone who is interested in getting involved should absolutely get involved. You might hesitate and wonder whether it’s for you if you are different, but I encourage everyone to give it a try. It can be very rewarding.

Efforts have been made to get women involved in open source, what are some initiatives that have worked and why?

I think the most important thing is general community encouragement. When someone asks a question in an IRC channel or when they propose a patch, file a bug, post to the mailing list, having a community that responds with friendliness, helpfulness, and actionable guidance makes all the difference, in my opinion. When someone reaches out to make a contribution, they’ve put themselves out there to a new community. It’s important to help them learn the ropes and by doing that you also let them know you appreciate their contribution. When people know their contributions are appreciated, they are more likely to return and make more contributions. Keep it going and eventually, they will hold positions in the community like core reviewer, PTL, Technical Committee (TC) member, etc.

Community documentation is the second most important thing, in my opinion. This will be the first thing that prospective contributors see and interact with, so it’s important that it be clear, concise, and easy to consume as a layperson. All of us had to start somewhere and the easier it is to understand the documentation as a new person who doesn’t know anything yet, the more likely we are to obtain new contributors. It’s hard to make that first step to get involved when you have no clue what anything is or how to use it.

Open source moves very quickly. How do you stay on top of things and what resources have been important for you during this process?

I do a lot of things to stay up to speed. First, I have a separate email address for open source work and I set up email filters for Gerrit notifications, Launchpad bugs, and mailing lists. This helps me to quickly find highlights in each area: code reviews | bugs | community discussion mailing list. Next, I try to attend community meetings on IRC and if I can’t attend, I read the recorded meeting log created by the channel meetbot. I have my IRC client set up with a ZNC bouncer and receive notifications when my IRC nick is mentioned. I review these at the start of each workday and respond to items related to me.

By doing these things, I’m able to have at least a high-level idea of what’s going on even when I’m more occupied with downstream work. During the busiest of downstream times, I spend at least 15 minutes a day reviewing the code review|bug|community mailing list email filter folders just so I have an idea what’s been going on.

The last thing I’ll mention might sound obvious, but another thing to do is to let people know when you’re interested in something. If you’re interested in a feature or a bug fix, chime in on the review, the bug, or IRC. When people know others are interested, they’re more mindful about communicating updates and will likely keep you in the loop directly.

What advice would you give to a woman considering a career in open source? What do you wish you had known?

I would say, give it a try! Each open source community is different and it’s important to know that you can find the right community that is a good match for you. Not everyone finds the same fit in the same communities. I would advise not to give up on a career in open source if the first community you tried was not a good match. Communities are mostly about people, not only code, so there is an element of match-making personalities and styles involved.

I think the number one thing I wish I had known a long time ago is how rewarding it is to take risks. I’m actually thinking more about small risks, like asking questions, jumping into a code review|bug report|community mailing list post that you weren’t previously involved with, asking for advice, and proposing an idea or patch and potentially being wrong or having people not like your idea. Lots of these things can feel like embarrassments or failures but I’ve learned over the years that these are things that make you part of a community and team. You might feel embarrassed but what others see is that you are engaged and motivated to solve problems. That you are someone they might want to ask to weigh in on something later or you might be someone to go to for questions on that topic.

Push yourself to ask questions and share thoughts on code reviews, even if they are not perfect. Oftentimes, an imperfect question or comment will build a little bridge for someone else to catch a problem in a patch or see a way to improve a patch. This helps build your relationship with the community as people get to know your contributions and get more chances to appreciate them.

Taking risks is hard but I think it’s really worth it. I’d say it’s even essential. Otherwise, you stay “distant” in the community, to some degree. So, please take risks. I have been in the community for many years and I am still pushing myself to take risks.

The post Women of Open Infrastructure: Meet Melanie from the OpenStack Nova Project appeared first on Superuser.

by Superuser at April 20, 2020 05:00 PM

April 19, 2020

RDO

Community Blog Round Up 19 April 2020

Photo by Florian Krumm on Unsplash

Three incredible articles by Lars Kellogg-Stedman aka oddbit – mostly about adjustments and such made due to COVID-19. I hope you’re keeping safe at home, RDO Stackers! Wash your hands and enjoy these three fascinating articles about keyboards, arduino and machines that go ping…

Some thoughts on Mechanical Keyboards by oddbit

Since we’re all stuck in the house and working from home these days, I’ve had to make some changes to my home office. One change in particular was requested by my wife, who now shares our rather small home office space with me: after a week or so of calls with me clattering away on my old Das Keyboard 3 Professional in the background, she asked if I could get something that was maybe a little bit quieter.

Read more at https://blog.oddbit.com/post/2020-04-15-some-thoughts-on-mechanical-ke/

Grove Beginner Kit for Arduino (part 1) by oddbit

The folks at Seeed Studio have just released the Grove Beginner Kit for Arduino, and they asked if I would be willing to take a look at it in exchange for a free kit. At first glance it reminds me of the Radio Shack (remember when they were cool?) electronics kit I had when I was a kid – but somewhat more advanced. I’m excited to take a closer look, but given shipping these days means it’s probably a month away at least.

Read more at https://blog.oddbit.com/post/2020-04-15-grove-beginner-kit-for-arduino/

I see you have the machine that goes ping… by oddbit

We’re all looking for ways to keep ourselves occupied these days, and for me that means leaping at the chance to turn a small problem into a slightly ridiculous electronics project. For reasons that I won’t go into here I wanted to generate an alert when a certain WiFi BSSID becomes visible. A simple solution to this problem would have been a few lines of shell script to send me an email…but this article isn’t about simple solutions!

Read more at https://blog.oddbit.com/post/2020-03-20-i-see-you-have-the-machine-tha/

by Rain Leander at April 19, 2020 09:45 AM

April 17, 2020

VEXXHOST Inc.

Why Decision Makers Need To Build Cloud Culture

If you’re a decision-maker in your business or organization then you’ve probably already considered implementing some form of cloud solution. Or maybe you’ve already deployed a public or private cloud for your business. Moreover, decision-makers are aware that moving to the cloud is a transition that requires both time and resources. Despite this, the overall positive impact of a winning cloud strategy is evident.

It goes without saying that implementing cloud infrastructure goes far beyond the scope of your IT department. Cloud technology has a notable impact on all layers of a company, from sales to human resources and beyond. In order to create a successful cloud, it’s integral that decision-makers build cloud culture throughout their business. Furthermore, your staff need to understand the power of cloud infrastructure, learn how it benefits them, and receive education on cloud best practices. By encouraging shared knowledge amongst your team, you are helping to build a cloud-educated workforce that is ready to approach the cloud.

We’re here today to dive into why decision-makers need to build cloud culture in the workplace and how building cloud culture pays off. Keep reading to learn more.

The Cloud Culture Difference

When a business moves away from traditional IT infrastructure, it opens up a new world of possibilities. From being able to opt for hosted solutions in a data center to building an on-premise solution right on-site, there is a clear incentive to make the move to the cloud. Certainly, any business that is moving towards a modern infrastructure is looking towards cloud computing. Moreover, implementing either a private or public cloud will have a ripple effect on the IT department and beyond. Any aspect of your business that touches technology can see the benefits of cloud infrastructure. The best way to get the entire team on board with implementing a cloud solution is to get them to invest early in understanding the opportunities and benefits of the cloud. Everyone on your team should have a role in adopting, adapting to and maintaining the cloud.

Cloud-Powered Digital Transformation

Once upon a time, cloud computing was allocated purely to the IT department, which was given a set of resources and responsibilities for cloud deployment. Today, things have very much changed. The IT department must collaborate with decision-makers to ask how a digital transformation can benefit the business or organization overall, which departments will benefit the most from the cloud, and what gaps in knowledge the team will need to address to ensure success. If you store confidential data from human resources in the cloud, then it is essential that your HR department is aware of security best practices. Certainly, the same goes for your sales team if they utilize a cloud-based CRM.

Of course, there is an obvious learning curve, but ensuring that you and your employees are fully invested in cloud adoption and cloud culture is the best way to begin your cloud journey. Actively working on cloud culture is the first and one of the most important steps that you can take to optimize your cloud for success. Thinking of implementing a private or public cloud to modernize your IT infrastructure? Contact us to learn more about how we can help you get there.

Would you like to know more about OpenStack Cloud? Download our white paper and get reading!

Conquer the Competition with OpenStack Cloud

The post Why Decision Makers Need To Build Cloud Culture appeared first on VEXXHOST.

by Hind Naser at April 17, 2020 05:33 PM
