VEXXHOST is a Canadian cloud computing provider with an impressive global presence and record. As a former Superuser Awards winner and a major contributor to the open source community, VEXXHOST’s growth trajectory is of keen interest to any member of the community.
From small businesses to governments, VEXXHOST’s operations are spread over 150 countries. The company started as a web hosting provider in 2006 and later transitioned into cloud solutions. They adopted OpenStack for their cloud operations in 2011, coinciding with the platform’s second release, Bexar. VEXXHOST now offers OpenStack-based public cloud, private cloud, consulting, and other enterprise-grade cloud solutions.
VEXXHOST recently announced a massive revamp of its public cloud offerings during its keynote address at the Open Infrastructure Summit 2020. There are some new and exciting developments, including a brand new data center in Amsterdam. The company says that the revamp is part of its new growth strategy, powered by its association with various open source foundations and communities. VEXXHOST is a member of the Open Infrastructure Foundation, the Linux Foundation, the CNCF, and the Ceph Foundation, to name a few.
With this revamp, VEXXHOST joins the ranks of community-driven companies evolving and growing even through the testing times of the pandemic. Without further delay, here is what the cloud revamp includes.
The Amsterdam data center is VEXXHOST’s third region to date, the other two being Montreal and Santa Clara. VEXXHOST also manages several private cloud operations for different enterprises and organizations in various other parts of the world, in association with local data centers. The launch of the new region not only gives the company a physical presence on another continent but also cements its global footprint.
The state-of-the-art data center in Amsterdam provides unparalleled service in terms of connectivity, efficiency, server protection mechanisms, security, and reliability. The facility is built to Uptime Institute Tier 3 standards and holds PCI, SOC, and multiple ISO certifications. It also features a multi-tier security system, 24×7 on-site support, high energy efficiency with green standards, and advanced global connectivity through its link to AMS-IX, among other features.
Another change that is part of the revamp is VEXXHOST’s new pricing strategy. The modified pricing is available to all new users as well as existing customers who make the switch to their latest solutions.
According to VEXXHOST’s new introductory and limited-time pricing model, the hourly rate for a standard 2 core, 8 GB offering is just $0.055, compared to the market rate of $0.086, a reduction of roughly 36% ((0.086 - 0.055) / 0.086 ≈ 0.36). Higher core/memory offerings such as 4 core – 16 GB, 8 core – 32 GB, and 16 core – 64 GB will carry proportionate price reductions.
This new pricing strategy couldn’t have come at a better time. Many businesses are facing challenges due to the pandemic, and companies need remote and cloud solutions to adapt to the rapid changes around them. Getting users enterprise-grade cloud solutions at great pricing benefits not only the parties involved but also the overall community.
Next in VEXXHOST’s revamp is the addition of new servers equipped with 2nd Gen AMD EPYC processors for the Montreal region. Driven by AMD Infinity Architecture, these are the first x86 server processors based on 7nm process technology. With a hybrid, multi-die architecture, they offer up to 64 high-performance cores per SoC, AMD Infinity Guard security features, and PCIe® Gen 4 I/O.
The new AMD processors bring significant improvements to VEXXHOST’s servers in performance and secure computing, giving users better workload acceleration, data protection, and overall infrastructure.
VEXXHOST has upgraded storage from SATA SSD to NVMe in its Montreal region, offering public cloud users some of the fastest drives on the market. A major benefit of this storage type is faster parallel read and write capability.
NVMe drives can reach speeds of more than 2,000 MB/s, compared to a typical SATA III SSD running under 600 MB/s. This is possible because NVMe talks to the flash memory over a PCIe interface, which is bi-directional and very fast. NVMe storage is also more power-efficient, cutting consumption significantly in standby mode, and it scales beyond the four lanes found in most conventional PCIe SSDs.
Moving forward, VEXXHOST has a lot planned in terms of cloud offerings to customers. If you would like to know more about the company and its extensive cloud solutions, check out VEXXHOST’s website.
The post New in OpenInfra: VEXXHOST’s Public Cloud Revamp Includes a Lot More Than the New Data Center in Amsterdam appeared first on Superuser.
Codership is pleased to announce a new Generally Available (GA) release of the multi-master Galera Cluster for MySQL 5.6, 5.7 and 8.0, consisting of MySQL-wsrep 5.6.50 (release notes, download), 5.7.32 (release notes, download) and 8.0.22 (release notes, download) with Galera replication library 3.32 (release notes, download) implementing wsrep API version 25, and Galera replication library 4.7 (release notes, download) implementing wsrep API version 26 for 8.0. This release incorporates all changes to MySQL 5.6.50, 5.7.32 and 8.0.22 respectively, adding a synchronous option for your MySQL High Availability solutions.
This release is notable because there have been improvements to CRC32 hardware support compatibility on ARM-based architectures for the Galera replication library 3.32 and 4.7. GCache locking has also been improved.
At a lower level, the build system has changed from SCons to CMake; SCons scripts are still included but will no longer be actively maintained.
In MySQL 5.6.50, we improved the XtraBackup v2 SST script for FreeBSD users.
In MySQL 5.7.32, we added a new variable wsrep_mode. The first application of the variable is to allow ignoring native replication filter rules if configured with replicate-do-db. We also improved LOAD DATA splitting to generate Xid events for binary log events with intermediate commits.
In MySQL 8.0.22, we now have a new SST method based on the CLONE plugin. Similar improvements around wsrep_mode exist too. This release also improves foreign key handling: intermittent failures when running LOAD DATA with foreign keys have been fixed, and BF-BF conflicts between OPTIMIZE/REPAIR/ALTER TABLE and DML in the presence of foreign key constraints are now suppressed. We would also like to note that Percona XtraBackup version 8.0.22-15.0 or greater is required to perform XtraBackup-based SSTs.
For now, MySQL 5.6.50 and 5.7.32 are the last official builds for CentOS 6 and Red Hat Enterprise Linux 6. It is also important to remember that MySQL 5.6 is nearing End Of Life (EOL) so we do recommend that you upgrade to MySQL-wsrep 5.7.
Cyborg is an accelerator resource (GPU, vGPU, FPGA, NVMe SSD, QAT, DPDK, SmartNIC, etc.) management project. It uses a micro-service architecture to support distributed deployment and consists of the cyborg-api, cyborg-conductor, and cyborg-agent services. Cyborg-agent collects accelerator resource information and reports it to cyborg-conductor. Cyborg-conductor stores the accelerator resource information in the database and reports it to the Placement resource management service. Placement stores the resource information and provides the available resources for Nova scheduling when a server is being created. The cyborg-api service provides interfaces to query accelerator resource information. Diagram 0 shows the architecture of Cyborg.
With Cyborg, we can boot servers with accelerators by interacting with Nova and Placement, and we support batch-booting servers with scheduled accelerators (an enhancement in Inspur InCloud OpenStack Enterprise edition that we plan to contribute to the Nova and Cyborg community). Users can then use these devices inside the server, for example to program FPGAs or process images on GPUs. We can also bind and unbind accelerators to an existing server via hot-plug and non-hot-plug devices, which makes accelerators convenient to use. Diagram 1 shows the interaction flow between Nova and Cyborg when booting a server.
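As a rough illustration of that flow (not taken from the original article), the sketch below uses python-novaclient to attach a Cyborg device profile to a flavor via the accel:device_profile extra spec and then boot a server with it. The session object, device profile name, and the image, flavor, and network IDs are placeholders, and the device profile is assumed to already exist in Cyborg.

# Minimal sketch: boot a server whose flavor requests a Cyborg device profile.
# Assumes an authenticated Keystone session `sess`, an existing Cyborg device
# profile named "gpu-profile", and placeholder image/flavor/network IDs.
from novaclient import client as nova_client

nova = nova_client.Client('2.1', session=sess)

flavor = nova.flavors.get('ACCEL_FLAVOR_ID')
flavor.set_keys({'accel:device_profile': 'gpu-profile'})  # Nova asks Cyborg for the device at boot

server = nova.servers.create(
    name='accel-server',
    image='IMAGE_ID',
    flavor=flavor.id,
    nics=[{'net-id': 'NETWORK_ID'}],
)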
On the side of enhancing server operations with accelerators, we have supported most operations for servers, such as creation and deletion, reboot (soft and hard), pause and unpause, stop and start, snapshot, backup, rescue and unrescue, rebuild, and evacuate. Other operations, such as shelve/unshelve and suspend/resume, are in progress and close to being merged. We plan to support migration (live and cold) and resize soon. With these enhancements, operators can use accelerators more flexibly. Diagram 2 shows the sequence flow for booting a server with vGPU devices.
In Inspur InCloud OpenStack Enterprise edition we have enhanced some of these features, such as batch-booting servers and binding and unbinding accelerator devices via hot-plug, as mentioned above. Using Cyborg to manage virtual GPUs improved the utilization rate by 80%, and a data synchronization strategy made data transport between Cyborg and Placement 30% more efficient.
As shown in diagram 3, the main components of the N3000 include the Intel Arria® 10 FPGA, a dual Intel Ethernet Converged Network Adapter XL710, an Intel MAX® 10 FPGA Baseboard Management Controller (BMC), 9 GB DDR4, and 144 Mb QDR-IV. It supports a high-speed network interface at 10 Gbps/25 Gbps and a high-speed host interface over PCIe* Gen 3 x16.
Intel® FPGA Programmable Acceleration Card N3000 (Intel FPGA PAC N3000) is a highly customized platform that enables high-throughput, lower-latency, and high-bandwidth applications. It allows the optimization of data plane performance to achieve lower costs while maintaining a high degree of flexibility. End-to-end industry-standard and open-source tool support allows users to quickly adapt to evolving workloads and industry standards. Intel is accelerating 5G and network functions virtualization (NFV) adoption for ecosystem partners, such as telecommunications equipment manufacturers (TEMs), virtual network function (VNF) vendors, system integrators, and telcos, to bring scalable and high-performance solutions to market. Diagram 4 shows the sequence flow for programming a device in Cyborg.
For SmartNICs such as the N3000, Mellanox CX5, and BF2, we can program the card with an OVS image so it serves as an NFVI function in OpenStack. Diagram 5 shows the sequence for enabling accelerators on a SmartNIC.
In Cyborg, we support several new features:
Users can start a new programming request for the N3000. It is an asynchronous API that intelligently detects whether there is a conflict in resource usage and decides whether to accept or reject the request. We also support a friendly programming-process query API: users can check at any time which stage the process is in and what percentage is complete. When programming is finished and the resource type and quantity have changed, Cyborg discovers and reports the change dynamically.
Across OpenStack as a whole, we have also made some new improvements.
With these improvements, OpenStack can support SmartNICs more flexibly and conveniently, covering the SmartNIC cards mentioned above as well as other SR-IOV cards.
All of these new features and improvements will be contributed upstream.
On November 25, Inspur InCloud OpenStack (ICOS) completed a 1,000-node single-cluster practice run for the convergence of Cloud, Big Data, and AI, billed as the world’s largest. It is the largest SPEC Cloud test and the first large-scale multi-dimensional fusion test in the industry. It achieved a comprehensive breakthrough in scale, scenario, and performance, completing the upgrade from 500 nodes to 1,000 nodes and turning quantitative change into qualitative change. Inspur is working on a white paper for this large-scale test, to be released soon, which should serve as a reference for products in large-scale environments.
This article is a summary of the Open Infrastructure Summit session, Enhancement of new heterogeneous accelerators based on Cyborg.
Watch more Summit session videos like this on the Open Infrastructure Foundation YouTube channel. Don’t forget to join the global Open Infrastructure community, and share your own personal open source stories using the hashtag, #WeAreOpenInfra, on Twitter and Facebook.
Thanks to the 2020 Open Infrastructure Summit sponsors for making the event possible:
Headline: Canonical (ubuntu), Huawei, VEXXHOST
Premier: Cisco, Tencent Cloud
Exhibitor: InMotion Hosting, Mirantis, Red Hat, Trilio, VanillaStack, ZTE
The post Enhancement of New Heterogeneous Accelerators Based on Cyborg appeared first on Superuser.
by Brin Zhang, Shaohe Feng and Wenping Song at January 13, 2021 02:00 PM
File sharing and storage is a thriving entity within cloud-based environments. As OpenStack’s cloud file sharing tool, Manila is a reliable solution providing high-performance, highly scalable storage to users. Here is an overview of the project and how it benefits OpenStack cloud users.
Manila was originally derived from OpenStack’s block storage tool, Cinder. Just as Cinder offers the canonical control plane for block storage, Manila provides the canonical storage provisioning control plane for shared or distributed file systems in OpenStack. Like Cinder, Manila lets users configure multiple backends. With Manila, a user can store content repositories, web server farms, development environments, big data apps, home directories, and more. The service also assigns a share server to every tenant.
As an open source cloud file sharing service, Manila has certain objectives:
Cloud file sharing with OpenStack Manila gives users several benefits. Secure file sharing with user access control is one of the major advantages. This means that the service allows you to control access by various users. This access control is done by limiting permissions on mounting and operating files. The file-sharing can only be accessed through servers with appropriate credentials.
Another advantage of file sharing with Manila is its ability to provide a simplified integration with Cinder, OpenStack’s block storage tool. Manila integrates with Cinder by installing the necessary software, which allows any designated system to access the file sharing with ease.
Finally, OpenStack Manila also gives your environment performance-conscious and efficient processing. Users can read just the parts of a file they specifically need instead of downloading the file as a whole. This saves a lot of processing power and time, and it is particularly useful for big data use cases, where the system can read part of a file and distribute the workload.
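As a loose illustration of what provisioning looks like in practice, the sketch below uses python-manilaclient to create an NFS share and grant read/write access to one subnet. The session object, share size, and CIDR are placeholders, and the exact access-rule call may differ between client versions.

# Minimal sketch: create an NFS share and restrict mounting to one subnet.
# Assumes an authenticated Keystone session `sess`; sizes and CIDRs are placeholders.
from manilaclient import client as manila_client

manila = manila_client.Client('2', session=sess)

share = manila.shares.create(share_proto='NFS', size=10, name='demo-share')

# Grant access: access_type='ip', access='10.0.0.0/24', access_level='rw'.
manila.shares.allow(share, 'ip', '10.0.0.0/24', 'rw')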
At VEXXHOST, we offer cloud file sharing among the various OpenStack services we provide to our customers. The pay-as-you-go structure allows users to scale storage to fit their application needs, paying only for the hours the service is used. Services such as Manila are deployed according to specific use cases through our cloud solutions, including highly scalable private clouds.
Speaking of private clouds, you can now run on a fully agile and customized cloud from VEXXHOST, with no licensing fees and a smooth two-week migration. In fact, we’re ready to put our money where our mouth is. We’re so confident in being able to save you at least 20% or more on your current cloud infrastructure expenditure that, if proven wrong, we’ll give you $1,000 credit to our public cloud.
Excited? Find out more.
The post An Overview of Cloud File Sharing with OpenStack Manila appeared first on VEXXHOST.
If you have attended the Open Infrastructure Summits in recent years, you have probably heard of the first winner of the Superuser Awards: CERN. CERN is the European Organization for Nuclear Research. Its laboratory sits on the border between France and Switzerland, and its main mission is to uncover what the universe is made of and how it works.
For a few years, all physical servers deployed in the CERN IT data centers have been provisioned as part of the OpenStack private cloud offering, leveraging Nova and Ironic. What started with a few test nodes has now grown to more than 5,000 physical machines, with the aim to move to around 10,000 nodes.
CERN and CERN IT rely heavily on OpenStack, and we have deployed OpenStack in production since 2013 with around 8,500 compute nodes with 300k cores and 35k instances in around 80 cells. In addition, we deploy three regions mainly for scalability and to ease the rollout of the new features.
When we scaled our bare metal deployment with Nova and Ironic to several thousand nodes, we encountered three main issues:
Scaling Issue 1: Controller Crashes
The CERN Cloud Infrastructure Team uses the iscsi deploy interface for Ironic. This means that, upon deployment, a node exports an iSCSI device to the controller, and the controller then dumps the image downloaded from Glance onto it. (Note that this deploy interface will be deprecated in the future, so the direct deploy interface should be used instead.)
Since the images are tunneled through the controller, many parallel deployments will drive the conductor into out-of-memory (OOM) situations. As a consequence, the controller crashes and leaves the nodes in an error state.
Solutions
To address this issue, we scaled the controllers horizontally and introduced “wing” controllers so that more controllers could handle the requests. Another solution would be to use a more scalable deploy interface, such as direct or ansible, which makes the node download the image itself.
Scaling Issue 2: API Responsiveness
When scaling up the infrastructure, we noticed that all requests involving the database were slow. Looking at the request logs, we realized that the inspector, another component running on the controller, fetches a list of all nodes every 60 seconds to clean up its database. In addition, we had disabled pagination when we reached 1,000 nodes, which meant that every request to the API assembled all the nodes into one giant response and tried to return it to the requester.
Solutions
To solve this issue, we re-enabled pagination and changed the sync interval from 60 seconds to one hour. You can see how this affected the response time in the graph below. For a more scalable solution, we introduced “Inspector Leader Election”, which we developed together with upstream and have now deployed in production. Inspector Leader Election will be available in the Victoria release, and you can see more details about how it works here.
Scaling Issue 3: Resource Discovery
Another issue that we faced when increasing the number of physical nodes managed by Ironic is the time that the Resource Tracker (RT), which runs in each nova-compute, takes to report all the available resources to Placement. From the graph below, you can see the original OpenStack Nova and Ironic setup for the dedicated bare metal cell at CERN.
There is a dedicated Nova cell for Ironic which is the standard way that we partition and manage the infrastructure. In the dedicated Nova cell, we only have the cell control plane, nova-conductor, RabbitMQ, and nova-compute. This single nova-compute is responsible for all the communication between Nova and Ironic using the Ironic API. It is possible to run several nova-computes in parallel based on a hash ring to manage the nodes, but when we were testing this functionality, several issues were observed.
Since there is only one nova-compute that interacts with Ironic and the RT needs to report all the Ironic resources, the RT cycle takes a long time as the number of resources in Ironic grows. In fact, it took more than three hours to complete in our deployment of about 5,000 nodes. During the RT cycle, all user actions are queued until all the resources are updated. This created a bad user experience, with new resource creation taking a few hours.
Conductor Groups
In order to have failure domains in Ironic and to allow the infrastructure to be split, a new feature called “conductor groups” was introduced in the Stein release of Ironic and Nova. A conductor group is an association between a set of physical nodes and a set of (one or more) Ironic conductors which manage those physical nodes. This association reduces the number of nodes a conductor is looking after, which is key to scaling the deployment.
Conductor Groups Configuration
The conductor group an Ironic conductor is taking care of is configured in Ironic’s configuration file “ironic.conf” on each Ironic conductor:
[conductor]
conductor_group = MY_CONDUCTOR_GROUP
Next, each Ironic resource needs to be mapped to the conductor group that is selected, and this can be done in the Ironic API.
openstack baremetal node set --conductor-group "MY_CONDUCTOR_GROUP" <node_uuid>
Finally, each group of nova-compute nodes needs to be configured to manage only a conductor group. This is done in Nova’s configuration file “nova.conf” for each “nova-compute” node:
[ironic]
partition_key = MY_CONDUCTOR_GROUP
peer_list = LIST_OF_HOSTNAMES
Now, there is one nova-compute per conductor group, and you can see what the deployment looks like in the graph below.
The Transition Steps
How did we deploy the conductor groups? The transition steps below are summarized from a CERN Tech Blog, Scaling Ironic with Conductor Groups.
Impact on the Resource Tracker (RT) Cycle Time
In the graph below, you can see the number of Placement requests per conductor group. The RT cycle time now takes only about 15 minutes to complete instead of the previous three hours, since the work is divided across the conductor groups (roughly ten groups of about 500 nodes each for a 5,000-node deployment).
Number of Resources Per Conductor Group
How did we decide the number of resources per conductor group? The RT cycle time increases linearly with the number of resources. In our deployment, the compromise between manageability and RT cycle time is around 500 nodes per conductor group. The deployment scales horizontally as we add conductor groups.
Scaling infrastructure is a constant challenge! By introducing conductor groups, the CERN Cloud Infrastructure Team was able to split this locking and reduce the effective lock time from three hours to 15 minutes (and even shorter in the “leading” group).
Although we have addressed various issues, some issues are still open and new ones will arise. Good monitoring is key to see and understand issues.
This article is a summary of the Open Infrastructure Summit session, Scaling Bare Metal Provisioning with Nova and Ironic at CERN, and the CERN Tech Blog, Scaling Ironic with Conductor Groups.
Watch more Summit session videos like this on the Open Infrastructure Foundation YouTube channel. Don’t forget to join the global Open Infrastructure community, and share your own personal open source stories using the hashtag, #WeAreOpenInfra, on Twitter and Facebook.
The post Scaling Bare Metal Provisioning with Nova and Ironic at CERN: Challenges & Solutions appeared first on Superuser.
by Arne Wiebalck, Belmiro Moreira and Sunny Cai at January 11, 2021 02:00 PM
Nowadays, infrastructure can be created and managed using code: you can create 100 servers with prebaked software using a for loop to manage and tag them, all triggered by a CI/CD server on a single commit to a given repository. With this in mind, all the infrastructure for a new application can be created and managed using IaC tools such as Terraform or Pulumi, which are cloud agnostic, or with a cloud vendor’s proprietary solutions. Cloud providers are now also shipping SDKs for a more developer-oriented experience, with more compatibility and capabilities than a given “provider” in Terraform or even their main IaC solutions.
Choosing the right IaC tool will depend on the application logic and the level of automation needed (which should cover the entire stack). In the end, having a complete pipeline for the infrastructure should be one of the main goals of running applications in the cloud, as it gives us complete control of our systems. Soon after this, we end up using GitOps methodologies, which increase our agility to deploy not just our applications but also the entire infrastructure.
As soon as you have developed your entire infrastructure on any IaC, you are ready to deploy it “n” times with the same precision, without any need for human intervention on any of the configurations that your application needs in terms of inventory management or infrastructure requirements. You will be creating all the environments on where the application will live, which normally tends to be development, staging/UAT, and production. Sometimes you will also need other environments for testing, experimenting, or even innovating. This will be just as easy as running the scripts to repeatedly create the same infrastructure without worrying about drifts in the configurations.
When using Terraform, you can put the code into a Terraform module and reuse that module in multiple places throughout your codebase. Instead of duplicating the same code in the staging and production environments, both environments reuse code from the same module, and you can then spin up “n” environments with the same set of resources and configurations.
This is a game-changer. Modules are the foundation of writing reusable, maintainable, and testable Terraform code and, as a result, writing infrastructure. With this in mind, teams can now develop their own modules that can be published via the Terraform registry or GitHub. Furthermore, anyone in the world can use the module and create the infrastructure or components needed by an application.
At VEXXHOST, we are advocates of sharing technology. Hence, most of our offerings and solutions are open source, and we have done the same with this project as well. Please refer to the public repository to access the Terraform code.
This module allows you to bootstrap an OpenStack cloud by adding images of some of the most common Linux distributions, pulled from their official release sites. You can also use this module to build a collection of the most recent and popular images in your OpenStack cloud, whether public or private.
Additionally, this is the same tooling used to deploy OS images for VEXXHOST’s public cloud and private cloud offerings. Therefore, if you want to add an image to our public catalog, you can submit a pull request to our repository, and once it is approved and merged, the image should appear in our clouds.
To summarize the offering, you have a module that will allow you and your team to add different Linux images into your existing OpenStack clouds, be it public or private. This will increase your ability to use and install custom solutions that are needed or restricted in some cases by the Linux distribution and version that your servers run on.
Anyone with an OpenStack cloud can take advantage of this module. As mentioned earlier, it allows users to make PRs for changes, and then if approved, the new images will appear on their OpenStack clouds. At the end of the day, it truly serves as a testament to VEXXHOST’s commitment as an open source cloud provider and contributor.
Infrastructure as Code is here to stay. With Terraform, modules are now the main foundation of all resources for public and private cloud providers as they contribute to a better methodology when writing Infrastructure.
VEXXHOST provides a variety of OpenStack-based cloud solutions, including highly secure private clouds. The advanced level of customization makes private clouds a favorite among enterprises, without the burden of licensing fees or vendor lock-in. Learn more about VEXXHOST’s cloud offerings by contacting our team, and check out our private cloud resource page to improve your knowledge on the topic.
The post Adding Linux Images to OpenStack Using IaC appeared first on VEXXHOST.
In a cloud computing context, resource pooling is a collection of resources such as cores, RAM, storage, etc., treated as a single, shared pool to achieve a sort of fail-proof flexibility. It is a way to spread the load across multiple paths, increasing the utilization rate of servers. It is seen as one of the many benefits of an OpenStack Cloud. In this blog, we examine how resource pooling works in a private cloud environment and along with various OpenStack services.
The major focuses of resource pooling are low-cost delivery of services and a division of resources based on user preference. For additional context, let us take an example outside a cloud scenario: resource pooling is often used in wireless technology such as radio communication, where individual channels are pooled together to form a more robust channel, without any interference or conflict.
For clouds, pooling is most used in a multi-tenant environment, according to demands from the users. This model is particularly beneficial for Software as a Service (SaaS), which runs in a centralized manner. When different users or tenants share resources within the same cloud, the overall operational costs can drastically decrease.
In a private cloud deployment such as OpenStack, the pool is created, and computing resources are transferred over a private network. The OpenStack provider adds a range of IP addresses into the interface, and when the VM boots up, it recognizes the IPs and collects the resources in a pool.
Various OpenStack Services are deployed and used when the resources are pooled in a server-cluster, and there is a need to launch a VM. These services enable the cloud to function more efficiently. When more services are needed, the provider adds another feature to the setup. OpenStack services have the advantage of reducing complexity in managing hardware requirements according to user needs. From OpenStack’s identity service Keystone to its network service Neutron, everything can be enabled to aid the pooling of resources and ensure the running of an efficient private cloud.
VEXXHOST has been offering OpenStack-powered cloud solutions since 2011. With a safe and efficient deployment, our customers enjoy the various benefits of an OpenStack private cloud, including resource pooling, scalability, load balancing, and more. Contact our team to know more and check out our resource page to improve your knowledge of OpenStack private clouds.
Like what you’re reading?
Deep dive into a hands-on ebook about how you can build a successful infrastructure from the ground up!
The post How Resource Pooling Works in an OpenStack Private Cloud appeared first on VEXXHOST.
I want to be able to see the level of change between OpenStack releases. However, there are a relatively small number of changes with simply huge amounts of delta in them — they’re generally large refactors, or the deletions that happen when part of a repository is spun out into its own project.
I therefore wanted to explore what was a reasonable size for a change in OpenStack so that I could decide what maximum size to filter away as likely to be a refactor. After playing with a couple of approaches, including just randomly picking a number, it seems the logical way to decide is to simply plot a histogram of the various sizes, and then pick a reasonable place on the curve as the cutoff. Due to the large range of values (from zero lines of change to over a million!), I ended up deciding a logarithmic axis was the way to go.
For the projects listed in the OpenStack compute starter kit reference set, that produces the following histogram. Based on that graph, I feel that filtering out commits over 10,000 lines of delta is justified. For reference, the raw histogram buckets are below (a rough sketch of how they can be computed follows the table):
Commit size | Count |
---|---|
< 2 | 25747 |
< 11 | 237436 |
< 101 | 326314 |
< 1001 | 148865 |
< 10001 | 16928 |
< 100001 | 3277 |
< 1000001 | 522 |
< 10000001 | 13 |
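Here is a rough sketch, not from the original post, of how per-commit delta sizes could be bucketed into those powers-of-ten bins; the list of commit sizes is assumed to have been gathered separately by walking each project's git history.

# Minimal sketch: bucket commit delta sizes into the "< N" bins used above.
# `commit_sizes` is assumed to be a list of lines-of-delta per commit,
# collected beforehand by mining each project's git history.
import collections

def bucket_commits(commit_sizes):
    buckets = collections.Counter()
    for size in commit_sizes:
        bound = 2  # bins are < 2, < 11, < 101, < 1001, ...
        while size >= bound:
            bound = (bound - 1) * 10 + 1
        buckets[bound] += 1
    return buckets

# Print in the same "< N | count" form as the table above.
for bound, count in sorted(bucket_commits([0, 5, 250, 12000]).items()):
    print(f'< {bound} | {count}')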
I wanted a quick summary of OpenStack git release tags for a talk I am working on, and it turned out to be way more complicated than I expected. I ended up having to compile a table, and then turn that into a code snippet. In case it’s useful to anyone else, here it is:
Release | Release date | Final release tag |
---|---|---|
Austin | October 2010 | 2010.1 |
Bexar | February 2011 | 2011.1 |
Cactus | April 2011 | 2011.2 |
Diablo | September 2011 | 2011.3 |
Essex | April 2012 | 2012.1.3 |
Folsom | September 2012 | 2012.2.4 |
Grizzly | April 2013 | 2013.1.5 |
Havana | October 2013 | 2013.2.4 |
Icehouse | April 2014 | 2014.1.5 |
Juno | October 2014 | 2014.2.4 |
Kilo | April 2015 | 2015.1.4 |
Liberty | October 2015 | Glance: 11.0.2 Keystone: 8.1.2 Neutron: 7.2.0 Nova: 12.0.6 |
Mitaka | April 2016 | Glance: 12.0.0 Keystone: 9.3.0 Neutron: 8.4.0 Nova: 13.1.4 |
Newton | October 2016 | Glance: 13.0.0 Keystone: 10.0.3 Neutron: 9.4.1 Nova: 14.1.0 |
Ocata | February 2017 | Glance: 14.0.1 Keystone: 11.0.4 Neutron: 10.0.7 Nova: 15.1.5 |
Pike | August 2017 | Glance: 15.0.2 Keystone: 12.0.3 Neutron: 11.0.8 Nova: 16.1.8 |
Queens | February 2018 | Glance: 16.0.1 Keystone: 13.0.4 Neutron: 12.1.1 Nova: 17.0.13 |
Rocky | August 2018 | Glance: 17.0.1 Keystone: 14.2.0 Neutron: 13.0.7 Nova: 18.3.0 |
Stein | April 2019 | Glance: 18.0.1 Keystone: 15.0.1 Neutron: 14.4.2 Nova: 19.3.2 |
Train | October 2019 | Glance: 19.0.4 Keystone: 16.0.1 Neutron: 15.3.0 Nova: 20.4.1 |
Ussuri | May 2020 | Glance: 20.0.1 Keystone: 17.0.0 Neutron: 16.2.0 Nova: 21.1.1 |
Victoria | October 2020 | Glance: 21.0.0 Keystone: 18.0.0 Neutron: 17.0.0 Nova: 22.0.1 |
Or in python form for those so inclined:
RELEASE_TAGS = {
'austin': {'all': '2010.1'},
'bexar': {'all': '2011.1'},
'cactus': {'all': '2011.2'},
'diablo': {'all': '2011.3'},
'essex': {'all': '2012.1.3'},
'folsom': {'all': '2012.2.4'},
'grizzly': {'all': '2013.1.5'},
'havana': {'all': '2013.2.4'},
'icehouse': {'all': '2014.1.5'},
'juno': {'all': '2014.2.4'},
'kilo': {'all': '2015.1.4'},
'liberty': {
'glance': '11.0.2',
'keystone': '8.1.2',
'neutron': '7.2.0',
'nova': '12.0.6'
},
'mitaka': {
'glance': '12.0.0',
'keystone': '9.3.0',
'neutron': '8.4.0',
'nova': '13.1.4'
},
'newton': {
'glance': '13.0.0',
'keystone': '10.0.3',
'neutron': '9.4.1',
'nova': '14.1.0'
},
'ocata': {
'glance': '14.0.1',
'keystone': '11.0.4',
'neutron': '10.0.7',
'nova': '15.1.5'
},
'pike': {
'glance': '15.0.2',
'keystone': '12.0.3',
'neutron': '11.0.8',
'nova': '16.1.8'
},
'queens': {
'glance': '16.0.1',
'keystone': '13.0.4',
'neutron': '12.1.1',
'nova': '17.0.13'
},
'rocky': {
'glance': '17.0.1',
'keystone': '14.2.0',
'neutron': '13.0.7',
'nova': '18.3.0'
},
'stein': {
'glance': '18.0.1',
'keystone': '15.0.1',
'neutron': '14.4.2',
'nova': '19.3.2'
},
'train': {
'glance': '19.0.4',
'keystone': '16.0.1',
'neutron': '15.3.0',
'nova': '20.4.1'
},
'ussuri': {
'glance': '20.0.1',
'keystone': '17.0.0',
'neutron': '16.2.0',
'nova': '21.1.1'
},
'victoria': {
'glance': '21.0.0',
'keystone': '18.0.0',
'neutron': '17.0.0',
'nova': '22.0.1'
}
}
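A small helper, not part of the original snippet, showing one way the table might be queried; it falls back to the coordinated 'all' tag used up to Kilo.

# Hypothetical helper: look up the final release tag for a component,
# falling back to the coordinated 'all' tag of the early releases.
def final_tag(release, component):
    tags = RELEASE_TAGS[release.lower()]
    return tags.get(component, tags.get('all'))

print(final_tag('kilo', 'nova'))      # 2015.1.4
print(final_tag('victoria', 'nova'))  # 22.0.1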
This proposal was submitted for FOSDEM 2021. Given that acceptances were meant to be sent out on 25 December and it’s basically a week later, I think we can assume that it’s been rejected. I’ve recently been writing up my rejected proposals, partially because I’ve put in the effort to write them and they might be useful elsewhere, but also because I think it’s important to demonstrate that it’s not unusual for experienced speakers to be rejected from these events.
OpenStack today is a complicated beast — not only does it try to perform well for large clusters, but it also embraces a diverse set of possible implementations from hypervisors, storage, networking, and more. This was a deliberate tactical choice made by the OpenStack community years ago, forming a so-called “Big Tent” for vendors to collaborate in to build open source cloud options. It made a lot of sense at the time, to be honest. However, OpenStack today finds itself constrained by the large number of permutations it must support, ten years of software and backwards compatibility legacy, and a decreasing investment from those same vendors that OpenStack courted so actively.
Shaken Fist makes a series of simplifying assumptions that allow it to achieve a surprisingly large amount in not a lot of code. For example, it supports only one hypervisor, one hypervisor OS, one networking implementation, and lacks an image service. It tries hard to be respectful of compute resources while idle, and as fast as possible to deploy resources when requested — it’s entirely possible to deploy a new VM and start it booting in less than a second, for example (if the boot image is already held in cache). Shaken Fist is likely a good choice for small deployments such as home labs and telco edge applications. It is unlikely to be a good choice for large-scale compute, however.
You can’t talk about open source cloud-based key management without mentioning OpenStack Barbican.
Data security and encryption are always priorities for cloud users. At VEXXHOST, many of our clients have asked and keep asking about tools that will help them secure their data and enable safe access to authorized users. We believe that key management plays a big part in it.
It is generally an easy task for the end user to encrypt data before saving it to the cloud, and for tenant objects such as database archives and media files this kind of encryption is a viable option. In other cases a key management service is used, for example to present keys to encrypt and decrypt data, providing seamless security and accessibility without burdening clients with managing all the keys themselves. Barbican is a tool that enables the creation and secure storage of such keys.
In simple terms, OpenStack Barbican is an open source Key Management service that provides safe storage, provisioning, and management of sensitive or secret data. This data could be of various content types – anything from symmetric or asymmetric keys, certificates, or raw binary data.
With Barbican, users can secure their data seamlessly and maintain its accessibility without personally managing their keys. Barbican also addresses concerns about privacy or misuse of data among users.
As mentioned above, users can utilize OpenStack Barbican for safe and secure storage, provisioning, and management of sensitive data. Barbican has a plug-in-based architecture that allows users to store their data in multiple secret stores, which can be software-based, like a software token, or hardware-device-based, like a Hardware Security Module (HSM). The tool allows users to securely store and manage anything from passwords to encryption keys to X.509 certificates.
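As a rough sketch of what this looks like from an application, the snippet below stores and retrieves a secret with python-barbicanclient; the Keystone endpoint, credentials, and the secret name and payload are all placeholders.

# Minimal sketch: store a secret in Barbican and read it back by reference.
# All credentials and values below are placeholders.
from keystoneauth1.identity import v3
from keystoneauth1 import session
from barbicanclient import client as barbican_client

auth = v3.Password(auth_url='https://keystone.example.com/v3',
                   username='demo', password='secret',
                   project_name='demo',
                   user_domain_id='default', project_domain_id='default')
sess = session.Session(auth=auth)

barbican = barbican_client.Client(session=sess)

secret = barbican.secrets.create(name='db-encryption-key',
                                 payload='s3cr3t-key-material')
secret_ref = secret.store()  # persists the secret and returns its reference URL

retrieved = barbican.secrets.get(secret_ref)
print(retrieved.payload)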
Another advantage of OpenStack Barbican is its ability to integrate seamlessly with other enterprise-grade OpenStack services. Through a simple integration with block storage, the key management service stores keys for encrypted volumes. With object storage, Barbican enables encryption of data at rest. Similarly, it integrates with Keystone, providing identity authentication and complete role-based access control. With OpenStack’s image service, the integration allows users to verify signed images and check whether an uploaded image has been altered.
Among other OpenStack-based services, VEXXHOST offers Barbican as our open source solution for key management. Barbican works in tandem with other OpenStack projects in creating highly secure private cloud environments for our customers. Contact our team regarding any queries you might have on key management or to know how VEXXHOST can help build your OpenStack cloud. Check out our dedicated resource page to improve your knowledge of private clouds.
Like what you’re reading?
Deep dive into a hands-on ebook about how you can build a successful infrastructure from the ground up!
The post An Overview of OpenStack Barbican for Secure Key Management appeared first on VEXXHOST.
Improve ease-of-use and features for Santa’s Helpers:
The traditional sack storage protocols offered by the original design were not readily accessible to less technical elves. Santa wanted to make storage available to a broader cohort of Santa’s helpers by adding a user-friendly virtual interface with sharing and collaboration features.
Reduce the maintenance required of their Elf Services Department to manage it:
At the time Santa was using a variety of methods to provision access to sack storage – all of them requiring manual steps before the storage could be delivered to a family household. Keeping track of the current storage allocations had also become a burden for Santa helpers.
Integrate authentication with their existing Sack management system:
Allow users to login via SSO (Santa sign-on).
Integrate storage account requests into their existing SSM (Sack service management) system to enable self-service provisioning for Elves.
Santa had identified a few candidate products that might fulfill their requirements, but had not looked at each in any great depth due to excessive eggnog and internal resourcing constraints.
Aptira first undertook an evaluation of four candidate Sack storage applications. We rapidly deployed each application in an environment within the North Pole so the features and functionality of each could be compared. We produced a detailed evaluation report that allowed Santa to make an informed decision about which application to move forward with for his Sack. Two leading candidates were put forward by Aptira and those deployments were converted into a larger-scale proof-of-concept that included integration with the actual Sack storage so Santa’s helpers and ELF services team could get a feel for using each application.
The SuchSack application was eventually chosen as it met the majority of Santa’s user and business requirements. From here Aptira developed a comprehensive solution architecture, paying particular attention to high demand and the ability to scale as the world’s population increased.
According to the solution architecture, Aptira deployed:
Maintainability was a significant concern for Santa so we ensured that all components of the architecture were deployed using Presentible to eliminate any manual steps in the deployment. We integrated our Presentible work into Santa’s existing Presentible Tower deployment, creating job templates so that deployments could be triggered from the Tower server. Since all of our work was being stored in Git on Santa’s GitLab server, we also created CICD pipelines to both build the SuperSack sack image and to trigger deployment to their test and production environments via Presentible Tower. During handover, Santa ELF staff were able to deploy changes to the test environment by simply committing gifts to the repository.
Finally, we worked with ELFSM staff to integrate the new service into Santa’s self-service portal, so users can request access to sack and make changes to their allocated quota.
Santa’s Helpers now have a stable and performant virtual sack storage service where they can upload, manage and share on-premises giftage.
As the uptake of the service increases, ELF staff also have the confidence that the service can be scaled out to handle the increasing world population.
By recommending applications with an external API, Aptira made sure that Santa’s ELFSM system would easily integrate with SuperSack and satisfy Santa’s requirement to have a single snowglobe screen for all household service requests. With ELFSM integration, Santa’s IT have also gained a charge-back capability to recover gift labour & production costs from other departments.
The solution was built with 100% open source components, reducing vendor lock-in.
While Aptira is happy to recommend and deploy snowfield DevOps infrastructure to support a company’s CICD needs, this project showed that we can also customise our solutions to fit in with our customers’ existing infrastructure, configuring a complete deployment pipeline for provisioning the entire solution.
Team Aptira
The post Santa’s Scalable Sack Storage & Delivery Automation appeared first on Aptira.
Managed OpenStack cloud or a self-managed cloud? Many organizations and enterprises out there considering private clouds debate between these choices. It is good to have these deliberations since the choice one makes impacts the company’s future operations.
For most companies, the answer is pretty simple—a Managed OpenStack Cloud. Let us see why it is so.
Managing a Self-Managed Cloud
While many companies are successful in deploying a self-managed cloud, the struggle begins with its daily management and operation. Unless the company has an excellent IT team and the budget to handle growing infrastructure needs, sustaining a private cloud environment can be difficult. If you’re one of the bigwigs, an ultra-large company, this could be a cakewalk: you can hire a big team, buy the necessary infrastructure, and there you have it – a self-managed cloud.
On the other hand, for most small, medium, and almost-largish organizations, self-managing a cloud means a large financial and infrastructural burden. It becomes an entity that always needs looking after and that must keep adapting to the operational needs and wants of the company. And that is before considering the team you have to hire, their salaries, payments for any support services you might need, and so on.
There also remains the fact that platforms like OpenStack are complex and constantly evolving, and you would need a team that can really stay on top of things to self-manage a solid cloud environment.
A fully managed OpenStack cloud is the right solution for enterprises facing this situation. By taking care of infrastructural and operational needs and upgrades, a managed private cloud is a much cheaper and effortless alternative.
Managed OpenStack Cloud As A Solution
A Managed OpenStack environment offloads most of the tasks and infrastructural necessities to an outside provider and their expert and dedicated team of cloud experts. The process couldn’t be any simpler – you let the provider know what you want in your cloud, they build your OpenStack cloud the way you want it, and you start using the cloud. There. That’s it.
All the operational needs are taken care of by the provider in such a setting. If you need to scale vertically or horizontally, you let the provider know, and they do it for you. As mentioned earlier, OpenStack is a complex and dynamic platform that keeps evolving with contributions from open source developers around the world. These upgrades can feel like a hassle for self-managed clouds and may require new infrastructure or additional personnel. With a managed private cloud, the provider implements all the upgrades as they come, in line with your business structure.
Overall, opting for a managed OpenStack rather than a self-managed one will save your enterprise a lot of time, money, infrastructural requirements, and human resources necessary to deploy and operate a cloud environment.
Fully Managed OpenStack Cloud With VEXXHOST
VEXXHOST has been in the cloud game since 2006, and we have been OpenStack experts for close to a decade. Fully managed OpenStack private cloud is one of our signature offerings, and we have helped numerous clients across industries and regions by setting up their desired cloud environments. Improve your knowledge of private clouds on our dedicated resource page, and contact us today for more details.
The post Why a Managed OpenStack Cloud Is Better For You Than Self-Managed appeared first on VEXXHOST.
In 2019, the term “digital sovereignty” gained momentum in Europe. For the first time in history, several ministries such as the German and French ministries for Economic Affairs pushed forward initiatives to establish digital sovereignty.
At the Open Infrastructure Summit keynotes, Johan Christenson, CEO of City Network, discussed the importance of digital sovereignty in the infrastructure world. Christenson mentioned that on July 16th 2020, the Court of Justice of the European Union invalidated “Privacy Shield”, which means companies seeking to transfer personal data from the European Economic Area to the United States must now use other mechanisms recognized by General Data Protection Regulation (GDPR) to appropriately safeguard personal data.
“As Europe grapples with an ever more centralized cloud-world and lack of true choice in providers, new initiatives are surging like GAIA-X. Europe is looking to have vendors whom can support both its values and laws in the infrastructure layer,” Christenson said. Therefore, digital sovereignty is becoming critically important, both for achieving digital independence and for the long-term innovation of Europe as a whole.
But what exactly is digital sovereignty? Why is open infrastructure important for achieving it? What is the European GAIA-X project, and how is it leveraging open infrastructure for its use case? In this article, we focus on what digital sovereignty is and why open infrastructure matters, drawing on the Open Infrastructure Summit talk, Digital Sovereignty – Why Open Infrastructure Matters, delivered by Marius Feldmann, Chief Operation Officer of Cloud&Heat Technologies, and Kurt Garloff, who leads the Sovereign Cloud Stack (SCS) initiative.
It’s easy to confuse digital sovereignty with digital autarky, but they are completely different concepts. Digital sovereignty is about integrating rising and existing technologies and innovative ideas into an overall system and adding one’s own solutions, in order to have the freedom of choice among the different options that exist.
Just like Feldmann discussed in his Summit session, “digital sovereignty is built on freedom of choice.” Consumers should have particularly effective freedom of choice when they become the “active managers” of their own data (Digital Sovereignty Report by the Advisory Council for Consumer Affairs, 2017).
But what are the essential preconditions for achieving this freedom of choice in digital sovereignty? One of the most important is that alternatives must be available. In addition, the alternatives must differ in their functional and non-functional properties, and it must be possible to discover the digital services on offer based on those properties.
With the aim to develop common requirements for a European data infrastructure, GAIA-X is a project that has been initiated by the German ministry for economy and energy. Currently, hundreds of companies worldwide work together on GAIA-X to push forward the overall vision of digital sovereignty.
GAIA-X aims to develop common requirements for a European data infrastructure. Therefore openness, transparency and the ability to connect to other European countries are central to GAIA-X. This project is the cradle of an open, transparent digital ecosystem, where data and services can be made available, collated and shared in an environment of trust. Read more in the GAIA-X: Technical Architecture document.
To further explain what it needs to achieve digital sovereignty, let’s assume a heterogeneous / distributed ecosystem as the foundation for a digital sovereign future.
Without heterogeneous and decentralized infrastructure, there is no freedom of choice. Therefore, having various providers and decentralized infrastructures is key to digital sovereignty, a point Feldmann summarized well in his talk.
However, to avoid a potential drawback of this future digital infrastructure, which is having various APIs to achieve mainly the same things on the infrastructure side, there should be a standardized API for all the cloud providers. Therefore, on one hand, the future digital infrastructures should be heterogeneous in order to have alternatives available due to the different properties. On the other hand, there should be standards for the API and for the platform itself.
To clarify, when talking about standardized APIs as well as standardized tools to make an infrastructure platform operational, Feldmann was only talking about the software layer for commodity (compute, storage, networking). There is no need for many alternatives on this layer. The commodity layer should leverage open infrastructure projects to render possible collaborative development and to avoid API fragmentation.
Reinventing the wheel on the commodity layer may be an interesting exercise; however, it slows down innovative and technical progress.
On the path towards digital sovereignty, it is crucial, on one hand, to provide a modular open infrastructure solution that enables various actors to quickly set up an operational platform for hosting services. On the other hand, open infrastructure projects should reduce development overhead and ensure API interoperability in order to avoid the vendor lock-in that contradicts the freedom of choice.
Just like what Christenson said, “open source is the only realistic way to a solid choice in vendors. Open source is also the foundation for Europe’s long term innovation and sovereignty in anything data.”
This article is a summary of the Open Infrastructure Summit session, Digital Sovereignty – Why Open Infrastructure Matters.
Watch more Summit session videos like this on the Open Infrastructure Foundation YouTube channel. Don’t forget to join the global Open Infrastructure community, and share your own personal open source stories using the hashtag, #WeAreOpenInfra, on Twitter and Facebook.
Thanks to the 2020 Open Infrastructure Summit sponsors for making the event possible:
Headline: Canonical (ubuntu), Huawei, VEXXHOST
Premier: Cisco, Tencent Cloud
Exhibitor: InMotion Hosting, Mirantis, Red Hat, Trilio, VanillaStack, ZTE
The post Digital Sovereignty – Why Open Infrastructure Matters appeared first on Superuser.
by CERN (techblog-contact@cern.ch) at December 16, 2020 10:44 AM
While reviewing the comments on the Ironic spec for Secure RBAC, I had to ask myself if the “project” construct makes sense for Ironic. I still think it does, but I’ll write this down to see if I can clarify it for myself, and maybe for you, too.
Baremetal servers change. The whole point of Ironic is to control the change of Baremetal servers from inanimate pieces of metal to “really useful engines.” This needs to happen in a controlled and unsurprising way.
Ironic the server does what it is told. If a new piece of metal starts sending out DHCP requests, Ironic is going to PXE boot it. This is the start of this new piece of metal’s journey of self-discovery. At least as far as Ironic is concerned.
But really, someone had to rack and wire said piece of metal. Likely the person that did this is not the person that is going to run workloads on it in the end. They might not even work for the same company; they might be a delivery person from Dell or Supermicro. So, once they are done with it, they don’t own it any more.
Who does? Who owns a piece of metal before it is enrolled in the OpenStack baremetal service?
No one. It does not exist.
Ok, so let’s go back to someone pushing the button, booting our server for the first time, and it doing its PXE boot thing.
Or, we get the MAC address and enter that into the ironic database, so that when it does boot, we know about it.
Either way, Ironic is really the playground monitor, just making sure it plays nice.
What if Ironic is a multi-tenant system? Someone needs to be able to transfer the baremetal server from wherever it lands up front to the people that need to use it.
I suspect that transferring metal from project to project is going to be one of the main use cases after the sun has set on day one.
So, who should be allowed to say what project a piece of baremetal can go to?
Well, in Keystone, we have the idea of hierarchy. A Project is owned by a domain, and a project can be nested inside another project.
But this information is not passed down to Ironic. There is no way to get a token for a project that shows its parent information. But a remote service could query the project hierarchy from Keystone.
Say I want to transfer a piece of metal from one project to another. Should I have a token for the source project or the destination project? Ok, dumb question, I should definitely have a token for the source project. The smart question is whether I should also have a token for the destination project.
Sure, why not. Two tokens. One with the “delete” role and one with the “create” role.
The only problem is that nothing like this exists in OpenStack. But it should.
We could fake it with hierarchy; I can pass things up and down the project tree. But that really does not do one bit of good. People don’t really use the tree like that. They should. We built a perfectly nice tree and they ignore it. Poor, ignored, sad, lonely tree.
Actually, it has no feelings. Please stop anthropomorphising the tree.
What you could do is create the destination object, kind of a potential piece-of-metal or metal-receiver. This receiver object gets a UUID. You pass this UUID to the “move” API. But you call the MOVE API with a token for the source project. The move is done atomically. Let’s call this thing identified by a UUID a move-request.
The order of operations could be done in reverse. The operator could create the move request on the source, and then pass that to the receiver. This might actually make more sense, as you need to know about the object before you can even think to move it.
Both workflows seem to have merit.
And…this concept seems to be something that OpenStack needs in general.
In fact, why should the API not be a generic API? I mean, it would have to be per service, but the same API could be used to transfer VMs between projects in Nova and volumes between projects in Cinder. The API would have two verbs: one for creating a new move request, and one for accepting it.
POST /thingy/v3.14/resource?resource_id=abcd&destination=project_id
If this is called with a token, it needs to be scoped. If it is scoped to the project_id in the API, it creates a receiving type request. If it is scoped to the project_id that owns the resource, it is a sending type request. Either way, it returns a URL. Call GET on that URL and you get information about the transfer. Call PATCH on it with the appropriately scoped token, and the resource is transferred. And maybe enough information to prove that you know what you are doing: maybe you have to specify the source and target projects in that patch request.
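To make the idea concrete, here is a rough sketch of what that two-token flow could look like from a client's point of view. Everything below is hypothetical: the endpoint, field names, and tokens are placeholders for an API that does not exist yet, written as a small Python requests script.

# Hypothetical two-token transfer flow; nothing below is a real OpenStack API.
import requests

BASE = "https://ironic.example.com/thingy/v3.14"
receiver_token = "TOKEN-SCOPED-TO-DESTINATION-PROJECT"   # placeholder
source_token = "TOKEN-SCOPED-TO-SOURCE-PROJECT"          # placeholder

# 1. The destination project registers a move-request for the resource
#    it expects to receive; the response identifies the request by URL.
resp = requests.post(
    f"{BASE}/resource",
    params={"resource_id": "abcd", "destination": "dest-project-id"},
    headers={"X-Auth-Token": receiver_token},
)
move_url = resp.json()["url"]

# 2. Either side can inspect the pending transfer.
print(requests.get(move_url, headers={"X-Auth-Token": source_token}).json())

# 3. The source project accepts it; the service performs the move atomically,
#    and the caller proves intent by naming both projects explicitly.
requests.patch(
    move_url,
    json={"source": "source-project-id", "destination": "dest-project-id"},
    headers={"X-Auth-Token": source_token},
)

The reverse workflow described above would simply swap which token creates the move request and which one accepts it with the PATCH.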
A foolish consistency is the hobgoblin of little minds.
Edit: OK, this is not a new idea. Cinder went through the same thought process according to Duncan Thomas. The result is this API: https://docs.openstack.org/api-ref/block-storage/v3/index.html#volume-transfer
Which looks like it then morphed to this one:
https://docs.openstack.org/api-ref/block-storage/v3/index.html#volume-transfers-volume-transfers-3-55-or-later
Big Data platforms and technologies are exceeding expectations all over. Initially considered a means of handling large amounts of data, they are now pushing limits with high volume, performance, variety, and processing speed.
For OpenStack users, the platform of choice for building Big Data applications is the high-performance and versatile Sahara. Let us find out what makes OpenStack Sahara special and how it functions along with other OpenStack projects in creating solid Big Data platforms.
OpenStack Sahara provides users with simplified ways to provision data processing clusters such as Hadoop and Spark. This provisioning is done by specifying various parameters like framework version, hardware node details, cluster topology, and more. Sahara can complete this process in minutes and can add or remove nodes on demand to scale already provisioned clusters.
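As a rough illustration of that API-driven flow, the sketch below provisions a cluster with python-saharaclient. It is only an assumption of how such a call could look: the credentials, template, image, and network IDs are placeholders, and parameter names may differ between client releases.

# A minimal, hedged sketch of provisioning a Sahara cluster from Python.
from keystoneauth1 import loading, session
from saharaclient import client as sahara_client

# Authenticate against Keystone (all values are placeholders).
loader = loading.get_plugin_loader("password")
auth = loader.load_from_options(
    auth_url="https://keystone.example.com/v3",
    username="demo", password="secret", project_name="demo",
    user_domain_id="default", project_domain_id="default")
sess = session.Session(auth=auth)

sahara = sahara_client.Client("1.1", session=sess)

# Ask Sahara to build a cluster from a pre-defined cluster template.
cluster = sahara.clusters.create(
    name="demo-hadoop",
    plugin_name="vanilla",              # data processing plugin
    hadoop_version="2.7.1",             # framework version
    cluster_template_id="CLUSTER-TEMPLATE-UUID",
    default_image_id="IMAGE-UUID",
    net_id="NEUTRON-NETWORK-UUID")
print(cluster.id)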
As a Big Data Platform, Sahara is designed to address the following:
Here are some of the key features of the platform:
At VEXXHOST, we use Sahara combined with other OpenStack projects to deploy clouds with Big Data requirements seamlessly without any upfront costs. We can even deploy big data applications (Cloudera CDH, Hortonworks HDP, etc.) 300% more efficiently. This efficiency results in much faster data access and better cost-efficiency.
Quick Deployment – With the backing of the range of OpenStack services, Sahara can deploy big data applications quickly. By quick, we mean hundreds of servers in a matter of minutes! Incredible, right? This quick deployment means that your team experiences no delays in building big data applications.
Cost Savings – Using a VEXXHOST cloud means that your organization doesn’t have to worry about procuring expensive hardware. We are ready for your Big Data project with enterprise-grade hardware solutions.
Scalability for Easy Resource Management – Business needs keep changing. Our OpenStack-based cloud allows you to quickly scale servers up and down as needed, and this is highly beneficial for Big Data applications. You don’t have to waste time and resources adding and removing extra space and hardware.
Optimized, High-Performance Cloud – Users are ensured high-performance clouds consistently, avoiding ‘noisy neighbor’ issues.
No Vendor Lock-Ins, Constant Monitoring, and Support – Our platform provides full monitoring, support, and incident management for your Big Data applications. The open source nature of the cloud makes the entire stack devoid of vendor lock-ins.
As mentioned earlier, VEXXHOST uses the Big Data platform Sahara along with OpenStack projects to create secure private cloud environments. They are highly flexible and secure. You can get to know more about Sahara and the whole cloud offerings by contacting our team or checking our dedicated resource page on private clouds.
The post How OpenStack Sahara Works As a High-Performance Big Data Platform appeared first on VEXXHOST.
I've had to re-teach myself how to do this so I'm writing my own notes.
Prerequisites:
Once you have your environment ready, run a test with the name from step 3:
./scripts/run-local-test tripleo_derived_parameters
Some tests in CI are configured to use `--skip-tags`. You can do this for your local tests too by setting the appropriate environment variables. For example:
export TRIPLEO_JOB_ANSIBLE_ARGS="--skip-tags run_ceph_ansible,run_uuid_ansible,ceph_client_rsync,clean_fetch_dir"
./scripts/run-local-test tripleo_ceph_run_ansible
by Unknown (noreply@blogger.com) at December 15, 2020 03:46 PM
Codership is pleased to announce our first ever online training for Galera Cluster, with the launch of the Database Administration for Galera Cluster course. There are two sets of times for our EMEA and American attendees: the former happening on January 13th and 14th, 2021, starting at 10 AM CET, and the latter on January 20th and 21st, 2021, starting at 9 AM PST.
This is a hands-on course that spans two days, and you will have about six contact hours per day in this instructor-led course. If you are a DBA or have been given a Galera Cluster to manage, this is the course for you, as you’ll learn how to run your Galera Cluster in an optimal fashion, from setup, performance tuning, and monitoring to backups and more. We will cover not just Codership’s Galera Cluster distribution but also MariaDB Galera Cluster and Percona XtraDB Cluster in this course. For more information, please read the full content breakdown.
As an added bonus, we will also be covering database administration with Galera Manager, our new GUI tool.
What are you waiting for? Sign up now, as the early bird rate ends 31 December 2020. There is also a volume discount available.
First things first. We love Ceph storage. It has been a part of VEXXHOST’s OpenStack private cloud offering for a while now. Since storage is one of the prime requirements for most enterprises approaching us for OpenStack solutions, here we are, giving you the basics of Ceph and how it will benefit your private cloud.
With its first stable release in 2012, Ceph is the most popular distributed storage solution for OpenStack. What really is it? How does it work?
In simple terms, Ceph is a free and open source storage solution that is designed to allow object, block, and file storage from a unified system. Ceph is designed to be self-managed and self-healing. It can deal with outages on its own and constantly works to reduce administration costs.
Ceph is highly scalable, runs on commodity hardware, and is specifically designed to handle enterprise workloads aiming for completely distributed operations sans any failure points. Ceph storage is also fault-tolerant and becomes so by replicating data. This means that there really are no bottlenecks in the process while Ceph is operating.
The first major stable release of Ceph was Argonaut, which was released in July 2012. Since then, there have been 15 releases within 8 years, the latest in line being Nautilus and Octopus. The next release is titled Pacific, with the date of release yet to be announced.
Ceph nodes work by employing five fully distributed and distinct daemons allowing direct user interaction. Here is a look at each of them and what they do.
Ceph Monitors – (ceph-mon) – These cluster monitors help in keeping track of both active and failed nodes.
Ceph Managers – (ceph-mgr) – They work in tandem with Ceph monitors and support external systems in monitoring and management.
Object storage devices – (ceph-osd) – They work in storing the content files.
Metadata servers – (ceph-mds) – They help in the storage of metadata from inodes and directories.
Representational state transfer – (ceph-rgw) – These gateways expose the object storage layer and make the interface compatible with relevant APIs.
When one or more monitors and two or more object storage daemons are deployed, it is known as a Ceph Storage Cluster. The file system, object storage, and block devices read and write data to and from the storage cluster. A Ceph cluster can have thousands of storage nodes since the object storage daemons store data on such nodes.
Ceph uses an architecture of distributed object storage, in which data is managed as objects, as opposed to other architectures where data is managed in a file hierarchy. Another aspect worth mentioning is that Ceph’s libraries give users direct access to the RADOS (Reliable Autonomic Distributed Object Store) storage system. This feature also lays the foundation for the Ceph Filesystem and the RADOS Block Device.
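As a small illustration of that direct RADOS access, here is a minimal sketch using the librados Python bindings (the python3-rados package). It assumes a reachable cluster, a valid /etc/ceph/ceph.conf with a keyring, and an existing pool; the pool and object names are placeholders.

# Minimal librados sketch: connect to the cluster and write/read one object.
import rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
print("Cluster FSID:", cluster.get_fsid())

ioctx = cluster.open_ioctx("mypool")        # I/O context on an existing pool
ioctx.write_full("hello-object", b"stored directly in RADOS")
print(ioctx.read("hello-object"))

ioctx.close()
cluster.shutdown()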
Ceph brings in many great advantages to OpenStack-based private clouds. Here is a look at some of them.
Easy adoption – A shift into software-defined storage platforms can sometimes be complicated. Ceph solves this problem by allowing block and object storage in the same cluster. There is no worry about administering separate storage services using other APIs or tech.
High availability & improved performance – The erasure coding feature improves data availability by adding resiliency and durability. In some cases, write speeds can be almost double those of the previous backend.
Cost control – Since Ceph runs on commodity hardware, there is no need for expensive and extra hardware. With an OpenStack private cloud from a reliable and reputed provider such as VEXXHOST, the pay-as-you-go structure also contributes to the overall cost control.
Better security – LDAP, Active Directory Integration, encryption features, etc., in place with Ceph can limit unnecessary access into the system.
Interested in knowing more about Ceph storage and secure and scalable OpenStack Private clouds? VEXXHOST has been using Ceph for storage for a long while now, and since 2019, we are a member of the Ceph Foundation. Reach out to the expert team at VEXXHOST, and we can guide you through the process easily. Looking forward to hearing from you!
The post Ceph Storage Basics and How It Benefits Your OpenStack Private Cloud appeared first on VEXXHOST.
The European Centre for Medium-Range Weather Forecasts (ECMWF), an intergovernmental organisation, was established in 1975. Based in Reading, UK, with its data center soon moving to Bologna, Italy, ECMWF spans 34 states in Europe. It operates one of the largest supercomputer complexes in Europe and the world’s largest archive of numerical weather prediction data. In terms of its IT infrastructure, ECMWF’s HPC (high-performance computing) facility is one of the largest weather sites globally. With cloud infrastructure for the Copernicus Climate Change Service (C3S), the Copernicus Atmosphere Monitoring Service (CAMS), WEkEO, which is a Data and Information Access Service (DIAS) platform, and the European Weather Cloud, teams at ECMWF maintain an archive of climatological data of 250 PB, with a daily growth of 250 TB.
The European Weather Cloud started three years ago as a collaboration between ECMWF and the European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT), aiming to make it easier to work on weather and climate big data in a cloud-based infrastructure. With the goal of bringing the computation resources (cloud) closer to their big data (meteorological archive and satellite data), ECMWF’s pilot infrastructure was built with open source software: Ceph and OpenStack, deployed using TripleO.
The graph below shows the current state of the European Weather Cloud overall infrastructure comprising two OpenStack clusters: one built with OpenStack Rocky and another one with OpenStack Ussuri. The total hardware of the current configuration comprises around 3,000 vCPUs, 21 TB RAM for both clusters, 1PB of storage and 2×5 NVIDIA Tesla V100 GPUs.
The graph below shows the cloud infrastructure of the European Weather Cloud. As you can see, Ceph is built and maintained separately from OpenStack, which gives the teams at the European Weather Cloud a lot of flexibility in building different clusters on the same Ceph storage. Both of its OpenStack clusters use the same Ceph infrastructure and the same rbd pools. Besides some usual HDD failures, Ceph performs very well, and the teams at the European Weather Cloud are planning to gradually move to CentOS8 (due to partial support of CentOS7) and upgrade to Octopus and cephadm on a live cluster after a lot of testing in their development environment.
The first OpenStack cluster in the European Weather Cloud, which was built in September 2019, is based on Rocky with TripleO installer. In the meantime, engineers at the European Weather Cloud also created another development environment with OpenStack and Ceph clusters that are similarly configured for testing-experimentation.
Experience and Problems:
Their deployment has about 2,600 vCPUs with 11 TB of RAM and has not had any significant problems. The external Ceph cluster integration worked with minimum effort by simply configuring ceph-config.yaml with small modifications. The two external networks (one public facing and another for fast access to their 300PB data archive) were straightforward.
Most of their VMs are attached to both external networks with no floating IPs, which was a challenging VM routing issue without dynamic routing on the switches. To solve this issue, they used dhcp hooks and configured VM routing before they made the images available to the users.
There were some problems that they encountered with the NIC bond interface configuration and in conjunction with the configuration of their switches at the beginning. Therefore, the engineers decided to not use a Link Aggregation Control Protocol (LACP) configuration, and now they have a single network interface card (NIC) deployment for OpenStack. They also encountered some problems with Load-Balancing-as-a-Service (LBaas) due to Octavia problems with certificates overridden on each deployment. The problem is presented here.
As soon as they found the solutions to these challenges, the engineers updated their live system and moved the whole cluster from a Single NIC to a Multiple NIC deployment which is transparent to the users with zero downtime. The whole first cluster was redeployed, and the network was re-configured with Distributed Virtual Routing (DVR) configuration for better network performance.
In March 2020, engineers at the European Weather Cloud added more hardware for their OpenStack and Ceph cluster, and they decided to investigate upgrading to the newest versions of OpenStack.
Experience and Problems:
First, they converted their Rocky undercloud to a VM for better management and as a safety net for backups and recovery. From March to May 2020, they investigated and tested upgrading to Stein (first the undercloud and then the overcloud upgrade in a test environment). Upgrading step by step from Rocky to Stein to Train and finally to Ussuri was possible, but because Ussuri is based on CentOS8 while the earlier releases are based on CentOS7, that path was considered impractical. Therefore, they made a big jump from Rocky to Ussuri, skipping three releases, and decided to directly deploy their new systems on OpenStack Ussuri.
The second OpenStack cluster, based on Ussuri, was first built in May 2020, 17 days after the release of Ussuri on May 13. This cluster was a plain vanilla configuration, which means that although the network was properly configured with OVN and provider networks across 25 nodes, it did not yet have any integration with Ceph storage.
Experience and Problems:
The new building method, based on Ansible instead of Mistral, had some hiccups, such as the switch from the stack user to heat-admin, which is not what users were used to when deploying. In addition, they were trying to quickly understand and master the CentOS8 base operating system for both the host systems and service containers. Engineers at the European Weather Cloud also continued with OVS instead of OVN because of the implications for assigning floating IP addresses. With help from the OpenStack community, the problems were solved, and the cluster was built again in mid-June 2020.
Regarding GPUs, the configuration of the NVIDIA GPUs was straightforward. However, since they had not implemented IPv6 on their Ussuri cluster, when they installed and configured the GPU drivers on a node, OVS tried to bind to IPv6 addresses during boot, which resulted in a considerable increase in boot time. A workaround was to explicitly remove the IPv6 configuration on their GPU nodes. All nodes with a GPU also serve as normal compute nodes, and nova.conf is configured through their Ansible playbooks.
In terms of the European Weather Cloud’s infrastructure, the engineers are planning to integrate the infrastructure with other internal systems for better monitoring and logging. They are also planning to phase out the Rocky cluster and move all the nodes to Ussuri. Trying to follow the latest versions of OpenStack and Ceph, they will continue to operate, maintain and upgrade the Cloud’s infrastructure.
For the federation, the goal is to federate their Cloud infrastructure with infrastructures of their Member states. They have identified and will continue to explore potential good use cases to federate.
Regarding the integration with other projects, the European Weather Cloud will be interfacing with the Digital Twin Earth which is a part of the Destination Earth Program of the EU.
Teams at the European Weather Cloud are also planning to contribute code and help other users that are facing the same problems while deploying clusters in the OpenStack community.
Watch this session video and more on the Open Infrastructure Foundation YouTube channel. Don’t forget to join the global Open Infrastructure community, and share your own personal open source story using #WeAreOpenInfra on social media.
Special thanks to our 2020 Open Infrastructure Summit sponsors for making the event possible:
The post OpenStack in Production and Integration with Ceph: A European Weather Cloud User Story appeared first on Superuser.
In the year since the Raspberry Pi 4 was released, I've seen many tutorials (like this and this) and articles on how well the 4GB model works with container platforms such as Kubernetes (K8s), Lightweight Kubernetes (K3s), and Docker Swarm. As I was doing research, I read that Arm processors are "first-class citizens" in OpenStack.
I have two business ideas to explore, and I decided that now is a good time to take the plunge and create a prototype. My hesitation throughout the last year was due to the time and financial investment required. After some inspiration, detailed thought, and self-evaluation, I am ready to go for it. Worst case scenario, this is going to eat up a lot of my time. Even if I lose time, I will learn a lot about cloud infrastructure, cloud networking, and cloud instance provisioning. My first business idea is in the realm of home and small business network cyber security. The second utilizes a private cloud platform to provision labs for IT and cyber security training. A small virtual lab isn’t going to cut it for these ventures.
Before I can pursue these builds, I need to upgrade my home network and lab and select a platform. I currently have 3 old used servers (2 Dell PowerEdge R510s and an HP Proliant DL360) for the cloud. For networking, I have an ancient Cisco switch. I think I can get by with the old switch for now, but my small private cloud requires more servers. I can use the private cloud to provision networks to test out capabilities, learn, and design. These can also hold prototypes and proof of concepts for demonstrations. For the private cloud, I selected OpenStack as my platform. This will allow me to provision instances using Terraform, and have more flexibility with networking configuration. I can also avoid a large AWS and Azure bill while I experiment with different configurations. The only thing that will suffer is my power bill.
Based on the OpenStack documentation I will need at least 4-5 servers to support my configuration which is a small compute cloud. To use Juju and Metal as a Service (MAAS) to deploy the cloud, I will need 2 more servers, but I could probably use one of my servers and host 2 VMs instead of purchasing another server. I haven’t yet decided whether I am going to use Juju and MAAS to deploy OpenStack, but I do know that I need at least 2 more servers for my project. I also want to separate my private cloud from the rest of my network and still maintain network performance with the added security, so I will need a firewall / IPS appliance. Once complete, my home network will look something like this:
I am trying to stay under $2,000 total for this project (including what I already spent). Below is the price I paid for everything I already have.
Device | Qty | Unit Cost | Shipping | Total Cost
HP ProLiant DL360 | 1 | $149.99 | $112.89 | $262.88
Dell PowerEdge R510 | 2 | $238.99 | $75.00 | $552.98
Cisco Catalyst 3560 | 1 | $69.00 | $17.95 | $86.95
Total Cost | | | | $902.81
Existing devices with costs at the time of purchase
So, based on that I have about $1100 to spend. Although I have plenty of room, I am sticking with used equipment. The only exception I am making is my firewall appliance.
I was able to find 2 Dell PowerEdge R610s for $157 each, well within budget. My shipping costs to my location are really high, so I have to keep that in mind. Even with the shipping costs, I still consider these a bargain and they meet my needs. These servers also come from the same vendor as my previous purchases (PC Server and Parts), so I know they will arrive in good condition and operate well.
Next, I need a firewall appliance. For this, I am going straight to the vendor because their site is a lot cheaper than Amazon. This appliance from Protectli has 4 NICs, a quad core processor, and a small SSD. This is more than enough to run pfSense (and it was already tested for it), so it will easily meet my needs and be a step up from my current options for under $300.
With those 2 purchases, I have all the equipment I will need, and significantly under my max budget! The only other purchase I might make is a rack to store the equipment and a PDU. For now, I just have to wait for them to arrive. I plan to start sometime in December. While I wait, I am going to work on my remote access solutions, determine what IDS/IPS I am going to use (Suricata, Snort, or Bro), and finalize my design of how this will all fit together.
Device | Qty | Unit Cost | Shipping | Total Cost
HP ProLiant DL360 | 1 | $149.99 | $112.89 | $262.88
Dell PowerEdge R510 | 2 | $238.99 | $75.00 | $552.98
Cisco Catalyst 3560 | 1 | $69.00 | $17.95 | $86.95
Protectli FW4B | 1 | $282.00 | $7.00 | $289.00
Dell PowerEdge R610 | 2 | $156.99 | $111.00 | $424.98
Total Cost | | | | $1616.79
All devices with costs at time of purchase
This article was originally posted on mattglass-it.com. See the original article here.
The post Embarking on a New Venture: Creating a Private Cloud with OpenStack for Under $1700 appeared first on Superuser.
OpenStack private cloud for DevOps is gaining much traction even amid fierce competition. The flexible nature of the open source platform allows DevOps engineers to innovate from time to time. OpenStack also maximizes existing infrastructure and helps engineers tackle untoward incidents with ease.
OpenStack emerged and established itself as a gold standard in building private clouds, among other Infrastructure as a Service (IaaS) platforms. The open source elements of the platform allow engineers to act autonomously to provision and de-provision cloud environments. OpenStack works as a self-service mechanism with all the flexibility cloud builders need. Another advantage is that engineers being able to provision things reduces downstream bottlenecks for the operations team.
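As a simple illustration of that self-service model, here is a minimal openstacksdk sketch that provisions and later removes an instance. The cloud name, image, flavor, and network are placeholders for whatever exists in a given environment; this is a sketch, not a prescribed workflow.

# Minimal self-service provisioning sketch with openstacksdk (names are placeholders).
import openstack

conn = openstack.connect(cloud="my-private-cloud")   # entry from clouds.yaml

image = conn.compute.find_image("Ubuntu 20.04")
flavor = conn.compute.find_flavor("v2-standard-2")
network = conn.network.find_network("internal-net")

server = conn.compute.create_server(
    name="ci-runner-01",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}])
server = conn.compute.wait_for_server(server)
print(server.status)

# De-provision when the environment is no longer needed.
conn.compute.delete_server(server)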
OpenStack is not just open source but also vendor-agnostic. This enables the end-user to take full advantage of competitive pricing. There is no vendor lock-in with OpenStack. The availability of a private cloud at prices comparable to public clouds works great for organizations with large-scale data needs.
Another significant feature of OpenStack private cloud, compared to a public cloud, is its ability to have more control in optimizing application performance and security. Companies with sensitive data to handle prefer OpenStack private clouds for DevOps and further use for the same reason.
In the initial phases, the integration of OpenStack clouds might seem like a challenge to enterprises used to traditional IT infrastructures. But, an experienced cloud provider can make this process a breeze. Once the company makes it clear what they want in their cloud for DevOps and later use, the provider takes care of the rest. The flexibility of OpenStack really comes in handy here as it allows tailoring the platform according to individual needs.
Moreover, OpenStack also comes with regular updates and releases across the board frequently. The cloud provider ensures that enterprises get these upgrades promptly so that operations run smoothly with the latest technology.
For compute, storage, and network, OpenStack is clearly one of the leaders in the game, with its flexibility and vendor-agnostic nature. The fact that developers are able to create cloud environments with high agility is invaluable for DevOps.
VEXXHOST is an established cloud provider with a decade’s worth of experience in OpenStack. We build and deploy private clouds for DevOps according to varying specifications and requirements from clients worldwide. We also provide Managed Zuul, a specialized tool that can accompany DevOps cycles. Talk to our team for further assistance. Check our private cloud resources page to learn more about highly secure cloud environments.
You’ve got Big Data?
We’ve got your Big Data Infrastructure Solution in a free ebook!
The post Why OpenStack Private Cloud for DevOps is the Way Forward appeared first on VEXXHOST.
The Open Infrastructure Foundation is an entity supporting open source development in IT infrastructure globally.
For almost a decade, the governing body of OpenStack and related projects was the OpenStack Foundation. Recently, at the Open Infrastructure Summit 2020, the foundation announced that it has evolved into the Open Infrastructure Foundation.
This move is part of the multi-year drive of community evolution and expansion into including newer projects under the foundation’s wing. In this context, let’s take a look at the timeline that led to this evolution.
The origin of OpenStack, and later the foundation, can be traced back to something that happened in a true open source fashion – a collaboration.
Rackspace was rewriting the infrastructure code (what was later known as Swift) for its cloud offerings. They decided to make the existing code open source. Simultaneously, through its contractor Anso Labs, NASA did the same with the Python-based cloud fabric controller Nova. The teams realized that the two projects were complementary and decided to collaborate. This shared program marked the beginning of OpenStack.
Tech professionals from 25+ companies attended the first OpenStack Design Summit, held in July 2010 in Austin, Texas. Team VEXXHOST joined the OpenStack community by the time the second OpenStack release, Bexar, came out.
OpenStack and the community were growing, and there was a need to promote and develop projects in a more sustainable and organized manner. This thought resulted in the creation of the OpenStack Foundation in September 2012.
The Foundation
The creation of the OpenStack Foundation was a defining moment for cloud computing users across the globe. The foundation launched with over 5,600 members representing numerous companies. We are proud to say that VEXXHOST was also part of it all from the very beginning.
To govern OpenStack and other open source projects better, the foundation set up three bodies under its wing – the Board of Directors, the Technical Committee, and the User Committee. Over the years, the foundation grew with the growth of the projects. Recently there arose a need to build a larger umbrella to adopt and develop more open source projects. Hence, the OpenStack Foundation evolved into the Open Infrastructure Foundation, with OpenStack still being in the heart of it all.
The Summits
The first OpenStack Summit was held in Paris in 2014. The event changed its name to the Open Infrastructure Summit with its Denver edition in 2019. Held roughly twice a year, the summits have always given timely boosts to open source development. The global community of OpenStack developers, contributors, and users comes together during the summits to share ideas. VEXXHOST is a regular presence at the summits and won the Superuser Award at the Denver Summit in 2019.
The Open Infrastructure Summit was held virtually from 19th to 23rd October 2020, owing to the pandemic. The foundation announced its evolution and name change at the Summit and was greeted with much fanfare.
VEXXHOST was a Corporate Member of the OpenStack Foundation for many years. Our association with the OpenStack community began in 2011, and we’ve been a part of the journey so far as an avid contributor and user. With the latest evolution, we are proud to be a Founding Silver Member of the Open Infrastructure Foundation and accompany it to new heights of open source development.
VEXXHOST has a wide range of cloud solutions powered by OpenStack and other open source projects, including a fully customizable private cloud. If you have further queries on our services, contact us, and we’ll get back to you.
The post A Brief History of the Open Infrastructure Foundation appeared first on VEXXHOST.
Kata containers are containers that use hardware virtualization technologies for workload isolation almost without performance penalties. Top use cases are untrusted workloads and tenant isolation (for example in a shared Kubernetes cluster). This blog post describes how to run Percona Kubernetes Operator for Percona XtraDB Cluster (PXC Operator) using Kata containers.
Setting up Kata containers and Kubernetes is well documented in the official github repo (cri-o, containerd, Kubernetes DaemonSet). We will just cover the most important steps and pitfalls.
First of all, remember that Kata containers require hardware virtualization support from the CPU on the nodes. To check if your Linux system supports it, run the following on the node:
$ egrep '(vmx|svm)' /proc/cpuinfo
VMX (Virtual Machine Extension) and SVM (Secure Virtual Machine) are Intel and AMD features that add various instructions to allow running a guest OS with full privileges, but still keeping host OS protected.
For example, on AWS only i3.metal and r5.metal instances provide VMX capability.
Kata containers are OCI (Open Container Interface) compliant, which means that they work pretty well with CRI (Container Runtime Interface) and hence are well supported by Kubernetes. To use Kata containers, please make sure your Kubernetes nodes run the CRI-O or containerd runtimes.
The image below describes pretty well how Kubernetes works with Kata.
Hint: GKE or kops allows you to start your cluster with containerd out of the box and skip manual steps.
To run Kata containers, k8s nodes need to have kata-runtime installed and runtime configured properly. The easiest way is to use DaemonSet which installs required packages on every node and reconfigures containerd. As a first step apply the following yamls to create the DaemonSet:
$ kubectl apply -f https://raw.githubusercontent.com/kata-containers/packaging/master/kata-deploy/kata-rbac/base/kata-rbac.yaml
$ kubectl apply -f https://raw.githubusercontent.com/kata-containers/packaging/master/kata-deploy/kata-deploy/base/kata-deploy.yaml
DaemonSet reconfigures containerd to support multiple runtimes. It does that by changing /etc/containerd/config.toml. Please note that some tools (ex. kops) keep containerd in a separate configuration file config-kops.toml. You need to copy the configuration created by DaemonSet to the corresponding file and restart containerd.
Create runtimeClasses for Kata. RuntimeClass is a feature that allows you to pick runtime for the container during its creation. It has been available since Kubernetes 1.14 as Beta.
$ kubectl apply -f https://raw.githubusercontent.com/kata-containers/packaging/master/kata-deploy/k8s-1.14/kata-qemu-runtimeClass.yaml
Everything is set. Deploy test nginx pod and set the runtime:
$ cat nginx-kata.yaml
apiVersion: v1
kind: Pod
metadata:
name: nginx-kata
spec:
runtimeClassName: kata-qemu
containers:
- name: nginx
image: nginx
$ kubectl apply -f nginx-kata.yaml
$ kubectl describe pod nginx-kata | grep "Container ID"
Container ID: containerd://3ba8d62be5ee8cd57a35081359a0c08059cf08d8a53bedef3384d18699d13111
On the node verify if Kata is used for this container through ctr tool:
# ctr --namespace k8s.io containers list | grep 3ba8d62be5ee8cd57a35081359a0c08059cf08d8a53bedef3384d18699d13111
3ba8d62be5ee8cd57a35081359a0c08059cf08d8a53bedef3384d18699d13111 sha256:f35646e83998b844c3f067e5a2cff84cdf0967627031aeda3042d78996b68d35 io.containerd.kata-qemu.v2
Runtime is showing kata-qemu.v2 as requested.
The current latest stable PXC Operator version (1.6) does not support runtimeClassName. It is still possible to run Kata containers by specifying the io.kubernetes.cri.untrusted-workload annotation. To ensure containerd supports this annotation, add the following into the configuration toml file on the node:
# cat <<EOF >> /etc/containerd/config.toml
[plugins.cri.containerd.untrusted_workload_runtime]
runtime_type = "io.containerd.kata-qemu.v2"
EOF
# systemctl restart containerd
We will install the operator with regular runtime but will put the PXC cluster into Kata containers.
Create the namespace and switch the context:
$ kubectl create namespace pxc-operator
$ kubectl config set-context $(kubectl config current-context) --namespace=pxc-operator
Get the operator from github:
$ git clone -b v1.6.0 https://github.com/percona/percona-xtradb-cluster-operator
Deploy the operator into your Kubernetes cluster:
$ cd percona-xtradb-cluster-operator
$ kubectl apply -f deploy/bundle.yaml
Now let’s deploy the cluster, but before that, we need to explicitly add an annotation to the PXC pods and mark them untrusted to force Kubernetes to use the Kata containers runtime. Edit deploy/cr.yaml:
pxc:
size: 3
image: percona/percona-xtradb-cluster:8.0.20-11.1
…
annotations:
io.kubernetes.cri.untrusted-workload: "true"
Now, let’s deploy the PXC cluster:
$ kubectl apply -f deploy/cr.yaml
The cluster is up and running (using 1 node for the sake of experiment):
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
pxc-kata-haproxy-0 2/2 Running 0 5m32s
pxc-kata-pxc-0 1/1 Running 0 8m16s
percona-xtradb-cluster-operator-749b86b678-zcnsp 1/1 Running 0 44m
In the ctr output you should see the percona-xtradb-cluster container running using the Kata runtime:
# ctr --namespace k8s.io containers list | grep percona-xtradb-cluster | grep kata
448a985c82ae45effd678515f6cf8e11a6dfca159c9abf05a906c7090d297cba docker.io/percona/percona-xtradb-cluster:8.0.20-11.2 io.containerd.kata-qemu.v2
We are working on adding the support for runtimeClassName option for our operators. The support of this feature enables users to freely choose any container runtime.
Running databases in containers is an ongoing trend and keeping data safe is always the top priority for a business. Kata containers provide security isolation through mature and extensively tested qemu virtualization with little to no changes to the existing environment.
Deploy Percona XtraDB Cluster with ease in your Kubernetes cluster with our Operator and Kata containers for better isolation without performance penalties.
This article was originally posted on percona.com/blog. See the original article here.
The post Running Percona Kubernetes Operator for Percona XtraDB Cluster with Kata Containers appeared first on Superuser.
OpenStack upstream CI/CD tests things on defined LTS or stable distribution versions. The OpenStack Technical Committee defines the testing runtime for each cycle. As per the OpenStack Victoria testing runtime, the defined versions are:
Ubuntu Focal (Ubuntu LTS 20.04) was released on April 23, 2020, and in OpenStack Victoria (released on October 14, 2020), we migrated the upstream CI/CD testing to the above-defined testing runtime. This work was done as one of the community-wide goals, “Migrate CI/CD jobs to new Ubuntu LTS Focal”.
OpenStack CI/CD is implemented with Zuul jobs that prepare the node, deploy OpenStack using DevStack, and run tests (Tempest or its plugins, project in-tree tests, rally tests, etc.). The base OS installed on the node is where OpenStack will be deployed by DevStack.
Until the OpenStack Ussuri release, the base OS on the majority of the jobs’ nodes was Ubuntu Bionic (18.04). So DevStack used to deploy OpenStack on Ubuntu Bionic and then run tests.
With the new version of Ubuntu Focal (20.04), the nodes’ base OS has been moved from Ubuntu Bionic to Ubuntu Focal. On every code change, this makes sure OpenStack works properly on Ubuntu Focal.
NOTE: This migration targets only zuulv3 native jobs. Legacy jobs are left running on Bionic and are planned to be migrated to Focal when they are migrated to zuulv3 native jobs. We have another community-wide goal to migrate all the legacy jobs to zuulv3 native.
We started the work in June and prepared the devstack, tempest, and tox-based base jobs on Focal so that all project gates could be tested and fixed in advance, before the devstack and Tempest base jobs merged. The idea behind the advance testing is to avoid or minimize gate failures in any of the repos under any project. This advance testing includes integration as well as tox-based unit, functional, doc, pep8, and lower-constraints testing.
This migration had more things to fix compared to the previous migration from Ubuntu Xenial to Bionic. One reason for that was Ubuntu Focal and Python dependencies dropping Python 2.7 support, along with MySQL 8.0 compatibility. OpenStack already dropped Python 2.7 in the Ussuri release, but the lower constraints of OpenStack dependencies were not updated to their Python 3-only versions because many of them were not Python 3-only at that time. So in Ubuntu Focal, those dependency versions are Python 3-only, which caused many failures in our lower-constraints jobs.
A few of the key issues we had to fix for this migration are:
Fixing these bugs took us a lot of time, which is why this migration was late and missed the initial deadlines.
All the work for this migration are tracked on: https://storyboard.openstack.org/#!/story/2007865
All changes are: https://review.opendev.org/q/topic:%2522migrate-to-focal%2522+(status:open+OR+status:merged)
If your 3rd party CI jobs are still not migrated to zuulv3 then you need to first migrate legacy jobs to zuulv3 native. Refer to this community-goal for details.
For zuulv3 native jobs, like the upstream jobs, you need to switch the job's nodeset from Ubuntu Bionic to Ubuntu Focal.
The diagram below gives a quick glance at changing the nodeset to Ubuntu Focal:
If you want to verify the nodeset used in your zuul jobs, you can see the hostname and label in job-output.txt.
In the same way, you can migrate your third-party CI to Focal. If a third-party job uses the base job without overriding the ‘nodeset’, then the job is automatically switched to Focal. If the job overrides the ‘nodeset’, then you need to switch to a Focal nodeset as shown above. All the Ubuntu Focal nodesets, from single-node to multinode jobs, are defined in devstack.
We encourage all the 3rd party jobs to migrate to Focal as soon as possible, as devstack will not support Bionic-related fixes from Victoria onwards.
There are many dependency constraints that need to be bumped to upgrade an OpenStack cloud to the Victoria release. To know all the compatible versions, check the project-specific patches merged from here or from Bug #1886298. This can help in preparing for smooth upgrades.
I would like to convey special thanks to everyone who helped with this goal and made it possible to complete it within the Victoria cycle itself.
Cloud networking is an important element within all types of cloud building – public, private, or hybrid. For private clouds from VEXXHOST, our open source networking choice is OpenStack Neutron. We believe that Neutron brings in great value for enterprises in building the ‘central nervous system’ of their cloud. Let us see why.
Neutron is an extremely powerful networking project of OpenStack. It is considered complex by many users but let me reassure you that its capabilities make it a virtual powerhouse like nothing else out there. We have a previous post that lays out the basics of OpenStack Neutron.
OpenStack Neutron can help you create virtual networks, firewalls, routers, and more. It is flexible and secure. With it, OpenStack can offer network-connectivity-as-a-service. Neutron also helps other OpenStack projects manage interface devices through its API.
Here is a breakdown of a few points mentioned above and how they benefit enterprises.
OpenStack Neutron provides cloud tenants with a flexible API, which helps them build strong networking topologies while also allowing them to configure advanced network policies, and there is no unnecessary vendor lock-in either. A use-case scenario of this capability for enterprises is that they can create multi-tier topologies for web applications.
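For example, a minimal openstacksdk sketch of building a small two-tier topology through the Neutron API might look like the following; the cloud name and CIDRs are assumptions, not a prescribed layout.

# Minimal two-tier network topology via the Neutron API (openstacksdk).
import openstack

conn = openstack.connect(cloud="my-private-cloud")    # entry from clouds.yaml

web_net = conn.network.create_network(name="web-tier")
web_subnet = conn.network.create_subnet(
    name="web-subnet", network_id=web_net.id,
    ip_version=4, cidr="10.10.1.0/24")                 # Neutron handles the IPAM

db_net = conn.network.create_network(name="db-tier")
db_subnet = conn.network.create_subnet(
    name="db-subnet", network_id=db_net.id,
    ip_version=4, cidr="10.10.2.0/24")

router = conn.network.create_router(name="app-router")
conn.network.add_interface_to_router(router, subnet_id=web_subnet.id)
conn.network.add_interface_to_router(router, subnet_id=db_subnet.id)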
Neutron gives organizations peace of mind regarding security and segmentation as it enables single-tenant networks. These fully isolated networks work almost like having your own secure control switch to the servers, with no possibility of anyone else accessing them. Moreover, these connections can be segmented, which allows each VM on a given hypervisor to be private to its respective network.
With Neutron for cloud networking, enterprises can leverage automatic IP address management and ensure consistency. This means that you don’t have to manage IP addresses manually, and it allows for consistency between the system and the documentation. Another advantage of these dynamic IP addresses is that the possibility of manipulating an IP by blocking the layer above is eliminated.
Did you know that enterprises can use OpenStack Neutron for bare metal scaling? Yes, it is true. Each rack of the system works as a network of its own. The vast scheduling network enables these racks to be interconnected with each other. This capability also allows the system to assign appropriate IP addresses.
Overall, Neutron works as a safe, reliable, and flexible cloud networking option for businesses.
VEXXHOST provides Neutron as our open source solution for networking with private cloud. We also provide various other OpenStack-based services for our clients across the globe. If you want to know more about our services and solutions, contact our team. Improve your knowledge of private clouds from our ever-evolving and dedicated resource page.
The post Why Cloud Networking with OpenStack Neutron Works Great for Enterprises appeared first on VEXXHOST.
Look back at our Pushing Keystone over the Edge presentation from the OpenStack Summit. Many of the points we make are problems faced by any application trying to scale across multiple datacenters. Cassandra is a database designed to deal with this level of scale, so Cassandra may well be a better choice than MySQL or another RDBMS as a datastore for Keystone. What would it take to enable Cassandra support for Keystone?
Let’s start with the easy part: defining the tables. Let’s look at how we define the Federation back end for SQL. We use SQLAlchemy to handle the migrations; we will need something comparable for Cassandra Query Language (CQL), but we also need to translate the table definitions themselves.
Before we create the tables, we need to create a keyspace. I am going to make separate keyspaces for each of the subsystems in Keystone: Identity, Assignment, Federation, and so on. Here’s the Federation one:
CREATE KEYSPACE keystone_federation WITH replication = {'class': 'NetworkTopologyStrategy', 'datacenter1': '3'} AND durable_writes = true;
The Identity provider table is defined like this:
idp_table = sql.Table(
'identity_provider',
meta,
sql.Column('id', sql.String(64), primary_key=True),
sql.Column('enabled', sql.Boolean, nullable=False),
sql.Column('description', sql.Text(), nullable=True),
mysql_engine='InnoDB',
mysql_charset='utf8')
idp_table.create(migrate_engine, checkfirst=True)
The comparable CQL to create a table would look like this:
CREATE TABLE identity_provider (id text PRIMARY KEY, enabled boolean, description text);
However, when I describe the schema to view the table definition, we see that there are many tuning and configuration parameters that are defaulted:
CREATE TABLE federation.identity_provider (
id text PRIMARY KEY,
description text,
enabled boolean
) WITH additional_write_policy = '99p'
AND bloom_filter_fp_chance = 0.01
AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
AND cdc = false
AND comment = ''
AND compaction = {'class': 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 'max_threshold': '32', 'min_threshold': '4'}
AND compression = {'chunk_length_in_kb': '16', 'class': 'org.apache.cassandra.io.compress.LZ4Compressor'}
AND crc_check_chance = 1.0
AND default_time_to_live = 0
AND extensions = {}
AND gc_grace_seconds = 864000
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair = 'BLOCKING'
AND speculative_retry = '99p';
I don’t know Cassandra well enough to say if these are sane defaults to have in production. I do know that someone, somewhere, is going to want to tweak them, and we are going to have to provide a means to do so without battling the upgrade scripts. I suspect we are going to want to only use the short form (what I typed into the CQL prompt) in the migrations, not the form with all of the options. In addition, we might want an if not exists clause on the table creation to allow people to make these changes themselves. Then again, that might make things get out of sync. Hmmm.
There are three more entities in this back end:
CREATE TABLE federation_protocol (id text, idp_id text, mapping_id text, PRIMARY KEY(id, idp_id) );
CREATE TABLE mapping (id text PRIMARY KEY, rules text);
CREATE TABLE service_provider ( auth_url text, id text primary key, enabled boolean, description text, sp_url text, RELAY_STATE_PREFIX text);
One thing that is interesting is that we will not be limiting the ID fields to 32, 64, or 128 characters. There is no performance benefit to doing so in Cassandra, nor is there any way to enforce the length limits. From a Keystone perspective, there is not much value either; we still need to validate the UUIDs in Python code. We could autogenerate the UUIDs in Cassandra, and there might be some benefit to that, but it would diverge from the logic in the Keystone code, and explode the test matrix.
There is only one foreign key in the SQL section; the federation protocol has an idp_id that points to the identity provider table. We’ll have to accept this limitation and ensure the integrity is maintained in code. We can do this by looking up the Identity provider before inserting the protocol entry. Since creating a Federated entity is a rare and administrative task, the risk here is vanishingly small. It will be more significant elsewhere.
For access to the database, we should probably use Flask-CQLAlchemy. Fortunately, Keystone is already a Flask based project, so this makes the two projects align.
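To make that concrete, here is a minimal sketch of what the identity provider model could look like with cqlengine, the object mapper bundled with the DataStax Python driver that Flask-CQLAlchemy wraps. This is only an assumption of the shape such code could take, not actual Keystone code; the host and keyspace settings are placeholders.

# A hedged sketch of a cqlengine model for the identity_provider table.
from cassandra.cqlengine import columns, connection, management
from cassandra.cqlengine.models import Model


class IdentityProvider(Model):
    __keyspace__ = "keystone_federation"
    id = columns.Text(primary_key=True)
    enabled = columns.Boolean()
    description = columns.Text()


# Point cqlengine at the cluster and create the table if it is missing.
connection.setup(["127.0.0.1"], default_keyspace="keystone_federation",
                 protocol_version=4)
management.sync_table(IdentityProvider)

IdentityProvider.create(id="example-idp", enabled=True,
                        description="Example identity provider")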
For migration support, it looks like the best option out there is cassandra-migrate.
An effort like this would best be started out of tree, with an expectation that it would be merged in once it had shown a degree of maturity. Thus, I would put it into a namespace that would not conflict with the existing keystone project. The python imports would look like:
from keystone.cassandra import migrations
from keystone.cassandra import identity
from keystone.cassandra import federation
This could go in its own git repo and be separately pip installed for development. The entrypoints would be registered such that the configuration file would have entries like:
[application_credential]
driver = cassandra

Any tuning of the database could be put under a [cassandra] section of the conf file, or tuning for individual sections could be in keys prefixed with cassandra_ in the appropriate sections, such as application_credential as shown above.
It might be interesting to implement a Cassandra token backend and use the default_time_to_live value on the table to control the lifespan and automate the cleanup of the tables. This might provide some performance benefit over the fernet approach, as the token data would be cached. However, the drawbacks due to token invalidation upon change of data would far outweigh the benefits unless the TTL was very short, perhaps 5 minutes.
Just making it work is one thing. In a follow on article, I’d like to go through what it would take to stretch a cluster from one datacenter to another, and to make sure that the other considerations that we discussed in that presentation are covered.
Feedback?
For the first time, the November TOP500 list (published to coincide with Supercomputing 2020) includes fully OpenStack-based Software-Defined Supercomputers:
Drawing on experience including from the SKA Telescope Science Data Processor Performance Prototyping Platform and Verne Global's hpcDIRECT project, StackHPC has helped bootstrap and is providing support for these OpenStack deployments. They are deployed and operated using OpenStack Kayobe and OpenStack Kolla-Ansible.
A key part of the solution is being able to deploy an OpenHPC-2.0 Slurm cluster on server infrastructure managed by OpenStack Ironic. The Dell C6420 servers are imaged with CentOS 8, and we use our OpenHPC Ansible role to both configure the system and build images. Updated images are deployed in a non-impacting way through a custom Slurm reboot script.
With OpenStack in control, you can quickly rebalance what workloads are deployed. Users can move capacity between multiple Bare Metal, Virtual Machine and Container based workloads. In particular, OpenStack Magnum provides on demand creation of Kubernetes clusters, an approach popularised by CERN.
In addition to user workloads, the solution interacts with iDRAC and Redfish management interfaces to control server configurations, remediate faults and deliver overall system metrics. This was critical in optimising the data centre environment and resulted in the high efficiency achieved in the TOP500 list.
For more details, please watch our recent presentation from the OpenInfra Summit:
If you would like to get in touch we would love to hear from you. Reach out to us via Twitter or directly via our contact page.
RDO Victoria Released
The RDO community is pleased to announce the general availability of the RDO build for OpenStack Victoria for RPM-based distributions, CentOS Linux and Red Hat Enterprise Linux. RDO is suitable for building private, public, and hybrid clouds. Victoria is the 22nd release from the OpenStack project, which is the work of more than 1,000 contributors from around the world.
The release is already available on the CentOS mirror network at http://mirror.centos.org/centos/8/cloud/x86_64/openstack-victoria/.
The RDO community project curates, packages, builds, tests and maintains a complete OpenStack component set for RHEL and CentOS Linux and is a member of the CentOS Cloud Infrastructure SIG. The Cloud Infrastructure SIG focuses on delivering a great user experience for CentOS Linux users looking to build and maintain their own on-premise, public or hybrid clouds.
All work on RDO and on the downstream release, Red Hat OpenStack Platform, is 100% open source, with all code changes going upstream first.
PLEASE NOTE: RDO Victoria provides packages for CentOS8 and Python 3 only. Please use the Train release for CentOS7 and Python 2.7.
Interesting things in the Victoria release include:
Other highlights of the broader upstream OpenStack project may be read via https://releases.openstack.org/victoria/highlights.
Contributors
During the Victoria cycle, we saw the following new RDO contributors:
Amy Marrich (spotz)
Daniel Pawlik
Douglas Mendizábal
Lance Bragstad
Martin Chacon Piza
Paul Leimer
Pooja Jadhav
Qianbiao NG
Rajini Karthik
Sandeep Yadav
Sergii Golovatiuk
Steve Baker
Welcome to all of you and Thank You So Much for participating!
But we wouldn’t want to overlook anyone. A super massive Thank You to all 58 contributors who participated in producing this release. This list includes commits to rdo-packages, rdo-infra, and redhat-website repositories:
Adam Kimball
Ade Lee
Alan Pevec
Alex Schultz
Alfredo Moralejo
Amol Kahat
Amy Marrich (spotz)
Arx Cruz
Bhagyashri Shewale
Bogdan Dobrelya
Cédric Jeanneret
Chandan Kumar
Damien Ciabrini
Daniel Pawlik
Dmitry Tantsur
Douglas Mendizábal
Emilien Macchi
Eric Harney
Francesco Pantano
Gabriele Cerami
Gael Chamoulaud
Gorka Eguileor
Grzegorz Grasza
Harald Jensås
Iury Gregory Melo Ferreira
Jakub Libosvar
Javier Pena
Joel Capitao
Jon Schlueter
Lance Bragstad
Lon Hohberger
Luigi Toscano
Marios Andreou
Martin Chacon Piza
Mathieu Bultel
Matthias Runge
Michele Baldessari
Mike Turek
Nicolas Hicher
Paul Leimer
Pooja Jadhav
Qianbiao.NG
Rabi Mishra
Rafael Folco
Rain Leander
Rajini Karthik
Riccardo Pittau
Ronelle Landy
Sagi Shnaidman
Sandeep Yadav
Sergii Golovatiuk
Slawek Kaplonski
Soniya Vyas
Sorin Sbarnea
Steve Baker
Tobias Urdin
Wes Hayutin
Yatin Karel
The Next Release Cycle
At the end of one release, focus shifts immediately to the next release, i.e., Wallaby.
Get Started
There are three ways to get started with RDO.
To spin up a proof of concept cloud, quickly, and on limited hardware, try an All-In-One Packstack installation. You can run RDO on a single node to get a feel for how it works.
For a production deployment of RDO, use TripleO and you’ll be running a production cloud in short order.
Finally, for those that don’t have any hardware or physical resources, there’s the OpenStack Global Passport Program. This is a collaborative effort between OpenStack public cloud providers to let you experience the freedom, performance and interoperability of open source infrastructure. You can quickly and easily gain access to OpenStack infrastructure via trial programs from participating OpenStack public cloud providers around the world.
Get Help
The RDO Project has the users@lists.rdoproject.org mailing list for RDO-specific users and operators. For more developer-oriented content we recommend joining the dev@lists.rdoproject.org mailing list. Remember to post a brief introduction about yourself and your RDO story. The mailing list archives are all available at https://mail.rdoproject.org. You can also find extensive documentation on RDOproject.org.
The #rdo channel on Freenode IRC is also an excellent place to find and give help.
We also welcome comments and requests on the CentOS devel mailing list and the CentOS and TripleO IRC channels (#centos, #centos-devel, and #tripleo on irc.freenode.net), however we have a more focused audience within the RDO venues.
Get Involved
To get involved in the OpenStack RPM packaging effort, check out the RDO contribute pages, peruse the CentOS Cloud SIG page, and inhale the RDO packaging documentation.
Join us in #rdo and #tripleo on the Freenode IRC network and follow us on Twitter @RDOCommunity. You can also find us on Facebook and YouTube.
If you haven’t already tried SkySQL, it is worth noting that you can now launch a Galera Cluster. SkySQL is an automated Database as a Service (DBaaS) solution to launch a Galera Cluster within Google Cloud Platform. Launching a Galera Cluster is currently a tech preview, and you are still eligible for USD$500 worth of credit, which should let you evaluate it for quite a while.
When you choose Transactions (SkySQL also supports Analytics, Both (HTAP), and Distributed SQL, which is also Galera Cluster), you’ll notice that you can launch the Galera Cluster tech preview in multiple regions: Americas, APAC, or EMEA. The cost for a Sky-4×15, which has 4 vCPUs and 15 GB of memory, is USD$0.6546/hour/node (and remember that you get a minimum of three Galera Cluster nodes plus one MaxScale node, which acts as a load balancer and endpoint for your application). You’ll also pay a little extra for storage (100 GB of SSD storage works out to USD$0.0698/hour across the three nodes). So overall, expect an estimated USD$1.9638/hour for the three Sky-4×15 nodes plus USD$0.0698/hour for the 100 GB of storage, bringing your total to USD$2.0336/hour.
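As a quick sanity check on the arithmetic above (treating the storage figure as the cluster-wide total):
echo "3 * 0.6546 + 0.0698" | bc    # prints 2.0336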
Once launched, you’ll note that the service state will be pending until all the nodes are up. During this time you will also have to whitelist the IP addresses that will access the endpoint. Doing so is extremely straightforward, as your address is detected automatically in the browser. You’ll probably need to add a few more for your application and so forth, but this too is extremely straightforward and very well documented.
You’re then given temporary service login credentials, and again, it is extremely well documented. You also get an SSL certificate to login with, and considering this is using the cloud, it makes absolute sense.
A quick point to note: you may see an error like the following when trying to connect to the MaxScale endpoint, especially if you’re using the MySQL 8 client: ERROR 1105 (HY000): Authentication plugin 'MariaDBAuth' failed. The easy fix is of course to use the proper client library. You are also automatically connected to one of the three nodes in the cluster.
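For example, connecting with the MariaDB client looks roughly like this; the host, port, user, and certificate path below are placeholders for the values SkySQL gives you, not real ones:
mariadb --host example.mdb.skysql.net --port 5001 --user DB00000001 -p --ssl-ca ~/Downloads/skysql_chain.pem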
Overall, when we evaluated it, you end up with 10.5.5-3-MariaDB-enterprise-log, which means it also comes with a few handy additions not present in the community versions: GCache encryption, BlackBox, and non-blocking DDL (wsrep_osu_method = NBO is an option). When you run SHOW VARIABLES you will notice a few new additions, including wsrep_black_box_name, wsrep_black_box_size, and an obviously new wsrep_provider, libgalera_enterprise_smm.so.
Why not take SkySQL for a spin? It is a really easy way to launch a Galera Cluster, and you also have a $500 credit. Load some data. Send some feedback. And if you’re interested in learning more, why not attend: Achieving uninterrupted availability with clustering and transaction replay on November 17 at 10 a.m. PT and 4 p.m. CET? This and more will be discussed at the webinar.
ARM servers are increasingly present in our day-to-day lives, with usage ranging from minimal IoT devices to huge computing clusters. So we decided to put Windows support for ARM64 cloud images to the test, with two primary focuses:
Our friends from https://amperecomputing.com kindly provided the computing resources that we used to check the current state of Windows virtualization on ARM64.
The test lab consisted of 3 Ampere Computing EMAG servers (Lenovo HR330A – https://amperecomputing.com/emag), each with 32 ARM64 processors, 128 GB of RAM and 512 GB SSD.
Cloudbase-Init is a provisioning agent designed to initialize and configure guest operating systems on various platforms: OpenStack, Azure, Oracle Cloud, VMware, Kubernetes CAPI, OpenNebula, Equinix Metal (formerly: Packet), and many others.
Building and running Cloudbase-Init requires going through multiple layers of an OS ecosystem, as it needs a proper build environment, C compiler for Python and Python extensions, Win32 and WMI wrappers, a Windows service wrapper and an MSI installer.
This complexity made Cloudbase-Init the perfect candidate for checking the state of the toolchain ecosystem on Windows ARM64.
EMAG servers come with CentOS 7 preinstalled, so the first step was to have a Windows ARM64 OS installed on them.
Windows Server ARM64 images are unfortunately not publicly available, so the best option is to use the Windows 10 PRO ARM64 images available for download through Windows Insider (https://insider.windows.com/).
As there is no ISO available on the Windows Insider website, we had to convert the VHDX to a RAW file using qemu-img.exe, boot a Linux live ISO that includes the dd tool (Ubuntu works well for this) on the EMAG server, and copy the RAW file content directly onto the primary disk.
For the dd step, we needed a Windows machine on which to download and convert the Windows 10 PRO ARM64 VHDX, plus two USB sticks: one for the Ubuntu live ISO and one for the Windows 10 PRO ARM64 RAW file.
Rufus was used for creating the Ubuntu live USB and for copying the RAW file to the other USB stick. Note that the second USB stick must be at least 32 GB in size to accommodate the ~25 GB Windows RAW file.
Tools used for the dd step:
After the dd process succeeded, a server reboot was required. The first boot took a while for Windows device initialization, followed by the usual “Out of the box experience”.
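For reference, the VHDX conversion and dd copy described above looked roughly like the following. This is only a sketch; the file name and device names are assumptions that must be adapted to your setup (double-check the dd target before running it):
# On the Windows machine: convert the downloaded VHDX to a RAW image
qemu-img.exe convert -f vhdx -O raw Windows10_ARM64.vhdx windows10_arm64.raw
# On the EMAG server, booted from the Ubuntu live USB: copy the RAW image from the second USB stick onto the primary disk
sudo dd if=/dev/sdb of=/dev/sda bs=4M status=progress conv=fsync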
The following steps show how we built Cloudbase-Init for ARM64. As a side note, Windows 10 ARM64 has a built-in emulator for x86, but not for x64. In practice, we could have run the x86 version of Cloudbase-Init on the system, but it would have run very slowly and some features would have been limited by the emulation (such as starting native processes).
The Cloudbase-Init ecosystem consists of these main building blocks:
Toolchain required:
Python 3.x for ARM64 can be built using Visual Studio 2017 or 2019. In our case, we used the freely available Visual Studio 2019 Community Edition, downloadable from https://visualstudio.microsoft.com/downloads/.
The required toolchain / components for Visual Studio can be installed using this vsconfig.txt. This way, we make sure that the build environment is 100% reproducible.
Python source code can be found here: https://github.com/python/cpython.
To make the build process even easier, we leveraged GitHub Actions to easily build Python for ARM64. An example workflow can be found here: https://github.com/cloudbase/cloudbase-init-arm-scripts/blob/main/.github/workflows/build.yml.
Also, prebuilt archives of Python for Windows ARM64 are available for download here: https://github.com/ader1990/CPython-Windows-ARM64/releases.
Notes:
Python setuptools is a Python package that handles the “python setup.py install” workflow.
Source code can be found here: https://github.com/pypa/setuptools.
The following patches are required for setuptools to work:
Installation steps for setuptools (Python and Visual Studio are required):
set VCVARSALL="C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Auxiliary\Build\vcvarsall.bat"
set CL_PATH="C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.27.29110\bin\HostX86\ARM64\cl.exe"
set MC_PATH="C:\Program Files (x86)\Windows Kits\10\bin\10.0.17763.0\arm64\mc.exe"

call %VCVARSALL% amd64_arm64 10.0.17763.0 & set

git clone https://github.com/ader1990/setuptools 1>nul
IF %ERRORLEVEL% NEQ 0 EXIT 1

pushd setuptools
git checkout am_64

echo "Installing setuptools"
python.exe bootstrap.py 1>nul 2>nul
IF %ERRORLEVEL% NEQ 0 EXIT 1

%CL_PATH% /D "GUI=0" /D "WIN32_LEAN_AND_MEAN" /D _ARM64_WINAPI_PARTITION_DESKTOP_SDK_AVAILABLE launcher.c /O2 /link /MACHINE:ARM64 /SUBSYSTEM:CONSOLE /out:setuptools/cli-arm64.exe
IF %ERRORLEVEL% NEQ 0 EXIT 1

python.exe setup.py install 1>nul
IF %ERRORLEVEL% NEQ 0 EXIT 1
popd
Python pip is required for easier management of Cloudbase-Init’s requirements installation and wheels building.
Python’s wheel package is required to build wheels. Wheels are the pre-built versions of Python packages. There is no need to have a compiler to install the package from source on the exact system version the wheel has been built for.
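As a small illustration of the wheel workflow (not taken from the original post), a wheel can be built once on a machine that has the compiler and then installed elsewhere without one; the package name in the second command is just an example:
python.exe -m pip wheel --no-deps -w wheels .
python.exe -m pip install --no-index --find-links=wheels cloudbase-init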
Pip sources can be found here: https://github.com/pypa/pip.
The following pip patch is required: https://github.com/ader1990/pip/commit/0559cd17d81dcee43433d641052088b690b57cdd.
The patch introduces two binaries required for ARM64, which were built from: https://github.com/ader1990/simple_launcher/tree/win_arm64
This patched version of pip can use the wheel to create proper binaries for ARM64 (like setuptools).
Installation steps for wheel (Python is required):
echo "Installing pip" python.exe -m easy_install https://github.com/ader1990/pip/archive/20.3.dev1.win_arm64.tar.gz 1>nul 2>nul IF %ERRORLEVEL% NEQ 0 EXIT
Python PyWin32 package is a wrapper for (almost) all Win32 APIs from Windows. It is a behemoth from the source code perspective, with Cloudbase-Init using a limited amount of Win32 APIs via PyWin32.
Source code can be found here: https://github.com/mhammond/pywin32.
The following patches are required:
Installation steps for PyWin32 (Python 3.8 and Visual Studio 2019 are required):
echo "Installing pywin32" git clone https://github.com/ader1990/pywin32 1>nul IF %ERRORLEVEL% NEQ 0 EXIT 1 pushd pywin32 git checkout win_arm64 IF %ERRORLEVEL% NEQ 0 EXIT 1 pushd "win32\src" %MC_PATH% -A PythonServiceMessages.mc -h . popd pushd "isapi\src" %MC_PATH% -A pyISAPI_messages.mc -h . popd mkdir "build\temp.win-arm64-3.8\Release\scintilla" 1>nul 2>nul echo '' > "build\temp.win-arm64-3.8\Release\scintilla\scintilla.dll" python.exe setup.py install --skip-verstamp IF %ERRORLEVEL% NEQ 0 EXIT 1 popd
The build process takes quite a lot of time, at least half an hour, so we took a(nother) cup of coffee and enjoyed the extra time.
The patches hardcode some compiler quirks for Visual Studio 2019 and remove some unneeded extensions from the build. There is work in progress to prettify and upstream the changes.
Now, as all the previous steps have been completed, it is time to finally build Cloudbase-Init. Thank you for your patience.
Source code can be found here: https://github.com/cloudbase/cloudbase-init
Installation steps for Cloudbase-Init (Python and Visual Studio are required):
echo "Installing Cloudbase-Init" git clone https://github.com/cloudbase/cloudbase-init 1>nul IF %ERRORLEVEL% NEQ 0 EXIT 1 pushd cloudbase-init echo "Installing Cloudbase-Init requirements" python.exe -m pip install -r requirements.txt 1>nul IF %ERRORLEVEL% NEQ 0 EXIT 1 python.exe -m pip install . 1>nul IF %ERRORLEVEL% NEQ 0 EXIT 1 popd
After the installation steps complete, the cloudbase-init.exe ARM64 executable wrapper is available.
Cloudbase-Init usually runs as a service at every boot. As cloudbase-init.exe is a normal executable, it needs a service wrapper for Windows. A service wrapper is a small program that implements the hooks for the Windows service actions, like start, stop and restart.
Source code can be found here: https://github.com/cloudbase/OpenStackService
The following patch was required: https://github.com/ader1990/OpenStackService/commit/a48c4e54b3f7db7d4df163a6d7e13aa0ead4a58b
For an easier build process, a GitHub actions workflow file can be found here: https://github.com/ader1990/OpenStackService/blob/arm64/.github/workflows/build.yml
A prebuilt release binary for OpenStackService ARM64 is available for download here: https://github.com/ader1990/OpenStackService/releases/tag/v1.arm64
Now we are ready to use Cloudbase-Init for guest initialization on Windows 10 PRO ARM64.
Main takeaways:
The post Windows on ARM64 with Cloudbase-Init appeared first on Cloudbase Solutions.
Due to the COVID-19 pandemic, the OpenStack PTG for Wallaby development planning and discussions was held virtually from October 26th to 30th, 2020. Overall we had a good discussion and another successful virtual PTG, though most of us missed the face-to-face interaction. This blog covers the policy popup (consistent RBAC) discussions that happened in the Forum and in the PTG sessions on Monday and Tuesday.
Progress of consistent RBAC:
We discussed the current and possible future challenges of migrating to the new policy. The policy file in JSON format is one known challenge, and we talked about a current workaround and the long-term plan.
Deprecation warnings are still an issue, and a lot of warnings are logged; this is tracked as a nova bug. Also, the HTML version of the policy documentation does not show the deprecated rule and the reason for the deprecation (example: https://docs.openstack.org/nova/latest/configuration/policy.html). We need to add that to these docs too.
A clear step-by-step document on how to use system scope in clouds.yaml, as well as in general with all migration steps, is much needed.
We also asked whether any deployment has migrated to the new policy yet, but there are none so far.
We carried the Forum session discussion into the PTG with a few extra topics.
We talked about it and decided that it would be great to do this in advance, before projects start moving towards the new policy, and that having this as a community goal in Wallaby will help the effort move faster. I proposed this as a goal at the TC PTG and it was agreed to select it for Wallaby (goal proposal). We also need to update DevStack for the Neutron policy file, which is policy.json.
There is no ideal solution for the deprecation warnings, which are numerous in the case of the new policy work. We cannot stop logging warnings completely. We discussed providing a config option to disable the warnings (enabled by default), and only for default value changes, not for policy name changes. A policy name change is critical even when the policy is overridden, which is why it should not be switchable. This way operators can disable the warnings after seeing them for the first time, if they are too noisy.
It is challenging to adopt the new policy in Horizon when some projects have new policies and some do not. We left this for now and will continue brainstorming once we have investigated the usage of system-scoped tokens with both the new and old policies. For now, amotoki proposed a workaround of keeping a policy file with the deprecated rules as overridden rules.
In the end, we discussed which items should be targeted as part of the new policy work and which as separate efforts. Below is the list, and I will be documenting it on the wiki.
Due to the COVID-19 pandemic, the OpenStack PTG for Wallaby development planning and discussions was held virtually from October 26th to 30th, 2020. Overall we had a good discussion and another successful virtual PTG, though most of us missed the face-to-face interaction. This blog covers the QA (Quality Assurance) discussions that happened on Monday and Tuesday.
We talked about the retrospective of the last cycle. One main challenge we discussed is slow reviews in QA due to fewer contributors. Below are the action items that came up for improving things:
Action Items:
With daylight saving time ending in November, we decided to shift the QA office hour an hour later, to 14:00 UTC.
The Tempest and Horizon teams decided to bring the single Horizon test back into the Tempest tree and retire the tempest-horizon plugin, which was too much maintenance for a single test. It is unlikely to happen, but if the Tempest team plans to remove this test in the future, the Horizon team needs to be consulted first.
Action items:
For now, validation resource automation and skipping the ssh part via run_validation are done in API tests only, and this proposal is to extend them to scenario tests as well. There is no reason not to do that, and it will help run Tempest scenario tests on images where ssh is not allowed. There will likely be situations where complete tests need to be skipped, but we will handle those case by case; at least skipping automatically will help testers avoid explicitly adding the scenario tests to a skip list.
Action Items:
We are lacking maintainers in the Patrole project, which is a big challenge for releasing a stable version of it. Sonia and Doug (already helping) will try to spend more bandwidth on Patrole. Another item is reducing the Patrole test execution time. We discussed a few options, such as having the policy engine return the API result based on a flag, or using a fake driver, but we did not settle on any option yet as they need more investigation.
Action items:
This is just a thought. There is a TC tag, ‘assert:supports-zero-downtime-upgrade’, which is not used by any project, and we do not have any testing framework that can verify it. We talked about whether we can do such testing in Grenade. Before moving forward with discussion and investigation, we checked whether anyone could volunteer for this work. As of now, there is no volunteer.
We currently only test with the CirrOS guest image when creating test VMs, and the proposal from Paras is to try more images to cover different configuration scenarios in upstream testing. This will help catch more failures at the upstream gate itself, compared to the current situation where most of them are reported from downstream testing.
Action items:
The ‘primary’ and ‘alt_primary’ credentials in Tempest are hardcoded to non-admin and are assigned the configured ‘tempest_role’. There is no way to assign a different role to either of these credentials. The idea here is to make the ‘primary’ and ‘alt_primary’ credentials configurable so that different deployments can configure them with different roles in various combinations. We will add two config options similar to ‘tempest_role’, defaulting to an empty list, so that the current default behavior is preserved. There is no backward-incompatible change; it is purely an additional capability.
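A rough sketch of what this could look like in tempest.conf; the two commented-out option names are purely illustrative placeholders, not names the Tempest team has agreed on:
[auth]
# existing option: role(s) assigned to dynamically created test credentials
tempest_roles = member
# hypothetical new options discussed above, empty by default so current behavior is unchanged:
# primary_credential_roles = creator
# alt_credential_roles = reader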
Action Items:
We talked about a few method changes to make the scenario manager stable. If a method is used only by plugins and not by Tempest itself, we do not actually need to move it into Tempest; it can stay on the plugin side.
We ran out of time here and will continue the brainstorming in office hours or similar (see this etherpad: https://etherpad.opendev.org/p/tempest-scenario-manager).
Action Items:
We did not prioritize the items as such, but listed all work items in the etherpad along with the Victoria cycle backlog.
Due to the COVID-19 pandemic, the OpenStack PTG for Wallaby development planning and discussions was held virtually from October 26th to 30th, 2020. Overall we had a good discussion and another successful virtual PTG, though most of us missed the face-to-face interaction. This blog covers the TC discussions that happened on Tuesday and Friday.
Four projects are still pending leadership assignment. We discussed the next steps. If you would like to help with any of these, or you use them in your cloud or distribution, this is the right time to step up:
1. Karbor
2. Qinling
3. Searchlight
4. Placement
There are a couple of things the TC finished in the last cycle; a few of them are:
The TC is going to try (re-try) a weekly meeting every Thursday at 15:00 UTC.
Action Items:
This is one of the important items, and there is still no good progress on it. From the TC perspective, we talked about how to ask or motivate projects to migrate to OSC. The TC needs to be very clear on whether this is a strict policy or just a guideline. After a long discussion, we are going with the strategy below, which will be documented as a TC resolution.
Action items:
There are two proposals for the Wallaby community-wide goals. The first is the ‘oslo.rootwrap to oslo.privsep’ migration, which was already discussed in the last cycle and is all set to be selected. The second proposal came up during the PTG itself: during the policy popup PTG it emerged that deprecating the JSON format of the policy file would be a good preparatory step before projects move to the new RBAC policies. This will help operators migrate smoothly to the new policies. It does not involve much work and is fine to select as the second goal for the Wallaby cycle.
TC agreed to have the below goals for the Wallaby cycle:
Action items:
In the Victoria cycle, we started the process of auditing all the TC tags and cleaning them up. We removed the ‘tc:approved-release’ tag in the Victoria cycle. In this PTG we discussed two more tags.
1. assert:supports-zero-downtime-upgrade:
Currently, no project has this tag and there is no testing framework available for it. Testing for zero downtime is not easy in upstream testing. We decided to remove this tag, as it advertises something we aren’t doing. If anyone is interested in spending time on this in the future, we can add it back once projects start testing it and documenting it.
2. assert:supports-api-interoperability:
This tag indicates whether a project’s API is interoperable, which is also important for the interop trademark program. Only Nova has this tag today, and our goal is to encourage more projects to apply for it. During the discussion, we found that the tag needs to be clarified. For example, it is not about implementing microversions specifically, but about having any versioning scheme that makes feature (API) changes discoverable. It is also about how we change the API, not about how our APIs look today. As long as a service has some versioning mechanism to discover changes, follows the API SIG guidelines for interoperability, and tests this in a branchless way, that service can apply for this tag.
The TC will document this tag more clearly and encourage each project to start working toward applying for it.
Action Items:
This was one of the most exciting discussions for everyone. We hosted a cross-community meeting with the Kubernetes Steering Committee; Bob and Dims from the committee joined us. It started with a quick introduction from both teams, followed by discussion of the topics below:
The Kubernetes governance hierarchy is LF -> CNCF -> Kubernetes Steering Committee -> various SIGs/working groups. The Steering Committee consists of 7 elected members and doesn’t actually influence the direction of the code; that is left to the SIGs and the architecture committee. There is no chair in this committee, and it hosts biweekly private as well as public meetings. Each SIG (repo) team has approver and reviewer roles, where reviewers review the code and approvers are responsible for merging it. Naser explained the OpenStack governance model.
The SIG lead has to sign off and accept that they are willing to take ownership, handle maintenance, releases, and so on. There is general consensus across the Kubernetes leadership that as much work as possible should be delegated to subgroups and kept out of k/k. For the API interoperability challenge in a distributed model, Kubernetes has conformance tests that exercise the API, and vendors try to upload conformance results every release or dot release.
The release and CI side was also discussed, along with how COVID is impacting community health; the Kubernetes community has almost lost its independent and part-time contributors. Also, the Kubernetes community is now doing 3 releases per year instead of 4.
To stay connected, it would be a good idea to extend an invitation for the Kubernetes community to join the PTG.
We have three upstream opportunities defined for 2020, but there has been no help on any of them, nor on the previous (2018, 2019) opportunities. We started discussing whether we should continue this for 2021 or stop defining them and instead decide on an area to help when we have someone interested. Before deciding anything, mnaser will discuss this with the board of directors and get their opinion.
Action Items:
Currently, we have two active popup teams: 1. policy and 2. encryption. The TC checked the progress of both teams. The policy team is very active and finished some work in the Victoria cycle (Cyborg finished it and Barbican started), hosted Forum and PTG sessions, and discussed the Wallaby development plan. This team hosts a biweekly meeting to discuss and review progress.
The encryption team is also active; Josephine explained the progress on this, and the Glance spec is merged.
Both teams will continue in the Wallaby cycle.
There are still 19 teams that have yet to finish this audit; they are listed in the etherpad. We encourage all of the pending teams to finish the audit, and TC members will start following up with those projects every week.
Action Items:
We ran out of time, and all the pending topics (below) will be discussed in the regular TC meetings. The TC will skip this month’s meeting but will hold weekly meetings from November 12th onward.
Last week, thousands of community members participated in the Open Infrastructure Summit. This time, the commute for over 10,000 attendees was short, because just like every other 2020 conference, the Summit was virtual. Hosted by the Open Infrastructure Foundation (previously the OpenStack Foundation (OSF)—if you missed this announcement, check out the news), the Summit gathered people from over 120 countries and 30 different open source communities to collaborate around the next decade of open infrastructure.
The hallway track and networking activities were missed, but the week was still full of announcements and new (and growing!) users sharing their open source, production use cases. The event was free to attend, and this was only possible with the support of the Summit sponsors:
Headline: Canonical (ubuntu), Huawei, VEXXHOST
Premier: Cisco, Tencent Cloud
Exhibitor: InMotion Hosting, Mirantis, Red Hat, Trilio, VanillaStack, ZTE
Below is a snapshot of what you missed and what you may want to rewatch.
Like I mentioned earlier, the OSF opened the Summit with some big news. Jonathan Bryce, executive director, announced the Open Infrastructure Foundation (OIF) as the successor to the OSF during the opening keynotes on Monday, October 19. With support from over 60 founding members and 105,000 community members, the OIF remains focused on building open source communities to build software that runs in production. Bryce was joined by Mark Collier, OIF COO, who announced that the OIF board of directors approved four new Platinum Members (a record number of new Platinum Members approved at one time): Ant Group, Facebook Connectivity, FiberHome, and Wind River.
The OIF builds open source communities who write software that runs in production. The OpenStack and Kata Containers communities celebrated software releases, and dozens of users shared their open infrastructure production use cases.
Five days before the Summit, the OpenStack community released its 22nd version, Victoria. In Tuesday’s keynote, Kendall Nelson, chair of the First Contact SIG, talked about some of the features that landed including Ironic features for a smaller standalone footprint and supporting more systems at the edge. There were also features around hardware enablement and supporting FPGAs that she says will continue through the Wallaby cycle, which the upstream developers are discussing at the Project Teams Gathering (PTG) this week.
Right in time for the Summit, the Kata Containers community released its 2.0 version, including a rewrite of the Kata Containers agent to help reduce the attack surface and reduce memory overhead. The agent was rewritten in Rust, and users will see a 10x improvement in size, from 11MB to 300KB. Xu Wang, a member of the Kata Architecture Committee, joined the keynotes on Monday to talk about how Kata 2.0 is already running in production at Ant Group, home of the largest payment processor in the world as well as other financial services. At Ant Group, Kata Containers is running on thousands of nodes and over 10,000 CPU cores.
Ant Group is one of many users who shared information around their production use cases. Below are some highlights of the users who spoke. You can now watch all of the breakout and keynote sessions, and there will also be some Forum sessions uploaded in the coming days.
Production use cases:
The OIF announced its newest open infrastructure pilot project, OpenInfra Labs, a collaboration among universities and vendors to integrate and optimize open source projects in production environments and publish complete, reproducible stacks for existing and emerging workloads. Michael Daitzman, a contributor to the project, delivered a keynote introducing the project, thanking the community for their work with projects like OpenStack, Kubernetes, and Ceph, and inviting new contributors to get involved.
Magma, an open source mobile packet core project initiated by Facebook Connectivity, was front and center at the Summit last week. In the opening keynotes, Amar Padmanabhan, engineer at Facebook, introduced Magma and shared the community’s mission to bridge the digital divide and connect the next billion people to the Internet. The project was further discussed in a production use case from Mariel Triggs, the CEO of MuralNet, who talked about the connectivity issues that indigenous nations face and how her organization leverages Magma for an affordable way to keep them connected. Boris Renski, founder and CEO of FreedomFi, returned to the Summit keynote stage to show that building an LTE network with Magma is so easy, even a goat could learn to do it. And sure enough, the goat successfully deployed the network. I’m pretty sure the looks on these faces sum it all up.
Announced a few weeks ago, Verizon is running Wind River’s distribution of StarlingX in production for its 5G virtualized RAN. During Tuesday’s keynote, Ildiko Vancsa talked about their use case and why Verizon relies on StarlingX for ultra low latency, high availability, and zero-touch automated management.
Over 15 million compute cores are managed by OpenStack around the world. Imtiaz Chowdhury, cloud architect at Workday, talked about how their deployment has contributed to that growth with their own 400,000 core OpenStack deployment.
Additional OpenStack users talking about their production use cases include:
Volvo Cars shared their Zuul production use case to kick off the second day of Summit keynotes. Johannes Foufas and Albin Vass talked about how premium cars need premium tools, and Zuul is a premium tool. The team uses Zuul to build several software components including autonomous driving software, and Foufas says speculative merge and the prioritized queue system are two Zuul features their team relies on.
SK Telecom 5GX Labs won the 2020 Superuser Awards for their open infrastructure use case integrating multiple open source components in production, including Airship, Ceph, Kubernetes, and multiple components of OpenStack.
This was the first year the Superuser Awards ceremony was only held once, and there were eight organizations who shared production open infrastructure use cases that were reviewed by the community and advisors to determine the winner.
Learn how the 2020 Superuser Awards nominees are powering their organization’s infrastructure with open source in production:
If you missed any of the above sessions or announcements, check out the Open Infrastructure Foundation YouTube channel. Then, join the global Open Infrastructure community, and share your own personal open source story using #WeAreOpenInfra on social media.
The post Virtual Open Infrastructure Summit Recap appeared first on Superuser.
OpenStack Victoria, the latest version of the global open source project, was released on the 14th of October 2020. This is the 22nd iteration of OpenStack. At VEXXHOST, we couldn’t be more excited about this much-awaited release. We are also proud to inform you, our new private cloud operations and services are already running with the latest version.
The release of Victoria coincided with the Open Infrastructure Summit 2020, held virtually this time. OpenStack received 20,059 code changes for Victoria, from 790 developers belonging to 160 organizations in 45 countries. A large global open source community backs OpenStack, and this level of contribution keeps OpenStack ranked among the top three open source projects worldwide.
The main theme behind OpenStack Victoria is its work on native integration with Kubernetes. The update also supports diverse architectures and provides enhanced networking capabilities.
OpenStack’s prominent strength is that it optimizes the performance of virtual machines and bare metal, and with the Victoria release, this is further boosted. In addition to several enhancements to OpenStack’s stable and reliable core and highly flexible integration options with other open source projects, the new release offers the following innovative features:
OpenStack Victoria provides greater native integration with Kubernetes through the platform’s different modules. For instance, Ironic’s bare-metal deployment process has been split into several phases to better integrate with Kubernetes and standalone use. This marks an important trend, since bare metal via Ironic saw 66% more activity over the OpenStack Ussuri cycle. The split also decomposes the various deployment steps and opens new possibilities, such as provisioning without BMC credentials and DHCP-less deployments.
Kuryr, a solution that bridges container framework network models and the OpenStack network abstraction, now supports custom resource definitions (CRDs). Kuryr will no longer use annotations to store data about OpenStack objects in the Kubernetes API; instead, corresponding CRDs (KuryrPort, KuryrLoadBalancer, and KuryrNetworkPolicy) are created.
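Because these are ordinary CRDs, the objects can be inspected directly with kubectl; a small sketch, assuming the default lower-case plural resource names derived from the CRD names above:
kubectl get crd | grep kuryr
kubectl get kuryrports --all-namespaces
kubectl get kuryrloadbalancers --all-namespaces
kubectl get kuryrnetworkpolicies --all-namespaces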
Tacker, the OpenStack service for NFV orchestration, now supports additional Kubernetes objects and VNF LCM APIs. It also provides an additional method for reading Kubernetes object files and CNF artifact definitions in the CSAR package. Tacker also offers more extensive standard features for ETSI NFV-SOL (such as lifecycle management, scale-up, and VNF management) and a Fenix plug-in for rolling updates of VNFs using Fenix and Heat.
The Cyborg API now supports a PATCH call that allows direct programming of FPGAs with pre-uploaded bitstreams. The Victoria release also adds support for Intel QAT and Inspur FPGA accelerators.
Octavia now supports HTTP/2 over TLS via Application Layer Protocol Negotiation (ALPN). It is now also possible to specify minimum TLS versions for listeners and pools.
Vitrage now supports loading data via the standard TMF639 Resource Inventory Management API.
Neutron now offers a metadata service that works over IPv6; it can be used without a config drive in networks that are entirely IPv6-based. Neutron now also supports Distributed Virtual Routers (DVR) for flat network users, floating IP port forwarding for the OVN back end, and availability zones for routers with OVN.
Kuryr now supports automatic detection of the VM bridging interface in nested configurations.
Octavia’s load balancer pools now support version two of the PROXY protocol. This makes it possible to pass client information to participating servers when using TCP protocols. This version provides improved performance when establishing new connections to participating servers using the PROXY protocol, especially while using IPv6.
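For illustration, creating a pool that speaks PROXY protocol version 2 to its members would look roughly like this with the OpenStack client (a sketch only, assuming a listener named web-listener already exists and that PROXYV2 is the accepted protocol value):
openstack loadbalancer pool create --name web-pool --protocol PROXYV2 --lb-algorithm ROUND_ROBIN --listener web-listener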
OpenStack Victoria arrives at a time when OpenStack is officially a decade old. VEXXHOST is proud to have been a part of the OpenStack journey for nine of those ten years. We are always among the earliest to implement upgrades in our cloud systems, and our new private clouds running on Victoria are a testament to that. We’re very proud to offer the latest OpenStack upgrades for your private cloud and public cloud solutions. Contact us with all your cloud-related queries; we’re all ears!
The post OpenStack Victoria is Here! Let’s Get To Know Version 22 appeared first on VEXXHOST.
And that’s a wrap! Open Infrastructure Summit 2020 comes to an end and it was quite an eventful few days, won’t you agree?
Owing to the pandemic situation, the event was held virtually this time, from 19th to 23rd of October. We definitely missed the face-to-face interaction but feel that the virtual summit was a different kind of vibrant experience altogether. At VEXXHOST, we had even more reason to be proud as we were a headline sponsor this time, and made quite a few exciting announcements during our keynote session.
Image Credit: OpenStack
The collective energy we felt from participants from across the world through keynotes, workshops, and at our own virtual booth made for a summit like never before.
First of all, we greatly appreciate the spirit of the open source community to really come together, organize, and make an event of this magnitude a grand success. Considering the challenging nature of things due to the pandemic, the effort deserves to be lauded.
Open source developers, IT decision-makers, and operators representing as many as 750 companies across 110 countries attended the four-day event.
Members of open source communities such as Ansible, Ceph, Kubernetes, Airship, Docker, ONAP, Kata Containers, OpenStack, Open vSwitch, Zuul, StarlingX, OPNFV, and many more were eager participants of the summit from start to finish.
There were numerous keynotes, forums, sessions, presentations, and workshops on relevant topics such as container infrastructure, 5G, NFV and edge, public, private and hybrid clouds, CI/CD, AI, machine learning, HPC, and security.
The Open Infrastructure Summit also saw a huge announcement from the foundation.
During the Summit, the OpenStack Foundation announced its evolution into the ‘Open Infrastructure Foundation’. This move came as a surprise for many but was welcomed with much cheer from attendees. The renaming is part of the foundation’s multi-year community evolution initiative, which promises to improve the way open source projects work. VEXXHOST congratulates the foundation on this occasion. We are also proud to be participating as a founding Silver Member of the Open Infrastructure Foundation in this new beginning.
VEXXHOST – Silver Member – Open Infrastructure Foundation
The OpenStack Foundation was founded in 2012 to govern the OpenStack project and several other open source projects that evolved from it. Over the years, the foundation has grown into an entity encompassing much more under its wing. Moreover, modern use case demands placed on infrastructure, such as containers, 5G, machine learning, AI, NFV, and edge computing, were also responsible for this shift.
Even with its evolution into the Open Infrastructure Foundation, the initiative will still have the OpenStack project at its heart. The only difference is that the development and adoption of other projects will receive greater scope and attention as well.
The foundation also announced that even more innovations are planned and will be announced to the community shortly. We can’t wait to see what’s in store.
Speaking of announcements, we had a few important ones during the summit as well.
This year, Team VEXXHOST was proud to be a headline sponsor of the summit. We had a virtual booth of our own and interacted with members from various open source communities. We also gave away virtual bags with many exciting offers and credits to people who visited us at our booth.
Mohammed Naser – Keynote – Open Infrastructure Summit 2020
Our CEO, Mohammed Naser, delivered a keynote presentation and a talk on Tuesday, October 20th. During the keynote, he announced a revamp of our public cloud offerings and here are the relevant details for you:
Find all the juicy details about our revamp here.
To share our happiness on this occasion, we’re offering free credit for users to experience our OpenStack-powered cloud. This free trial provides a straightforward user interface and grants you access to all the tools you need in a web-based console.
Hind Naser’s breakout session on “The Big Decision When Adopting OpenStack as a Private Cloud”.
On Day 2 of the summit, Hind Naser, our Director of Business Development, presented a breakout session talk on “The Big Decision When Adopting OpenStack as a Private Cloud”. Through the session, Hind provided informative insights to the attendees on the various decisions, limitations, and pitfalls when a user is starting the private cloud journey.
We had a great time at Open Infrastructure Summit 2020 with all the new announcements, keynotes, sessions, workshops etc. Thank you one and all, for attending the summit and visiting us at our virtual booth. If you would like to know more about our public cloud, private cloud or other solutions, do contact us!
The post Open Infrastructure Summit 2020 – Recap of the First-Ever Virtual Summit appeared first on VEXXHOST.
We have a video recording available for you to learn how you can benefit from the new Galera Manager. It includes a live demo of how to install Galera Manager and easily deploy a Galera Cluster on Amazon Web Services for geo-distributed multi-master MySQL, disaster recovery, and fast local reads and writes. Now you can monitor and manage your Galera Cluster with a graphical interface.
“The presentation was great with lots of valuable information. We will definitely try to implement Galera Manager in our environment very soon,” stated one attendee of the webinar.
The Ubuntu OpenStack team at Canonical is pleased to announce the general availability of OpenStack Victoria on Ubuntu 20.10 (Groovy Gorilla) and Ubuntu 20.04 LTS (Focal Fossa) via the Ubuntu Cloud Archive. Details of the Victoria release can be found at: https://www.openstack.org/software/victoria.
To get access to the Ubuntu Victoria packages:
Ubuntu 20.10
OpenStack Victoria is available by default for installation on Ubuntu 20.10.
Ubuntu 20.04 LTS
The Ubuntu Cloud Archive for OpenStack Victoria can be enabled on Ubuntu 20.04 by running the following command:
sudo add-apt-repository cloud-archive:victoria
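After enabling the archive, refresh the package lists and upgrade, then install whatever services you need; nova-compute below is just an example package:
sudo apt update
sudo apt dist-upgrade
sudo apt install nova-compute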
The Ubuntu Cloud Archive for Victoria includes updates for:
aodh, barbican, ceilometer, cinder, designate, designate-dashboard, glance, gnocchi, heat, heat-dashboard, horizon, ironic, keystone, magnum, manila, manila-ui, masakari, mistral, murano, murano-dashboard, networking-arista, networking-bagpipe, networking-baremetal, networking-bgpvpn, networking-hyperv, networking-l2gw, networking-mlnx, networking-odl, networking-sfc, neutron, neutron-dynamic-routing, neutron-vpnaas, nova, octavia, octavia-dashboard, openstack-trove, trove-dashboard, ovn-octavia-provider, panko, placement, sahara, sahara-dashboard, sahara-plugin-spark, sahara-plugin-vanilla, senlin, swift, vmware-nsx, watcher, watcher-dashboard, and zaqar.
For a full list of packages and versions, please refer to:
http://reqorts.qa.ubuntu.com/reports/ubuntu-server/cloud-archive/victoria_versions.html
Reporting bugs
If you have any issues please report bugs using the ‘ubuntu-bug’ tool to ensure that bugs get logged in the right place in Launchpad:
sudo ubuntu-bug nova-conductor
Thank you to everyone who contributed to OpenStack Victoria. Enjoy and see you in Wallaby!
Corey
(on behalf of the Ubuntu OpenStack Engineering team)
TheJulia was kind enough to update the docs for Ironic to show me how to include IPMI information when creating nodes.
for UUID in `openstack baremetal node list -f json | jq -r '.[] | .UUID' ` ; do openstack baremetal node delete $UUID; done
I removed the ipmi common data from each definition as there is a password there, and I will set that afterwards on all nodes.
{
"nodes": [
{
"ports": [
{
"address": "00:21:9b:93:d0:90"
}
],
"name": "zygarde",
"driver": "ipmi",
"driver_info": {
"ipmi_address": "192.168.123.10"
}
},
{
"ports": [
{
"address": "00:21:9b:9b:c4:21"
}
],
"name": "umbreon",
"driver": "ipmi",
"driver_info": {
"ipmi_address": "192.168.123.11"
}
},
{
"ports": [
{
"address": "00:21:9b:98:a3:1f"
}
],
"name": "zubat",
"driver": "ipmi",
"driver_info": {
"ipmi_address": "192.168.123.12"
}
}
]
}
openstack baremetal create ./nodes.ipmi.json
$ openstack baremetal node list
+--------------------------------------+---------+---------------+-------------+--------------------+-------------+
| UUID | Name | Instance UUID | Power State | Provisioning State | Maintenance |
+--------------------------------------+---------+---------------+-------------+--------------------+-------------+
| 3fa4feae-0d5c-4e38-a012-29258d40651b | zygarde | None | None | enroll | False |
| 00965ad4-c972-46fa-948a-3ce87aecf5ac | umbreon | None | None | enroll | False |
| 8702ea0c-aa10-4542-9292-3b464fe72036 | zubat | None | None | enroll | False |
+--------------------------------------+---------+---------------+-------------+--------------------+-------------+
for UUID in `openstack baremetal node list -f json | jq -r '.[] | .UUID' ` ;
do openstack baremetal node set $UUID --driver-info ipmi_password=`cat ~/ipmi.password` --driver-info ipmi_username=admin ;
done
EDIT: I had ipmi_user before and it does not work. Needs to be ipmi_username.
And if I look in the returned data for the definition, we see the password is not readable:
$ openstack baremetal node show zubat -f yaml | grep ipmi_password
ipmi_password: '******'
for UUID in `openstack baremetal node list -f json | jq -r '.[] | .UUID' ` ; do openstack baremetal node power on $UUID ; done
Change “on” to “off” to power off.
Codership is pleased to announce a new Generally Available (GA) release of the multi-master Galera Cluster for MySQL 5.6, 5.7 and 8.0, consisting of MySQL-wsrep 5.6.49 (release notes, download), 5.7.31 (release notes, download), and 8.0.21 (release notes, download) with Galera Replication library 3.31 (release notes, download) implementing wsrep API version 25 for 5.6 and 5.7, and Galera Replication library 4.6 (release notes, download) implementing wsrep API version 26 for 8.0. This release incorporates all changes to MySQL 5.6.49, 5.7.31 , and 8.0.21 respectively, adding a synchronous option for your MySQL High Availability solutions.
It is recommended that you upgrade your Galera Cluster for MySQL 5.6, 5.7 and 8.0, because this release includes a fix for security vulnerability CVE-2020-15180. The binary tarball is also compiled with OpenSSL 1.1.1g.
A highlight of this release is that with MySQL 8.0.21, you will now have access to using the Percona audit log plugin, which will help with monitoring and logging connection and query activity that has been performed on specific servers. This implementation is provided as an alternative to the MySQL Enterprise Audit Log Plugin.
In addition to fixing deadlocks that may occur between DDL and applying transactions, in 8.0.21 the write-set replication patch is now optimised to work with the Contention-Aware Transaction Scheduling (CATS) algorithm that is present in InnoDB. You can read more about transaction scheduling in the MySQL manual.
For those that requested the missing binary tarball package, the MySQL 8.0.21 build includes just that. Packages continue to be available for: CentOS 7 & 8, Red Hat Enterprise Linux 7 & 8, Debian 10, SLES 15 SP1, as well as Ubuntu 18.04 LTS and Ubuntu 20.04 LTS. The latest versions are also available in the FreeBSD Ports Collection.
The Galera Replication library has had some notable fixes, one of which improves memory usage tremendously. The in-memory GCache index implementation now uses sorted std::deque instead of std::map, and this leads to an eightfold reduction in memory footprint. Hardware CRC32 is now supported on x86_64 and ARM64 platforms.
There are also three new status variables: wsrep_flow_control_active (which tells you whether flow control is currently active, i.e. replication is paused, in the cluster), wsrep_flow_control_requested (which tells you whether the node has requested a replication pause because the received-events queue is too long), and wsrep_gmcast_segment (which tells you which cluster segment the node belongs to).
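You can inspect the new variables on a running node with the standard client, for example:
mysql -e "SHOW GLOBAL STATUS LIKE 'wsrep_flow_control_%';"
mysql -e "SHOW GLOBAL STATUS LIKE 'wsrep_gmcast_segment';"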
For Galera Replication library 3.31, this is the last release for Debian Jessie and openSUSE 15.0. For Galera Replication library 4.6, this is the last release for openSUSE 15.0. For MySQL-wsrep 5.6 and 5.7, this is also the last release for Debian Jessie. For MySQL-wsrep 5.7 and MySQL-wsrep 8.0, this is the last release for openSUSE 15.0.
SK Telecom Cloud 5GX Labs is the 12th organization to win the Superuser Awards. The news was announced today during the virtual 2020 Open Infrastructure Summit. You can watch the announcement on demand in the Summit platform.
Elected by members of the community, the team that wins the Superuser Awards is lauded for the unique nature of its use case as well as its integration and application of open infrastructure. SK Telecom 5GX Cloud Labs was among eight nominees for the Award this year and is the first to receive the Award for an Airship use case.
In addition to contributing upstream to OpenStack and Airship, an open source project supported by the Open Infrastructure Foundation, SK Telecom developed a containerized OpenStack on Kubernetes solution called SKT All Container Orchestrator (TACO), based on OpenStack-Helm and Airship. TACO is a containerized, declarative, cloud infrastructure lifecycle manager that gives operators the capability to remotely deploy and manage the entire lifecycle of cloud infrastructure and add-on tools and services by treating all infrastructure like cloud native apps. They deployed it to SKT’s core systems, including the telco mobile network and IPTV services, which currently have 5.5 million subscriptions, as well as for external customers (a next-generation broadcasting system, VDI, etc.). Additionally, the team is strongly engaged in community activity in Korea, sharing their technologies and experiences with regional communities (OpenStack, Ceph, Kubernetes, etc.).
Just before the big announcement, Jeff Collins and Matt McEuen discussed the upcoming Airship 2.0 release, which is now in beta. Rewatch the announcement now!
The post SK Telecom 5GX Cloud Labs wins the 2020 Superuser Awards appeared first on Superuser.
Open infrastructure will underpin the next decade of transformation for cloud infrastructure. With the virtual Open Infrastructure Summit well underway, the first major announcement has been the formation of a new foundation, the Open Infrastructure Foundation. StackHPC is proud to be a founding member.
StackHPC's CEO, John Taylor, comments "We are extremely pleased to be a part of the new decade of Open Infrastructure and welcome the opportunity to continue to transfer the values of "Open" to our clients."
StackHPC's CTO, Stig Telfer, recorded a short video describing how the concept of open infrastructure is essential to our work, and how as a company we contribute to open infrastructure as a central part of what we do:
If you would like to get in touch we would love to hear from you. Reach out to us via Twitter or directly via our contact page.
Planet OpenStack is a collection of thoughts from the developers and other key players of the OpenStack projects. If you are working on OpenStack technology you should add your OpenStack blog.