November 22, 2017

OpenStack Superuser

Why atmail chose OpenStack for email-as-a-service

In a world where most companies broadcast what they’re having for lunch, atmail likes working hard in the background. For users of over 150 million email accounts in nearly 100 countries, the Australian company toils behind the scenes making sure those emails get delivered.

Working from its headquarters in the small town of Peregian Beach, Queensland, atmail provides email solutions for service providers (internet service providers (ISPs), hosted service providers (HSPs) and telecoms) as well as global corporations and government agencies in Australia and the U.S.

OpenStack is critical to getting the job done — currently more than 15 percent of atmail’s infrastructure runs on the OpenStack-powered DreamHost Cloud. That infrastructure has gradually migrated from hardware to virtualized.

In a recent talk at the OpenStack Summit Sydney, atmail senior DevOps engineer Matt Bryant discussed why the company chose OpenStack, its journey into the cloud and future plans. He was joined onstage by DreamHost’s VP of product and development, Jonathan LaCour.

atmail started looking to the cloud to replace aging hardware in multiple data centers, maximize cost efficiency and increase flexibility. They searched for the right partner, finding a good match with DreamHost. “A lot of OpenStack providers now are regional,” says LaCour. “They’re serving very specific use cases in particular markets that Amazon probably doesn’t care much about. This is a good example of why OpenStack has a long-term future.”

The pair set out with a very high goal. “To make the transition with little or no impact on our customers,” says Bryant. “What that meant in practice was we had to revisit our architecture at the software and infrastructure layer.”

Looking at their needs through a “prism” of security, performance and scalability, Bryant says what they started with was pre-cloud, based on the idea of a mail server in a box. “There were a few decisions early on that didn’t play well with a cloud environment. We had to decide what to re-code and what to work around.” Automation was another big component — they went with Ansible’s OpenStack module as well as in-house Perl scripts — and then it came time to “test the hell out of it,” Bryant says, to see if it was possible to maintain the servers and level of service.

“We got to the stage where both of us were happy, and then we were on to migration.” This is where both companies learned a few important lessons. “You can’t do mail migrations completely without customer interaction,” Bryant says. “There was a whole lot of data, a massive amount of storage (a few terabytes) to pore over,” adds LaCour. They ran into a number of issues with networking and storage.

Among the other takeaways were to keep it simple (forget traditional network topologies and VLANs), the importance of reviewing architecture, dedicating enough resources, having direct access to engineers (they had an IRC channel with their DreamHost counterparts) and finally, test, test and test again. “Know your failure scenarios and what can go wrong,” Bryant underlines.

“We went from a fairly simple architecture in bare metal to one behind load balancers and multiple nodes behind load balancers,” Bryant says. When you have an intermittent problem on a five-node cluster, it may only happen so often but can be much more complicated to fix, he adds.

What’s next? Bryant says they’re looking into a number of OpenStack projects, namely: Manila (shared file systems), Octavia (load balancer), Monasca (monitoring), Heat (orchestration) and Vitrage (Root Cause Analysis service).

“The more that we can push off into services and concentrate more on our core product, the better,” Bryant says.

You can catch the whole 27-minute talk below.

The post Why atmail chose OpenStack for email-as-a-service appeared first on OpenStack Superuser.

by Superuser at November 22, 2017 03:32 PM

Derek Higgins

Booting baremetal from a Cinder Volume in TripleO

Until recently in TripleO, booting from a Cinder volume was confined to virtual instances, but now, thanks to some recent work in Ironic, baremetal instances can also be booted backed by a Cinder volume.

Below I’ll go through the process of taking a CentOS cloud image, preparing it and loading it into a Cinder volume so that it can be used to back the root partition of a baremetal instance.

First, I make a few assumptions:

  1. You have a working Ironic in a TripleO overcloud
    – if this isn’t something you’re familiar with, you’ll find some instructions here
    – if you can boot and ssh to a baremetal instance on the provisioning network, then you’re good to go
  2. You have a working Cinder in the TripleO overcloud with enough storage to store the volumes
  3. I’ve tested TripleO (and OpenStack) using RDO as of 2017-11-14; earlier versions had at least one bug and won’t work


Baremetal instances in the overcloud traditionally use a config-drive for cloud-init to read config data from. Config-drive isn’t supported with Ironic boot from volume, so we need to make sure that the metadata service is available. To do this, if your subnet isn’t already attached to one, create a neutron router and attach it to the subnet you’ll be booting your baremetal instances on:

 $ neutron router-create r1
 $ neutron router-interface-add r1 provisioning-subnet

Each node defined in Ironic that you would like to use for booting from volume needs to use the cinder storage driver, needs the iscsi_boot capability set, and requires a unique connector ID (increment <NUM> for each node):

 $ openstack baremetal node set --property capabilities=iscsi_boot:true --storage-interface cinder <NODEID>
 $ openstack baremetal volume connector create --node <NODEID> --type iqn --connector-id iqn.2010-10.org.openstack.node<NUM>

The last thing you’ll need is an image capable of booting from iSCSI. We’ll be starting with the CentOS cloud image, but it needs to be altered slightly so that it’s capable of booting over iSCSI.

1. download the image

 $ curl https://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2.xz > /tmp/CentOS-7-x86_64-GenericCloud.qcow2.xz
 $ unxz /tmp/CentOS-7-x86_64-GenericCloud.qcow2.xz

2. mount it and change root into the image

 $ mkdir /tmp/mountpoint
 $ guestmount -i -a /tmp/CentOS-7-x86_64-GenericCloud.qcow2 /tmp/mountpoint
 $ chroot /tmp/mountpoint /bin/bash

3. load the dracut iscsi module into the ramdisk

 chroot> mv /etc/resolv.conf /etc/resolv.conf_
 chroot> echo "nameserver 8.8.8.8" > /etc/resolv.conf
 chroot> yum install -y iscsi-initiator-utils
 chroot> mv /etc/resolv.conf_ /etc/resolv.conf
 # Be careful here to update the correct ramdisk (check /boot/grub2/grub.cfg)
 chroot> dracut --force --add "network iscsi" /boot/initramfs-3.10.0-693.5.2.el7.x86_64.img 3.10.0-693.5.2.el7.x86_64

4. enable rd.iscsi.firmware so that dracut gets the iscsi target details from the firmware[1]

The kernel must be booted with rd.iscsi.firmware=1 so that the iSCSI target details are read from the firmware (passed to it by iPXE). This needs to be added to the grub config.

In the chroot, edit the file /etc/default/grub and add rd.iscsi.firmware=1 to the GRUB_CMDLINE_LINUX=… line.
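As a sketch (not from the original post), this edit can be scripted with sed. The sample file below is illustrative; inside the chroot the real target is /etc/default/grub:

```shell
# Illustrative sample of /etc/default/grub (the real file has more settings)
cat > /tmp/grub.default <<'EOF'
GRUB_CMDLINE_LINUX="console=tty0 crashkernel=auto"
EOF

# Append rd.iscsi.firmware=1 inside the GRUB_CMDLINE_LINUX quotes
sed -i 's/^\(GRUB_CMDLINE_LINUX=".*\)"$/\1 rd.iscsi.firmware=1"/' /tmp/grub.default
cat /tmp/grub.default
```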


5. leave the chroot, unmount the image and update the grub config

 chroot> exit
 $ guestunmount /tmp/mountpoint
 $ guestfish -a /tmp/CentOS-7-x86_64-GenericCloud.qcow2 -m /dev/sda1 sh "/sbin/grub2-mkconfig -o /boot/grub2/grub.cfg"

You now have an image that is capable of mounting its root disk over iSCSI. Load it into Glance and create a volume from it:

 $ openstack image create --disk-format qcow2 --container-format bare --file /tmp/CentOS-7-x86_64-GenericCloud.qcow2 centos-bfv
 $ openstack volume create --size 10 --image centos-bfv --bootable centos-test-volume

Once the Cinder volume finishes creating (wait for it to become “available”), you should be able to boot a baremetal instance from the newly created volume:

 $ openstack server create --flavor baremetal --volume centos-test-volume --key default centos-test
 $ nova list 
 $ ssh centos@192.168.24.49
[centos@centos-test ~]$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 10G 0 disk 
└─sda1 8:1 0 10G 0 part /
vda 253:0 0 80G 0 disk 
[centos@centos-test ~]$ ls -l /dev/disk/by-path/
total 0
lrwxrwxrwx. 1 root root 9 Nov 14 16:59 ip-192.168.24.7:3260-iscsi-iqn.2010-10.org.openstack:volume-e44073e9-0df9-43a0-ad05-9a6c41c80670-lun-0 -> ../../sda
lrwxrwxrwx. 1 root root 10 Nov 14 16:59 ip-192.168.24.7:3260-iscsi-iqn.2010-10.org.openstack:volume-e44073e9-0df9-43a0-ad05-9a6c41c80670-lun-0-part1 -> ../../sda1
lrwxrwxrwx. 1 root root 9 Nov 14 16:58 virtio-pci-0000:00:04.0 -> ../../vda

To see how the Cinder volume target information is being passed to the hardware, you need to take a look at the iPXE template for the server in question, e.g.:

 $ cat /var/lib/ironic/httpboot/<NODEID>/config
<snip/>
:boot_iscsi
imgfree
set username vRefJtDXrEyfDUetpf9S
set password mD5n2hk4FEvNBGSh
set initiator-iqn iqn.2010-10.org.openstack.node1
sanhook --drive 0x80 iscsi:192.168.24.7::3260:0:iqn.2010-10.org.openstack:volume-e44073e9-0df9-43a0-ad05-9a6c41c80670 || goto fail_iscsi_retry
sanboot --no-describe || goto fail_iscsi_retry
<snip/>

[1] – due to a bug in dracut (now fixed upstream [2]), setting this means that the image can’t be used for local boot
[2] – https://github.com/dracutdevs/dracut/pull/298

by higginsd at November 22, 2017 12:23 AM

November 21, 2017

OpenStack Superuser

A book on OpenStack for the cloud challenged

All clients are newbies. Or “dummies,” to use the term that publisher Wiley made popular. That’s why London-based vScaler wrote a book called “OpenStack for Dummies” for potential clients who may be cloud-challenged.

The book is designed to explain cloud computing and the OpenStack cloud development platform, and to illustrate how these technologies are used in various applications as well as how they can be used to build and deploy clouds. Written in collaboration with Wiley, the eBook is offered free with registration on vScaler’s website and serves as a “gentle starting point” for newcomers.

David Power, CTO at vScaler who you may have seen handing out copies in the marketplace at the recent Sydney Summit, talks to Superuser about what’s on his bookshelf, who the real dummies are and how to get up to speed.

Who will this guide help most?

Dummies 🙂

So despite what the title suggests, we wanted to produce a guide that would be as relevant to commercial and non-technical readers as it would to techies coming from a VMware or non-Linux background looking for a basic understanding of OpenStack. It’s not a guide for the deeply technical as the name suggests, but something that covers a much wider audience and ties it back to a much wider view of the development of the commercial cloud.

How did this book come about?

While OpenStack is really becoming a mainstream product in the technical world, I have come up against a number of customers who had heard of OpenStack but didn’t have much of an appreciation for what it could do or how it all worked together. I would often be asked to ‘send me over material on OpenStack that I can get up to speed on quickly,’ but that often meant multiple different sources (potentially different ones depending on the use case), and we found that there wasn’t really one starter’s guide to OpenStack out there.

What are some of the most common mistakes newbies (“dummies”) make?

I wouldn’t necessarily say there are common mistakes, but there are certainly a lot of common misconceptions that seem to have surfaced over the years around OpenStack: misconceptions that suggest OpenStack is not supported, isn’t stable, or isn’t production-ready. I often find myself correcting or educating people on these. I’m hoping this book helps individuals and business leaders quickly understand a number of successful use cases of OpenStack in the real world and identify its capability across a broad user base.

Why is a book helpful now — in addition to IRC, mailing lists, documentation, video tutorials etc.?

I think a beginner’s book is complementary to all the great material that is out there already. Non-technical folks wouldn’t necessarily hang out on IRC or subscribe to mailing lists. Also, the volume of information out there on OpenStack, while certainly comprehensive, can be a little daunting to the unfamiliar. I’d like to think our book is a gentle starting point that gives readers a good base from which to go and get further information from the resources you’ve mentioned.

What are some of the most exciting things you’ve seen recently in terms of OpenStack developments?

What I find exciting is probably not that exciting to the average reader 🙂 but we are very excited about the uptake of OpenStack in scientific and research and development environments. We have customers using vScaler OpenStack across a range of use cases such as machine learning, data analytics and high-performance computing, and currently a pilot project around autonomous vehicles. These areas would not have been traditional cloud use cases, but vScaler OpenStack has demonstrated that they can be!

What’s on your OpenStack bookshelf?

I’ve got a couple of books on OpenStack networking and general OpenStack admin books but more recently I’ve gone through “The Crossroads of Cloud and HPC: OpenStack for Scientific Research” which I found really interesting about the great work the scientific community is doing around OpenStack. (There might also be the odd “Far Side Gallery”…)

For more OpenStack books, check out the OpenStack Marketplace — your one-stop shop for training, distros, private-cloud-as-a-service and more — which now offers a selection of technical publications, too. The books listings are not affiliate links, but offered as a way to highlight the efforts of community members.


The post A book on OpenStack for the cloud challenged appeared first on OpenStack Superuser.

by Superuser at November 21, 2017 05:01 PM

RDO

Recent blog posts

It's been a little while since we've posted a roundup of blogposts around RDO, and you all have been rather prolific in the past month!

Here's what we as a community have been talking about:

Hooroo! Australia bids farewell to incredible OpenStack Summit by August Simonelli, Technical Marketing Manager, Cloud

We have reached the end of another successful and exciting OpenStack Summit. Sydney did not disappoint giving attendees a wonderful show of weather ranging from rain and wind to bright, brilliant sunshine. The running joke was that Sydney was, again, just trying to be like Melbourne. Most locals will get that joke, and hopefully now some of our international visitors do, too!

Read more at http://redhatstackblog.redhat.com/2017/11/16/hooroo-australia-bids-farewell-to-incredible-openstack-summit/

Build your Software Defined Data Center with Red Hat CloudForms and Openstack – part 2 by Michele Naldini

Welcome back, here we will continue with the second part of my post, where we will work with Red Hat Cloudforms. If you remember, in our first post we spoke about Red Hat OpenStack Platform 11 (RHOSP). In addition to the blog article, at the end of this article is also a demo video I created to show to our customers/partners how they can build a fully automated software data center.

Read more at https://developers.redhat.com/blog/2017/11/02/build-software-defined-data-center-red-hat-cloudforms-openstack/

Build your Software Defined Data Center with Red Hat CloudForms and Openstack – part 1 by Michele Naldini

In this blog, I would like to show you how you can create your fully software-defined data center with two amazing Red Hat products: Red Hat OpenStack Platform and Red Hat CloudForms. Because of the length of this article, I have broken this down into two parts.

Read more at https://developers.redhat.com/blog/2017/11/02/build-software-defined-data-center-red-hat-cloudforms-openstack-2/

G’Day OpenStack! by August Simonelli, Technical Marketing Manager, Cloud

In less than one week the OpenStack Summit is coming to Sydney! For those of us in the Australia/New Zealand (ANZ) region this is a very exciting time as we get to showcase our local OpenStack talents and successes. This summit will feature Australia’s largest banks, telcos, and enterprises and show the world how they have adopted, adapted, and succeeded with Open Source software and OpenStack.

Read more at http://redhatstackblog.redhat.com/2017/10/30/gday-openstack/

Restarting your TripleO hypervisor will break cinder volume service thus the overcloud pingtest by Carlos Camacho

I don’t usually restart my hypervisor. Today I had to install LVM2 and virsh stopped working, so a restart was required; once the VMs were up and running, the overcloud pingtest failed as cinder was not able to start.

Read more at http://anstack.github.io/blog/2017/10/30/restarting-your-tripleo-hypervisor-will-break-cinder.html

CERN CentOS Dojo, Part 4 of 4, Geneva by rbowen

On Friday evening, I went downtown Geneva with several of my colleagues and various people that had attended the event.

Read more at http://drbacchus.com/cern-centos-dojo-part-4-of-4-geneva/

CERN CentOS Dojo, part 3 of 4: Friday Dojo by rbowen

On Friday, I attended the CentOS Dojo at CERN, in Meyrin Switzerland.

Read more at http://drbacchus.com/cern-centos-dojo-part-3-of-4-friday-dojo/

CERN Centos Dojo, event report: 2 of 4 – CERN tours by rbowen

The second half of Thursday was where we got to geek out and tour various parts of CERN.

Read more at http://drbacchus.com/cern-centos-dojo-cern-tours/

CERN Centos Dojo 2017, Event Report (1 of 4): Thursday Meeting by rbowen

On Thursday, prior to the main event, a smaller group of CentOS core community got together for some deep-dive discussions around the coming challenges that the project is facing, and constructive ways to address them.

Read more at http://drbacchus.com/cern-centos-dojo-2017-thursday/

CERN Centos Dojo 2017, Event report (0 of 4) by rbowen

For the last few days I’ve been in Geneva for the CentOS dojo at CERN.

Read more at http://drbacchus.com/cern-centos-dojo-2017/

Using Ansible Openstack modules on CentOS 7 by Fabian Arrotin

Suppose that you have an RDO/OpenStack cloud already in place, but you’d want to automate some operations: what can you do? On my side, I already mentioned that I used puppet to deploy initial clouds, but I still prefer Ansible when having to launch ad-hoc tasks, or even change configuration[s]. It’s particularly true for our CI environment, where we run “agentless,” so all configuration changes happen through Ansible.

Read more at https://arrfab.net/posts/2017/Oct/11/using-ansible-openstack-modules-on-centos-7/

Using Falcon to cleanup Satellite host records that belong to terminated OSP instances by Simeon Debreceni

In an environment where OpenStack instances are automatically subscribed to Satellite, it is important that Satellite is notified of terminated instances so that it can safely delete its host record. Not doing so will:

Read more at https://developers.redhat.com/blog/2017/10/06/using-falcon-cleanup-satellite-host-records-belong-terminated-osp-instances/

My interview with Cool Python Codes by Julien Danjou

A few days ago, I was contacted by Godson Rapture from Cool Python codes to answer a few questions about what I work on in open source. Godson regularly interviews developers and I invite you to check out his website!

Read more at https://julien.danjou.info/blog/2017/interview-coolpythoncodes

Using Red Hat OpenStack Platform director to deploy co-located Ceph storage – Part Two by Dan Macpherson, Principal Technical Writer

Previously we learned all about the benefits in placing Ceph storage services directly on compute nodes in a co-located fashion. This time, we dive deep into the deployment templates to see how an actual deployment comes together and then test the results!

Read more at http://redhatstackblog.redhat.com/2017/10/04/using-red-hat-openstack-platform-director-to-deploy-co-located-ceph-storage-part-two/

Using Red Hat OpenStack Platform director to deploy co-located Ceph storage – Part One by Dan Macpherson, Principal Technical Writer

An exciting new feature in Red Hat OpenStack Platform 11 is full Red Hat OpenStack Platform director support for deploying Red Hat Ceph storage directly on your overcloud compute nodes. Often called hyperconverged, or HCI (for Hyperconverged Infrastructure), this deployment model places the Red Hat Ceph Storage Object Storage Daemons (OSDs) and storage pools directly on the compute nodes.

Read more at http://redhatstackblog.redhat.com/2017/10/02/using-red-hat-openstack-director-to-deploy-co-located-ceph-storage-part-one/

by Rich Bowen at November 21, 2017 02:55 PM

Anomaly Detection in CI logs

Continuous Integration jobs can generate a lot of data, and it can take a lot of time to figure out what went wrong when a job fails. This article demonstrates new strategies to assist with failure investigations and to reduce the need to crawl boring log files.

First, I will introduce the challenge of anomaly detection in CI logs. Second, I will present a workflow to automatically extract and report anomalies using a tool called LogReduce. Lastly, I will discuss the current limitations and how more advanced techniques could be used.

Introduction

Finding anomalies in CI logs using simple patterns such as "grep -i error" is not enough, because interesting log lines don't necessarily feature obvious anomalous messages such as "error" or "failed". Sometimes you don't even know what you are looking for.

In comparison to regular logs, such as the system logs of a production service, CI logs have a very interesting characteristic: they are reproducible. Thus, it is possible to carefully look for new events that are not present in the logs of other job executions. This article focuses on this particular characteristic to detect anomalies.

The challenge

For this article, baseline events are defined as the collection of log lines produced by nominal job executions, and target events are defined as the collection of log lines produced by a failed job run.

Searching for anomalous events is challenging because:

  • Events can be noisy: they often include unique features such as timestamps, hostnames or UUIDs.
  • Events can be scattered across many different files.
  • False-positive events may appear for various reasons, for example when a new test option has been introduced. However, they often share common semantics with some baseline events.

Moreover, there can be a very high number of events, for example more than 1 million lines for TripleO jobs. Thus, we cannot easily look for each target event not present in the baseline events.

OpenStack Infra CRM114

It is worth noting that anomaly detection is already happening live in the openstack-infra-operated review system using classify-log.crm, which is based on CRM114 Bayesian filters.

However it is currently only used to classify global failures in the context of the elastic-recheck process. The main drawbacks to using this tool are:

  • Events are processed per word without considering complete lines: it only computes the distances of up to a few words.
  • Reports are hard to find for regular users; they would have to go to elastic-recheck uncategorized and click the crm114 links.
  • It is written in an obscure language.

LogReduce

This part presents the techniques I used in LogReduce to overcome the challenges described above.

Reduce noise with tokenization

The first step is to reduce the complexity of the events to simplify further processing. Here is the line processor I used, see the Tokenizer module:

  • Skip known bogus events such as ssh scan: "sshd.+[iI]nvalid user"
  • Remove known words:
    • Hashes, which are hexadecimal words that are 32, 64 or 128 characters long
    • UUID4
    • Date names
    • Random prefixes such as "(tmp req- qdhcp-)[^\s\/]+"
  • Discard every character that is not [a-z_\/]

For example this line:

  2017-06-21 04:37:45,827 INFO [nodepool.builder.UploadWorker.0] Uploading DIB image build 0000000002 from /tmpxvLOTg/fake-image-0000000002.qcow2 to fake-provider

Is reduced to:

  INFO nodepool builder UploadWorker Uploading image build from /fake image fake provider
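As a rough sketch of those rules (the regular expressions here are illustrative; the real Tokenizer module differs in its details):

```python
import re

def tokenize(line):
    # Skip known bogus events such as ssh scans
    if re.search(r"sshd.+[iI]nvalid user", line):
        return ""
    # Remove 32-, 64- or 128-character hexadecimal hashes and UUIDs
    line = re.sub(r"\b(?:[0-9a-fA-F]{128}|[0-9a-fA-F]{64}|[0-9a-fA-F]{32})\b", "", line)
    line = re.sub(r"\b[0-9a-f]{8}(?:-[0-9a-f]{4}){3}-[0-9a-f]{12}\b", "", line)
    # Remove random prefixes such as tmp/req-/qdhcp- identifiers
    line = re.sub(r"(?:tmp|req-|qdhcp-)[^\s/]+", "", line)
    # Discard every character that is not a letter, underscore or slash
    line = re.sub(r"[^a-zA-Z_/ ]", " ", line)
    return " ".join(line.split())

print(tokenize(
    "2017-06-21 04:37:45,827 INFO [nodepool.builder.UploadWorker.0] "
    "Uploading DIB image build 0000000002 from "
    "/tmpxvLOTg/fake-image-0000000002.qcow2 to fake-provider"))
```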

Index events in a NearestNeighbors model

The next step is to index the baseline events. I used a NearestNeighbors model to query target events' distance from baseline events. This helps remove false-positive events that are similar to known baseline events. The model is fitted with all the baseline events transformed using Term Frequency-Inverse Document Frequency (TF-IDF). See the SimpleNeighbors model:

import sklearn.feature_extraction.text
import sklearn.neighbors

vectorizer = sklearn.feature_extraction.text.TfidfVectorizer(
    analyzer='word', lowercase=False, tokenizer=None,
    preprocessor=None, stop_words=None)
nn = sklearn.neighbors.NearestNeighbors(
    algorithm='brute',
    metric='cosine')
train_vectors = vectorizer.fit_transform(train_data)
nn.fit(train_vectors)

Instead of having a single model per job, I built a model per file type. This requires some pre-processing work to figure out what model to use per file. File names are converted to model names using another Tokenization process to group similar files. See the filename2modelname function.

For example, the following files are grouped like so:

audit.clf: audit/audit.log audit/audit.log.1
merger.clf: zuul/merger.log zuul/merge.log.2017-11-12
journal.clf: undercloud/var/log/journal.log overcloud/var/log/journal.log
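A minimal sketch of such a grouping function (hypothetical; the real filename2modelname is fuzzier about rotation suffixes and near-identical names such as merger.log/merge.log):

```python
import re

def filename2modelname(filename):
    name = filename.split("/")[-1]   # drop directory components
    name = name.split(".")[0]        # drop extensions and rotation suffixes
    name = re.sub(r"\d+", "", name)  # drop any remaining digits/dates
    return name + ".clf"

print(filename2modelname("audit/audit.log.1"))               # audit.clf
print(filename2modelname("undercloud/var/log/journal.log"))  # journal.clf
```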

Detect anomalies based on kneighbors distance

Once the NearestNeighbors model is fitted with the baseline events, we can repeat the Tokenization and TF-IDF transformation on the target events. Then, using the kneighbors query, we compute the distance of each target event:

test_vectors = vectorizer.transform(test_data)
distances, _ = nn.kneighbors(test_vectors, n_neighbors=1)

Using a distance threshold, this technique can effectively detect anomalies in CI logs.
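Putting the pieces together, here is a minimal end-to-end sketch (toy data; the 0.2 threshold is illustrative rather than LogReduce's tuned default):

```python
import sklearn.feature_extraction.text
import sklearn.neighbors

# Baseline: tokenized log lines from successful job runs
baseline = [
    "INFO nodepool builder Uploading image build",
    "INFO nodepool builder Image upload complete",
]
# Target: tokenized log lines from a failed run
target = [
    "INFO nodepool builder Uploading image build",
    "ERROR nodepool builder Disk full while writing image",
]

vectorizer = sklearn.feature_extraction.text.TfidfVectorizer(analyzer='word')
nn = sklearn.neighbors.NearestNeighbors(algorithm='brute', metric='cosine')
nn.fit(vectorizer.fit_transform(baseline))

# Flag target lines whose nearest baseline neighbor is too far away
distances, _ = nn.kneighbors(vectorizer.transform(target), n_neighbors=1)
anomalies = [line for line, dist in zip(target, distances[:, 0]) if dist > 0.2]
print(anomalies)  # only the ERROR line is far from every baseline event
```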

Automatic process

Instead of manually running the tool, I added a server mode that automatically searches and reports anomalies found in failed CI jobs. Here are the different components:

  • listener connects to mqtt/gerrit event-stream/cistatus.tripleo.org and collects all successful and failed jobs.

  • worker processes jobs collected by the listener. For each failed job, it does the following in pseudo-code:

Build model if it doesn't exist or if it is too old:
	For each last 5 success jobs (baseline):
		Fetch logs
	For each baseline file group:
		Tokenize lines
		TF-IDF fit_transform
		Fit file group model
Fetch target logs
For each target file:
	Look for the file group model
	Tokenize lines
	TF-IDF transform
	file group model kneighbors search
	yield lines that have distance > 0.2
Write report
  • publisher processes each report computed by the worker and notifies:
    • IRC channel
    • Review comment
    • Mail alert (e.g. for a periodic job which doesn't have an associated review)

Reports example

Here are a couple of examples to illustrate LogReduce reporting.

In this change I broke a service configuration (zuul gerrit port), and logreduce correctly found the anomaly in the service logs (zuul-scheduler can't connect to gerrit): sf-ci-functional-minimal report

In this tripleo-ci-centos-7-scenario001-multinode-oooq-container report, logreduce found 572 anomalies out of 1,078,248 lines. The interesting ones are:

  • Non-obvious new DEBUG statements in /var/log/containers/neutron/neutron-openvswitch-agent.log.txt.
  • New setting of the firewall_driver=openvswitch in neutron was detected in:
    • /var/log/config-data/neutron/etc/neutron/plugins/ml2/ml2_conf.ini.txt
    • /var/log/extra/docker/docker_allinfo.log.txt
  • New usage of cinder-backup was detected across several files such as:
    • /var/log/journal contains new puppet statement
    • /var/log/cluster/corosync.log.txt
    • /var/log/pacemaker/bundles/rabbitmq-bundle-0/rabbitmq/rabbit@centos-7-rax-iad-0000787869.log.txt.gz
    • /etc/puppet/hieradata/service_names.json
    • /etc/sensu/conf.d/client.json.txt
    • pip2-freeze.txt
    • rpm-qa.txt

Caveats and improvements

This part discusses the caveats and limitations of the current implementation and suggests other improvements.

Empty success logs

This method doesn't work when the debug events are only included in the failed logs. To successfully detect anomalies, failure and success logs need to be similar; otherwise all the extra information in failed logs will be considered anomalous.

This situation happens with testr results, where success logs only contain 'SUCCESS'.

Building good baseline model

Building a good baseline model with nominal job events is key to anomaly detection. We could use periodic execution (with or without failed runs), or the gate pipeline.

Unfortunately, Zuul currently lacks build reporting, and we have to scrape gerrit comments or status web pages, which is sub-optimal. Hopefully the upcoming zuul-web builds API and zuul-scheduler MQTT reporter will make this task easier to implement.

Machine learning

I am by no means proficient at machine learning; Logreduce simply happens to be useful as it is now. However, here are some other strategies that may be worth investigating.

The model currently uses a word dictionary to build the feature vectors; this may be improved by using different feature extraction techniques more suited to log line events, such as MinHash and/or Locality-Sensitive Hashing.

The NearestNeighbors kneighbors query tends to be slow for large samples, and this may be improved upon by using a Self-Organizing Map, RandomForest or OneClassSVM model.

When line sizes are not homogeneous in a file group, the model doesn't work well. For example, mistral/api.log line sizes vary between 10 and 8,000 characters. Using separate models per bin based on line size may be a great improvement.

CI logs analysis is a broad subject on its own, and I suspect someone good at machine learning might be able to find other clever anomaly detection strategies.

Further processing

Detected anomalies could be further processed by:

  • Merging similar anomalies discovered across different files.
  • Looking for known anomalies in a system like elastic-recheck.
  • Reporting new anomalies to elastic-recheck so that affected jobs could be grouped.

Conclusion

CI log analysis is a powerful service to assist failure investigations. The end goal would be to report anomalies instead of exhaustive job logs.

Early results of the LogReduce models look promising, and I hope we can set up such services for any CI jobs in the future. Please get in touch by mail or IRC (tristanC on Freenode) if you are interested.

by tristanC at November 21, 2017 02:55 PM

Chris Dent

OpenStack Casual Contribution

This post is related to the earlier OpenStack Developer Satisfaction post, which may be of interest.

As part of the Forum at the recent OpenStack Summit in Sydney, there was a session on "Making OpenStack more palatable to part-time contributors". The session had an etherpad, and there are some useful additional notes on Colleen Murphy's blog. Update: she also gave a great related presentation (sort of the flip side of the coin here) at the summit: "Communication through Code: How to Get Work Done Upstream".

In the 2019 Vision Statement for the Technical Committee, one of the achievements is "more sympathy to the needs of part time contributors". To reach that, being attractive and inclusive to part-time contributors is important. Thus the session.

While no concrete plan was made during the session, some of the issues were enumerated and some differences in understanding were illuminated. That's a good first step to getting anywhere.

Casual Contributors

A primary difference in understanding was over the concept of "part-time". The etherpad lists some scenarios, running the gamut from what have been called "drive-by" contributors to contributors who used to be full-time but are now, for whatever reason, only able to contribute a small amount.

While it is important that individuals who fall into that latter category are included in the community (their contributions are likely exceptionally valuable, given their experience with and awareness of the processes and social mores), "palatability" doesn't strike me as a specific concern for them (except with regard to the larger satisfaction aspect). This suggests a different term than "part-time" might be in order.

"Casual contributor" might work, borrowing some of the connotations from the "casual gamer" term. These are people who need and want to contribute but it is not their sole or obsessive focus. With that phrasing, barriers to participation become relatively larger when compared with the perceptions of someone who is making a long-term or full-time commitment.

It also, hopefully, limits the extent to which old timer survivors who overcame those barriers through hard graft are able to say "do what I did, it worked for me".

Casual contribution ought to mean just that, casual: relaxed and unconcerned; not regular or permanent. It should also mean some chance of success; that is, that any code contributions actually get reviewed and, if good, get merged.

This is not something we (where "we" is the OpenStack community) have always been particularly good at. Here's a reasonable looking patch to Nova that's been under review for more than two years.

Additional barriers to participation include:

  • Lots of project knowledge is tribal, and even when not, requires piecing together many disparate pieces. Ramping up to understanding is challenging.

  • Visibility has historically been a key factor in getting attention, especially in the more established projects. If you want to make changes, you need to be around, a lot, and you often need to attend in-person events (e.g., the Forum and the PTG).

While both of these situations may have some aspect of "that's just the way it is" that doesn't mean we should give up on trying to improve the situation.

Things to Try

There were several proposed solutions (some highlights below), but no concrete plan for next steps. OpenStack's culture and governance don't allow for decrees and ultimatums, only awareness. The thing to be aware of here is that things have changed: casual contribution is not only becoming the norm, it is a sign of health and maturity in the project. We want broader contribution. If you have an opportunity to do or influence any of the following, do!

  • Write weekly updates from projects or subteams to a relevant mailing list to make it clear what's in progress and what matters. This is happening more and more often and people seem to dig it, but it is only useful for people who make a habit of reading the mailing list; many people do not. It may make sense to syndicate them in some fashion.

  • Develop a culture of picking up and improving on proposed changes from contributors who are not able to attend to their patches on a frequent basis. Doing so would require getting past code ownership, which we should do anyway.

  • Produce simplified architectural diagrams; something to look at that explains the big picture of how things work together, within the individual projects. In some cases these already exist, but their discoverability could be improved.

  • Shrink or decompose projects. Smaller pieces with clear boundaries and limited goals are easier to ingest. They have a smaller surface area that a casual contributor may usefully traverse.

by Chris Dent at November 21, 2017 02:30 PM

StackHPC Team Blog

Nick Jones Joins our Team

We are excited to announce our newest team member: Nick Jones joins us from DataCentred. Nick is well known within the OpenStack community, as co-organiser of OpenStack Days UK 2017, the Manchester OpenStack Meetup, and as an active participant in the creation of the OpenStack Public Cloud WG.

StackHPC will be drawing on Nick's great depth of experience as head of cloud computing at DataCentred, to assist our clients in research computing with their transition into operation.

We will also be benefiting from Nick's strong technical skills to help power our development activities to achieve our goals for Scientific OpenStack.

Nick adds "As someone who grew up dragging my parents to Jodrell Bank on a regular basis, I'm extraordinarily proud to be given the opportunity to work with StackHPC at such an exciting time in the company's evolution."

Follow Nick on Twitter @yankcrime.

Nick Jones

by Stig Telfer at November 21, 2017 12:00 PM

Baremetal Cloud Capacity

OpenStack Pike

For many reasons, it is common for HPC users of OpenStack to use Ironic and Nova together to deliver baremetal servers to their users. In this post we look at recent changes to how Nova chooses which Ironic node to use for each user's nova boot request.

SKA SDP Performance Prototype

As part of the development of the SKA, StackHPC has created an OpenStack based cloud that researchers can use to prototype how best to process the data output from the telescopes. For this discussion of managing cloud capacity, the key points to note are:

  • Several types of nodes: Regular, a few GPU nodes, a few NVMe nodes, some high memory nodes.
  • Several physical networks, including 10G Ethernet, 25G Ethernet and 100Gb/s Infiniband.
  • A globally distributed team of scientists.
  • Some work will need exclusive use of the whole system, to benchmark the performance of prototype scientific pipelines.

Managing Capacity

Public cloud must present the illusion of infinite capacity. For private cloud use cases, and research computing in particular, the amount of available, unused capacity is of great interest. Most small clouds soon hit the reality of running out of space. There are two main approaches to dealing with capacity problems:

  • Explicit assignment (and pre-emption)
  • Co-operative multitasking

Given our situation described above, we have opted for co-operative multitasking, where the users delete their own instances when they are finished with those nodes, allowing others to do what they need.

To help reduce the strain on resources we are also prototyping shared execution frameworks such as a Heat-provisioned OpenHPC Slurm cluster, a Magnum-provisioned Docker Swarm cluster and a Sahara-provisioned RDMA-enabled Spark cluster from HiBD.

In this blog we are focusing on the capacity of Ironic-based clouds. When you add virtualisation into the mix, there are many questions around how different combinations of flavors fit onto a single hypervisor and how to avoid wasted space. Similarly, we are focusing on statically sized private clouds, so this blog will ignore the whole world of capacity planning.

OpenStack Security Model

You can argue about this being a security model, or just the details of the abstraction OpenStack presents, but the public APIs try their best to hide any idea of physical hosts and capacity from non-cloud-admin users.

When building a public cloud as a publicly traded company, exposing via the API in realtime how many physical hosts you run or how much free capacity you have could probably break the law in some countries. But when you run a private cloud, you really want a nice view of what your friends are using.

Co-operative Capacity Management

"Play nice, or I will delete all your servers every Friday afternoon!"

That is a very tempting thing to say, and I basically have said that. But it's hard to play nice when you have no idea how much capacity is in use. So we have a solution: the capacity dashboard.


Talking to the users of P3, it's clear that having a visual representation of who is currently using what has been much more useful than a wiki page of requests that quickly drifted out of sync with reality. In the future we may consider Mistral to enforce lease times on servers, or maybe Blazar for a more formal reservation system, but for now giving the scientists the flexibility of a more informal system is working well.

Building the Dashboard

Firstly we have our monitoring infrastructure. This is currently built using OpenStack Monasca, making use of Kafka and InfluxDB. (We also use Monasca with ELK to generate metrics from our logs, but that is a story for another day.)

The dashboard is built using Grafana. There is a Monasca plugin, which means we can use the Monasca API as a data source, and a Keystone plugin that is used to authenticate and authorise access to both Grafana and its use of the Monasca APIs: https://grafana.com/plugins/monasca-datasource

Our system metrics are kept in a project that general users don't have access to, but the capacity metrics and dashboards are associated with the project that all the users of the system have access to.

Now that we have a system in place to ingest, store, query and visualize metrics in a multi-tenant way, we need a tool to query the capacity and send metrics into Monasca.

Querying Baremetal Capacity

We have created a small CLI tool called os-capacity. It uses os_client_config and cliff to query the Placement API for details about the current cloud capacity and usage. It also uses Nova APIs and Keystone APIs to get hold of useful friendly names for the information that is in placement.

For the Capacity dashboard we use data from two particular CLI calls. Firstly we look at the capacity by calling:

os-capacity resources group
+----------------------------------+-------+------+------+-------------+
| Resource Class Groups            | Total | Used | Free | Flavors     |
+----------------------------------+-------+------+------+-------------+
| VCPU:1,MEMORY_MB:512,DISK_GB:20  |     5 |    1 |    4 | my-flavor-1 |
| VCPU:2,MEMORY_MB:1024,DISK_GB:40 |     2 |    0 |    2 | my-flavor-2 |
+----------------------------------+-------+------+------+-------------+

This tool is currently very focused on baremetal clouds. The flavor mapping is done assuming the flavors should exactly match all the available resources for a given Resource Provider. This is clearly not true for a virtualised scenario. It is also not true in some baremetal clouds, but this works OK for our cloud. Of course, patches welcome :)
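For illustration, the grouping behind `os-capacity resources group` can be sketched like this; it is not the tool's actual code, and it assumes the inventory and usage data have already been fetched from the Placement API (the exact-match flavor mapping is the baremetal assumption described above):

```python
from collections import defaultdict

def group_capacity(providers, flavors):
    """providers: list of dicts like
         {"inventory": {"VCPU": 1, "MEMORY_MB": 512, "DISK_GB": 20},
          "used": False}
       flavors: dict mapping flavor name -> resource dict.
       Returns rows resembling the `os-capacity resources group` table."""
    groups = defaultdict(lambda: {"total": 0, "used": 0})
    for provider in providers:
        # Providers with identical inventories form one resource class group.
        key = ",".join("%s:%d" % (rc, amount)
                       for rc, amount in sorted(provider["inventory"].items()))
        groups[key]["total"] += 1
        groups[key]["used"] += 1 if provider["used"] else 0
    rows = []
    for key, counts in sorted(groups.items()):
        # A flavor matches only when it asks for exactly the provider's
        # resources: the baremetal assumption mentioned above.
        resources = {rc: int(v) for rc, v in
                     (item.split(":") for item in key.split(","))}
        names = [name for name, spec in flavors.items() if spec == resources]
        rows.append((key, counts["total"], counts["used"],
                     counts["total"] - counts["used"], ",".join(sorted(names))))
    return rows
```

In a virtualised cloud a flavor would instead have to fit *within* a provider's inventory, which is exactly why this simple mapping breaks there.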

Secondly we can look at the usage of the cloud by calling:

os-capacity usages group user --max-width 70
+----------------------+----------------------+----------------------+
| User                 | Current Usage        | Usage Days           |
+----------------------+----------------------+----------------------+
| 1e6abb726dd04d4eb4b8 | Count:4,             | Count:410,           |
| 94e19c397d5e         | DISK_GB:1484,        | DISK_GB:152110,      |
|                      | MEMORY_MB:524288,    | MEMORY_MB:53739520,  |
|                      | VCPU:256             | VCPU:26240           |
| 4661c3e5f2804696ba26 | Count:1,             | Count:3,             |
| 56b50dbd0f3d         | DISK_GB:371,         | DISK_GB:1113,        |
|                      | MEMORY_MB:131072,    | MEMORY_MB:393216,    |
|                      | VCPU:64              | VCPU:192             |
+----------------------+----------------------+----------------------+

You can also group by project, but in the current SKA cloud all users are in the same project, so grouping by user works best.

The only additional step is converting the above information into metrics that are fed into Monasca. For now this has also been integrated into the os-capacity tool, by a magic environment variable. Ideally we would feed the json based output of os-capacity into a separate tool that manages sending metrics, but that is a nice task for a rainy day.
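A minimal sketch of that conversion step, turning a usage row into metric dicts shaped the way the Monasca v2.0 metrics API expects (the `capacity.usage.*` metric names and the dimensions are invented for illustration, not what os-capacity actually sends):

```python
import time

def usage_to_metrics(user_id, usage, project_id="p3"):
    """Convert an os-capacity style usage row, e.g.
       {"Count": 4, "DISK_GB": 1484, "MEMORY_MB": 524288, "VCPU": 256},
       into metric dicts ready to POST to Monasca's /v2.0/metrics."""
    now_ms = int(time.time() * 1000)  # Monasca timestamps are epoch millis
    return [{
        "name": "capacity.usage." + resource.lower(),
        "dimensions": {"user_id": user_id, "project_id": project_id},
        "timestamp": now_ms,
        "value": float(amount),
    } for resource, amount in sorted(usage.items())]
```

The resulting dicts could then be sent by a separate tool with python-monascaclient or a plain authenticated POST, keeping the conversion independent of the sender as suggested above.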

What's Next?

Through our project in SKA we are starting to work very closely with CERN. As part of that work we are looking at helping with the CERN prototype of preemptible instances, and looking at many other ways that both the SKA and CERN can work together to help our scientists be even more productive.

The ultimate goal is to deliver private cloud infrastructure for research computing use cases that achieves levels of utilisation comparable to the best examples of well-run conventional research computing clusters. Being able to track available capacity is an important step in that direction.

by John Garbutt at November 21, 2017 09:42 AM

Mirantis

The OpenStack Foundation acknowledges the changing landscape

The cloud is becoming commoditized; what's important is how you use it.

by Nick Chase at November 21, 2017 05:35 AM

November 20, 2017

OpenStack Superuser

Kubernetes on OpenStack: Digging into technical details

SYDNEY — Sometimes you really do need to get into the weeds. OpenStack and Kubernetes are both complex projects and if you’re interested in taking full advantage of a Kubernetes on OpenStack deployment, you’ll need an understanding of where the two projects interact, how Kubernetes exposes OpenStack resources and which deployment combinations make sense.

This advanced-level 50-minute talk given at the recent Sydney Summit assumes an advanced knowledge of OpenStack and leverages that to build a corresponding advanced understanding of Kubernetes.

It was given by Angus Lees, currently a senior software engineer at Bitnami, who says he got started with OpenStack pretty much by chance. At a previous job, he “accidentally” wrote the OpenStack cloud provider plugin that was included with Kubernetes 0.15 – and he has been the primary maintainer ever since.

He goes into specific detail of where and how OpenStack is represented in the Kubernetes codebase, with a high-level description of the Kubernetes architecture for context. He also takes you on a tour of the OpenStack provider: instances, zones, load balancers, routes, as well as the Cinder volume plugin and the Keystone authenticator plugin.

You’ll learn the various points of integration between Kubernetes and OpenStack and how to influence the various automatic processes. You’ll also gain an understanding of some of the Kubernetes internals, and how to navigate the Kubernetes code to find the answers to future questions.

Lees also offers up a recommended deployment:

  • Dedicated Kubernetes cluster per hostile tenant
  • Three dedicated controller VMs for etcd/apiserver/controllers
    • Ideally spread across separate AZs
  • VMs for worker nodes:
    • Want enough for multi-AZ coverage (2-3 AZs depending on workload)
    • After that, size for fewer/larger VMs
  • Controller and worker VMs all on dedicated neutron private network
  • LBaaS loadbalancer pointing to apiserver(s)
  • LBaaS and floating-ip network for k8s Service

Check out the video below or download the slides here.

The post Kubernetes on OpenStack: Digging into technical details appeared first on OpenStack Superuser.

by Superuser at November 20, 2017 12:22 PM

November 19, 2017

Chris Dent

OpenStack Developer Satisfaction

When I ran for the OpenStack Technical Committee one of the roles I took for myself was something like a union rep for the people who are existing developers. The people who are there multiple times per week and self-identify as OpenStack developers. Earlier this year I started noticing (more than normal) that sometimes things are less than ideal for these people. This blog post tries to gather some thoughts on this topic as well as present some analysis of survey data. The survey analysis was presented at the recent leadership meeting at the OpenStack Summit in Sydney.

But first a caveat or disclaimer to get out of the way: Being a paid open source developer is a pretty great role that is typically only available to privileged people and achieving the role frequently gives those people yet more privilege. The job is many times better than many other options so when considering these issues of "satisfaction" it is important to keep in mind that this is relative to an ideal. The frustration that some people express is not because everything sucks but because the difference between what is happening and what could be happening is fraught with frustration. At the same time, the pressures which people experience—either because they impose it themselves or feel it from something external—is real and the mental and physical health implications are real. Members of any community, even one as privileged as OpenStack, should work together to limit dissatisfaction and pressures. If successful, the community may grow in a way that makes it more inclusive.

With that out of the way, the observations I had can be grouped in a few different ways, but mostly come down to either being too included (in the vortex of vast amounts of work and commitment) or not being included enough.

Some developers, often those who are unicorn "100% upstream" style developers, are obliged to act heroically to bring work to completion. They maintain continuity and feel such a commitment to the project that they fill all voids. Some even enjoin newbies to "double down" on their efforts, while others suggest that success upstream comes from catching up after normal business hours.

Other developers, either new ones or ones that do not wish to overcommit, are effectively excluded: while the heroes turn inward to each other, everyone else is on the fringes, unable to engage because of impedance mismatch.

In either case there is exhaustion: In the last year, I have heard from multiple contributors who, when presented with the opportunity to leave OpenStack, have leapt at the chance saying "thank god!". They are exhausted by trying to negotiate the complex social processes, the labyrinthine code, the glacial pace of review and difficulties gaining traction (when you are not "in"). It's a process which is not fun. It is riddled with exclusionary complexity not just for new people, but for people who are compelled to be around for whatever reason.

At the Board

With that as background, at the PTG in Denver I approached the Foundation board to say that I didn't think that "developer happiness" was sufficiently on the strategic radar. There were efforts in progress to welcome new contributors and to groom new leaders but little for the daily laborer. A couple things happened as a result:

  • I learned that this topic was within the domain of "community health" (one of 5 overarching strategic themes) but had not had much attention yet due to lack of people. Thus I was volunteered.

  • Two questions were added to the survey given to PTG attendees to gauge satisfaction. See the results.

Those two questions were:

  • What is the most important thing we should do to improve the OpenStack software over the next year?

  • What is one change that would make you happier as an OpenStack Upstream developer?

I had asked for a question that was yes or no, or at least a Likert scale (something like "If presented with a reasonable opportunity to leave the OpenStack community would you take it? 1-7, disagree-agree" to inspect the "thank god!" issue described above). Something where the responses could be analysed in a relatively unbiased way. Since the answers to the chosen questions are completely open-ended the best option for analysis is thematic analysis. I did that in this spreadsheet, applying tags to each response. The presentation has some summary analysis.

Prior to reviewing the survey results I had made a list of the issues I thought were present in the topic of "developer happiness":

  • too much clubbiness in some groups
  • code quality makes people not want to participate
  • project velocity make people not want to participate
  • corporate commitment limited
  • too much work, overall
  • legacy code and architecture that is weird

The first four were well represented in the survey results. The three main themes identified as areas of concern were:

  • Developer population, subdivided into:
    • Corporate involvement: concerns about shrinking commitments from traditional corporate participants
    • Developer quantity: bald statements of "we need more"
    • Developer retention: treatment, recognition, people leaving
    • Technical leadership: respondents asking for increased, or at least unified, voices on direction project-wide
    • Labor issues: not enough time to get things done, too much distraction from other responsibilities, conflicts in upstream community
  • Product Focus: What the projects should be working on
    • Refinement: making what's there the best it can be
    • Simplify: limiting the number of knobs
    • Stability: making what's there actually work
    • Upgradability: get upgrades working
    • Pro-Features: add new stuff
    • Pro-Fixes: Fix what's there, pay down debt
  • Clubs: People expressing concern with the difficulty of getting attention or breaking into perceived cliques.

The product focus theme was pretty stark. There were two responses that could be interpreted as "pro-feature" versus 56 that could be interpreted as "pro fix what's there". In general many of the responses could be interpreted as "this could be good if we cleaned up after ourselves". It is important, however, to take that attitude with a grain of salt. I think every large software project ever has presented that face at some point.

Product Focus

The clubbiness is a huge concern. We frequently speak about wanting OpenStack to be inclusive across many cultures and to many types of contributions. Many people perceive that there are strong groups in OpenStack and some of them are difficult to break into. Some of this is the result of technical learning curves, but some of it is social.

Overall it was challenging to get a specific read on what needs to change to make the experience of OpenStack developers happier or more satisfied. It was clear from the analysis that a lot of people want things to change but often that change is in conflicting directions. In the image below "Negative" does not mean "everything sucks" but rather that something is not how a respondent would like it to be.

General Feel

Based on my analysis of the survey results I made a few recommendations to the board for where we should focus additional attention:

  • Continue increasing focus on cross-project (activity and recognition)
  • Enhance focus on fixing what exists and paying down tech debt
  • More technical leadership
  • Faster creation of more cores
  • Do something about Nova (identified as a leading culprit where people experience difficulty "breaking in", something that has been talked about for years)
  • Ensure higher fidelity communication between upstream devs, and management and resourcing decision makers (there's a mistaken belief that upstream contributors are effectively communicating about their upstream experience to their internal management)
  • Keep having piano bar at PTG

Some of these are already in progress, others will get started in 2018. Suggestions or counterpoints are of course welcome. For now this is still in the information gathering and analysis stage, it would be premature to make any dramatic decisions based on the info gathered thus far.

I hope that by broaching the topic we can make it a bit less taboo and start working to find solutions at both the community and individual level to allow contributors to be productive and happy in a sustainable fashion. Even if the number of heroes burning themselves out and the number of contributors yearning for change is small, any number is too big.

by Chris Dent at November 19, 2017 04:30 PM

November 17, 2017

Ed Leafe

Sydney Summit Recap

Last week was the OpenStack Summit, which was held in Sydney, NSW, Australia. This was my first summit since the split with the PTG, and it felt very different than previous summits. In the past there was a split between the business community part of the summit and the Design Summit, which was where the … Continue reading "Sydney Summit Recap"

by ed at November 17, 2017 06:36 PM

OpenStack Superuser

OpenStack basics: An overview for the absolute beginner

SYDNEY — For the uninitiated, OpenStack’s role in cloud infrastructure can be a little hard to understand unless you know its capabilities and how it operates. And if you’re new to infrastructure-as-a-service, OpenStack can look a lot like virtualization or simply like public cloud offerings.

Ben Silverman offered this hour-long intro at the recent Sydney Summit that starts at the very beginning — what cloud is, the origin story of Amazon Web Services, OpenStack and why open source matters — straight through to architecture and managed OpenStack.

Along the way, he takes you on a walk-through of early services. This tour of the Project Navigator includes Nova and Swift, which provided VMs and storage and were the building blocks for every OpenStack project today. He'll also take you on a journey through some of the latest services that have extended OpenStack into a stable and fully functional cloud operating system for the enterprise. His birds-eye view also goes into what drives OpenStack adoption for the enterprise (spoiler alert: cost, operational efficiency and accelerated innovation.)

He also takes a 20,000-foot view of why all of this matters.

“The value of cloud computing is in the outcomes it enables,” he said. “It’s like the value of an elliptical trainer — the value is in building heart health or losing weight.”

In a Q&A, he gives advice from a small cloud provider about how to work with a small team. "It's a myth that you need a huge team of Python developers to run OpenStack," he says. "You just need someone willing to learn the architecture and the tools to manage it like anything else." It can be as few as two people dedicated to it managing up to 10,000 instances, he adds.

Silverman is a principal cloud architect for OnX. An international cloud activist, he’s also co-author of “OpenStack for Architects.”  He started his OpenStack career in 2013 by designing and delivering American Express’ first OpenStack environment, worked for Mirantis as a senior architect and has been a contributing member of the OpenStack Documentation team since 2014. He also recently wrote for Superuser on how to get a job with OpenStack.

You can check out the entire video (1 hour, 11 minutes) below.

 

Cover Photo // CC BY NC

The post OpenStack basics: An overview for the absolute beginner appeared first on OpenStack Superuser.

by Superuser at November 17, 2017 02:52 PM

OpenStack Blog

Developer Mailing List Digest November 11-17

Summaries

  • POST /api-sig/news [0]
  • Release countdown for week R-14, November 18-24 [1]

[0] – http://lists.openstack.org/pipermail/openstack-dev/2017-November/124633.html
[1] – http://lists.openstack.org/pipermail/openstack-dev/2017-November/124631.html

 

Upstream Long Term Support Releases

The Sydney Summit had a very well attended and productive session [0] about how to keep a selection of past releases available and maintained for long term support (LTS).

In the past the community has asked people who are interested in old branches being maintained for a long time to join the Stable Maintenance team, with the premise that if the stable team grew, it could support more branches for longer periods. This has been repeated for years and is not working.

This discussion is about allowing collaboration on patches beyond end of life (EOL) and enabling whoever steps up to maintain longer-lived branches to come up with a set of tests that actually match their needs, tests that would be less likely to bitrot due to the changing OS/PyPI substrate. We need to lower expectations: what we produce will get more brittle the older the branch gets. Any burden created by taking on more work is absorbed by the people doing the work, so it does not unduly impact the folks not interested in doing that work.

The idea is to continue the current stable policy more or less as it is: development teams take responsibility for a couple of stable branches. At the point where we would now call a branch EOL, instead of deleting it we would leave it open and establish a new team of people who want to continue to maintain that branch. It's anticipated that the members of those new teams will come mostly from users and distributors. Not all branches are going to attract teams to maintain them, and that's OK.

We will stop tagging these branches so the level of support they provide is understood. Backports and other fixes can be shared, but to consume them, a user will have to build their own packages.

Test jobs will run as they are, and the team that maintains the branch can decide how to deal with them. Fixing the jobs upstream where possible is preferred, but depending on who is maintaining the branch, the level of support they are actually providing and the nature of the breakage, removing individual tests or whole jobs is another option. Using third-party testing came up but is not required.

Policies for the new teams being formed to maintain these older branches are being discussed in an etherpad [2].

Some feedback in the room suggested doing one release a year whose branch isn't deleted after a year: do one release a year and still keep N-2 stable releases around, still doing backports to all open stable branches. Basically do what we're doing now, but once a year.

Discussion on this suggestion extended to the OpenStack SIG mailing list [1], suggesting that skip-release upgrades are a much better way to deal with upgrade pain than extending cycles. Releasing every year instead of every six months means our releases will contain more changes, and the upgrade could become more painful. We should release as often as we can and make the upgrades less painful so versions can be skipped.

We have so far been able to find people to maintain stable branches for 12-18 months. Keeping N-2 branches open for annual releases would mean extending that support period to two-plus years. If we're going to do that, we need to address how we are going to retain contributors.

When you don’t release often enough, the pressure to get a patch “in” increases. Missing the boat and waiting for another year is not bearable.

[0] – https://etherpad.openstack.org/p/SYD-forum-upstream-lts-releases
[1] – http://lists.openstack.org/pipermail/openstack-sigs/2017-November/000149.html
[2] – https://etherpad.openstack.org/p/LTS-proposal

by Mike Perez at November 17, 2017 10:08 AM

November 16, 2017

Red Hat Stack

Hooroo! Australia bids farewell to incredible OpenStack Summit

We have reached the end of another successful and exciting OpenStack Summit. Sydney did not disappoint, giving attendees a wonderful show of weather ranging from rain and wind to bright, brilliant sunshine. The running joke was that Sydney was, again, just trying to be like Melbourne. Most locals will get that joke, and hopefully now some of our international visitors do, too!

Monty Taylor (Red Hat), Mark Collier (OpenStack Foundation), and Lauren Sell (OpenStack Foundation) open the Sydney Summit. (Photo: Author)

And much like the varied weather, the Summit really reflected the incredible diversity of both technology and community that we in the OpenStack world are so incredibly proud of. With over 2300 attendees from 54 countries, this Summit was noticeably more intimate but no less dynamic. Often having a smaller group of people allows for a more personal experience and increases the opportunities for deep, important interactions.

To my enjoyment I found that, unlike previous Summits, there wasn’t as much of a singularly dominant technological theme. In Boston it was impossible to turn a corner and not bump into a container talk. While containers were still a strong theme here in Sydney, I felt the general impetus moved away from specific technologies and into use cases and solutions. It feels like the OpenStack community has now matured to the point that it’s able to focus less on each specific technology piece and more on the business value those pieces create when working together.

Jonathan Bryce (OpenStack Foundation) (Photo: Author)

It is exciting to see both Red Hat associates and customers following this solution-based thinking with sessions demonstrating the business value that our amazing technology creates. Consider such sessions as SD-WAN – The open source way, where the complex components required for a solution are reviewed, and then live demoed as a complete solution. Truly exceptional. Or perhaps check out an overview of how the many components to an NFV solution come together to form a successful business story in A Telco Story of OpenStack Success.

At this Summit I felt that while the sessions still contained the expected technical content they rarely lost sight of the end goal: that OpenStack is becoming a key, and necessary, component to enabling true enterprise business value from IT systems.

To this end I was also excited to see over 40 sessions from Red Hat associates and our customers covering a wide range of industry solutions and use cases. From telcos to insurance companies, it is really exciting to see both our associates and our customers sharing their experiences, especially in Australia and New Zealand, with the world.

Mark McLoughlin, Senior Director of Engineering at Red Hat with Paddy Power Betfair’s Steven Armstrong and Thomas Andrew getting ready for a Facebook Live session (Photo: Anna Nathan)

Of course, there were too many sessions to attend in person, and with the wonderfully dynamic and festive air of the Marketplace offering great demos, swag, food, and, most importantly, conversations, I’m grateful for the OpenStack Foundation’s rapid publishing of all session videos. It’s a veritable pirate’s bounty of goodies and I recommend checking it out sooner rather than later on their website.

I was able to attend a few talks from Red Hat customers and associates that really got me thinking and excited. The themes were varied, from the growing world of Edge computing, to virtualizing network operations, to changing company culture; Red Hat and our customers are doing very exciting things.

Digital Transformation

Take for instance Telstra, who are using Red Hat OpenStack Platform as part of a virtual router solution. Two years ago the journey started with a virtualized network component delivered as an internal trial. This took a year to complete and was a big success from both a technological and cultural standpoint. As Senior Technology Specialist Andrew Harris from Telstra pointed out during the Q&A of his session, projects like this are not only about implementing new technology but also about “educating … staff in Linux, OpenStack and IT systems.” It was a great session, co-presented with Juniper and Red Hat, that really gets into how Telstra is able to deliver key business requirements such as reliability, redundancy, and scale while still meeting strict cost requirements.


Of course this type of digital transformation story is not limited to telcos. The use of OpenStack as a catalyst for company change as well as advanced solutions was seen strongly in two sessions from Australia’s Insurance Australia Group (IAG).

Eddie Satterly, IAG (Photo: Author)

Product Engineering and DataOps Lead Eddie Satterly recounted the journey IAG took to consolidate data for a better customer experience using open source technologies. IAG uses Red Hat OpenStack Platform as the basis for an internal open source revolution that has not only led to significant cost savings but has even resulted in the IAG team open sourcing some of the tools that made it happen. Check out the full story of how they did it and join TechCrunch reporter Frederic Lardinois, who chats with Eddie about the entire experience. There’s also a Facebook Live chat Eddie did with Mark McLoughlin, Senior Director of Engineering at Red Hat, that further tells their story.


Ops!

An area of excitement for those of us with roots in the operational space is the way that OpenStack continues to become easier to install and maintain. The evolution of TripleO, the upstream project for Red Hat OpenStack Platform’s deployment and lifecycle management tool known as director, has really reached a high point in the Pike cycle. With Pike, TripleO has begun utilizing Ansible as the core “engine” for upgrades, container orchestration, and lifecycle management. Check out Senior Principal Software Engineer Steve Hardy’s deep dive into all the cool things TripleO is doing and learn just how excited the new “openstack overcloud config download” command will make you and your Ops team.

Steve Hardy (Red Hat) and Jaromir Coufal (Red Hat) (Photo: Author)

And as a quick companion to Steve’s talk, don’t miss his joint lightning talk with Red Hat Senior Product Manager Jaromir Coufal, lovingly titled OpenStack in Containers: A Deployment Hero’s Story of Love and Hate, for an excellent 10-minute intro to the journey of OpenStack, containers, and deployment.

Want more? Don’t miss these sessions …

Storage and OpenStack:

Containers and OpenStack:

Telcos and OpenStack:

A great event

Although only three days long, this Summit really did pack a sizeable amount of content into that time. Being able to have the OpenStack world come to Sydney and enjoy a bit of Australian culture was really wonderful. Whether we were watching the world famous Melbourne Cup horse race with a room full of OpenStack developers and operators, or cruising Sydney’s famous harbour and talking the merits of cloud storage with the community, it really was a unique and exceptional week.

The Melbourne Cup is about to start! (Photo: Author)

The chance to see colleagues from across the globe, immersed in the technical content and environment they love, supporting and learning alongside customers, vendors, and engineers is incredibly exhilarating. In fact, despite the tiredness at the end of each day, I went to bed each night feeling more and more excited about the next day, week, and year in this wonderful community we call OpenStack!

See you in Vancouver!

Photo: Darin Sorrentino

by August Simonelli, Technical Marketing Manager, Cloud at November 16, 2017 10:56 PM

OpenStack Superuser

OpenStack Summit Sydney recap: 50 things you need to know

SYDNEY — From the moment the keynote doors opened to catching the excitement of the Melbourne Cup to the close of the last breakout session, the OpenStack Summit Sydney went all out.

If you didn’t attend—or if you did and want a replay—Superuser collected the announcements, user stories and Forum discussions you may have missed. You can also catch videos for the Summit sessions on the OpenStack website.

Jump to community activities, technical decisions and roadmap discussions
Jump to new case studies
Jump to news from the OpenStack ecosystem
Jump to what’s next

Let’s start with the OpenStack Foundation announcements:

1. OpenStack community leaders announced a new plan to overcome the hardest problem in open source today: integrating and operating open source technologies to solve real-world problems. The OpenStack Foundation and community are investing significant financial and technical resources in a four-part strategy to address integration of OpenStack and relevant open source technologies: documenting cross-project use cases, collaborating across communities, including upstream contributions to other open source projects, fostering new projects at the OpenStack Foundation and coordinating end-to-end testing across projects.

2. Launched on Monday during keynotes, the Public Cloud Passport Program is a collaborative effort between OpenStack public cloud providers to let users experience the freedom, performance and interoperability of open source infrastructure.

3. OpenLab is a community-led program to test and improve support for the most popular Software Development Kits (SDKs)—as well as platforms like Kubernetes, Terraform, Cloud Foundry and more—on OpenStack. The goal is to improve the usability, reliability and resiliency of tools and applications for hybrid and multi-cloud environments.

4. During Sunday’s OpenStack Board of Directors meeting, Tencent was approved as a Foundation Gold Member, securing the last available spot among the OpenStack Foundation’s Platinum and Gold Members. Shenzhen-based Tencent is one of the largest tech companies in Asia and one of the top five internet companies worldwide by market capitalization.

5. The OpenStack User Survey report was published Monday morning, highlighting 95 percent growth in deployments in 2017 compared to 2016, including the growth of OpenStack adoption in mainstream industries like finance, government, research, retail and telecom. Findings from this report were also supported by recent research from Cloudify, Heavy Reading, SDxCentral and SUSE.

Here are some of the key community activities, technical decisions and roadmap discussions:

6. The Tencent TStack Team won the Superuser Awards during Monday’s keynote. With an impressive use case and contributions back to the community, they’re the eighth organization to take home the honor.

7. Working with the community, the OpenStack Foundation launched the first version of a new OpenStack community portal to ease the on-boarding process for new contributors. Featured content includes onboarding documentation for new upstream contributors, sample configurations for operators, API guides for end users and Upstream Institute, an in-person training session run at many OpenStack events that teaches new contributors about the tools and philosophy behind open-source collaboration.

8. At the Sydney Summit, China UnionPay led the inaugural meeting of the OpenStack Community Financial Services Team, which was launched to help identify and fill in the gaps for OpenStack adoption in the financial industry. Learn more about how you can get involved with the team.

9. At the Forum, the entire OpenStack community gathered to brainstorm requirements for the next release, give feedback on the past version and talk long-term strategy. The Sydney Forum was the start of the planning phase for the Rocky development cycle. Over the three days of the Summit, people dug into issues like upstreaming long-term support releases and programs like the Passport Program and the new “open infrastructure” strategy. You can catch up with all the Etherpads here.

10. The OpenStack Scientific Special Interest Group (SIG) released the second version of its HPC book, “The Crossroads of Cloud and HPC: OpenStack for Scientific Research.” Each section has been updated to reflect the now-current status of OpenStack projects typically used in specific science and research implementations. Learn more about the updated book and how you can get involved in the SIG.

11. In a recent report from SDxCentral, 87 percent of respondents said that OpenStack would be used to manage edge infrastructure. This significant interest was reflected in the Sydney Summit schedule with several sessions around the emerging use case from organizations including AT&T, Cisco, Ericsson, Huawei, Verizon, Vodafone and more. Beth Cohen, cloud technology strategist at Verizon, also led a Forum session around edge architectures and use cases.

Now, a word from OpenStack users in production

Over 30 OpenStack users spoke at the Summit last week, including Australian companies speaking publicly for the first time. Below is a sample of some of the users who spoke last week, but you can catch them all on the OpenStack videos page.

12. With only four team members, Adobe Advertising Cloud is managing 100,000 cores of OpenStack to power their infrastructure with 30 percent cost savings compared to public cloud. Recognizing the need for a hybrid cloud environment, Nicolas Brousse and Joseph Sandoval discussed their cloud bursting strategy and their plans to further grow their OpenStack private cloud footprint, which currently spans six datacenters on three continents.

13. American Airlines and DXC shared their experience with OpenStack private cloud deployments in the enterprise realm, discussing different considerations such as automation, integration and application and platform-as-a-service (PaaS).

14. To satisfy growing customer demands and an ever-expanding network, AT&T discussed why the evolution of the OpenStack community’s culture and progress is crucial. During Monday’s keynotes Sorabh Saxena, president of business operations at AT&T Business Solutions, talked about the portfolio of products that rely on their AT&T Integrated Cloud (AIC), powered by OpenStack. These services include Network on Demand, DirecTV Now, Cricket Wireless and FirstNet, the first broadband network dedicated to America’s police, firefighters and emergency medical services.

15. Atmail, a commercial Linux messaging platform provider based in Peregian Beach, Australia, discussed why they chose OpenStack to power 15 percent of their infrastructure, which serves over 150 million email accounts in nearly 100 countries. 

16. CERN and StackHPC described some of the early work in supporting large-scale deployments and PaaS offerings to support prototyping and flexible production application delivery for scientists. Specifically, they discussed the scientific goals and infrastructure of the next generation of the Large Hadron Collider (LHC) at CERN and the Square Kilometre Array (SKA) radio telescope, particularly focusing on bare-metal OpenStack providing RDMA-enabled execution frameworks and underlying storage.

17. China Mobile talked about how they designed an automatic integration and testing system for their NFV/SDN network, which includes more than 10 OpenStack clouds deployed in four different provinces in China, with multiple vendors of hardware, virtualized infrastructure and VNF software.

18. In addition to launching the OpenStack Community Financial Services Team, Superuser Awards finalist China UnionPay shared their five years of experience in building a financial cloud powered by OpenStack. China UnionPay’s OpenStack cloud supports 500 million users, averaging 50 million transactions per day and 3,000 transactions per second at peak.

19. Superuser Awards finalist China Railway Corporation shared their experience with the adoption of open source and the building of their private cloud, SinoRail Cloud. They discussed the prospect of the open source cloud to promote railway business innovation, as well as the trend and direction of cloud computing in large traditional enterprises. The SinoRail Cloud supported a daily average of 31 billion page views during a recent festival, and it has also supported the stable, safe and 24/7 uninterrupted operation of real-time dispatching management for all the trains, locomotives and vehicles.

20. Financial services are one of the fastest-growing industries for OpenStack. Quinton Anderson, head of systems engineering at Commonwealth Bank, shared how open source “in the wild” is coming together for Commonwealth, one of Australia’s largest banks.

21. The Garvan Institute of Medical Research, one of Australia’s largest medical research institutions and in the top five worldwide for genome research, adopted OpenStack to better serve its users, data scientists and researchers. At the Sydney Summit, they went into detail on how OpenStack is helping provide innovative solutions for analyzing genomic data to both its genome sequencing center and its 80+ data scientists.

22. Insurance Australia Group (IAG) talked about the lessons they learned with their migration of many disparate systems into a consolidated open infrastructure, underpinned by Red Hat’s OpenStack technology. In a recent interview with ComputerworldUK, IAG shared how they are saving millions with OpenStack. 

23. Partnering with Red Hat and NEC, KDDI Research showed how they expanded the scope of their framework to use monitoring data for failure recovery, citing benefits such as faster fault detection through shorter-interval monitoring and silent-failure detection through machine learning over various types of metrics.

24. Acknowledging the high costs of public cloud, Massachusetts Open Cloud presented its use case around High-Throughput Computing (HTC) and how this capability is available in the Massachusetts Open Cloud Marketplace, Giji.

25. In a technical presentation around converting Ceilometer from “legacy” to “Gnocchi” mode in a Mitaka cloud, Overstock.com discussed tradeoffs in performance, difficulties finding relevant documentation, and improvements that appear to be on the way.

26. While Stackers around the venue were betting on the Melbourne Cup—hats off to those who chose Rekindling—the Paddy Power Betfair team shared why they’re betting on OpenStack. In a range of sessions to share their use case and learnings, the former Superuser Awards winner covered load balancing, container self-service deployment using Kubernetes on OpenStack and how they have migrated over 400 applications to their OpenStack cloud.

27. Saudi Telecom Company (STC), a $36 billion telecommunications giant based in Saudi Arabia, decided to build upon its dominant market position and offer public cloud services to the government, enterprise and SMB sectors across the Kingdom and the wider Middle East region. STC selected OpenStack technologies for the basis of their cloud and partnered with Mirantis and NetApp to help build out a reliable and cost-effective solution. In Sydney, they shared their preliminary goals as well as the challenges and economic decisions made along the way.

28. Telstra joined a panel to present its in-development use case for migrating from physical network functions (PNF) to virtual network functions (VNF), as well as to discuss the key challenges telcos face in integrating new network virtualization environments with legacy platforms, such as achieving 50ms convergence.

29. Professors Brendan Mackey and Gary Egan from the Nectar cloud discussed how OpenStack-powered research is enabling significant advancements in fields like climate change and brain imaging. With 12,000 registered users and over 27,000 cores, the Nectar cloud is used by 13 strategic research areas and over 300 projects with competitive research grant funding.

30. After presenting an edge computing demo at the Boston Summit, Verizon returned with two breakout sessions—the illusion of infinite capacity and massively distributed OpenStack, thinking outside the datacenter—as well as a Forum session to discuss reference architectures around edge computing.

31. Two weeks prior to the Summit, Workday published a case study on its strategy to evolve toward OpenStack in every data center, encompassing VMs and bare metal. At the Sydney Summit, Edgar Magana, senior principal software development engineer, discussed the SaaS company’s current OpenStack footprint of 50,000 cores, and their plans to triple capacity by the end of 2018.

32. Yahoo! dove into how they are migrating hyper-scale enterprise from legacy infrastructure to OpenStack. Recognizing that it’s a daunting task, architect director James Penick discussed the technical solutions his team leveraged to make it a reality.

This just in from the OpenStack ecosystem


33. 451 Research published its latest OpenStack study on Monday at the Summit. Their prediction: service providers with OpenStack private cloud revenue will exceed revenue from service providers with OpenStack-based public cloud implementations in 2018, sooner than previously expected. Read more here.

34. Superuser Awards finalist City Network announced that SBAB, Sweden’s fifth-largest bank, with 350,000 customers and loan and savings products for consumers, tenant-owner associations and property companies in Sweden, is using the company’s sector-specific service, City Cloud for Bank & Finance, to power some of the bank’s IT infrastructure.

35. Cloud Foundry recently launched Container Runtime as the default container deployment method for Cloud Foundry via Kubernetes and BOSH. The Cloud Foundry Foundation also launched The Foundry, an online marketplace for its expanding ecosystem.

36. Cumulus Networks, the leading provider bringing web-scale networking to enterprise cloud, announced the availability of Cumulus in the Cloud for OpenStack environments.

37. Among research findings, Heavy Reading released results from a recent survey of telecom providers noting that 84 percent say that OpenStack is essential or important to their company’s success. Telecoms who are already implementing their NFV and cloud strategies are even more positive, with 96 percent and 85 percent respectively saying that it’s “essential” or “important.”

38. Mellanox Technologies announced the Innova-2 product family of FPGA-based smart network adapters. Innova-2 is the industry leading programmable adapter designed for a wide range of applications, including security, cloud, Big Data, deep learning, NFV and high performance computing.

39. NetApp, Cisco and Red Hat are delivering a jointly designed, developed and tested Cisco Validated Design for OpenStack deployment on FlexPod SF. The central benefit of this FlexPod design is that it is fundamentally easier to deploy when compared to acquiring and configuring disparate storage, compute and networking. Read more about it here.

40. Shortly after the Sydney Summit, Rackspace announced a partnership with HPE to deliver a pay-as-you-go OpenStack private cloud. This model is estimated to provide 40 percent cost savings compared to public cloud and to give private cloud customers the ability to more closely align resources to growth and to handle burst capacity and traffic spikes without the need to pay for additional fixed capacity.

41. Radiance Technologies, an employee-owned small business offering a variety of services and products for government and commercial customers, deployed a tailored solution for the Department of Defense (DoD) with Red Hat Cloud Suite.

42. Red Hat OpenStack Platform 12 introduced containerized services, improving flexibility while decreasing complexity for faster application development. Red Hat OpenStack Platform 12 delivers many new enhancements, including upgraded distributed continuous integration (DCI) and improved security to help maintain data compliance and manage risk.

43. Momentum has grown for VNF Certification, with 31 VNF technologies now certified on the Red Hat OpenStack Platform. These certifications allow customers to deploy with more confidence that their chosen solution is enterprise-ready and supported by Red Hat and the specific partner.

44. To support their cost savings use case, IAG is using Red Hat OpenStack Platform to help consolidate and simplify its legacy infrastructure. A trusted partner of IAG for seven years, Red Hat is now helping IAG to connect disparate data sources into a single, scalable private cloud solution to improve customer experience.

45. Red Hat validated the strength of OpenStack adoption in APAC, citing enterprises—such as CapitalOnline Data Service, CargoSmart Limited, Lotte Data Communication Company, and MyRepublic—as customers deploying Red Hat OpenStack Platform to power their hybrid and private clouds for a variety of mission-critical deployments.

46. Storage Made Easy presented M-Stream, a new addition to their Enterprise File Fabric. M-Stream, currently in beta, is a feature that parallelizes large downloads for object storage clouds such as Ceph, OpenStack Swift and Amazon S3. Download the full press release here.

47. Vault Systems, an Australian built and managed highly secure cloud platform, announced a plan to invest $350 million in cloud infrastructure for Australian federal, state and local government. The initiative is supported by Moelis Australia, an Australian financial services group listed on the Australian Securities Exchange. The Moelis Australia Government Infrastructure Fund will build on Vault Systems’ Australian-based and locally owned secure footprint which is already home to global leading technology from vendors including Microsoft, SAP, ServiceNow and Oracle.  

48. VEXXHOST, an OpenStack infrastructure-as-a-service provider, launched a new OpenStack consulting service. The service aims to provide companies with the assistance and guidance needed to get any OpenStack project up and running smoothly.

49. VirTool Networks Inc. announced the release of VirTool Network Analyzer version 1.1, a significant update to their tool for fixing network problems on OpenStack cloud systems. VirTool Network Analyzer provides cloud-wide network visualization as well as groundbreaking distributed packet capture and path tracing, addressing the needs of large-scale cloud operators and enterprise customers.

50. ZTE Corporation officially released TECS 6.0, the new-generation Cloud Platform. Based on OpenStack Pike, TECS 6.0 features cloud native microservice architecture and an edge datacenter design. It also employs FPGA hardware acceleration technology and boasts powerful ICT convergence resource orchestration capability.

What’s next


That’s a strong finish for the Sydney Summit, but we’re already thinking about our next run.

We had such a great time at the 2015 Summit in Vancouver that we decided to go back! Join us at the Vancouver Summit May 21 – 24, 2018.

While you’re saving dates, mark November 13-15, 2018 for the OpenStack Summit Berlin.

The post OpenStack Summit Sydney recap: 50 things you need to know appeared first on OpenStack Superuser.

by Superuser at November 16, 2017 08:48 PM

Lee Yarwood

OpenStack TripleO FFU Keystone Demo N to Q

This post will introduce a very rough demo of the new TripleO Fast-forward Upgrades (FFU) feature, warts and all, using an overcloud with only Keystone deployed. This should prove to be a useful starting point for anyone interested in this feature and could even be an approach used for future per-service FFU CI jobs.

Environment

I’m currently using the tripleo-quickstart project to deploy virtualised test environments. For this demo I’m using the following command line to create the demo environment:

$ bash quickstart.sh -w $WD -t all -R master-undercloud-newton-overcloud  \
   -c config/general_config/keystone-only.yml \
   -N config/nodes/1ctlr.yml $VIRTHOST

This is made possible by the following unmerged changes to tripleo-quickstart:

https://review.openstack.org/#/q/topic:keystone_only_overcloud

Once deployed you should find the 10.0.3 Newton version of Keystone deployed on overcloud-controller-0:

$ ssh -F $WD/ssh.config.ansible overcloud-controller-0
[..]
$ rpm -qi openstack-keystone
Name        : openstack-keystone
Epoch       : 1
Version     : 10.0.3
Release     : 0.20170726120406.bd49c3e.el7.centos
Architecture: noarch
Install Date: Fri 10 Nov 2017 04:24:46 AM UTC
Group       : Unspecified
Size        : 175014
License     : ASL 2.0
Signature   : (none)
Source RPM  : openstack-keystone-10.0.3-0.20170726120406.bd49c3e.el7.centos.src.rpm
Build Date  : Wed 26 Jul 2017 12:07:53 PM UTC
Build Host  : n30.pufty.ci.centos.org
Relocations : (not relocatable)
URL         : http://keystone.openstack.org/
Summary     : OpenStack Identity Service
Description :
Keystone is a Python implementation of the OpenStack
(http://www.openstack.org) identity service API.

Before starting the upgrade I recommend that snapshots of the undercloud and overcloud-controller-0 libvirt domains are taken on the virthost:

$ ssh -F $WD/ssh.config.ansible virthost
$ for domain in $(virsh list | grep running | awk '{print $2 }'); do virsh snapshot-create-as ${domain} ${domain}_start ; done
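
Should anything go wrong later, the domains can be rolled back to these snapshots from the virthost. A minimal sketch (my addition, not part of the original demo), assuming the `_start` snapshot names created above:

```shell
# Revert each running domain to the "<domain>_start" snapshot taken earlier
for domain in $(virsh list --name); do
    virsh snapshot-revert "${domain}" "${domain}_start"
done
```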

UC - docker_registry.yaml

As with a normal container-based deployment on >=Pike, we will need a Docker registry file mapping each service to a container image. The following command will create this file, pointing at the official RDO registry:

$ openstack overcloud container image prepare \
  --namespace trunk.registry.rdoproject.org/master \
  --tag tripleo-ci-testing \
  --output-env-file ~/docker_registry.yaml

Note that this will result in the container images being pulled from the remote RDO registry during the upgrade. We can pre-cache these images on the undercloud to speed the process up. However, as we are only using a single host and a minimal number of services in this demo, I have chosen to skip this for now.
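
For anyone who does want to pre-cache, a sketch of one approach (the output file name is my choice, and this assumes the `--output-images-file` and `upload` options available in python-tripleoclient of this era):

```shell
# Generate the list of container images, then pull them into the
# undercloud's local registry so the upgrade doesn't hit the remote one
openstack overcloud container image prepare \
  --namespace trunk.registry.rdoproject.org/master \
  --tag tripleo-ci-testing \
  --output-images-file ~/overcloud_containers.yaml
openstack overcloud container image upload \
  --config-file ~/overcloud_containers.yaml
```

The docker_registry.yaml environment file would then need to point at the undercloud registry instead of the remote RDO one.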

UC - tripleo-heat-templates

FFU itself is controlled by an Ansible playbook using tasks that are contained within the tripleo-heat-templates (THT) project. The following gerrit topic lists all of the current FFU changes up for review:

https://review.openstack.org/#/q/status:open+project:openstack/tripleo-heat-templates+branch:master+topic:bp/fast-forward-upgrades

For this demo we need to update the local copy of THT on the undercloud to include a subset of these changes:

$ cd /home/stack/tripleo-heat-templates
$ git fetch git://git.openstack.org/openstack/tripleo-heat-templates refs/changes/19/518719/2 && git checkout FETCH_HEAD

We also need the following noop-deploy-steps.yaml environment file that allows us to use openstack overcloud deploy to update the stack outputs of the overcloud without forcing an actual redeploy of any resources:

$ curl https://git.openstack.org/cgit/openstack/tripleo-heat-templates/plain/environments/noop-deploy-steps.yaml?h=refs/changes/97/520097/1 > environments/noop-deploy-steps.yaml

Finally, as we have deployed a custom set of services for the Controller role we now have to ensure that the Docker service is added to the role prior to our upgrade:

$ cat overcloud_services.yaml 
parameter_defaults:
  ControllerServices:
       - OS::TripleO::Services::Docker
       - OS::TripleO::Services::Kernel
       - OS::TripleO::Services::Keystone
       - OS::TripleO::Services::RabbitMQ
       - OS::TripleO::Services::MySQL
       - OS::TripleO::Services::HAproxy
       - OS::TripleO::Services::Keepalived
       - OS::TripleO::Services::Ntp
       - OS::TripleO::Services::Timezone
       - OS::TripleO::Services::TripleoPackages

OC - Ocata heat-agents

An older os-apply-config hiera hook and any legacy hiera data need to be removed from the overcloud prior to our upgrade. The following ML post has more details on this workaround:

http://lists.openstack.org/pipermail/openstack-dev/2017-January/110922.html

For the time being this isn’t part of the upgrade playbook and so we need to run the following commands that will update the heat-agents on the host to their Ocata versions and remove the legacy data:

$ sudo rm -f /usr/libexec/os-apply-config/templates/etc/puppet/hiera.yaml /usr/libexec/os-refresh-config/configure.d/40-hiera-datafiles /etc/puppet/hieradata/*.yaml
$ sudo yum install -y \
https://trunk.rdoproject.org/centos7-ocata/current-tripleo/openstack-heat-agents-1.0.1-0.20170412210405.769d0de.el7.centos.noarch.rpm \
https://trunk.rdoproject.org/centos7-ocata/current-tripleo/python-heat-agent-1.0.1-0.20170412210405.769d0de.el7.centos.noarch.rpm \
https://trunk.rdoproject.org/centos7-ocata/current-tripleo/python-heat-agent-ansible-1.0.1-0.20170412210405.769d0de.el7.centos.noarch.rpm \
https://trunk.rdoproject.org/centos7-ocata/current-tripleo/python-heat-agent-apply-config-1.0.1-0.20170412210405.769d0de.el7.centos.noarch.rpm \
https://trunk.rdoproject.org/centos7-ocata/current-tripleo/python-heat-agent-docker-cmd-1.0.1-0.20170412210405.769d0de.el7.centos.noarch.rpm \
https://trunk.rdoproject.org/centos7-ocata/current-tripleo/python-heat-agent-hiera-1.0.1-0.20170412210405.769d0de.el7.centos.noarch.rpm \
https://trunk.rdoproject.org/centos7-ocata/current-tripleo/python-heat-agent-json-file-1.0.1-0.20170412210405.769d0de.el7.centos.noarch.rpm \
https://trunk.rdoproject.org/centos7-ocata/current-tripleo/python-heat-agent-puppet-1.0.1-0.20170412210405.769d0de.el7.centos.noarch.rpm

OC - Remove ceilometer

At present there is a packaging issue when upgrading the openstack-ceilometer packages directly from Newton to Queens. As these packages are installed by default in the Newton overcloud-full image used to deploy the environment, but are not used in our demo, we can simply remove them for the time being:

$ sudo yum remove openstack-ceilometer* -y

UC - Update stack outputs

We can now use the openstack overcloud deploy command to update the overcloud stack and generate the new stack outputs, including the FFU playbook. To do this we simply add the previously created docker_registry.yaml, environments/docker.yaml and environments/noop-deploy-steps.yaml environment files to the original command used to deploy the environment.

$ . stackrc
$ openstack overcloud deploy \
  --templates /home/stack/tripleo-heat-templates \
[..]
  -e /home/stack/docker_registry.yaml \
  -e /home/stack/tripleo-heat-templates/environments/docker.yaml \
  -e /home/stack/tripleo-heat-templates/environments/noop-deploy-steps.yaml

The original command is logged under ~/overcloud_deploy.log on the undercloud, for example:

$ grep openstack\ overcloud\ deploy overcloud_deploy.log 
2017-11-16 14:36:11 | + openstack overcloud deploy --templates /home/stack/tripleo-heat-templates --libvirt-type qemu --control-flavor oooq_control --compute-flavor oooq_compute --ceph-storage-flavor oooq_ceph --block-storage-flavor oooq_blockstorage --swift-storage-flavor oooq_objectstorage --timeout 90 -e /home/stack/cloud-names.yaml -e /home/stack/tripleo-heat-templates/environments/network-isolation.yaml -e /home/stack/tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml -e /home/stack/network-environment.yaml -e /home/stack/tripleo-heat-templates/environments/low-memory-usage.yaml --validation-warnings-fatal -e /home/stack/overcloud_services.yaml --compute-scale 0 --ntp-server pool.ntp.org

UC - Download config

Now that the stack outputs have been updated we can download the overcloud config containing the FFU playbook onto the undercloud:

$ openstack overcloud config download

There is a known issue with the generated upgrade tasks at the moment where the ordering of conditionals causes Ansible to fail. To work around this, simply edit the following Ansible tasks within the Controller/upgrade_tasks.yaml file to ensure the step conditional is always checked first:

- block:
  - name: Upgrade os-net-config
    yum: name=os-net-config state=latest
  - changed_when: os_net_config_upgrade.rc == 2
    command: os-net-config --no-activate -c /etc/os-net-config/config.json -v --detailed-exit-codes
    failed_when: os_net_config_upgrade.rc not in [0,2]
    name: take new os-net-config parameters into account now
    register: os_net_config_upgrade
  tags: step3
  when:
  - step|int == 3
  - not os_net_config_need_upgrade.stdout and os_net_config_has_config.rc == 0

UC - Run playbook

With the config present on the undercloud we can finally start the FFU upgrade using the following command line:

$ ansible-playbook -i /usr/bin/tripleo-ansible-inventory \
    /home/stack/tmp/fast_forward_upgrade_playbook.yaml

OC - Verification

Once the FFU upgrade is complete we can verify that Keystone is functional in the overcloud with a few simple commands:

$ ssh -F $WD/ssh.config.ansible undercloud
$ . overcloudrc
$ openstack endpoint list
+----------------------------------+-----------+--------------+--------------+---------+-----------+----------------------------+
| ID                               | Region    | Service Name | Service Type | Enabled | Interface | URL                        |
+----------------------------------+-----------+--------------+--------------+---------+-----------+----------------------------+
| 15fd404ff8c14971b4251b81624edab8 | regionOne | keystone     | identity     | True    | admin     | http://192.168.24.10:35357 |
| 2e513f5fdfc140ec916b081b47a2b8f7 | regionOne | keystone     | identity     | True    | internal  | http://172.16.2.12:5000    |
| 96980f0f9ac44c718c038ef54af814bc | regionOne | keystone     | identity     | True    | public    | http://10.0.0.8:5000       |
+----------------------------------+-----------+--------------+--------------+---------+-----------+----------------------------+
$ openstack service list
+----------------------------------+------------+----------+
| ID                               | Name       | Type     |
+----------------------------------+------------+----------+
| 3fc546421e9048f39b2b847b13fa8ea5 | keystone   | identity |
| 7f819190dc6f44d8b995021277b24d67 | ceilometer | metering |
+----------------------------------+------------+----------+

We can also log into the overcloud-controller-0 host and verify that the relevant containers are running:

$ ssh -F $WD/ssh.config.ansible overcloud-controller-0
$ sudo docker ps
CONTAINER ID        IMAGE                                                                COMMAND                  CREATED              STATUS                          PORTS               NAMES
4f40f0cf98aa        192.168.24.1:8787/master/centos-binary-keystone:tripleo-ci-testing   "/bin/bash -c '/usr/l"   About a minute ago   Up About a minute                                   keystone_cron
0b9d5cc17f5d        192.168.24.1:8787/master/centos-binary-keystone:tripleo-ci-testing   "kolla_start"            About a minute ago   Up About a minute (healthy)                         keystone
db967d899aaf        192.168.24.1:8787/master/centos-binary-mariadb:tripleo-ci-testing    "kolla_start"            About a minute ago   Up About a minute (unhealthy)                       mysql
1f0b9aa72ec7        192.168.24.1:8787/master/centos-binary-rabbitmq:tripleo-ci-testing   "kolla_start"            2 minutes ago        Restarting (1) 29 seconds ago                       rabbitmq
8e689f5bac22        192.168.24.1:8787/master/centos-binary-haproxy:tripleo-ci-testing    "kolla_start"            2 minutes ago        Up 2 minutes                                        haproxy

As I said at the start, this is a very rough demo that we can hopefully clean up and iterate on quickly over the coming weeks. The current goal is to have another working demo available by M2 that covers all of the required services to upgrade the computes, so we can also start verification of the data plane during the upgrade.

November 16, 2017 01:00 PM

OpenStack Superuser

Get involved with the OpenStack Community Financial Services Team

The scope of the recently formed OpenStack Community Financial Services Team is to coordinate with users, service providers and developers to identify and fill in the gaps for OpenStack adoption in the financial industry.

Superuser talks to members of this group from China UnionPay and Intel about current goals of the team and how you can get involved.

Why are financial services organizations turning to OpenStack?

The financial industry is rapidly evolving along with new business innovations such as internet banking and mobile payments. Financial institutions also need to continually enhance and upgrade their IT infrastructure to satisfy business agility requirements and to reduce operating expenses.

What are some examples of workloads that financial services organizations want to run on OpenStack?

Many financial institutions are exploring new business services and ways of transitioning IT workloads to the cloud. UnionPay has been operating an OpenStack-based financial cloud in a live, stable production environment for more than 1,500 days (that's about four years), serving many UnionPay business services and applications. These include UnionPay core services such as online payments, mobile payments and Wallet. (Learn more from this session at the Sydney Summit.)

What are some unique challenges that financial services organizations face in regards to infrastructure? Open source?

There are stringent regulatory requirements for the implementation of financial institutions' infrastructure. There are also common best practices in the financial industry that must be adhered to (e.g. reference architecture design for financial data center network infrastructure with specific characteristics).

Driven by software-defined infrastructure, open-source solutions are an opportunity for financial institutions to explore new operation models but they also introduce new challenges. Although open-source solutions provide an innovative platform for financial institutions to optimize their IT infrastructure, it’s also a challenge for organizations to improve their own IT capabilities in open-source technologies and to acquire the necessary skill sets to build talented open-source teams.

What are the unique requirements that financial services organizations have for OpenStack? Any notable gaps?

While implementing OpenStack at UnionPay, we discovered a few functional gaps that need to be improved, such as a better network management capability to meet the complex network requirement of financial institutions. We also noticed a gap in OpenStack operational usability, such as the ability to upgrade OpenStack infrastructure in batches. We would also recommend that the OpenStack community build more and better support in data center management tools, such as F5 and ASA.

What are the goals of the new working group and what are some of the outputs the community can look forward to seeing?

We hope to investigate the needs and requirements of financial clouds with other industry partners.

A few key deliverables include:

  1. Documenting a successful OpenStack financial cloud use case
  2. Identifying gaps for OpenStack in financial clouds
  3. Developing reference architectures for specific technologies, such as software-defined networking reference architecture for financial networks

Industry regulations are clearly an important part of this mix. How much commonality do you see in regulations around the world (i.e. China, United States, European Union, etc.)? Do you expect there to be region-specific work?

At the moment, we have not concluded that there is any specific commonality or differentiation of regulations around the world (China, U.S., EU, etc). However, based on our initial experience with U.S. financial regulations, especially in the IT infrastructure realm, we can probably identify a few commonalities at the foundational infrastructure layer, such as data center infrastructure requirements.

We believe that a regional organizational structure is necessary and critically important for financial teams to promote the research and investigation of the needs of the global financial industry. First, we recommend organizing research work in specific regional areas (China, U.S., EU); then we can exchange research output and collaborate across these regional teams. Such a regional structure would allow teams to develop the work more effectively.

Which organizations are currently involved in this effort?

UnionPay is now leading the effort to establish the Team. Other financial institutions such as the Industrial & Commercial Bank of China, Agricultural Bank of China, Bank of China, Bank of Communications, Industrial Bank and Bank of Shanghai have also participated in related work. You can find more information on members at the Financial Team Wiki pages.

Do any of the financial services organizations involved currently contribute upstream? Are there any particular projects that are of interest to this group?

UnionPay is committed to contributing upstream. We have an open-source team of 50 engineers working on OpenStack projects.

We are specifically interested in contributing to Nova, Neutron and Keystone because we are actively using these projects in production.  We’re also interested in other innovative projects and features such as cross-data center management since these align with our future implementation strategy.

How can financial services organizations get involved?

We’ve set up the Wiki page at the OpenStack site (https://wiki.openstack.org/wiki/Financial_Team). Information on how to get involved can be found there as well.

We welcome all organizations/institutions to participate in the Financial Team. To join, just send us an email with some basic information.

We plan to share our results and deliverables through the Financial Team page or at https://github.com/openstack/workload-ref-archs. In the coming months, we’ll be organizing regular meetings and promoting joint research activities with relevant partners of common interest.

Cover Photo // CC BY NC

The post Get involved with the OpenStack Community Financial Services Team appeared first on OpenStack Superuser.

by Superuser at November 16, 2017 12:04 PM

November 15, 2017

OpenStack Superuser

How OpenStack powers genomics at Garvan Institute of Medical Research

SYDNEY — The Garvan Institute of Medical Research adopted OpenStack to better serve its users, data scientists and researchers. OpenStack and scientists have always gone together, from analyzing faraway galaxies to exploring the very basic pieces of time and space.

At Garvan, the science is genomics, a multi-disciplinary branch of molecular biology.  Genomics also involves the sequencing and analysis of genomes — and that’s where OpenStack comes into play.

“We started using OpenStack because we saw the value of segregating hardware resources and the opportunity to isolate and simplify the infrastructure configuration to fit the needs of each individual project,” Manuel Sopena Ballesteros, a big data engineer at the Institute, says.

OpenStack is helping Garvan, one of Australia's largest medical research institutions and in the top five worldwide for genome research, provide innovative solutions for analyzing genomic data to both its genome sequencing center and its 80+ data scientists. One of the most effective slides shown by Sopena Ballesteros at the Summit charted the cost of genome sequencing against Moore's law.

The cost of genome sequencing plummeted from $100 million in 2001 to around $1,000 in 2016. The result: a huge amount of data is now generated at a much lower cost, but the cost of processing power is not decreasing as quickly.

The Garvan Institute has been using an HPC cluster for its genomics applications. However, this legacy infrastructure cannot keep up with the increasingly large data sets. The HPC system, while providing high performance and hardware efficiency, is fairly rigid: not all tools can be adapted to the HPC environment. The Data Intensive Computer Engineering (DICE) group at Garvan implemented OpenStack using Kolla-Ansible to deploy hyper-converged environments based on Docker containers. The new cloud-based environment reduced the time and the number of servers needed for computing and storage. OpenStack allowed decoupling hardware from software to get granular resource allocation and to use bare metal, VMs, containers, shared storage and commodity hardware. Ultimately, the group hopes to reduce costs and eventually migrate the HPC system to OpenStack, too.

You can check out the entire 35-minute presentation below.

 

Cover Photo // CC BY NC

The post How OpenStack powers genomics at Garvan Institute of Medical Research appeared first on OpenStack Superuser.

by Superuser at November 15, 2017 02:33 PM

Cloudwatt

Volume : Incremental Backup

When you make an incremental backup, only the difference from the last increment is stored, so you optimize your storage needs and benefit from better backup performance.

The feature is available using the --incremental option in the CLI, through the Volume v2 API only.

How does that work? To make an incremental backup of a volume, a classic (full) backup is required first. The Volume service retrieves all available backups for the volume and makes the incremental backup based upon the last created backup (which may itself be an increment or not), and so on. A backup that serves as the base of an incremental backup can't be deleted. To know whether a backup is incremental, or whether it has dependent incremental backups, you can use the cinder backup-show command.
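As a sketch (volume and backup names here are illustrative), a typical sequence looks like this:

```shell
# Create an initial full backup of the volume (required before any increment).
cinder backup-create --name my-vol-full my-volume

# Create an incremental backup; it is based on the most recent backup.
cinder backup-create --incremental --name my-vol-incr-1 my-volume

# Inspect a backup: the is_incremental and has_dependent_backups fields
# show whether it is an increment and whether other backups depend on it.
cinder backup-show my-vol-incr-1
```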

If you wish to use this feature, you may need to adjust your quotas: the default quotas are not sized for it.
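For example, backup quotas can be raised with the OpenStack client (the limits and project name below are illustrative only):

```shell
# Raise the number of backups and the total backup gigabytes
# allowed for a project.
openstack quota set --backups 50 --backup-gigabytes 2000 my-project
```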

You can contact our support service at support@cloudwatt.com.

by Horia Merchi at November 15, 2017 12:00 AM

November 14, 2017

OpenStack Superuser

OpenStack at the crossroads of cloud and high-performance computing

Announced at the OpenStack Summit in Sydney, the second release of the book, “The Crossroads of Cloud and HPC: OpenStack for Scientific Research” is now available.

Some 200 copies of this new edition will be distributed at this year’s Supercomputing conference in Denver, November 12-17. If you’re attending, pick up your paperback copy at the HPC Birds of a Feather on Wednesday or at the OpenStack lightning talks at the Indiana University booth #601, from 10:15 a.m.-12:00 p.m. on Tuesday, November 14.

Every section has been updated to reflect the current status of the OpenStack projects typically used in specific science and research implementations. The Scientific Special Interest Group (SIG, formerly the Scientific Working Group) has been instrumental in identifying and contributing to the features needed for high-performance computing, massive data storage and high-speed networking access since its inception in April 2016. (More on the SIG and how to get involved here.)

New and expanding scientific research projects in the areas of radio telemetry, cancer genome research, astronomy and bioinformatics are driving the working group. The cover of the updated book features three dishes from the 36-dish precursor of the Square Kilometre Array (SKA), a multinational radio telescope coming online in the next decade to explore phenomena such as pulsars, gravity, and the cosmic dawn. In all, 16 in-depth case studies are included throughout the new book.

A new section has been added to the book about Research Cloud Federation. Several federations are covered including the EGI Federated Cloud, the Nordic e-Infrastructure Collaboration (NeIC), federated identity at CERN, and the Open Research Cloud project.

If you can’t make it to Supercomputing, you can always read, download or order the new book on the OpenStack in Science web page.

Cover Photo // CC BY NC

The post OpenStack at the crossroads of cloud and high-performance computing appeared first on OpenStack Superuser.

by Kathy Cacciatore at November 14, 2017 01:34 PM

NFVPE @ Red Hat

Automated TripleO upgrades

Upgrading TripleO can be a hard task. While there are instructions on how to do it manually, having a set of playbooks that automate the task can help. With this purpose, I've created the TripleO upgrade automation playbooks (https://github.com/redhat-nfvpe/tripleo-upgrade-automation). These are a set of playbooks that upgrade an existing TripleO deployment, focused especially on versions 8 to 10, and integrated with local mirrors (https://github.com/redhat-nfvpe/rhel-local-mirrors). If you want to know more, please visit the tripleo-upgrade-automation project on GitHub, where you'll find instructions on how to use the repo to automate your upgrades.

by Yolanda Robla Mota at November 14, 2017 10:30 AM

November 09, 2017

StackHPC Team Blog

Scheduling Baremetal Resources in Pike

OpenStack Pike

For many reasons, it is common for HPC users of OpenStack to use Ironic and Nova together to deliver baremetal servers to their users. In this post we look at recent changes to how Nova chooses which Ironic node to use for each user's nova boot request.

Flavors

To set the scene, let's look at what a user gets to choose from when asking Nova to boot a server. While there are many options relating to the boot image, storage volumes and networking, let's ignore these and focus on the choice of Flavor.

The choice of Flavor allows the user to specify which of the predefined combinations of CPU, RAM and disk best suits their needs. In many clouds the choice of flavor maps directly to how much the user has to pay. In some clouds it is also possible to choose between a baremetal server (i.e. using the Ironic driver) and a VM (i.e. using the libvirt driver) by picking a particular flavor, while most clouds use a single driver for all their instances.

Before Pike

Ironic manages an inventory of nodes (i.e. physical machines). We need to somehow translate Nova's flavor into a choice of Ironic node. Before the Pike release, this was done by comparing the RAM, CPU and disk resources for each node with what is defined in the flavor.

If you don't use the exact match filters in Nova, you will find Nova is happy to give users any physical machine that has at least the amount of resources requested in the flavor. This can lead to your special high-memory servers being used by people who only requested your regular type of server. Some consider this a feature: if you are out of small servers, your preference might be to give people a slightly bigger server instead.
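The exact match filters referred to here are Nova's ExactRamFilter, ExactCoreFilter and ExactDiskFilter. A minimal pre-Pike nova.conf sketch enabling them on the scheduler might look like the following (the filter list is abbreviated for illustration; your deployment will likely carry more filters):

```ini
[DEFAULT]
# Only hosts whose RAM, CPU and disk exactly match the flavor pass
# these filters, which is what you want for indivisible baremetal nodes.
scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,ComputeFilter,ExactRamFilter,ExactCoreFilter,ExactDiskFilter
```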

All this confusion comes because we are trying to manage indivisible physical machines using a set of descriptions designed for packing VMs onto a hypervisor, possibly taking into account a degree of overcommit. Things get even harder when you consider having both VM and baremetal resources in the same region, with a single scheduler having to pick the correct resources based on the user's request. At this point you need the exact match filters for only a subset of the hosts. This problem is now starting to be resolved by the creation of Nova's placement service.

The Resource Class

The new Placement API brings its own set of new terms. Let's just say a Resource Provider has an Inventory that defines what quantity of each Resource Class the Resource Provider has available. Users can get a set of Allocations for specific amounts of a Resource Class from a given Resource Provider. Note: while there is a set of well-known Resource Class names, you are also able to have custom names.

Furthermore, a Resource Provider can be tagged with Traits that describe the qualitative capabilities of the Resource Provider. The python library os-traits defines the standard Traits, but the system also allows custom traits. Ironic has recently added the ability to set a Resource Class on an Ironic Node.

In Pike, Nova now reads the Ironic node's resource_class property and, if it has been set, updates the Inventory of the Resource Provider that represents that Ironic node to have an amount of 1 available of the given custom Resource Class.

Using Ironic's Resource Classes

Lots of technical jargon in that last section. What does that really mean?

Well, it means we can divide all Ironic nodes into distinct subsets, and we can label each distinct subset with a Resource Class. For an existing system, you can update any Node to add a Resource Class. But be careful: once you add a Resource Class to a node, you can't change the field until the Ironic node is no longer being used (i.e. is back in the available state). (There are good reasons why, but let's leave that for another blog post.)
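As a sketch, labelling a node looks like this (the class name is illustrative; note that Nova will expose it with a CUSTOM_ prefix and normalized punctuation, e.g. CUSTOM_BAREMETAL_GOLD):

```shell
# Set a custom resource class on a node that is not currently in use.
openstack baremetal node set --resource-class baremetal.gold <node-uuid>

# Check the result.
openstack baremetal node show <node-uuid> -f value -c resource_class
```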

If you are adding new Nodes or creating a new cloud, you can use Ironic inspector rules to set the Resource Class to an appropriate value, in a similar way to initializing any of the other Node properties you can determine via inspection.

Mapping Resource Classes to Flavors

So here is where it gets more interesting. Now that we have defined these groups of Ironic nodes, we can map them to a particular Nova flavor. Here are the docs on how you do that.
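In outline, the mapping is done through flavor extra specs: request one unit of the custom Resource Class and zero out the standard VCPU, RAM and disk resources so they are not also requested. A sketch (flavor and class names illustrative):

```shell
openstack flavor set baremetal-gold \
  --property resources:CUSTOM_BAREMETAL_GOLD=1 \
  --property resources:VCPU=0 \
  --property resources:MEMORY_MB=0 \
  --property resources:DISK_GB=0
```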

Health warning time

You probably noticed our blog post on upgrading to Pike. Well, if you want to do this, you need to make sure you have a bug fix we helped develop to make it work. In particular, you want to be on a new enough version of Pike that you get this backport.

Without the above fix, you will find that adding flavor extra specs such as resources:VCPU=0 causes the Nova scheduler to start picking Ironic nodes that are already being used by existing instances, triggering lots of retries and likely lots of build failures.

One more health warning. If you set a resource class of CUSTOM_GOLD in Ironic, it will get registered in Nova as CUSTOM_CUSTOM_GOLD. As such, it's best not to add the CUSTOM_ prefix in Ironic. There is a lot of history around why it works this way; for more details see the bug on Launchpad.

An Unrelated Pike bug

While we are talking about Pike and using Ironic through Nova: if you have started using the experimental HA mode, where two or more nova-compute processes talk to one Ironic deployment, you will want to know about this bug, which means it is quite badly broken in Pike.

Once we have the fix for that merged, we will let you know what can be done for Pike based clouds in a future blog post.

Something you must do before you upgrade to Queens

In Pike there is a choice between the old scheduling world and the new Resource Class based world. But you must add a Resource Class to every Ironic node before you upgrade to Queens.

For more details on the deprecation of scheduling of Ironic nodes using VCPU, RAM and disk, please see the Nova release notes.

Once you update your Ironic nodes with the Resource Class (and once you are on the latest version of Pike that includes the bug fix), existing instances that previously never claimed the new Resource Class will have their allocations updated to claim it.

Why not use Mogan?

I hear you ask: why bother with Nova any more when there is this new project called Mogan that focuses on Ironic and ignores VMs?

Talking to our users, they like making use of the rich ecosystem around the Nova API that (largely) works equally well for both VMs and baremetal, be that the OpenStack support in Ansible or the support for orchestrating big data systems in OpenStack Sahara. In my opinion, this means it's worth sticking with Nova, and I am not just saying that because I used to be the Nova PTL.

Where we have got in Pike

In the SKA performance prototype we are now making use of Resource Class based placement. This means placement picks only an Ironic node that exactly matches what the flavor requests. Previously, because we did not use the exact filters or capabilities, we had GPU-capable nodes being handed out to users who had only requested a regular node.

When you look at the capacity of the cloud using the Placement API, it is now much simpler when you consider the available Resource Classes. You can see a prototype tool I created to query the capacity from Placement (using Ocata).
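A minimal sketch of querying Placement directly over HTTP (endpoint discovery and token handling simplified; the provider UUID is a placeholder you would fill in from the first listing):

```shell
# Get a token and the placement endpoint from the service catalog.
TOKEN=$(openstack token issue -f value -c id)
PLACEMENT=$(openstack endpoint list --service placement \
            --interface public -f value -c URL)

# List resource providers, then the current usages of one of them.
curl -s -H "X-Auth-Token: $TOKEN" "$PLACEMENT/resource_providers" | python -m json.tool
curl -s -H "X-Auth-Token: $TOKEN" \
     "$PLACEMENT/resource_providers/<provider-uuid>/usages" | python -m json.tool
```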

What is Happening in Queens and Rocky?

If you want to know more about the context around the work on the Placement API and the plans for the future, these two presentations from the Boston summit are a great place to start:

I recently attended the Project Team Gathering (PTG) in Denver. There was lots of discussion on how Ironic can make use of Traits for finer-grained scheduling, including how you could use Nova flavors to pick between different RAID and BIOS configurations that are optimized for specific workloads. More on how those discussions are going, and how the SKA (Square Kilometre Array) project is looking to use those new features, in a future blog post!

by John Garbutt at November 09, 2017 11:01 PM

Cisco Cloud Blog

OpenStack: Solving for Integration in Open Source Adjacent Communities

I have had the pleasure of serving as an Individually elected member of the OpenStack Board of Directors during 2017. Prior to my current work, I served OpenStack to lead the development of and deliver innovative software building blocks for OpenStack in concert with over 28,000 other individuals involved with OpenStack. As a result of […]

by Steven Dake at November 09, 2017 01:19 PM

November 08, 2017

Mirantis

Is ONAP Required for NFV Transformation?

If you are a CSP, you are probably thinking: do I really need to deal with ONAP? It's actually a good question, and the answer is yes, but the timing could vary depending on your situation.

by Nick Chase at November 08, 2017 05:11 PM

Host-Telecom.com

Download from Down Under: OpenStack Summit Sydney Day 3

The t-shirts, spinners, geekos, flipflops, hats, and other tchotchkes have been dispensed. The barrels of coffee and contents of thousands of box lunches consumed, and the OpenStack Summit in Sydney has concluded.

As for Host-Telecom, we had a great time meeting our colleagues, hearing their thoughts and talking to attendees about the kinds of solutions they are looking for and the solutions we need to develop and perfect for the OpenStack cloud market. In between, CEO Pavel Chernobrov also had time for a good talk with our old friend Boris Renski, CMO of Mirantis and a longtime Host-Telecom customer.

New to the summit, but not to OpenStack

While this was our first OpenStack Summit, Host-Telecom has been involved with the community for years, serving Mirantis as its offerings have evolved over time. Host-Telecom continues to create custom data infrastructures to test and support their solutions, giving Mirantis developers a real-time view of how things work in production.

Noting the ease of working with us, Renski said Host-Telecom communicates extremely effectively with his team and gets Mirantis’ needs right out of the gate to deliver exactly what’s needed. Chernobrov added that winning Mirantis’ business is always competitive and credited the expertise of the Host-Telecom team for creating innovative IT infrastructure, made to order.

Just as keeping existing customers happy is vital, so is reaching out to new ones. With a presentation on VMware to OpenStack migration, we did just that.

Host-Telecom – OpenStack on bare metal and beyond

Our Tuesday presentation highlighted how we are trying to expand the OpenStack user base with our bare metal options, customized data infrastructure services, data replication and disaster recovery servers. Of particular interest to attendees was the main dish of VMware to OpenStack migration.

Seva Vayner from Host-Telecom, along with partner Nickolay Smirnov, CEO of Hystax, painted a clear picture of what it means to get off VMware and why you should make the move. Comparing the VMware vSphere console and the complicated maps of other migration paths, it was easy to see the many advantages of moving to OpenStack. And of course, reducing software licensing fees makes it even more enticing.

VMware to OpenStack migration

The technical details

The software behind our data backup and disaster recovery services installs an agent on the client side that converts data and infrastructure architectures, including VMware vSphere, Hyper-V and Virtuozzo platforms, to run on bare metal OpenStack. In case of data compromise or infrastructure failure, the system immediately and seamlessly begins to run on bare metal OpenStack, which is transparent to the user.

Disaster recovery uses the same OpenStack-based agent software as data backup to track data infrastructure architecture and changes, including network mapping, configuration settings, connections, and application interdependencies. In addition to replicating data, the agent transmits system snapshots to storage at specific intervals or continuously, depending on user requirements. All are modified to run in a bare metal OpenStack environment. If infrastructure fails, users start running seamlessly in OpenStack, though it looks and acts like their original infrastructure. Which brings us at last to OpenStack migration.

Moving to OpenStack for real

With Host-Telecom’s data backup and disaster recovery services, users are already running on OpenStack under the covers. The same agent that enables those services also provides a fast and easy migration to OpenStack cloud. VMware to OpenStack migration is already available, with paths from other platforms in the works.

And when you do get to OpenStack and start considering object storage, check out the advice I gave in my own presentation about engineering a sound infrastructure for Swift that scales. Expanding clusters isn't the only answer. You can use JBODs, and you must definitely rebalance if you want performance without latency and data compromise.

Goodbye to Sydney


As we wave goodbye to Sydney, we look forward to meeting market imperatives that call for simplifying cloud solutions. We believe in the power and flexibility of OpenStack and that it’s a great platform for users. Returning home, our work continues to make migration simple and to keep adding services that customers need as they run their businesses day-to-day.

I also want to announce young Mr. Jonah Spector as the winner of our campaign to both free our kangaroo and give him a name of distinction. As the kangaroo rose to the top of a pile of business cards from folks who stopped by our table, it escaped its confinement and now runs free as Hooper the Kangaroo! Congratulations, Jonah! Hooper is very happy and digging his new name and his new-found freedom!

BY

Denise Boehm

The post Download from Down Under: OpenStack Summit Sydney Day 3 appeared first on Host-Telecom.com.

by Denise Boehm at November 08, 2017 04:21 PM

OpenStack Superuser

How to build OpenStack on Kubernetes

Workday’s OpenStack journey has been a fast one: from no cloud to five distributed data centers, over a thousand servers and 50,000 cores in four years.

The team has done much of the work themselves. At the Sydney Summit, Edgar Magana and Imtiaz Chowdhury, both of Workday, gave a talk about how they work with Kubernetes to achieve zero downtime for large-scale production software-as-a-service.

“We’re very happy about what we’ve achieved with OpenStack,” says Magana, who has been involved with OpenStack since 2011 and currently chairs the User Committee. (You can read more about OpenStack at Workday in this Superuser story.)

As deploying OpenStack services in containers becomes more popular and simpler, operators are jumping on the container bandwagon. However, although many open-source and paid solutions are available, few offer the options to customize an OpenStack deployment to meet security, business and operational requirements.

“Everything we’ve done so far, it’s fully automated and we do it in a way that developers can make changes and deploy it all the way to production after getting it tested,” says Chowdhury. “We also want to make sure we can upgrade and maintain with zero downtime.”

Magana gave an overview of the current architecture at Workday, noting that “We’re not doing anything crazy, we have a typical reference architecture for OpenStack, but we have made a few changes.”


Where you’d normally have the OpenStack controller with all the OpenStack projects (Keystone, Nova, etc.), Workday decided to abstract out the stateful services and build out what they call an “infra server.”

Coming at it from an ops perspective, they go into the details of operationalizing a production-grade deployment of OpenStack on Kubernetes in a data center using community solutions.

They cover:

  • How to build a continuous integration and deployment pipeline for creating container images using Kolla
  • How to harden OpenStack service configuration with OpenStack-Helm to meet enterprise security, logging and monitoring requirements

In this 40-minute talk, the pair also share lessons learned (the good, the bad and the ugly) and best practices for deploying OpenStack on Kubernetes running on bare metal.

The post How to build OpenStack on Kubernetes appeared first on OpenStack Superuser.

by Superuser at November 08, 2017 02:46 PM

Amrith Kumar

My presentation at OpenStack Summit in Sydney

Davanum Srinivas (dims) and I did a presentation at OpenStack Summit in Sydney. The slides from the presentation are here.

by amrith at November 08, 2017 05:26 AM

OpenStack Superuser

OpenStack Community Contributor Awards recognize unsung heroes

SYDNEY — On the final day of the Sydney Summit, the Community Contributor Awards gave a special tip of the hat to those who might not know how much they are valued.

These awards are a little informal and quirky but still recognize the extremely valuable work that everyone does to make OpenStack excel. These behind-the-scenes heroes were nominated by other community members. OpenStack’s upstream developer advocate Kendall Nelson runs the program and handed out the honors at the Summit feedback session.

There are three main categories: those who might not be aware that they are valued, those who are the active glue that binds the community together and those who share their knowledge with others.

Tobias Rydberg and Howard Zhipeng – Open Infrastructure Shield

As co-chairs of the public cloud working group, they’ve not only brought the community together, they have brought the world together! They’ve achieved this via the Passport program, which is giving new users worldwide wings to roam any cloud. More importantly, they are helping us all realize that clouds do not have to be controlled by central monopolies but can be a community effort where ‘a rising tide floats all boats.’ Long live federated open infrastructure worldwide!

Ian Wienand – Smiling in the Face of Adversity Cup

He’s often up early to attend the weekly Infra meeting, is always willing to volunteer and pick up work that needs to be done, and is known to work on his Australian weekends to help us with Friday (PDT) afternoon outages.

Recently he jumped on diagnosing why review.openstack.org had disappeared, allowing the rest of us to sleep knowing that the situation was under control. This included a phone call to Texas from Australia. With his work week offset and the timezone differences, we aren’t often around when Ian is to show our appreciation, but we do notice, and it is a huge help. So thank you for being such a team player despite the timezone difficulty.

Chandan Kumar – Don’t Stop Believin’ Cup

He stepped up to champion the “Split Tempest plugins” goal for the Queens cycle. “We need more new leaders like Kumar to step up and help others achieve their objectives.”

Eduardo Gonzalez and Jeffrey Zhang – Mentor of Mentors

“Since I started contributing in the Kolla community, I have seen these two guys be very active in responding to any question asked by new folks on IRC as well as on the dev mailing list. Gonzalez has been quite impressive in solving gate issues along with Jeffrey in Kolla. Any complex issue found in Kolla always gets solved by these two…”

Gene Kuo – The Giving Tree Award

He made a major contribution by translating the entire User Survey into traditional Chinese and for the analysis translated both the traditional and simplified Chinese versions. He jumps at the opportunity to volunteer to better serve the OpenStack community… He is an incredibly valuable member of the OpenStack community and his dedication to jumping in to help deserves recognition.

Mohammed Naser – Key to Stack City

In addition to being CEO of a successful company and a respected OpenStack operator, Naser stepped up to be the PTL of the Puppet-OpenStack upstream project team. We need more users like Naser directly involved in OpenStack development. He’s contributed a lot to the OpenStack community, through his own time and development efforts, as well as infrastructure resources through his public cloud. He helped support Chris Hoge and the team working to get OpenStack into the gate for CNCF, and has also supported the team helping get OpenLab off the ground. Cross-community testing is really important for the success of OpenStack.

Lisa-Marie Namphy – The ‘Does anyone actually use this?’ Trophy

She brings a lot of energy to the OpenStack community, networking, educating and generally cheerleading. As an OpenStack Ambassador, she’s invested in helping grow user groups across the U.S. and globally. She runs one of the few online meetups, using Hangouts to make the content more accessible for those who cannot attend or do not live in a major city. She is also active in the Women of OpenStack and has moderated or participated in lots of panels at OpenStack and industry events. She helps build bridges across open source communities and, most importantly, keeps our Bay Area community ticking.

Gary Kevorkian – Bonsai Caretaker

Kevorkian is Mr. Positivity. He juggles so much in his roles at Cisco and the community, but manages to make the Los Angeles user group a shining example around the world. He always takes the time to provide feedback and even helpful criticism, but you can always tell it comes from a caring place. His infectious smile and positive vibes make a big impact on the community. He is especially good at making behind-the-scenes people feel valued.

The post OpenStack Community Contributor Awards recognize unsung heroes appeared first on OpenStack Superuser.

by Superuser at November 08, 2017 01:51 AM

November 07, 2017

NFVPE @ Red Hat

AWX: The Poor Man’s CI?

I’m just going to go ahead and blame @dougbtv for all my awesome and terrible ideas. We’ve been working on several Ansible playbooks to spin up development environments; like kucean.

Due to the rapid development nature of things like Kubernetes, Heketi, GlusterFS, and other tools, it’s both possible and probable that our playbooks could become broken at any given time. We’ve been wanting to get some continuous integration spun up to test this with Zuul v3, but the learning curve for that is a bit more than we’d prefer to tackle for some simple periodic runs. Same goes for Jenkins or any number of other continuous integration software bits.

Enter the brilliantly mad mind of @dougbtv. He wondered if AWX (Ansible Tower) could be turned into a sort of “Poor Man’s CI”? Hold my beer. Challenge accepted!

by Leif Madsen at November 07, 2017 06:48 PM

Deploying AWX to OpenStack RDO Cloud

Recently I’ve been playing around with AWX (the upstream, open source code base of Ansible Tower), and wanted to make it easy to deploy. Standing on the shoulders of giants (namely @geerlingguy) I built out a wrapper playbook that would let me easily deploy AWX into a VM on an OpenStack cloud (in my case, the RDO Cloud). In this blog post, I’ll show you the wrapper playbook I built, and how to consume it to deploy a development AWX environment.

by Leif Madsen at November 07, 2017 03:20 PM

Host-Telecom.com

Download from Down Under: OpenStack Summit Sydney Day 2

We spent a lot of our time yesterday meeting our Summit neighbors and finding out where they’re headed with OpenStack. You can catch up on that news and check out all our broadcasts, blogs, and the new additions anytime. Moving into our second day in Sydney, our coverage extended to chats with analysts, an OpenStack PTL, some industry luminaries, as well as a cool new startup out of Singapore offering a multi-platform cloud management suite that the whole C-Suite can use. First, we turn to the analyst perspective.

What the gurus are saying

We had the great good fortune to catch up with Ben Kepes, the Kiwi industry analyst with prescient insights. While he would never describe himself that way, I enjoyed his take on the maturation of OpenStack as it takes its rightful place as a cloud component in the open source universe. A few years ago, OpenStack was the marketing message of multiple cloud purveyors trying to counter the AWS-takes-over-the-world environment they were competing in. From HP Helion (RIP) to Mirantis, Red Hat, SUSE, and Dell, OpenStack was the word.

Kepes and Forrester analyst Paul Miller both say the OpenStack brand holds no persuasive appeal for a cloud customer base looking for business solutions that are easy to deploy as well as flexible, powerful and affordable. While no one disputes that VMware is an expensive cloud option, OpenStack solutions have no real cachet with customers unless they work and scale easily.

OpenStack Summit environment

At my third OpenStack Summit since 2013, I can certainly see the diminution in sponsor participation as well as attendance. However, the folks who are there are no less keen to share the technology and offer solid user solutions. It’s more that the hype has settled, as perhaps it should have a few years ago, as providers get down to really addressing the practical needs of end-users.

As a child of the Linux generation from the very early 2000s, I am rather happy to see “open source” subsume its OpenStack child into its rightful place as a powerful cloud component instead of it holding court as some sort of independent entity spinning on its own axis. With the development and integration of open source solutions as a whole, the OpenStack platform only becomes more powerful.

It’s interesting, as well, to remember the OpenStack “Big Tent” from a few years ago, when lots of extraneous projects came into play in a rather unwieldy glob that made OpenStack even more difficult for outsiders to decipher. That attitude has changed in favor of greater simplicity and a more straightforward cloud architecture. If you want to trick your OpenStack cloud out with aftermarket add-ons, that’s strictly a consumer decision. Meanwhile, the nitty-gritty work of OpenStack development continues.

I also had the chance to chat with Graham Hayes, OpenStack DNS PTL, who echoed some of Kepes’ and Miller’s observations that OpenStack is no longer the main story, but that platform development continues. While DNS modifications are still baking and the technical details are very much in the realm of high-level geekdom, integration of compute and networking with DNS has a serious view to user adoption.

Did you say Kubernetes?

Yeah, so there sits Kubernetes, except we all know it’s not just sitting there. It’s the orchestration tool OpenStack loves but cannot marry. In a chat with RackN CEO and OpenStack guru Rob Hirschfeld, he talked containers, orchestration, and Digital Rebar, the fast, open data provisioning and control scaffolding with cloud native architecture. His brain is too big for me to encapsulate here, so please go check out our Nov. 7 interview with him to get the full details on his views around automation and orchestration. It’s worth the listen.

Startups from Singapore

I ended the day with a delightful conversation with Glenn Chai from Infinities, a Singapore-based startup offering a platform management suite that delivers development workflow planning to deployment across multi/hybrid clouds. It also incorporates visualized log analytics and functionality through a single user interface that the entire C-Suite can access for the information they need about cloud activity, financial information, and other insights to help run the business. A perfect example of really serving cloud user needs without respect to the underlying infrastructure. More to come tomorrow.

BY

Denise Boehm

The post Download from Down Under: OpenStack Summit Sydney Day 2 appeared first on Host-Telecom.com.

by Denise Boehm at November 07, 2017 01:44 PM

Opensource.com

Getting started with Gnocchi

Gnocchi, which enables storage and indexing of time series data and resources at large scale, is purpose-built for today's huge cloud platforms.

by jdanjou at November 07, 2017 08:02 AM

November 06, 2017

OpenStack Superuser

Your passport to what’s next in OpenStack

SYDNEY — Even a spring storm didn’t dampen spirits at OpenStack’s first Australian Summit. The keynotes at this 16th Summit brought a mix of new announcements and fascinating case studies.

Here are some key news points:

Updates to the OpenStack Powered program

There’s a new way to move forward with OpenStack: the Passport program. Launched by the Foundation’s Mark Collier and Lauren Sell, it’s a collaborative effort among OpenStack public cloud providers to let you experience the freedom, performance and interoperability of open source infrastructure. The pair displayed a world map of OpenStack public clouds to show just where you can go with OpenStack.

To explore what this means for everyone, they were joined by OpenStack public cloud user Monty Taylor, who provided an update on Zuul v3, the third major version of the project gating system developed for use by the OpenStack project as part of its software development process. This latest version uses Ansible in place of Jenkins and adds GitHub support.

OpenLab

Jonathan Bryce, OpenStack Foundation executive director, was joined onstage by OpenStack board member Allison Randal to talk about how, as OpenStack technology has matured and been deployed in thousands of clouds around the world, the industries, workloads and even the form factors for delivering infrastructure have become increasingly diverse.

The pair discussed how the OpenStack Foundation and community are evolving to meet these diverse needs and how the community is tackling the biggest challenge in open source: integration. More on that from the press release, here. Bryce also introduced OpenLab, a community effort to do large-scale end-to-end testing. OpenLab is an effort to pool resources for the things you build on top of OpenStack. One of the key needs for such things is solid SDKs and continued improvement of their interaction with APIs.

AT&T and the daring demo

As usual, there are some fearless folks who risk a live demo in front of thousands. This time it was a trio from AT&T: Sorabh Saxena, Ryan van Wyk and Alan Meadows.

They came up to talk about how AT&T’s network has evolved since its 2015 announcement of the AT&T Integrated Cloud (AIC). Today, AT&T provides an enhanced software-defined networking services ecosystem that touches every industry, supported by OpenStack at its core. The example they brought on stage was a timely one: FirstNet, which supports some three million first responders across the U.S. It’s always available, secure and geo-diverse.

Superuser award winner

The Tencent TStack team was chosen as winner at the Sydney Summit. The award was presented during Monday’s keynotes by Thomas Andrew, representing previous winner Paddy Power Betfair.

Small team, huge results

Joseph Sandoval and Nicolas Brousse of Adobe Systems took the stage to talk about how they’re running OpenStack in six data centers in the U.S., Europe and Asia: 100,000 cores with a team of just four. Adobe Advertising Cloud runs OpenStack in production in a high-volume, real-time bidding application that requires low latency and high throughput to meet growing customer demands. More details in the Superuser article here.

Welcome to the multi-cloud world

Lew Tucker, Cisco CTO, took the stage to talk about how they’re working in a complex world made up of large public cloud service providers, private clouds, and legacy systems. Meanwhile, customers are demanding a seamless and secure experience across these environments. Containers and microservices-based architectures running in the cloud make it easier for application developers, but may bring in a new set of issues and complexities as the line between applications and services becomes blurred.
Cisco’s cloud portfolio includes hybrid cloud solutions, cloud apps, security, networking and cloud professional services.

Climate change and brain imaging research on OpenStack

The morning closed out with a fascinating look into two Australian research projects. The first, from Professor Brendan Mackey, director of the Griffith Climate Change Response Program, was about how OpenStack can help fight human-caused climate change by letting every Australian relate climate change to biodiversity. The second was a deep dive into brain imaging research with Professor Gary Egan.


You can catch all the keynotes on the OpenStack Foundation video page. Stay tuned for more from Sydney…

The post Your passport to what’s next in OpenStack appeared first on OpenStack Superuser.

by Superuser at November 06, 2017 09:38 PM

Mirantis

Kubernetes and Docker Mini-Bootcamp: Some questions (and answers)

As we get ready for tomorrow's mini-class, Top Sysadmin Tasks and How to Do Them with OpenStack, we thought we'd revisit our last one, Kubernetes and Docker Mini-Bootcamp. Here are the Q&As.

by Andrew Lee at November 06, 2017 06:57 PM

Host-Telecom.com

The downlow from Down Under at Day 1 of the OpenStack Summit in Sydney

With our mascot, Kangy the Kangaroo, we at Host-Telecom have been bouncing our way around the marketplace on the first day of the OpenStack Summit, talking to other event sponsors about their offerings, where they see the technology going, and also about the consolidation and elevated level of OpenStack user friendliness that we as vendors are enabling to expand the OpenStack user base.

What attendees and vendors are saying

Of course, we’ve also been talking to Summit goers about our OpenStack on bare metal solutions, including data backup and disaster recovery, as well as VMware-to-OpenStack migration. The migration issue has been a hot topic among attendees from both large and small organizations as well as vendors looking toward platform consolidation on one hand, or using specialized services for specific tasks on the other.

It’s common for visitors to tell us they are using OpenStack, VMware, and other proprietary cloud solutions depending on the needs of the projects they are running. And while OpenStack is the theme, there’s something of a throwback to referencing “open source” code with OpenStack taking its place as a cloud technology that is paired with other open software solutions. However, open source isn’t necessarily ubiquitous here in Sydney.

Storage stories vary with the vendor

As I chatted with other Summit vendors, I got multiple perspectives on storage solutions. While Swift is the OpenStack offering for object storage, not everyone sticks to that narrow path, and many never traveled it. IBM folks explained their deep OpenStack roots while mentioning their proprietary storage solutions were completely compatible with OpenStack cloud. Of course Red Hat raises the flag high in favor of Ceph, which is open source code that provides object, block and file storage, while many other vendors are sticking with Swift as they try to bend it to their will, both in terms of code and optimal data infrastructure for reliable scaling. In fact, I’ll be covering optimizing the IT infrastructure to support Swift on Wednesday at 1:50pm.

Meanwhile, our Host-Telecom developers have had a busy day, so let’s take a look at what’s interesting them today.

The developer’s take

Host-Telecom’s development team was busy attending technical presentations all day, including explanations of using OpenStack block storage for workloads and selecting the best backend from options such as LVM and Ceph. While they didn’t come to any conclusions, they learned more about Cinder and the factors to consider when building storage.

Meanwhile our guys also sat in as the Ceph team and folks using it for large OpenStack deployments got into more details with OpenStack+CephFS. With OpenStack deployments integrating distributed file storage solutions for virtual servers to provide fault-tolerance, mobility, and shared states between servers, our experts listened in to the role Ceph is playing in that area, providing horizontally scalable and POSIX-compatible file stores for OpenStack operators.

Speaking of containers… OK, we weren’t, but Roman and Vladimir, our developers on the scene, were, as they got caught up on the Zun project, which provides container management for OpenStack. Here they learned about how issues and user needs are being addressed, with new features in the Zaqar Pike release, including a dead letter queue, a notification retry policy, and other improvements.

And how can we talk about containers without Kubernetes? Roman and Vladimir were hot on the details. They learned about integration points between Kubernetes and OpenStack and how to affect the various automatic processes, with insights into the guts of Kubernetes code and architecture. Our brains are full for today, so let’s talk about some humanitarian issues, including our kangaroo mascot.

Hop on bare metal

Help us give Kangy a real name

Aside from the technical concerns we’re tracking, we are also trying to free our kangaroo as it floats to the top of its enclosure on a sea of business cards. Not only that, if you hate the name “Kangy” as much as our intern Erin Walters does, feel free to head on over to Host-Telecom and offer your suggestions for more inspired appellations! Get on over there anyway as we are broadcasting video live from the Summit three times a day and collecting your questions and input about the topics you’d like us to cover while we are here.

We’re back tomorrow with interviews with industry luminaries, including the Cloud Don, seer of all things OpenStack. See you here on Tuesday, same bat time, same bat channel! In the meantime, Kangy needs your help! Give him/her a real name. Free Kangy!!!


BY

Denise Boehm

The post The downlow from Down Under at Day 1 of the OpenStack Summit in Sydney appeared first on Host-Telecom.com.

by Denise Boehm at November 06, 2017 09:51 AM

Opensource.com

Edge computing moves the open cloud beyond the data center

Edge computing, like public cloud at scale, requires a convenient, powerful cloud software stack that can be deployed in a unified, efficient and sustainable way. Open source is leading the way.

by Mark Collier at November 06, 2017 06:00 AM

November 05, 2017

SUSE Conversations

How to combine OpenStack with Software Defined Networking

Software Defined Networking or SDN has been around for a while and it is a very important piece in the architecture if you have advanced networking use cases which require complex network automation without increasing the cost dramatically. As you might know, OpenStack is today capable of providing basic network services through its networking component …

+read more

The post How to combine OpenStack with Software Defined Networking appeared first on SUSE Blog. Manuel Buil

by Manuel Buil at November 05, 2017 11:53 PM

OpenStack Superuser

And the Superuser Award goes to…

SYDNEY — After weeks of reviews and deliberation by the OpenStack community and Superuser editorial advisors, the Tencent TStack team was chosen as winner at the Sydney Summit. The award was presented during Monday’s keynotes by representatives from the previous winners, Paddy Power Betfair.

Tencent is a leading provider of Internet value-added services in China. Its OpenStack-based private cloud platform cuts server costs by 30 percent and operations and maintenance costs by 55 percent, saving the company RMB100 million+ each year. It shortens resource delivery from two weeks to 0.5 hours and supports development teams (such as QQ, WeChat, and gaming) for services that generate tens of billions in revenue.

The Tencent TStack team is made up of 76 members who developed the OpenStack-based TStack private cloud platform. It consists of four sub-teams:

  • Product design: Responsible for requirement analysis and interaction design
  • Technical architecture: Responsible for solution design and technology research
  • Product development: Responsible for feature design and implementation
  • Operations support: Responsible for deployment, monitoring and troubleshooting.

They built a private cloud to provide services for internal IT environments and testing environments (e.g. QQ, WeChat), and they also provide complete hybrid cloud services for government departments and enterprises in China.

The winners competed against impressive finalists China Railway, China UnionPay, and City Network. Previous winners include AT&T, CERN, China Mobile, Comcast and NTT Group, Paddy Power Betfair and UKCloud. The Superuser Awards launched in 2014 to recognize organizations that have used OpenStack to meaningfully improve their business while contributing back to the community.

Interested in nominating a team for the next Awards at the Vancouver Summit? Stay up-to-date at superuser.openstack.org/awards.

The post And the Superuser Award goes to… appeared first on OpenStack Superuser.

by Superuser at November 05, 2017 05:49 AM

OpenStack Blog

Developer Mailing List Digest October 28th – November 3rd

News

  • Sydney Summit Etherpads [0]

Summaries

  • Nova Placements Resource Provider Update by Eric Fried [0]
  • Nova Notification Update by Balazs Gibizer [1]
  • Technical Committee Status update by Thierry Carrez [2]
  • Technical Committee Report by Chris Dent [3]
  • Release Countdown by Sean McGinnis [4]
  • POST /api-sig/news by Chris Dent [5]

TC Election Results (continued)

Congrats to our 6 newly elected Technical Committee members:
  • Colleen Murphy (cmurphy)
  • Doug Hellmann (dhellmann)
  • Emilien Macchi (emilienm)
  • Jeremy Stanley (fungi)
  • Julia Kreger (TheJulia)
  • Paul Belanger (pabelanger)
Full results are available [0]. The process and results are also available [1]. 420 voted out of an electorate of 2,430, giving us a 17.28% turnout, with a delta of 29.16% [2].
The reasons for the low turnout are hard to determine without knowing who is voting and what their activity in the community is. More people are beginning to understand that TC activities are more about duties than rights (e.g. stewardship and leadership). People may care less about specific individuals and be less motivated by the vote itself. If the TC’s activity involved a lot more conflict and a lot less consensus, people might care about it more.
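
As a sanity check, the turnout arithmetic above works out:

```python
votes, electorate = 420, 2430
turnout = votes / electorate
print(f"{turnout:.2%}")  # 17.28%
```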

Security SIG

Our governance used to have only project teams to recognize activity in OpenStack, so we created a security team. The introduction of SIGs provides a new construct for recognizing activity around a group that shares an interest in a topic or practice not mainly centered on software bits.
Security is a great example of a topic that could benefit from this construct to gather all the security-conscious people in our community. SIGs can have software by-products and own git repositories, and here the software is more about security in general than a piece of OpenStack itself.
It’s important to consider the Vulnerability Management Team (VMT) under the new model, which acts as an independent task force.
The Security team discussed the idea of a SIG in their meeting, and overall think it’s worth exploring by having the SIG and team exist in parallel to see if there is traction.

by Mike Perez at November 05, 2017 02:57 AM

November 03, 2017

NFVPE @ Red Hat

Security hardened images with volumes

This article is a continuation of http://teknoarticles.blogspot.com.es/2017/07/build-and-use-security-hardened-images.html. Starting with the Queens release, security-hardened images can be built using volumes, which allows more flexibility when resizing the different filesystems. The process of building the security-hardened image is the same as in the previous blog post, but the way the partitions, volumes and filesystems are defined has changed: there is now a pre-defined partition of 20G, with volumes created under it, and volume sizes are specified as percentages rather than absolute…

by Yolanda Robla Mota at November 03, 2017 11:37 AM

Opensource.com

Why aren't you an OpenStack mentor yet?

With complex projects like OpenStack, it can be intimidating to jump straight in. Fortunately, there are great mentoring programs to help new contributors get started.

by Jason Baker at November 03, 2017 07:00 AM

November 02, 2017

OpenStack @ NetApp

OpenStack Summit Sydney looks to be a Bonzer!

Bonzer: Australian slang for “most excellent” Join the NetApp team down under and learn why we are the #1 Commercial OpenStack® Storage solutions provider! Did you know that 5 out of 7 OpenStack Superusers are NetApp customers? And that 4 out of 7 are using NetApp SolidFire? Be sure to stop by our booth—A13 in the ... Read more

The post OpenStack Summit Sydney looks to be a Bonzer! appeared first on thePub.

by Pete Brey at November 02, 2017 10:15 PM

OpenStack Superuser

Five user stories to catch at the Sydney Summit

Dozens of superusers will take the stage in Sydney to tell their stories about working with OpenStack. Hear these tales from the trenches as presenters detail the evaluation process, advantages and challenges encountered along the way. Below is a sample of some of the great content found across tracks — you can also find it on the Summit schedule app (iOS, Google). If you miss them at the Summit, videos of the sessions will be made available here.

Monitoring the Nectar Research Cloud

Monday, November 6, 3:10 p.m.-3:50 p.m.

A large OpenStack cloud consists of many moving parts that all need to be operating correctly to ensure a working service for end users.

Andy Botting and Jake Yip, both of Nectar, will talk about how they got started with existing open source tools like Tempest, Nagios, Jenkins, Puppet and Ganglia and tied them together with some custom tools for a holistic view of the health of their systems. Starting from the load balancer at the top of the stack, through the control plane, down to the underlying hardware and then across the availability of all the individual services that Nectar provides for its users, they can be confident that their systems are operating correctly and, most importantly, can quickly identify where to look when things go bad.

“I pity the fool that builds his own cloud!”–Overcoming challenges of OpenStack based telco clouds

Tuesday, November 7, 9:50 a.m.-10:30 a.m.

A panel discussion between Ericsson, AT&T, SK Telecom & NTT on the topic of addressing the challenges faced when developing, testing, and deploying OpenStack based clouds to support telco workloads. The panel will touch on the realities of staffing for scale delivery, open source collaboration learnings, and challenges of evolving at a rapid pace with the latest cloud and NFV technologies.

Paddy Power Betfair’s journey to OpenStack Cloud – The Good, the Bad and the Ugly.

Tuesday, November 7, 4:10 p.m.-4:50 p.m.

Superuser award winners Paddy Power Betfair have migrated about 400 applications to their OpenStack cloud. In this talk, they’ll cover the organizational, cultural and technical challenges, as well as the good decisions and the bad ones.

Government use of OpenStack in Australia

Tuesday, November 7, 5:50 p.m.-6:30 p.m.

Rupert Taylor-Price, CEO and founder of Vault Systems, will share his extensive knowledge of the use of OpenStack in the Australian Government. Vault leveraged the simplicity and openness provided by OpenStack to build one of the world’s most secure cloud platforms exclusively for use by the Australian Government. Vault is one of only two companies globally to gain Australian Signals Directorate certification for the storage and processing of classified data.

Taylor-Price will share his experiences in working with major Government departments, including the Digital Transformation Agency, the Department of Health, Defence and the Australian intelligence community. He will discuss how Government has benefited from the use of OpenStack to deliver digital services and solutions across a robust and secure platform. Taylor-Price will also provide insights on how Government policies currently being developed will shape the cloud services industry over the next two to three years and what this means for OpenStack.

Standing Up and Operating a Container Service on top of OpenStack Using OpenShift

Wednesday, November 8, 1:50 p.m.-2:30 p.m.

Massachusetts Open Cloud offers virtual machine-based IaaS services on top of OpenStack. They recently started offering containers as a service using OpenShift, tightly integrated with the OpenStack services. Their deployment uses Keystone to authenticate users, Swift to host the Docker registry and Cinder to automatically provision persistent volumes for containers. They use the OpenShift Console to expose an OpenStack cluster to container users in a generic way, giving them access to compute, networking and storage. By coupling container and VM users, they’re able to achieve higher density in cluster usage. The presenters (Ata Turk, MOC research scientist; Dan McPherson, Red Hat; and Robert Baron, Boston University) will present their architecture, the challenges faced while setting up this service, best practices for offering a reliable service, and the experiences of the first large-scale user of the service, a medical imaging service on the cloud.

 

Download the Summit schedule app (iOS, Google) and start building your personal schedule.

The post Five user stories to catch at the Sydney Summit appeared first on OpenStack Superuser.

by Superuser at November 02, 2017 09:34 PM

Stackmasters team

What to watch out for at the Sydney OpenStack Summit

G’day, mate. How are you going? You may be wondering why we’re talking like an Australian. Well Strewth! The truth is, we’re pretty excited about the upcoming OpenStack Summit in Sydney.

Sadly, a member of the Stackmasters team will not be on site this time round. It’s because as the Aussies might say, “we’re stuffed” with a heavy workload. We’ll be watching from afar though. And with much anticipation and excitement about the latest news and trends coming out of one of our favorite events.

Sydney OpenStack Summit

Latest from planet cloud

What is the OpenStack Summit? Put simply, it’s the definitive event for IT professionals – a three-day conference for IT business leaders, cloud operators and developers. It covers the open infrastructure landscape and is the world’s largest open source cloud computing gathering.

Its aim is to give IT leaders and stakeholders the latest insights, knowledge and practical training for all things cloud. Let’s face it, the cloud phenomenon continues to shape-shift its way into the #1 method of managing IT. If you ask us, Managed Cloud services are the present, and the future.

What to expect at OpenStack Summit

The reason for the above is pretty simple. The world now runs on open infrastructure. And this year in Sydney, the agenda is action-packed. There will be opportunities galore to learn about the mix of open technologies building the modern infrastructure stack, including OpenStack, Kubernetes, Docker, Ansible, Ceph, OVS, OpenContrail, OPNFV, and many more.

Things to watch out for

Irrespective of whether you are pursuing a private, public or multi-cloud approach, the OpenStack Summit is THE place to network, enhance your skill set, as well as plan your cloud strategy. There will be demos and presentations aplenty on new features in the Pike (current) release.

Looking at the agenda, there will also be absorbing keynote speeches with top professionals sharing real-life use cases and their experiences with OpenStack, as is usually the case.

Here’s our quick at-a-glance list of what to do while you’re there (and what to watch afterwards if you’re not there):

Key Keynotes

There’s a load of cool keynotes confirmed so far. We’re looking forward to hearing from Adobe pair Joseph Sandoval and Nicolas Brousse talking about “Lean infrastructure powered by OpenStack at Adobe Marketing Cloud” on the first day.

There’s also a topical talk about an issue which is very much on the global social conscience right now: climate change. Prof. Brendan Mackey, PhD, Director of the Griffith Climate Change Response Program, and Professor Gary Egan, PhD, MBA, Director of Monash Biomedical Imaging, will be discussing “Climate change, Brain and Imaging Research on OpenStack”.

There’s a boatload of other presentations, “lightning talks” and workshop-type sessions, so be sure to check out the full details of the schedule.

Collaborate at the forum

This year at the forum, the hot topic will be gathering feedback on the current release, Pike, and to start drawing up requirements for the next ones in line, Queens and Rocky. There will be a wide variety of sessions focused on enhanced agility as provided by the OpenStack services in order to support better containers integration and NFV applications.

Overall, as Tom Fifield points out, having the whole Stacker community in one place (developers, operators and end users) is a great chance for:

  • Strategic discussions – to think about the big picture, including beyond just one release cycle and new technologies
  • Cross-project sessions – in a similar vein to what has happened at past design summits, but with increased emphasis on issues that are relevant to all areas of the community
  • Project-specific sessions – during which developers can ask users specific questions about their experiences. Users can give feedback on the last release and collaborate across the community on the priorities and top ‘blue sky’ ideas for the next release

Multi-cloud on top

We think you’ll agree that there’s a lot to look forward to. We believe an important takeaway from the OpenStack Summit will be the rate of adoption growth, driven by the multi-cloud approach many companies pursue in order to avoid vendor lock-in. As we all know, OpenStack is the leading open source private cloud technology. But it can also be combined with public clouds or even consumed on a Private-Cloud-as-a-Service model.

So, if you manage to get to Sydney for OpenStack Summit, follow our guidelines and enjoy the experience. If you don’t, then try to watch some of the recorded keynotes and content, which will be available one week after the big event.

“No worries”, then. And “good on ya”.

What to watch out for at the Sydney OpenStack Summit was last modified: November 2nd, 2017 by Stackmasters

The post What to watch out for at the Sydney OpenStack Summit appeared first on Stackmasters.

by Stackmasters at November 02, 2017 04:03 PM

Major Hayden

Changes in RHEL 7 Security Technical Implementation Guide Version 1, Release 3

The latest release of the Red Hat Enterprise Linux Security Technical Implementation Guide (STIG) was published last week. This release is Version 1, Release 3, and it contains four main changes:

  • V-77819 – Multifactor authentication is required for graphical logins
  • V-77821 – Datagram Congestion Control Protocol (DCCP) kernel module must be disabled
  • V-77823 – Single user mode must require user authentication
  • V-77825 – Address space layout randomization (ASLR) must be enabled

Deep dive

Let’s break down this list to understand what each one means.

V-77819 – Multifactor authentication is required for graphical logins

This requirement improves security for graphical logins and extends the existing requirements for multifactor authentication for logins (see V-71965, V-72417, and V-72427). The STIG recommends smartcards (since the US Government often uses CAC cards for multifactor authentication), and this is a good idea for high security systems.

I use YubiKey 4s as smartcards in most situations, and they work anywhere you have an available USB slot.

V-77821 – Datagram Congestion Control Protocol (DCCP) kernel module must be disabled

DCCP provides congestion control for datagram traffic, but it sees little use in modern networks. There have been vulnerabilities in the past that are mitigated by disabling DCCP, so it’s a good idea to disable it unless you have a strong reason to keep it enabled.

The ansible-hardening role has been updated to disable the DCCP kernel module by default.
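Under the hood, disabling a module like this usually comes down to a modprobe.d drop-in. A minimal sketch of the mechanism, writing to a temporary directory rather than /etc/modprobe.d, and not necessarily the exact file the role manages:

```shell
# Sketch of the underlying mechanism: an "install dccp /bin/true" line
# makes any attempt to load the dccp module a no-op. Written to a temp
# directory here for safety; a real system would use /etc/modprobe.d/.
conf_dir=$(mktemp -d)
echo "install dccp /bin/true" > "$conf_dir/dccp.conf"
cat "$conf_dir/dccp.conf"   # prints: install dccp /bin/true
```

With such a line in place on a real system, `modprobe dccp` runs `/bin/true` instead, so the module never loads.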

V-77823 – Single user mode must require user authentication

Single user mode is often used in emergency situations where the server cannot boot properly or an issue must be repaired without a fully booted server. This mode can only be used at the server’s physical console, serial port, or via out-of-band management (DRAC, iLO, and IPMI). Allowing single-user mode access without authentication is a serious security risk.

Fortunately, every distribution supported by the ansible-hardening role already has authentication requirements for single user mode in place. The ansible-hardening role does not make any adjustments to the single user mode unit file since any untested adjustment could cause a system to have problems booting.
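For the curious, here is a minimal sketch of how one might verify this on a systemd-based system; the unit path matches RHEL 7/CentOS 7 and may differ on other distributions:

```shell
# Check whether single-user (rescue) mode requires authentication by
# looking for sulogin in the rescue unit. Unit path per RHEL/CentOS 7.
unit=/usr/lib/systemd/system/rescue.service
if [ -f "$unit" ] && grep -q sulogin "$unit"; then
  msg="rescue mode requires authentication"
else
  msg="rescue.service absent or no sulogin found; verify manually"
fi
echo "$msg"
```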

V-77825 – Address space layout randomization (ASLR) must be enabled

ASLR is a handy technology that makes it more difficult for attackers to guess where a particular program is storing data in memory. It’s not perfect, but it certainly raises the difficulty for an attacker. There are multiple settings for this variable and the kernel documentation for sysctl has some brief explanations for each setting (search for randomize_va_space on the page).

Every distribution supported by the ansible-hardening role already sets kernel.randomize_va_space=2 by default, which applies randomization to the basic parts of process memory (such as shared libraries and the stack) as well as the heap. The ansible-hardening role will ensure that the default setting is maintained.
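To check the current value on a running system, a short read-only sketch; the persistent-setting commands in the trailing comments are illustrative, not a prescribed path:

```shell
# Inspect the current ASLR level; 2 randomizes the stack, mmap base
# and the heap.
aslr=/proc/sys/kernel/randomize_va_space
if [ -r "$aslr" ]; then
  echo "kernel.randomize_va_space=$(cat "$aslr")"
else
  echo "ASLR sysctl not exposed on this system"
fi
# Persisting the value would typically use a sysctl.d drop-in, e.g.:
#   echo "kernel.randomize_va_space = 2" | sudo tee /etc/sysctl.d/99-aslr.conf
#   sudo sysctl --system
```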

ansible-hardening is already up to date

If you’re already using the ansible-hardening role’s master branch, these changes are already in place! Try out the new updates and open a bug report if you find any problems.

The post Changes in RHEL 7 Security Technical Implementation Guide Version 1, Release 3 appeared first on major.io.

by Major Hayden at November 02, 2017 03:00 PM

November 01, 2017

OpenStack Superuser

Spark your interest with OpenStack lightning talks

There are over 50 of these 15-minute talks on the schedule at the Sydney Summit. The talks — you can find a list of them in the schedule here — are chosen by the same track chairs who decide what goes into longer sessions. They take place at the Level 2 Parkside 2 Foyer and cover everything from orchestrations and distributions to how to contribute to OpenStack. You can also find them on the Summit schedule app (iOS, Google). (And if you blink and miss them, the videos will be available following the conference.)

A few Superuser picks:

Diversity in Tech – Not just a Race and Gender issue

Tuesday, November 7, 10:05 a.m.-10:15 a.m.
Kris Murphy of IBM will go into how “diversity is thought of as a hiring issue, but for an open source community it is important for leaders and team members to be aware of what they can do to encourage diversity and ensure a variety of voices and opinions are heard.” Murphy will offer advice on how to advertise the roles and goals of a project in a way that attracts people with a variety of job satisfaction criteria to participate in your open source community. Common pitfalls of diversity efforts and how to avoid them will also be covered.

Brain in the cloud: Machine Learning On OpenStack Done Right!

Tuesday, November 7, 4:10 p.m.-4:20 p.m.
Erez Cohen, Mellanox Technologies, and Blair Bethwaite, Monash University, will describe the system requirements for effectively running a machine learning cluster with popular frameworks such as TensorFlow. They’ll discuss how such a system can be deployed in an OpenStack-based cloud without compromises, enjoying a high-performance DNN programming paradigm as well as the benefits of cloud and software-defined data centers, along with a case study from Monash University.

Designing Cloud Native Applications with Microservices and Containers over OpenStack

Tuesday, November 7, 5:50 p.m.-6:00 p.m.
Chris Gascoigne, technical solutions architect at Cisco Systems, will give beginners a brief analysis of the differences between traditional IT applications and new cloud-native applications. He’ll cover the benefits of cloud computing architecture principles in OpenStack architecture, using cloud patterns as the framework for the application, how microservices are used to design, build and test the application, and how containers, container management technologies and OpenStack projects for containers are used as the underlying infrastructure.

Bringing Worlds Together: Designing and Deploying Kubernetes on an OpenStack multi-site environment

Wednesday November 8, 10:20 a.m.-10:30 a.m.
This talk is aimed at OpenStack administrators, system administrators, cloud administrators and container platform administrators. Roger Lopez and Julio Villarreal Pelegrino, both at Red Hat, will offer up some best practices on building a highly available multi-site Kubernetes container platform environment on OpenStack.

Edge Computing (Platform) as a Service powered by OpenStack

Wednesday, November 8, 2:05 p.m.-2:15 p.m.
Yin Ding, Futurewei Technologies Inc., and Chaoyi Huang, Huawei, will discuss how to build Edge Computing (Platform) as a Service. The cloud is powered by OpenStack, and the edge nodes run OpenStack/Kubernetes, hosting applications in VMs, containers or serverless functions. In this intermediate-level talk, they’ll demonstrate how to add a new edge node and manage its lifecycle, including updates, patches, monitoring, alerting and troubleshooting.

Check out the full schedule for lightning talks here and check back with Superuser for links to the videos.

The post Spark your interest with OpenStack lightning talks appeared first on OpenStack Superuser.

by Superuser at November 01, 2017 11:01 PM

Get started with serverless architecture in OpenStack

Qinling is Function-as-a-Service for OpenStack. It aims to provide a platform to support serverless functions (like AWS Lambda, Google Cloud Functions, etc.) following the best practices hammered out by the OpenStack community.

Qinling can support different container orchestration platforms (Kubernetes/Swarm/Magnum, etc.) and different function package storage backends (local/Swift/S3) thanks to its plugin mechanism. (Side note: its name comes from the Qinling mountains, the range standing as a natural boundary between Northern and Southern China, home to the great salamander.)

Lingxian Kong, senior developer at Catalyst IT, will be talking in detail about serverless architecture at the upcoming Sydney Summit. He’ll go into Qinling’s architecture, functionality, use cases and current status, as well as show two demos on how to trigger customized functions based on defined events, in the cloud environment and in application development respectively.

For now though, Kong, who has been working with OpenStack since 2015, previously at Huawei, offers up a taste of Qinling.

In under five minutes and in just five steps with this video tutorial, Kong shows you how to use Qinling to automatically resize photos uploaded to Swift:

1. Admin user creates runtime (Python in this demo).

2. Create a function using the runtime ID.

3. Create an event alarm in Aodh.

4. Create empty containers in Swift.

5. Upload a photo to a container, and a thumbnail for the photo is created in another container
automatically.

If you’re interested in finding out more about Qinling, check out the Wiki or these sources:

The post Get started with serverless architecture in OpenStack appeared first on OpenStack Superuser.

by Superuser at November 01, 2017 05:46 PM

Ben Nemec

Ansible Tower with TripleO

I was asked to look into integrating Ansible Tower with TripleO a while back. It actually wasn't that difficult to do, but I've finally gotten around to recording a demo video showing how. In this post I will also provide a brief overview of how I installed Ansible Tower.

First, the Ansible Tower and TripleO video. It should be noted that from the time I started working on this to the time I recorded the demo, it seems Tower released a new version that moved some things around. YMMV on whether your version of Tower exactly matches up with mine, but the basic steps should be the same.

I will also note that we've removed the hiera call in stackrc in recent versions of master TripleO (this is during the Queens cycle, for those reading later) so it should no longer be necessary to manually retrieve the password. This had the extra beneficial effect of making stackrc usable on any non-undercloud system, not just one running Ansible Tower.

Finally, a quick overview of how I set up this environment. First, I stood up an OpenStack Virtual Baremetal environment that included an undercloud, two overcloud nodes, and an extra node for Tower. The extra node functionality is a recent addition to OVB (note that that commit was missing a necessary file, which was fixed in a follow-up commit; I recommend just using the latest master in any case). I do not believe Ansible Tower could be installed directly on the undercloud due to Tower and the undercloud wanting to manage some of the same services.

I deployed TripleO as documented into the environment and then set up the Tower VM according to their install guide. It may be worth noting that I used CentOS for the TripleO deployment but RHEL for the Tower instance.

For reference, here were the major steps I took:

  1. Subscribe the system to the necessary repos:
    # Register the node
    sudo subscription-manager register
    # Find the appropriate pool from the list
    sudo subscription-manager list --available | less
    # Attach to the pool. Note that this is a Red Hat employee pool, so if you don't work for Red Hat your pool will be different
    sudo subscription-manager attach --pool 8a85f9833e1404a9013e3cddf99305e6
    # Disable all repos to start clean. This avoids issues if your pool auto-enables some repos.
    sudo subscription-manager repos --disable '*'
    # Enable the necessary repos. As of this writing RHEL OSP 11 was the latest version, but those packages didn't work properly with the upstream Queens (which will be OSP 13) undercloud that I was using so I had to use upstream OpenStack packages. If you're using all RHEL OSP you should enable the appropriate repo here.
    sudo subscription-manager repos --enable rhel-7-server-rpms --enable rhel-7-server-extras-rpms --enable rhel-7-server-rh-common-rpms #--enable rhel-7-server-openstack-11-rpms
  2. Download and extract Ansible Tower
    wget https://releases.ansible.com/ansible-tower/setup/ansible-tower-setup-latest.tar.gz
    tar zxvf ansible-tower-setup-latest.tar.gz
    # This version will change over time
    cd ansible-tower-setup-3.2.1
  3. Edit the inventory to set passwords, as described in the Tower install docs
  4. Run the install: sudo ./setup.sh
  5. Set the Tower password: sudo tower-manage changepassword admin
  6. On your first visit to the Tower web UI you will need to register it with the key you get from Ansible. I believe there are trial keys available if you're just looking to kick the tires.

In my OVB environment, I had to set an IP address on the provisioning interface for the Tower VM. I did this with sudo ifconfig eth1 9.1.1.111 netmask 255.255.255.0. As noted above, I also enabled the upstream OpenStack and TripleO repos per the TripleO docs.

That's the basics of using Ansible Tower with TripleO. I know I've only scratched the surface of what Tower can do, but these basic steps should enable the use of all Tower's features with TripleO deployments. As always, if you have any questions or comments please let me know.

by bnemec at November 01, 2017 04:21 PM

Opensource.com

How to explain OpenStack to a complete newcomer

Learn how OpenStack has become the de facto standard for building an open source cloud, and how to get started with the project.

by Ben Silverman at November 01, 2017 07:00 AM

October 31, 2017

Chris Dent

TC Report 44

There's been a fair bit of various TC discussion but as I'm packing for my rather tortured journey to Sydney and preparing some last minute presentation materials, just a short TC Report this week, made up of links to topics in the IRC logs:

I will attempt to take notes at the board meeting and report them here. There will be no TC report next week due to summit. Nor the week after as I'll still be in Sydney, so expect the next one 21st of November.

by Chris Dent at October 31, 2017 08:15 PM

Mirantis

MCP DriveTrain and upgrades: a few questions and answers

Mirantis Field CTO Ryan Day recently sat down to demonstrate how users can use Mirantis DriveTrain to upgrade OpenStack over two versions, from Mitaka to Ocata.  Along the way, he answered a number of different questions about how it works, and what DriveTrain is all about.  You can view the entire demonstration, and we’ve decided … Continued

by Ryan Day at October 31, 2017 06:13 PM

Stephen Finucane

Deploying Real Time Openstack

Recent versions of OpenStack nova have added support for real-time instances, that is, instances that provide the determinism and performance guarantees required by real-time applications. While this work was finally marked complete in the OpenStack Ocata release, it built upon many features added in previous releases. Below is a guide that covers a basic, single-node deployment of OpenStack suitable for evaluating basic real-time instance functionality. We use CentOS 7, but the same instructions can be adapted for RHEL 7 or Fedora, and any CentOS-specific aspects are called out.

October 31, 2017 05:08 PM

OpenStack Superuser

OpenStack Days Nordic 2017: On the move

A crowd of about 250 from a variety of industries including telecommunications, automotive, consulting and cloud providers got together recently for two days for OpenStack Days Nordic.

This year, the event moved from Stockholm where it was held in 2016 to Copenhagen’s Tivoli Hotel and Congress Center.

“It’s an interesting crowd with a good mix of techies and decision makers,” says CERN data center manager Jan Van Eldik, who also attended last year’s event in Stockholm.

The event was organized by City Network, OP5, Dek Technologies and Rise; participating companies included Danish Broadcasting, Red Hat, Telenor, Juniper Networks as well as the local branches of NetApp, T-Systems Nordics, Ericsson, Suse, Canonical, Volvo Cars and Mellanox.

The mission of the event is to increase awareness, utilization and competence surrounding OpenStack. To that end, there were talks on everything from cloud strategy and business case development to operational best practices and technical deep dives. Organizers say the interest has shifted — now most people want to know how to best use OpenStack for their industry and current deployment.

In addition to talks, the event also featured free training from OpenStack Upstream Institute, with eight lessons broken down into three-hour sessions. Much appreciated by attendees for its hands-on approach to getting started with or deploying OpenStack, the training, organizers say, is “essential” to the continued success of the event.

Organizers tell Superuser that there was also a mini-basketball contest, with an OpenStack Summit ticket as a prize and a leaderboard, but were mum on the winners. Participants at the event — and possibly on the court? — also enjoyed local Carlsberg and Tuborg.

If you’re interested in getting involved for next year, stay tuned to the OpenStack Days Nordic website where you can sign up for the mailing list. See more from the OpenStack Days Nordic in the roundup video below.

Cover Photo // CC BY NC

The post OpenStack Days Nordic 2017: On the move appeared first on OpenStack Superuser.

by Superuser at October 31, 2017 02:05 PM

Host-Telecom.com

All packed for Sydney and ready to meet the community!

Host-Telecom is very excited to join the OpenStack community at the OpenStack Sydney Summit next week to share our IT infrastructure solutions with you and learn about the initiatives you are engaged in as well. We’ll go into more details about our bare metal options and discuss how we’ve built our OpenStack-based data backup and disaster recovery solutions while creating a rapid migration path from VMware to OpenStack.

Why OpenStack on bare metal?

We want to extend the OpenStack user base so that more organizations have access to a powerful platform that helps them scale with flexibility without going broke paying for the license and support fees of commercial platforms. The Linux agents we provide for data backup and disaster recovery work across multiple platforms, and enable data and infrastructure replication that runs on OpenStack in the background. The same technology paves the way for seamless VMware to OpenStack migration.

Learn more

Stop by our booth to discuss details and see a demo of our solutions. Or just mosey on over for a friendly chat and tell us about your organization, what you’re doing, and the issues you’re addressing. We’re as interested in learning from you as we are in talking about us.

And if you do want to learn a bit more about Host-Telecom, join us at our Tuesday and Wednesday presentations that address, respectively, OpenStack migration and building Swift Object Storage infrastructure that scales.

And that’s not all!

You may see our team out and about interviewing OpenStack experts in hallway interviews. Stop by the Host-Telecom table and set up a time for us to talk to you about your community activity. While we don’t have elaborate production values, we are interested in your views about the technology, the market needs you’re addressing, and other activities you have going. Contact team members Denise Boehm and Ilia Stechkin to learn more.

Enter for a chance to fly the friendly Sydney skies

Yes, you’re in Sydney for business, but you should also have some fun. Want to see the best of Sydney and the coast? Drop your card at our table, and we’ll select a winner for a helicopter tour of the city harbor and surrounding beaches that should be a real bucket-list experience. Either I or the employee who annoys me least while in Sydney will join you at the pickup location the morning following the Summit.

Hope to meet you in Sydney! Host-Telecom is looking forward to talking to you.

BY

Pavel Chernobrov
CEO, Host-Telecom

Pavel Chernobrov

The post All packed for Sydney and ready to meet the community! appeared first on Host-Telecom.com.

by Pavel Chernobrov at October 31, 2017 12:24 PM

October 30, 2017

Red Hat Stack

G’Day OpenStack!

In less than one week the OpenStack Summit is coming to Sydney! For those of us in the Australia/New Zealand (ANZ) region this is a very exciting time as we get to showcase our local OpenStack talents and successes. This summit will feature Australia’s largest banks, telcos, and enterprises and show the world how they have adopted, adapted, and succeeded with Open Source software and OpenStack.

frances-gunn-41736
Photo by Frances Gunn on Unsplash

And at Red Hat, we are doubly proud to feature a lineup of local, regional, and global speakers in over 40 exciting sessions. Not only can you stop by and see speakers from Australia, like Brisbane’s very own Andrew Hatfield (Red Hat, Practice Lead – Cloud Storage and Big Data) who has two talks discussing everything from CephFS’s impact on OpenStack to a joint talk about how OpenStack and Ceph are evolving to integrate with the Linux, Docker, and Kubernetes!

Of course, not only are local Red Hat associates telling their own stories, but so too are our ANZ customers. Australia’s own dynamic telecom, Telstra, has worked closely with Red Hat and Juniper on all kinds of cutting-edge NFV work, and you can check out a joint talk from Telstra, Juniper, and Red Hat to learn all about it in “The Road to Virtualization: Highlighting The Unique Challenges Faced by Telcos” featuring Red Hat’s Senior Product Manager for Networking Technologies, Anita Tragler, alongside Juniper’s Greg Smith and Telstra’s Senior Technology Specialist extraordinaire Andrew Harris.

On Wednesday at 11:00AM, come see how a 160 year old Aussie insurance company, IAG, uses Red Hat OpenStack Platform as the foundation for their Open Source Data Pipeline. IAG is leading a dynamic and disruptive change in their industry and bringing important Open Source tools and process to accelerate innovation and save costs. They were nominated for a Super User award as well for their efforts and we are proud to call them Red Hat customers.

We can’t wait to meet all our mates!

ewa-gillen-59113
Photo by Ewa Gillen on Unsplash

For many of us ANZ-based associates the opportunity to meet the global OpenStack community in our biggest city is very exciting and one we have been waiting on for years. While we will of course be very busy attending the many sessions, one great place to be sure to meet us all is at Booth B1 in the Marketplace Expo Hall. At the booth we will have live environments and demos showcasing the exciting integration between Red Hat CloudForms, Red Hat OpenStack Platform, Red Hat Ceph Storage, and Red Hat OpenShift Container Platform accompanied by our very best ANZ, APAC, and Global talent. Come and chat with Solution Architects, documentation professionals, engineers, and senior management and find out how we develop products so that they continue to grow and lead the OpenStack and Open Source world.

And of course, there will some very special, Aussie-themed swag for you to pick up. There are a few once-in-a-lifetime items that we think you won’t want to miss and will make a truly special souvenir of your visit to our wonderful country! And of course, the latest edition of the RDO ducks will be on hand – get in fast!

There will also be a fun “Marketplace Mixer” on Monday, November 6th from 5:50PM – 7:30PM where you will find great food, conversation, and surprises in the Expo Hall. Our booth will feature yummy food, expert conversations, and more! And don’t miss the very special Melbourne Cup celebration on Tuesday, November 7th from 2:30 – 3:20. There will be a live stream of “the race that stops the nation,” the Melbourne Cup, direct from Flemington Racecourse in Victoria. Prepare your fascinator and come see the event Australia really does stop for!

Framed photograph of Phar Lap winning the Melbourne Cup, 1930
Credit: Museums Victoria

If you’ve not booked yet you can still save 10% with our exclusive code, REDHAT10.

So, there you go, mate.

World’s best software, world’s best community, world’s best city.

As the (in)famous tourism campaign from 2007 asked in the most Australian of ways: “So where the bloody hell are you?”

Can’t wait to see you in Sydney!

by August Simonelli, Technical Marketing Manager, Cloud at October 30, 2017 10:32 PM

OpenStack Blog

Developer Mailing List Digest October 21-27

News

  • TC election results [0]
  • Next PTG will be in Dublin, the week of February 26, 2018. More details will be posted on openstack.org/ptg as soon as we have them. [1]

SuccessBot Says

  • gothamr_ [0]: changes to the manila driverfixes branches can finally be merged xD Thanks infra folks for ZuulV3!
  • andreaf [1]: Tempest test base class is now a stable API for plugins
  • More [2]
[1] – http://eavesdrop.openstack.org/irclogs/%23openstack-release/%23openstack-release.2017-10-24.log.html

Community Summaries

  • TC Report 43 by Chris Dent [0]
  • Nova Notification Update Week 43 by Balazs Gibizer [1]
  • POST /api-sig/news by Chris Dent [2]
  • Technical Committee Status Update by Thierry Carrez [3]
  • Nova Placements Resource Provider Update [4]

Time to Remove the Ceilometer API?

Summarized by Jeremy Stanley

The Ceilometer REST API was deprecated in Ocata, a year ago, and the User Survey indicates more than half its users have switched to the non-OpenStack Gnocchi service’s API instead (using Ceilometer as a backend). The Ceilometer install guide has also been recommending Gnocchi at least as long ago as Newton. The old API has become an attractive nuisance from the Telemetry team’s perspective, and they’d like to go ahead and drop it altogether in Queens.
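In practice, moving off the deprecated API is a configuration change in Ceilometer rather than new code: point the dispatchers at Gnocchi and query Gnocchi’s own API from then on. A representative `ceilometer.conf` fragment is sketched below; option names varied somewhat across releases, so treat this as illustrative rather than definitive.

```ini
# ceilometer.conf — publish samples and events to Gnocchi instead of
# serving them through the deprecated Ceilometer REST API.
[DEFAULT]
meter_dispatchers = gnocchi
event_dispatchers = gnocchi

[dispatcher_gnocchi]
# Filter out Gnocchi's own service activity to avoid a metering loop.
filter_service_activity = True
archive_policy = low
```

With this in place, consumers read measures from the Gnocchi endpoint (e.g. via the `gnocchi` CLI) rather than from `/v2` of the Ceilometer API.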

Keystone v2.0 API Removal

Summarized by Thierry Carrez

Keystone Queens PTL Lance Bragstad gives notice that the Queens release will not include the v2.0 API, with the exception of the ec2-api. This removal comes after a lengthy deprecation period.
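For deployers still calling `POST /v2.0/tokens`, the replacement is the v3 authentication endpoint, `POST /v3/auth/tokens`. As a minimal sketch (the user, password, project, and domain values here are placeholders, not taken from the source), the v3 password-auth request body can be built like this:

```python
import json

def v3_auth_body(username, password, project_name, domain_id="default"):
    """Build the JSON body for POST /v3/auth/tokens (replaces /v2.0/tokens)."""
    return {
        "auth": {
            "identity": {
                "methods": ["password"],
                "password": {
                    "user": {
                        "name": username,
                        "domain": {"id": domain_id},
                        "password": password,
                    }
                },
            },
            # v2.0's "tenant" becomes a domain-scoped "project" in v3.
            "scope": {
                "project": {
                    "name": project_name,
                    "domain": {"id": domain_id},
                }
            },
        }
    }

body = v3_auth_body("demo", "s3cr3t", "demo")
print(json.dumps(body, indent=2))
```

The main structural difference from v2.0 is the explicit domain scoping: every user and project reference now carries a domain, which is why flat v2.0 clients cannot simply swap the URL.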

by Mike Perez at October 30, 2017 09:53 PM

OpenStack Superuser

Come meet the users changing the world with OpenStack

At last count, there are over 400 things to get excited about at next week’s OpenStack Sydney Summit. You’ll want to plan for a full three days, from hands-on workshops to meeting the developers who are building the open infrastructure tools powering the transformation of companies all over the planet.

But for me, nothing compares to meeting the users themselves. The people building not just data centers but railroads that span thousands of miles and serve billions of people, payment services that process trillions of yuan, streaming TV services helping users cut the cord…

You name it, if it’s changing the world it’s probably powered by OpenStack!

To save y’all a little time, we’ve pulled together this handy list of users speaking with links to their talks:

Find one you like? Download the Summit schedule app (iOS, Google) and start building your personal schedule!

I’m off to catch a plane to meet some users in Sydney.

Mark Collier, chief operating officer at the OpenStack Foundation, can be found on Twitter at @sparkycollier, even in the friendly skies.

Cover Photo // CC BY NC

The post Come meet the users changing the world with OpenStack appeared first on OpenStack Superuser.

by Mark Collier at October 30, 2017 05:44 PM

James Page

OpenStack Charms in Sydney

Next week at the OpenStack Summit in Sydney we have a few sessions scheduled for the OpenStack Charms project.

If you’re new to OpenStack deployment using Juju and the OpenStack Charms, then the general project update on Tuesday at 3:20 pm would be a good introduction. The session is only 20 minutes long, so it won’t take up too much of your day. Ryan and I will be doing a short 101 and providing some detail on new features for Pike and plans for Queens!

If you would like to get involved with OpenStack Charm development, then pop along to the project on-boarding session at 3:10 pm on Monday. This session will be much more hands-on, and we’ll drive content based on what participants need rather than having a fixed agenda.

If you’re an OpenStack Charm user and would like the opportunity to provide direct feedback to the development team then please come and tell us what you like and don’t like in the operators feedback session in the Forum on Tuesday at 9.50 am.

Looking forward to another great summit and seeing the other side of the planet for the first time – see you all in Sydney next week!


by JavaCruft at October 30, 2017 04:28 PM

Galera Cluster by Codership

Announcing Galera Cluster for MySQL 5.5.58, 5.6.38, 5.7.20 with Galera 3.22

Codership is pleased to announce a new release of Galera Cluster for MySQL consisting of MySQL-wsrep 5.5.58, 5.6.38, 5.7.20 and new Galera 3.22 library, wsrep API version 25.

 

NEW FEATURES AND NOTABLE FIXES IN THE GALERA REPLICATION LIBRARY SINCE LAST BINARY RELEASE BY CODERSHIP (3.21):

 


* Reporting last committed write set fixed to respect commit ordering (MW-341)

* GComm socket level error handling improved to avoid backend thread exit
in case of unexpected input from ASIO IO service (GAL-518)

* Race condition fixed in GComm message sending codepath (GAL-520)

* Fix for EVS protocol stall due to exhausted send window setting. This
bug could stall cluster messaging until the next keepalive was sent by
some node, causing intermittent pauses in write set replication. (GAL-521)

* Code changes to avoid name collisions with FreeBSD system headers (GAL-523)

Read the full release notes (how to install, repository structure) 

 

 

NOTABLE BUG FIXES IN MYSQL-WSREP:

 

Version MySQL-wsrep 5.7.20 and Galera 3.22, wsrep API version 25.

* Preserve --wsrep-recover log for future reference when starting the server.
The preserved log is stored in a file under MySQL data directory,
either in wsrep_recovery.ok or wsrep_recovery.fail depending on recovery
success. (MW-318)

* Avoid returning outdated values for wsrep_ready status variable (MW-384)

* A bug which caused stored procedure with an error handler to commit
a statement even in case of certification error was fixed. (MW-388)

* Crash during LOAD DATA for partition engine was fixed (MW-394)

* Fixed a crash caused by a dangling reference to wsrep status variables
array. (MW-399)

* Fixes to processing of foreign key cascades. (MW-402)

* ACL checks are now enforced before replication for all DDL operations
(MW-416)

* ALTER EVENT statement failure on slave was fixed (MW-417)

Read the full release notes  (known issues, how to install, repository structure) 

 

 

Version MySQL-wsrep 5.6.38 and Galera 3.22, wsrep API version 25

* Preserve --wsrep-recover log for future reference when starting the server.
The preserved log is stored in a file under MySQL data directory,
either in wsrep_recovery.ok or wsrep_recovery.fail depending on recovery
success. (MW-318)

* Avoid returning outdated values for wsrep_ready status variable (MW-384)

* A bug which caused stored procedure with an error handler to commit
a statement even in case of certification error was fixed. (MW-388)

* Crash during LOAD DATA for partition engine was fixed (MW-394)

* Fixed a crash caused by a dangling reference to wsrep status variables
array. (MW-399)

* Fixes to processing of foreign key cascades. (MW-402)

* ACL checks are now enforced before replication for all DDL operations
(MW-416)

Read the full release notes (known issues, how to install, repository structure)

 

 

Version MySQL-wsrep 5.5.58 and Galera 3.22, wsrep API version 25

Notable bug fixes in MySQL-wsrep:

* Avoid returning outdated values for wsrep_ready status variable (MW-384)

* Crash during LOAD DATA for partition engine was fixed (MW-394)

* Fixes to processing of foreign key cascades. (MW-402)

* ACL checks are now enforced before replication for all DDL operations
(MW-416)

Read the full release notes (known issues, how to install, repository structure)

 

by Sakari Keskitalo at October 30, 2017 10:17 AM

About

Planet OpenStack is a collection of thoughts from the developers and other key players of the OpenStack projects. If you are working on OpenStack technology you should add your OpenStack blog.

Subscriptions

Last updated:
November 22, 2017 06:06 PM
All times are UTC.

Powered by:
Planet