April 22, 2017

Eran Rom

The Case for Data Centric Hyperconvergence

In this blog post I would like to introduce the notion of data-centric hyperconvergence. Why should someone care about it? In two words: cost reduction. In a few more words: over the last decade, the ability to store data grew 10 times faster than the ability to move it, and 20 times faster if we consider only the last 5 years (details below). Thus, moving stored data (even within the data center) for the purpose of processing it becomes too costly, if not impossible. This growth in storage efficiency is driven by the need to store more data.

by eran at April 22, 2017 07:43 PM

April 21, 2017

OpenStack Superuser

Managing port level security in OpenStack

The OpenStack platform, specifically Neutron (the networking component), uses the concept of “ports” to connect the various cloud instances to different networks and to the corresponding virtual networking devices, like Neutron routers and firewalls.

The default security on these ports is quite restrictive (and rightly so), since the platform is supposed to be an autonomous, mostly independent system hosting multiple cloud tenants (customers) or different cloud instances with varying security requirements. To get a better feel for ports, take a look at the diagram below.

[Diagram: a simple OpenStack network topology]

What you see above is a simple network topology in OpenStack. It comprises one router (RTR1TNT80CL3) connected to two networks (PubNetCL3, PriNetTNT81CL3) and one VM (vm1). In the pop-up, you can see two IP addresses (172.16.8.134, 10.103.81.1) assigned to two interfaces on the router; these are the two ports connecting the router to the two networks.

In certain advanced use cases (as you will see below) we might need to change the restrictive security settings on ports to allow for further customization of the environment.

Thankfully, OpenStack allows us to manage security on the individual port level in an environment. By default, the following rules apply:

  • All incoming and outgoing traffic is blocked for ports connected to virtual machine instances. (Unless a ‘Security Group’ has been applied.)
  • Only traffic originating from the IP / MAC address pair known to OpenStack for a particular port will be allowed on the network.
  • Pass through and promiscuous mode will be blocked.

Allowing additional addresses

In certain instances, we need to allow traffic from multiple IP address / MAC pairs to pass through a port, or for multiple ports to share a MAC address or IP address. One such case is clustered instances. To do this, we add the additional IP address / MAC pairs to the port.

Step 1 – To obtain a list of all ports, run the following command:

$ neutron port-list 
+--------------------------------------+------+-------------------+-------------------------------------------------------------------------------------+
| id                                   | name | mac_address       | fixed_ips                     
+--------------------------------------+------+-------------------+-------------------------------------------------------------------------------------+
| 0240a7c8-f892-4596-8420-af5e3295a535 |      | fa:16:3e:3f:dc:18 | {"subnet_id": "8071b06a-bd05-49da-a710-ec85c8df5efb", "ip_address": "10.101.13.25"} |
| 064e8473-9bd6-4225-9cf3-13f855cd805f |      | fa:16:3e:2c:75:42 | {"subnet_id": "8071b06a-bd05-49da-a710-ec85c8df5efb", "ip_address": "10.101.13.6"}  |
| 0c86a8c6-c415-41be-958d-867ae110b995 |      | fa:16:3e:cb:cc:a3 | {"subnet_id": "3608806e-0991-4c81-91a5-11f9f097be64", "ip_address": "172.16.6.133"} |
| 0cb383b8-dadd-4b89-b863-d595a2173685 |      | fa:16:3e:7f:4a:78 | {"subnet_id": "8071b06a-bd05-49da-a710-ec85c8df5efb", "ip_address": "10.101.13.5"}  |

The above output lists all the ports in the environment. The first column gives the port ID (e.g. “0240a7c8-f892-4596-8420-af5e3295a535”) and the last column the IP address (e.g. “10.101.13.25”). The IP address is usually a good way of identifying the port.
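Since the port list can be long, it helps to filter by address rather than scan by eye. A minimal helper sketch (the function name is illustrative) that pulls the port ID for a given IP, assuming the table layout shown above where the ID is the second `|`-separated column:

```shell
# Print the ID of any port whose fixed_ips entry contains the given IP address.
# Matches the full quoted address, so 10.101.13.25 won't also match 10.101.13.250.
find_port_by_ip() {
    awk -F'|' -v pat="\"ip_address\": \"$1\"" \
        '$0 ~ pat { gsub(/ /, "", $2); print $2 }'
}

# Usage (illustrative):
#   neutron port-list | find_port_by_ip 10.101.13.25
```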

Step 2 – Once you have identified the port, use the following command to list the port details:

# neutron port-show ee976fc9-882d-46d4-80ef-c352090b8089
+-----------------------+---------------------------------
| Field                 | Value                    
+-----------------------+---------------------------------
| admin_state_up        | True
| allowed_address_pairs |
| binding:host_id       | v1net1.lab.gbmcloud.int
| binding:profile       | {}
| binding:vif_details   | {"port_filter": true, "ovs_hybri
| binding:vif_type      | ovs
| binding:vnic_type     | normal
| device_id             | dhcpdab444c7-710d-580a-b097-950c
| device_owner          | network:dhcp
| dns_assignment        | {"hostname": "host-10-101-12-2",
| dns_name              |
| extra_dhcp_opts       |
| fixed_ips             | {"subnet_id": "9bce8971-2e28-40b
| id                    | ee976fc9-882d-46d4-80ef-c352090b
| mac_address           | fa:16:3e:56:35:c4
| name                  |
| network_id            | fc2a69df-cf8c-45ac-8c0f-2dd8911a
| port_security_enabled | True
| security_groups       |
| status                | ACTIVE
| tenant_id             | b56cc570621243ca8fd9d53056279238
+-----------------------+---------------------------------

The above output gives you the details of the port; note that ee976fc9-882d-46d4-80ef-c352090b8089 is the ID of the port you are interested in. The first attribute we are interested in is “allowed_address_pairs”. It is blank for now, meaning the port will only allow traffic for the IP / MAC pair that Neutron has already assigned to it.


In order to allow more than one IP / MAC pair to pass through a particular port, you need to add the additional addresses to “allowed_address_pairs”. Use the following commands to manage this attribute:

Step 3.a – To add an IP address:

# neutron port-update b7d1d8bd-6ca7-4c35-9855-ba0dc2573fdc --allowed_address_pairs list=true type=dict ip_address=10.101.11.5

Step 3.b – To add multiple IP addresses and an additional MAC address:

# neutron port-update b7d1d8bd-6ca7-4c35-9855-ba0dc2573fdc --allowed_address_pairs list=true type=dict mac_address=ce:9e:5d:ad:6d:80,ip_address=10.101.11.5 ip_address=10.101.11.6

Step 3.c – To add an IP subnet:

# neutron port-update b7d1d8bd-6ca7-4c35-9855-ba0dc2573fdc --allowed_address_pairs list=true type=dict ip_address=10.101.11.0/24

Note: If you do not provide a MAC address, Neutron defaults to the MAC address of the port known to OpenStack, so you can supply an IP address or range without one. This works well when you want to add more than one IP address to the same virtual machine instance. However, it will not work if you want a virtual machine to allow traffic to pass through it, for example a virtual machine acting as a router or a firewall.
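To verify that one of the updates above took effect, you can read the attribute back (a sketch; the port ID matches Step 3, and the MAC address shown is illustrative):

```
# neutron port-show b7d1d8bd-6ca7-4c35-9855-ba0dc2573fdc -F allowed_address_pairs
+-----------------------+--------------------------------------------------------------+
| Field                 | Value                                                        |
+-----------------------+--------------------------------------------------------------+
| allowed_address_pairs | {"ip_address": "10.101.11.5", "mac_address": "fa:16:3e:..."} |
+-----------------------+--------------------------------------------------------------+
```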

Disabling port security

In certain situations, you might need to allow traffic to pass through a virtual machine instance in OpenStack, for example when the instance represents a networking device like a router or a firewall. In that case, it is not possible to list in “allowed_address_pairs” every IP / MAC address combination for all the machines that might send traffic via this networking instance. A better option is to disable port-level security. Before doing so, please note the following:

  • Port level security cannot be disabled if:
    • A security group is assigned to the instance
    • Allowed address pairs are set for the instance
  • Once port level security is disabled, all traffic (Ingress and Egress) will be allowed on this interface.
  • Make sure that the security is being managed by the virtual machine instance (e.g. firewall rules) to compensate for the disabled security at the OpenStack level.

As seen above, you will need to remove any existing security groups from the instance. This can easily be done from the Horizon GUI, so I won’t go into the details.
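For completeness, recent versions of the neutron client can also clear a port’s security groups from the command line (a sketch; verify that your client version supports the flag):

```
# neutron port-update ccbd0ed6-3dfd-4431-af29-4a2d921abb38 --no-security-groups
Updated port: ccbd0ed6-3dfd-4431-af29-4a2d921abb38
```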

You will also need to clear the allowed address pairs. As of this writing, I was unable to find an OpenStack command-line option to set the allowed address pairs back to blank once they have been set, so we will use the OpenStack API instead.

Use the following command to get an authentication token from keystone in order to access the OpenStack API:

# curl -d '{"auth":{"passwordCredentials":{"username": "admin","password": "pass"},"tenantName": "tenant1"}}' -H "Content-Type: application/json" http://10.10.1.10:5000/v2.0/tokens

Note that we need to provide the username and password of a valid OpenStack user that has access to the given tenant (tenant1 above); the IP address ’10.10.1.10’ is the server where Keystone resides. The output of this command will be as follows:

{"access": {"token": {"issued_at": "2016-06-05T01:49:36.477765", "expires": "2016-06-05T02:49:36Z", "id": "f3e704d837cf4074a0eb965d9de58c40", "tenant": {"description": "RDP Service UAT (TNT11_CL1)", "enabled": true, "id": "4dedb7b7ffe740c181d35a930809b22b", "name": "rdp_srv_uat"}, "audit_ids": ["H5JfGB-6QISRBlWObwAyRg"]}, "serviceCatalog": [{"endpoints": [{"adminURL": "http://10.10.1.10:8774/v2/4dedb7b7ffe740c181d35a930809b22b", "region": "RegionOne", "internalURL": "http://10.10.1.10:8774/v2/4dedb7b7ffe740c181d35a930809b22b", "id": "78655fc75ce2432986e1469c0703d32c", "publicURL": ….

I have trimmed the above output for readability. The only item of interest for us is the token ID (f3e704d837cf4074a0eb965d9de58c40), found under “access” → “token” → “id”; be careful not to confuse it with the tenant ID (4dedb7b7ffe740c181d35a930809b22b) that appears right next to it. We will use the token in the next command:
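If you save the response to a file or pipe it along, the token can be extracted programmatically rather than by eye (a sketch using Python’s standard json module; the helper name and the auth.json filename are illustrative):

```shell
# Extract the authentication token ID from a Keystone v2.0 token response.
# The token lives at access.token.id; the similarly formatted value at
# access.token.tenant.id is the tenant ID, not the token.
extract_token() {
    python3 -c 'import json, sys; print(json.load(sys.stdin)["access"]["token"]["id"])'
}

# Usage (illustrative):
#   curl -s -d @auth.json -H "Content-Type: application/json" \
#        http://10.10.1.10:5000/v2.0/tokens | extract_token
```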

# curl -i http://10.10.1.11:9696/v2.0/ports/ccbd0ed6-3dfd-4431-af29-4a2d921abb38.json -X PUT -H "X-Auth-Token: f3e704d837cf4074a0eb965d9de58c40" -H "Content-Type: application/json" -H "Accept: application/json" -H "User-Agent: python-neutronclient" -d '{"port": {"allowed_address_pairs": []}}'

The token (f3e704d837cf4074a0eb965d9de58c40) is the same one we obtained from the previous command, and ccbd0ed6-3dfd-4431-af29-4a2d921abb38 is the ID of the port whose “allowed_address_pairs” we need to set to blank. The last section of the command (“’port’: {‘allowed_address_pairs’: []}”) does exactly that.

Now that we have removed the security group and cleared the allowed address pairs, we can disable port-level security. Run the following command:

# neutron port-update ccbd0ed6-3dfd-4431-af29-4a2d921abb38 --port_security_enabled=False

The above command disables port security on the particular port, allowing all traffic to pass through without Neutron dropping it.


Thank you for reading. If you have any questions or comments, please feel free to share them in the comments section below so that everyone can benefit from the discussion.

This post first appeared on the WhatCloud blog. Superuser is always interested in community content, email: editor@superuser.org.

Cover Photo // CC BY NC

The post Managing port level security in OpenStack appeared first on OpenStack Superuser.

by Nooruddin Abbas at April 21, 2017 11:12 AM

Galera Cluster by Codership

Galera Cluster team at OpenStack Summit, May 8th-11th, 2017, Boston

A four-day conference for IT business leaders, cloud operators and developers covering the open infrastructure landscape.

Galera Cluster developers and experts are looking forward to meeting you in Boston. Our booth is D15.

by Sakari Keskitalo at April 21, 2017 09:04 AM

Opensource.com

What's new in OpenStack Ocata

Take a look at some of the main changes that came out in the latest upstream release of OpenStack.

by rbowen at April 21, 2017 07:00 AM

April 20, 2017

NFVPE @ Red Hat

Let’s run Homer on Kubernetes!

I have to say that Homer is a favorite of mine. Homer is VoIP analysis & monitoring – on steroids. Not only has it saved my keister a number of times when troubleshooting VoIP platforms, but it has an awesome (and helpful) open source community. In my opinion – it should be an integral part of your devops plan if you’re deploying VoIP apps (really important to have visibility of your… Ops!). Leif and I are using Homer as part of our (still WIP) vnf-asterisk demo VNF (virtualized network function). We want to get it all running in OpenShift & Kubernetes. Our goal for this walk-through is to get Homer up and running on Kubernetes, and generate some traffic using HEPgen.js, and then view it on the Homer Web UI. So – why postpone joy? Let’s use homer-docker to go ahead and get Homer up and running on Kubernetes.

by Doug Smith at April 20, 2017 09:20 PM

Doug Hellmann

Lessons learned from working on large scale, cross-project initiatives in OpenStack

I have been involved with OpenStack development since just before the Folsom summit in 2012. Over the course of that time, I have participated in innumerable discussions about 3 big features tied to OpenStack logging: translating log messages, adding request IDs to log messages, and adding unique message IDs to log messages. We have had … Continue reading Lessons learned from working on large scale, cross-project initiatives in OpenStack

by doug at April 20, 2017 06:30 PM

OpenStack Superuser

The interoperability challenge in telecom and NFV environments: A tutorial

SANTA CLARA, Calif.– Working together is the key to moving forward.

That was one of the main takeaways for the capacity crowd at the pre-conference tutorial on network functions virtualization (NFV) interoperability at the recent Open Networking Summit (ONS). 

Experienced teams discussed achieving interoperability in an NFV world through collaboration among stakeholders, demonstrating the importance of interoperability for communications service providers (CSPs), NFV infrastructure (NFVI) and virtualized infrastructure manager (VIM) vendors, application (VNF) vendors and open-source projects alike. 

Ericsson’s Chris Price summed up how vendors build interoperable applications and total solutions, implementing click-to-buy, plug-and-play, application portability, interoperable functions and end-to-end automation. Tutorial attendees walked away with recommendations from the experts, available resources and pointers to where to find results. You can get up to speed by following the 1.5-hour session below or checking out the slides.

The session started with an overview of the current landscape where the OpenStack cloud platform is the most frequently deployed infrastructure manager for NFV. Open source brings richness in choice along with challenges that can be addressed by ensuring all components support current OpenStack releases and adhere to and validate use of open APIs using the OpenStack-provided faithful implementation test suite (FITS). Although the interop guidelines and test suite were originally designed for public cloud interoperability they are valuable for complex solutions such as NFV as well.

Extensive standalone and full-solution testing provides vendors the ability to build offerings that are easy to integrate and perform well in telecom environments. In addition to OpenStack, Open Platform for NFV (OPNFV), the  ETSI NFV Industry Specification Group (ISG) and testing organizations such as the European Advanced Networking Test Center (EANTC), offer programs and test suites to validate all layers of an NFV solution.

EANTC provides independent, vendor-neutral testing expertise for innovative telecom technologies. Managing director Carsten Rossenhövel presented recommendations to evaluate assurance qualities along with composite comparisons for the single vendor, light multi-vendor and full multi-vendor solutions they’ve tested.

Rossenhövel also shared the results of 2016 VNF/NFVI interoperability testing commissioned by the not-for-profit New IP Agency. Results were very good with over two-thirds of combinations passing. All seven NFVI/VIM solutions tested were popular NFV-ready OpenStack distributions. Helping providers learn how their offerings perform in relevant NFV scenarios was a key test objective and subsequent testing showed significant improvements. Look for the results of the in-progress testing of MANO and VIM combinations in May 2017.

All EANTC testing and the first ETSI NFV Plugtest adhere to the draft ETSI NFV Release 2 specification and testing methodology TST007. The January 2017 Plugtest was conducted with NFVI/VIM, MANO and VNF vendors and open source project participation. Interoperability testing at the Plugtest ranged from 97.7 percent to 100 percent. As each round of test results become available, it is clear the participants incorporate the experiences into their offerings.

The OpenStack community also held a mini-Summit at ONS: OpenStack Networking Roadmap, Community and Collaboration featuring Neutron core and distinguished engineer from SUSE Armando Migliaccio, AT&T principal technical staff member Paul Carver and OpenStack Foundation ecosystem lead, Ildiko Váncsa. You can check out those slides here.

The importance of collaboration was threaded through the four-day event which showcased how leading CSPs and enterprises worldwide are moving fast thanks to the benefits of a virtualized network environment.

In their joint keynote, John Donovan, chief strategy officer and group president, AT&T Technology and Operations, and Andre Fuetsch, CTO and president of AT&T Labs, said over 250,000 percent growth in network traffic in the past 10 years mandated an open, virtualized and automated approach. Fuetsch urged the audience to focus on open source, “a much faster vehicle.”

AT&T open sourced the Enhanced Control, Orchestration, Management & Policy (ECOMP) platform, which has run in production on the OpenStack-based AT&T Integrated Cloud for over 2.5 years. In February, ECOMP merged with the Open Orchestrator project (OPEN-O) to form the Open Network Automation Platform (ONAP).

A five-year plan is the industry norm for achieving NFV and software-defined networking (SDN). However, the challenges of moving to a virtualized environment can result in multiple release versions of components from multiple vendors, which increase the need for integration and interoperability testing.

Cover Photo // CC BY NC

The post The interoperability challenge in telecom and NFV environments: A tutorial appeared first on OpenStack Superuser.

by Kathy Cacciatore at April 20, 2017 12:31 PM

Javier Peña

Successfully resetting the root password of a CentOS 7 VM in OpenStack

Even in a cloud world, sometimes you need to find a way to get inside your VM when SSH doesn’t work. Maybe your Ansible script broke the SSH configuration and you need to debug it, or you lost the key you used when creating the VM, or <insert random reason here>.

The good news is that OpenStack environments give you a connection to the VM console. However, CentOS 7 (and RHEL 7) images are a bit tricky when you want to boot in single user mode, as they request the root password, which we don’t have. There is a well documented procedure to boot using rd.break, but there are some little quirks to adapt it for OpenStack images. But fear not, I have summarized them in this post.

First, we start by interrupting the VM boot process and going to the GRUB menu. That’s sometimes easier said than done if the console connection has some lag, but it is doable.

Then, select the kernel and press e to edit the kernel command-line arguments.

The documentation tells us to add rd.break enforcing=0 to the kernel command line (the one starting with linux16…). However, cloud images redirect the console to ttyS0, the serial console, rather than the graphical console we have in the OpenStack dashboard. So remove the console=ttyS0… parameters (there are two in this example), and make sure console=tty0 is present on the command line.
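As a sketch (the kernel version, UUID and the exact parameter set are illustrative; yours will differ), the edit changes the linux16 line roughly like this:

```
# Before: console redirected to the serial port
linux16 /boot/vmlinuz-3.10.0-514.el7.x86_64 root=UUID=... ro console=tty0 console=ttyS0,115200n8 no_timer_check net.ifnames=0
# After: serial console parameters removed, rd.break enforcing=0 appended
linux16 /boot/vmlinuz-3.10.0-514.el7.x86_64 root=UUID=... ro console=tty0 no_timer_check net.ifnames=0 rd.break enforcing=0
```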

When finished, press Ctrl-x to boot. You’ll see the kernel boot messages, and get a root prompt. The root file system is mounted read-only under /sysroot, but you can remount it read-write and then chroot to it.

From there on, you can check what went wrong, update any SSH keys if needed, anything you want. Just make sure you do touch /.autorelabel before exiting, or you may have SELinux troubles after a reboot.
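Put together, the steps above look roughly like this at the emergency shell (a sketch of the console session; prompts will vary):

```
switch_root:/# mount -o remount,rw /sysroot   # remount the root filesystem read-write
switch_root:/# chroot /sysroot                # switch into the installed system
sh-4.2# passwd root                           # e.g. set a new root password
sh-4.2# touch /.autorelabel                   # have SELinux relabel files on next boot
sh-4.2# exit
switch_root:/# exit                           # resume boot
```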

And that’s all! There is an alternative way for this, using Nova rescue mode, but that involves starting a new instance using the same image, attaching the existing disk as secondary… and if something goes wrong there, it might be even harder to recover your VM.

by jpena at April 20, 2017 09:49 AM

April 19, 2017

OpenStack Superuser

A beginner’s guide to attending an OpenStack Summit

Wondering what’s in store for the OpenStack Summit Boston? Well, apart from attending dozens of sessions led by professionals in the industry, meeting a ton of new people and taking home a lot of great swag, you’ll certainly bring home plenty of ideas and be able to contribute more effectively.

As per reports, around 50 – 60 percent of OpenStack Summit attendees are first-timers. As a beneficiary of the travel support program and a speaker at the Summit Barcelona, I hope this piece conveys my enriching journey and keeps you from feeling overwhelmed in the whirlwind of activities.

The six-month release cycle and subsequent Summit give OpenStack contributors, vendors and users a real-time view of growth and change, surfacing subtle shifts that can otherwise be difficult to notice. I participated in the main conference and the design summit, focusing mostly on the Keystone project. But before I break down what I did, I’d like to extend many thanks to all those who planned, participated in and helped make this last Summit a tremendous event.

Pre-game: I didn’t want to be stuck at the registration desk while everyone else was off to the races, so that’s the first thing I did after arriving at the venue. Being a speaker, I got an extra edge and, in no time, I was holding my access pass along with an OpenStack t-shirt. I then familiarized myself with the conference space using the venue map so that I didn’t get lost and get to sessions late. I then headed to the Pre-Summit Marketplace Mixer along with fellow Outreachy interns to mix, mingle and checkout the sponsor booths. This was the first time I met my coordinator, Victoria Martínez de la Cruz and my mentor, Samuel de Medeiros Queiroz in person and my enthusiasm skyrocketed.

With regard to the Summit schedule, I have to say that there were many tempting talks, but it was physically impossible to attend them all, and it probably wasn’t a good idea to chase everything that glittered. To tackle this, I planned my schedule in advance and marked the sessions I couldn’t afford to miss in the OpenStack Foundation Summit app. Thankfully, the Summit organizers put the videos up on YouTube and on the OpenStack website in a timely manner. At that point, it was up to me to determine which talks were important for me to see in real time. While pinpointing the sessions, I considered both the speaker and the subject matter. Highly tactical sessions are generally useful to attend regardless of who leads them. However, sessions not directly related to one’s profession can be valuable as well if they’re led by an industry figure one is angling to meet.

On the first day, I jumped out of bed quite early to attend the Speed Mentoring session organized by the Women of OpenStack. It was a great way to get to know experienced people in the OpenStack community. The session comprised both career and technical mentoring. The amazing experience encouraged me to join the Women of OpenStack session the following day, where we heard lightning talks and broke into small groups to network and discuss the key messages from those talks (big thanks to Emily Hugenbruch, Nithya Ruff and Jessica Murillo for their valuable insights). While heading towards the keynote session, I felt overwhelmed with joy to be a part of the OpenStack community. The speeches were amazing, with cool demonstrations of technology, great customer stories and insight from vendors about what makes an OpenStack deployment successful. I proceeded to the Marketplace for some snacks and grabbed more swag. Many companies had a “We are Hiring” tag on their hoardings, so I discussed the ins and outs of the respective roles with them and even handed over my resume to stay in touch. The biggest takeaways from the sessions I attended in subsequent days were an increased interest in the identity service and the practical knowledge gained in sessions like “Nailing Your Next OpenStack Job Interview” and “Effective Code Review”. I also found the “Pushing Your QA Upstream” and “I Found a Security Bug: What Happens Next?” sessions quite useful. Attending the hands-on workshop on learning to debug OpenStack code with the Python Debugger and PyCharm was an unmatched experience.

To wrap up, every talk I attended had something to offer, be it an intriguing or fascinating idea or a tactic that comes in handy. During design sessions, we sat together at one table and focused on the most important aspects and plans for the next OpenStack release. The topics ranged from discussions about what can be done to make code more robust and stable to discussions on how we can make the lives of operators easier. Things ran smoothly in these sessions as everybody was in one room, focusing on a single topic.

Lastly, I must plug my own foray into the conference talk schedule. I had a blast talking about OpenStack-Outreachy internships. The talk was intended for people who want to become a better community member along with those willing to start contributing or mentoring. I am deeply grateful to my co-presenters, Victoria and Samuel, whom I have known for nearly two years now.

But the formal talks aren’t the only highlight. The best parts of a conference include meeting a new person who might be able to help you out with that sticky problem, catching up with old friends or having the opportunity to just geek out for a little while over a nice meal and maybe a drink or two with the people who once helped you submit a patch (Tip: it’s a good idea to contact those you want to meet before the conference. This way you’ll set yourself apart by engaging that person in a casual meeting that isn’t distracted by throngs of people). Also, don’t forget that at around 6 p.m. almost all the sessions wind up, so use this time to roam around the city. You may even consider extending your trip by a day or two to visit some famous tourist destinations nearby.

Were you also at the Barcelona Summit? I would love to hear about your experience in the comment section below. Stay tuned for more interesting updates and developments in the world of OpenStack.

This post first appeared on Nisha Yadav’s blog. Superuser is always interested in community content, email: editor@superuser.org.
Cover Photo // CC BY NC

The post A beginner’s guide to attending an OpenStack Summit appeared first on OpenStack Superuser.

by Nisha Yadav at April 19, 2017 11:03 AM

Opensource.com

The best minds in open source gather at OpenStack Summit Boston

A new event at the upcoming OpenStack Summit in Boston, Open Source Days will provide space for related cloud and infrastructure projects to gather and learn from one another.

by Mark Collier at April 19, 2017 07:00 AM

April 18, 2017

Cisco Cloud Blog

Cloud Unfiltered Episode 04: Cisco’s Dave Lively

Dave Lively is up to his neck in cloud. He’s been with Cisco for 20-ish years, he’s been involved in our cloud efforts for quite a while now, and he’s been involved in the open source community just as long. And the best part is—he really gets it. Gets the inherent value of cloud, as […]

by Ali Amagasu at April 18, 2017 07:58 PM

Red Hat Stack

More than 60 Red Hat-led sessions confirmed for OpenStack Summit Boston

This spring’s 2017 OpenStack Summit in Boston should be another great and educational event. The OpenStack Foundation has posted the final session agenda detailing the entire week’s schedule of events. And once again Red Hat will be very busy during the four-day event, delivering more than 60 sessions, from technology overviews to deep dives into the OpenStack services for containers, storage, networking, compute, network functions virtualization (NFV), and much, much more.

As a headline sponsor this spring, we also have a full-day breakout room on Monday, where we plan to present additional product and strategy sessions. And we will have two keynote presenters on stage: President and CEO Jim Whitehurst, and Vice President and Chief Technologist Chris Wright.

To learn more about Red Hat’s general sessions, look at the details below. We’ll add the agenda details of our breakout soon. Also, be sure to visit us at our booth in the center of the Marketplace to meet the team and check out our live demonstrations. Finally, we’ll have Red Hat engineers, product managers, consultants, and executives in attendance, so be sure to talk to your Red Hat representative to schedule an in-person meeting while there.

And in case you haven’t registered yet, visit our OpenStack Summit page for a discounted registration code to help get you to the event. We look forward to seeing you in Boston this May.

For more details on each session, click on the title below:

Monday sessions

Kuryr & Fuxi: delivering OpenStack networking and storage to Docker swarm containers Antoni Segura Puimedon, Vikas Choudhary, and Hongbin Lu (Huawei)
Multi-cloud demo Monty Taylor
Configure your cloud for recovery Walter Bentley
Kubernetes and OpenStack at scale Stephen Gordon
No longer considered an epic spell of transformation: in-place upgrade! Krzysztof Janiszewski and Ken Holden
Fifty shades for enrollment: how to use Certmonger to win OpenStack Ade Lee and Rob Crittenden
OpenStack and OVN – what’s new with OVS 2.7 Russell Bryant, Ben Pfaff (VMware), and Justin Pettit (VMware)
Federation with Keycloak and FreeIPA Martin Lopes, Rodrigo Duarte Sousa, and John Dennis
7 “must haves” for highly effective Telco NFV deployments Anita Tragler and Greg Smith (Juniper Networks, Inc.)
Containerizing OpenStack deployments: lessons learned from TripleO Flavio Percoco
Project update – Heat Rabi Mishra, Zane Bitter, and Rico Lin (EasyStack)

Tuesday sessions

OpenStack Telemetry and the 10,000 instances Julien Danjou and Alex Krzos
Mastering and troubleshooting NFV issues Sadique Puthen and Jaison Raju
The Ceph power show – hands-on with Ceph: Episode 2 – ‘The Jewel Story’ Karan Singh, Daniel Messer, and Brent Compton
SmartNICs – paving the way for 25G/40G/100G speed NFV deployments in OpenStack Anita Tragler and Edwin Peer (Netronome)
Scaling NFV – are containers the answer? Azhar Sayeed
Free my organization to pursue cloud native infrastructure! Dave Cain and Steve Black (East Carolina University)
Container networking using Kuryr – a hands-on lab Sudhir Kethamakka and Amol Chobe (Ericsson)
Using software-defined WAN implementation to turn on advanced connectivity services in OpenStack Ali Kafel and Pratik Roychowdhury (OpenContrail)
Don’t fail at scale: how to plan for, build, and operate a successful OpenStack cloud David Costakos and Julio Villarreal Pelegrino
Red Hat OpenStack Certification Program Allessandro Silva
OpenStack and OpenDaylight: an integrated IaaS for SDN and NFV Nir Yechiel and Andre Fredette
Project update – Kuryr Antoni Segura Puimedon and Irena Berezovsky (Huawei)
Barbican workshop – securing the cloud Ade Lee, Fernando Diaz (IBM), Dave McCowan (Cisco Systems), Douglas Mendizabal (Rackspace), Kaitlin Farr (Johns Hopkins University)
Bridging the gap between deploying OpenStack as a cloud application and as a traditional application James Slagle
Real time KVM and how it works Eric Lajoie

Wednesday sessions

Projects Update – Sahara Telles Nobrega and Elise Gafford
Project update – Mistral Ryan Brady
Bite off more than you can chew, then chew it: OpenStack consumption models Tyler Britten, Walter Bentley, and Jonathan Kelly (MetacloudCisco)
Hybrid messaging solutions for large scale OpenStack deployments Kenneth Giusti and Andrew Smith
Project update – Nova Dan Smith, Jay Pipes (Mirantis), and Matt Riedermann (Huawei)
Hands-on to configure your cloud to be able to charge your users using official OpenStack components Julien Danjou, Christophe Sautheir (Objectif Libre), and Maxime Cottret (Objectif Libre)
To OpenStack or not OpenStack; that is the question Frank Wu
Distributed monitoring and analysis for telecom requirements Tomofumi Hayashi, Yuki Kasuya (KDDI Research), and Ryota Mibu (NEC)
OVN support for multiple gateways and IPv6 Russell Bryant and Numan Siddique
Kuryr-Kubernetes: the seamless path to adding pods to your datacenter networking Antoni Segura Puimedon, Irena Berezovsky (Huawei), and Ilya Chukhnakov (Mirantis)
Unlocking the performance secrets of Ceph object storage Karan Singh, Kyle Bader, and Brent Compton
OVN hands-on tutorial part 1: introduction Russell Bryant, Ben Pfaff (VMware), and Justin Pettit (VMware)
Kuberneterize your baremetal nodes in OpenStack! Ken Savich and Darin Sorrentino
OVN hands-on tutorial part 2: advanced Russell Bryant, Ben Pfaff (VMware), and Justin Pettit (VMware)
The Amazon effect on open source cloud business models Flavio Percoco, Monty Taylor, Nati Shalom (GigaSpaces), and Yaron Haviv (Iguazio)
Neutron port binding and impact of unbound ports on DVR routers with floatingIP Brian Haley and Swaminathan Vasudevan (HPE)
Upstream contribution – give up or double down? Assaf Muller
Hyper cool infrastructure Randy Robbins
Strategic distributed and multisite OpenStack for business continuity and scalability use cases Rob Young
Per API role-based access control Adam Young and Kristi Nikolla (Massachusetts Open Cloud)
Logging work group BoF Erno Kuvaja, Rochelle Grober, Hector Gonzalez Mendoza (Intel), Hieu LE (Fujitsu) and Andrew Ukasick (AT&T)
Performance and scale analysis of OpenStack using Browbeat  Alex Krzos, Sai Sindhur Malleni, and Joe Talerico
Scaling Nova: how CellsV2 affects your deployment Dan Smith
Ambassador community report Erwan Gallen, Lisa-Marie Namphy (OpenStack Ambassador), Akihiro Hasegawa (Equinix), Marton Kiss (Aptira), and Akira Yoshiyama (NEC)

Thursday sessions

Examining different ways to get involved: a look at open source Rob Wilmoth
CephFS backed NFS share service for multi-tenant clouds Victoria Martinez de la Cruz, Ramana Raja, and Tom Barron
Create your VM in a (almost) deterministic way – a hands-on lab Sudhir Kethamakka and Geetika Batra
RDO’s continuous packaging platform Matthieu Huin, Fabien Boucher, and Haikel Guemar (CentOS)
OpenDaylight Network Virtualization solution (NetVirt) with FD.io VPP data plane Andre Fredette, Srikanth Vavilapalli (Ericsson), and Prem Sankar Gopanna (Ericsson)
Ceph snapshots for fun & profit Gregory Farnum
Gnocchi and collectd for faster fault detection and maintenance Julien Danjou and Emma Foley
Project update – TripleO Emillien Macchi, Flavio Percoco, and Steven Hardy
Project update – Telemetry Julien Danjou, Mehdi Abaakouk, and Gordon Chung (Huawei)
Turned up to 11: low latency Ceph block storage Jason Dillaman, Yuan Zhou (INTC), and Tushar Gohad (Intel)
Who reads books anymore? Or writes them? Michael Solberg and Ben Silverman (OnX Enterprise Solutions)
Pushing the boundaries of OpenStack – wait, what are they again? Walter Bentley
Multi-site OpenStack – deployment option and challenges for a telco Azhar Sayeed
Ceph project update Sage Weil

 

by Peter Pawelski, Product Marketing Manager, Red Hat OpenStack Platform at April 18, 2017 06:02 PM

OpenStack Superuser

OpenStack works best when devs and operators tailor their workloads

Monica Rodriguez is a technical team lead with Chemical Abstract Service (CAS), a division of the American Chemical Society. She’s also a Certified OpenStack Administrator and a member of Women of OpenStack.

She’s presenting two sessions at the OpenStack Summit Boston, May 8-11, where she will be recounting the triumphs and trials of her work.

Rodriguez tells Superuser more about CAS, the role OpenStack plays in its day-to-day operations, how becoming OpenStack certified has impacted her career and what to expect from her talks at the Summit.

What is your role at the CAS?

I’m a technical team lead for our operations platform, a DevOps organization. Our primary focus is to provide fundamental capabilities that enable rapid, flexible construction and delivery of new products and services.

How has becoming a Certified OpenStack Administrator (COA) impacted your career?

I became a COA about five months after I started working with OpenStack. This was my first involvement with DevOps and the world of open source technology. Taking the exam was a major stepping stone for me to show that I had reached a verifiable level of expertise in the field. The COA not only boosted my confidence but also provided a jumping-off point that allowed me to dive deeper into the technology that facilitates CAS’s ability to rapidly advance scientific discovery.  

Tell us about your organization and its customers.

CAS, a division of the American Chemical Society, provides a variety of solutions that support science, engineering, technology, patents, business information and much more across a variety of scientific disciplines. CAS scientists and technologists ensure connections among published scientific data that are unparalleled. Our products and services are designed to accommodate a wide range of information needs, whether you are a student, an infrequent researcher who only needs a few quick answers or a business professional who requires more powerful and comprehensive analyses.

How long has CAS been using OpenStack and in what capacity?

CAS has been using OpenStack for about two years to support our product development initiatives. We have migrated existing products to the cloud environment and are continuously deploying new ones. We have a handful of deployments in production at this time and expect to increase this rapidly going forward.

What will people learn from your presentations?

For “A Series of Unfortunate Deployments”, I’ll be presenting alongside my colleague Scott Coplin and Chris Breu, private cloud architect at Rackspace. We’ll be sharing the story of our team’s efforts deploying a Lambda architecture search engine on OpenStack, showcasing the trials and tribulations we encountered on our journey. We share what worked for us and what didn’t, including how taking advantage of all that Nova compute offers made this deployment a success. We also highlight the importance of OpenStack operators working closely with developers to maximize performance and stability. OpenStack is excellent for all things, but best when both devs and operators tailor the environment to match the expected workloads.

I’ll also be presenting a lightning talk “Deploy It Like You Mean It: Committing to Continuous Delivery with Jenkins Pipeline in OpenStack.” As a DevOps engineer, I’m constantly looking for ways to improve our workflows through efficiency and automation. It’s been so exciting to see how something that was already good can be made great! I’m looking forward to sharing that at the Summit.

What’s next for your organization?

We plan to expand our OpenStack usage in both functionality and number of products deployed on OpenStack. As an established organization, we have many legacy applications that can be difficult to migrate. With increased knowledge of OpenStack and the functionality available, we can utilize it further. Our goal is to make three to five new applications available to our customers within the next year.

The post OpenStack works best when devs and operators tailor their workloads appeared first on OpenStack Superuser.

by Superuser at April 18, 2017 11:42 AM

Christopher Smart

Patches for OpenStack Ironic Python Agent to create Buildroot images with Make

Recently I wrote about creating an OpenStack Ironic deploy image with Buildroot. Doing this manually is good because it helps you understand how it’s pieced together; however, it is slightly more involved.

The Ironic Python Agent (IPA) repo has some imagebuild scripts which make building the CoreOS and TinyCore images pretty trivial. I now have some patches which add support for creating the Buildroot images, too.

The patches consist of a few scripts which wrap the manual build method and a Makefile to tie it all together. Only the install-deps.sh script requires root privileges (and only if it detects missing dependencies); all other Buildroot tasks are run as a non-privileged user. It’s one of the great things about the Buildroot method!

Build

Again, I have included documentation in the repo, so please see there for more details on how to build and customise the image. However in short, it is as simple as:

git clone https://github.com/csmart/ironic-python-agent.git
cd ironic-python-agent/imagebuild/buildroot
make
# or, alternatively:
./build-buildroot.sh --all

These actions will perform the following tasks automatically:

  • Fetch the Buildroot Git repositories
  • Load the default IPA Buildroot configuration
  • Download and verify all source code
  • Build the toolchain
  • Use the toolchain to build:
    • System libraries and packages
    • Linux kernel
    • Python Wheels for IPA and dependencies
  • Create the kernel, initramfs and ISO images

The default configuration points to the upstream IPA Git repository, however you can change this to point to any repo and commit you like. For example, if you’re working on IPA itself, you can point Buildroot to your local Git repo and then build and boot that image to test it!

The following finalised images will be found under ./build/output/images:

  • bzImage (kernel)
  • rootfs.cpio.xz (ramdisk)
  • rootfs.iso9660 (ISO image)

These files can be uploaded to Glance for use with Ironic.

Help

To see available Makefile targets, simply run the help target:

make help

Help is also available for the shell scripts if you pass the --help option:

./build-buildroot.sh --help
./clean-buildroot.sh --help
./customise-buildroot.sh --help

Customisation

As with the manual Buildroot method, customising the build is pretty easy:

make menuconfig
# do buildroot changes, e.g. change IPA Git URL
make

I created the kernel config from scratch (via tinyconfig) and deliberately tried to balance size and functionality. It should boot on most Intel based machines (BIOS and UEFI), however hardware support like hard disk and ethernet controllers is deliberately limited. The goal was to start small and add more support as needed.

Customising the Linux kernel is also pretty easy, though:

make linux-menuconfig
# do kernel changes
make

Each time you run make, it’ll pick up where you left off and re-create your images.

Really happy for anyone to test it out and let me know what you think!

by Chris at April 18, 2017 10:35 AM

April 17, 2017

Amrith Kumar

Troubleshooting OpenStack gate issues with logstash

From time to time, I have to figure out why the Trove CI failed some job. By “from time to time”, I really mean “constantly”, “all the time”, and “everyday”. Very often the issue is some broken change that someone pushed up, easy ones are pep8 or pylint failures, slightly harder are the py27/py35 failures. The … Continue reading "Troubleshooting OpenStack gate issues with logstash"

by amrith at April 17, 2017 04:35 PM

SUSE Conversations

SUSE Expert Days; now is the time!

Need to understand how your organization can adapt its infrastructure to the high demands of customers who are looking for reliable, leading-edge digital services? Our SUSE Expert Days will give you all that, and much more! This very successful global Roadshow has the perfect program, specifically created for your needs. You can expect a full …

+read more

The post SUSE Expert Days; now is the time! appeared first on SUSE Blog.

by jbakrude at April 17, 2017 03:31 PM

OpenStack Superuser

In Nordic countries, OpenStack feels the snowball effect

In this series of interviews, Superuser takes you around the world to meet our newest user groups. These busy groups act as liaisons between the Foundation and the general community in their regions. You can find out more about a group in your area — or how to start one — at the User Groups portal.

Here we talk to Morten Fredsoe Olsen, director of unClouded, about the Denmark User Group.

He tells us about about the snowball effect in the region, the vision for the group and what events are on the horizon.

What’s the most important OpenStack debate in your region right now?

OpenStack is still in its infancy here in Denmark, thus the debate revolves around interest, with potential stakeholders waiting for others to be the first to put it into production. In my opinion it’s a moot point, with many successful examples already existing in the Nordic region.

What’s the key to closing the talent gap in the OpenStack community?

Hopefully, the meet-up group will have some individuals based in operations take inspiration from what they hear and see the potential of the technology. But the long-term solution has to involve the educational institutions in our region. Right now we are in a “chicken and egg” situation and in my opinion it is up to those institutions to prepare the students for what the market will demand in the coming years, because OpenStack is coming to the Nordic region: in fact, it’s already in Sweden in a big way and it will be coming to Denmark in 2017.

What trends have you seen across your region’s user groups?

When approaching hosting providers, try not to position the technology on face value, but attach it to a quantifiable product to aid understanding. When pitching the technology you’re showing it to decision makers, like C-levels and operations, and they need to trust and understand it.

Many are interested in the conversation about OpenStack in Denmark; they know it’s interesting and up-and-coming for our region, but some haven’t heard of it before. They are hesitant to take action in terms of proof-of-concept. One trend from Sweden: after one group put forward a product, a snowball effect was created, with adoption in other markets happening extremely quickly.

What drives cloud adoption in your region?

Arguably, it should be cost savings, agility and adaptability. OpenStack can do the same as your existing infrastructure, and more, while being much cheaper to adopt, implement and operate, and it’s adaptable to just about anything. I cannot think of a scenario where OpenStack can’t solve the problem.

What’s the biggest obstacle to adoption?

The biggest obstacle is having that first important successful test case occur, which would then enable the snowball effect of high adoption. Moreover, there is also a lack of outside pressure. If the market isn’t pushed, there can be hesitation. Right now it’s a small market with minimal competition, so other products can still be used to make money, despite there being other potentially better offerings.

What types of clouds/organizations are most active in the community?

Mostly storage technology companies such as NetApp and Tintri but we are also seeing interest from some of the Danish hosting providers like DanDomain. In terms of meet-ups you can get all sorts of people, everybody from C-level to someone who may not know where to get started and is looking for help. It’s great to have a diverse group. Participating really makes you feel like you’re part of something.

What’s your vision for your group?

Increase our attendance rates at meet-ups, ideally growing User Group membership at some point to more than 1,000 people, like our partners Fairbanks have done in the Benelux region. I also hope it can become more self-sustaining, with companies proactively coming to contribute and share their insight, getting more people inspired to get involved and take part.

Currently we’re also working to get a Sandbox environment up and running, where new users will be able to have a play with the technology and help them understand how it works. We are working with a number of different potential sponsors, so keep an eye out for that. It will be available to everyone, and will over time showcase a multitude of different kinds of OpenStack integrations.

How can people get involved?

Everybody is welcome to join our Meetup Group, which you join here: https://www.meetup.com/openstackdk/
We’re also participating in OpenStack Nordic Day coming up in October 2017, which you can find out more about it here: http://openstacknordic.orig/

Superuser is always interested in hearing from the community. Want to tell us about your user group? Email editorATopenstack.org

Cover Photo // CC BY NC

The post In Nordic countries, OpenStack feels the snowball effect appeared first on OpenStack Superuser.

by Sonia Ramza at April 17, 2017 09:26 AM

Opensource.com

Summit preparations, Technical Committee elections, and more OpenStack news

Welcome to Opensource.com's monthly look at what's happening in the world of OpenStack, the open source cloud infrastructure project. We bring together a collection of news, events, and happenings from the developer listserv all in one handy package. Do you have a suggestion for something for us to include next month? Let us know in the comments below.


OpenStack news and happenings

There's a lot being written about OpenStack. Here are a few highlights.

by Jason Baker at April 17, 2017 05:00 AM

April 16, 2017

Nikhil Kathole

Let’s OpenStack 2017, Pune

Indian OpenStack User Group hosted the "Pune, Let's OpenStack 2017" event on 16th April, 2017 at Red Hat, Pune (India). The meetup was attended by over 40 people from varied backgrounds: startups, students, InStackers, developers, etc. The day kicked off with introduction, networking and discussion around OpenStack. Then the crowd settled in to listen to Unmesh... Continue Reading →


by Nikhil Kathole at April 16, 2017 07:59 PM

Christopher Smart

Creating an OpenStack Ironic deploy image with Buildroot

Ironic is an OpenStack project which provisions bare metal machines (as opposed to virtual).

A tool called Ironic Python Agent (IPA) is used to control and provision these physical nodes, performing tasks such as wiping the machine and writing an image to disk. This is done by booting a custom Linux kernel and initramfs image which runs IPA and connects back to the Ironic Conductor.

The Ironic project supports a couple of different image builders, including CoreOS, TinyCore and others via Disk Image Builder.

These have their limitations, however; for example, they require root privileges to build and, with the exception of TinyCore, are all hundreds of megabytes in size. One of the downsides of TinyCore is limited hardware support, and although it’s not used in production, it is used in the OpenStack gating tests (where it’s booted in virtual machines with ~300MB RAM).

Large deployment images mean a longer delay in provisioning nodes, so I set out to create a small, customisable image that solves the problems of the other existing images.

Buildroot

I chose to use Buildroot, a well regarded, simple to use tool for building embedded Linux images.

So far it has been quite successful as a proof of concept.

Customisation can be done via the menuconfig system, similar to the Linux kernel.

Buildroot menuconfig

Source code

All of the source code for building the image is up on my GitHub account in the ipa-buildroot repository. I have also written up documentation which should walk you through the whole build and customisation process.

The ipa-buildroot repository contains the IPA-specific Buildroot configurations and tracks upstream Buildroot in a Git submodule. By using upstream Buildroot and our external repository, the IPA Buildroot configuration comes up as an option for a regular Buildroot build.

IPA in list of Buildroot default configs

Buildroot will compile the kernel and initramfs, then post-build scripts clone the Ironic Python Agent repository and create Python wheels for the target.

This keeps it highly flexible: you can base the image on whichever version of Ironic Python Agent you want (you can specify the location and branch of the ironic-python-agent and requirements repositories).

Set Ironic Python Agent and Requirements location and Git version

I created the kernel config from scratch (using tinyconfig) and deliberately tried to balance size and functionality. It should boot on most Intel based machines (BIOS and UEFI), however hardware support like hard disk and ethernet controllers is deliberately limited. The goal was to start small and add more support as needed.

By using Buildroot, customising the Linux kernel is pretty easy! You can just run this to configure the kernel and rebuild your image:

make linux-menuconfig && make

If this interests you, please check it out! Any suggestions are welcome.

by Chris at April 16, 2017 07:04 AM

April 15, 2017

Adam Young

Using the OPTIONS Verb for RBAC

Let’s say you have a RESTful Web Service. For any given URL, you might support one or more of the HTTP verbs: GET, PUT, POST, DELETE and so on. A user might wonder what they mean, and which you actually support. One way of reporting that is by using the OPTIONS verb. While this is a relatively unusual verb, using it to describe a resource is a fairly well known mechanism. I want to take it one step further.

Both OpenStack and Kubernetes support scoped role based access control.  The OPTIONS verb can be used to announce to the world what role is associated with each verb.

Let’s use Keystone’s User API as an example.  We have typical CRUD operations on users.

https://developer.openstack.org/api-ref/identity/v3/#list-users

Thus, the call OPTIONS https://hostname:port/v3/users

Could return data like this:

"actions": {
  "POST": {
     "roles": ["admin"]
  },
  "GET": {
     "roles": ["admin", "Member"]
  }
}
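
A client could act on such a response before ever attempting a call, skipping requests that its roles can never satisfy. Below is a minimal sketch of that idea; the `allowed()` helper is my own illustration, not part of any OpenStack library:

```python
# Hypothetical client-side check built on the OPTIONS response above.
# The "actions" mapping mirrors the example payload; allowed() is an
# illustrative helper, not an existing API.

def allowed(options_response, verb, user_roles):
    """Return True if any of user_roles may call verb on this resource."""
    verb_info = options_response.get("actions", {}).get(verb)
    if verb_info is None:
        return False  # the resource does not support this verb at all
    return bool(set(verb_info["roles"]) & set(user_roles))

options_response = {
    "actions": {
        "POST": {"roles": ["admin"]},
        "GET": {"roles": ["admin", "Member"]},
    }
}

print(allowed(options_response, "GET", ["Member"]))   # True
print(allowed(options_response, "POST", ["Member"]))  # False
```

A delegation tool could run the same check in reverse: intersect a delegate’s roles with each verb’s required roles to compute the minimal set of roles worth delegating.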

 

That would be in addition to any other data you might feel relevant to return there:  JSON-HOME type information on the “POST” would be helpful in creating a new User, for example.

Ideally, the server would even respond to both template and actual URLs. Both of these should return the same response:

/v3/users/{user_id}

/v3/users/DEEDCAFE

Regardless of whether the ID passed was actually a valid ID or not.
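
One way a server could honour both forms is to compile the template into a regular expression whose placeholders match any single path segment; the literal template string and a concrete ID then both resolve to the same resource description. The conversion function below is my own sketch, not Keystone code:

```python
import re

# Sketch: turn a URL template like /v3/users/{user_id} into a regex that
# matches both the template itself and any concrete ID (valid or not).
TEMPLATE = "/v3/users/{user_id}"

def template_to_regex(template):
    # Replace each {name} placeholder with a named group matching one
    # path segment (anything but a slash).
    pattern = re.sub(r"\{(\w+)\}", r"(?P<\1>[^/]+)", template)
    return re.compile("^" + pattern + "$")

matcher = template_to_regex(TEMPLATE)

print(bool(matcher.match("/v3/users/{user_id}")))  # True: the template matches itself
print(bool(matcher.match("/v3/users/DEEDCAFE")))   # True: so does any concrete ID
print(bool(matcher.match("/v3/users/a/b")))        # False: extra path segments do not
```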

 

ADDENDUM:

A few people have asked if this opens up a security hole. Nothing I am saying here proposes a change to the existing security approach. If you want to make sure a user is authenticated before telling them this information, do so. If you only want to return role information for a user that already has that role, go for it.

There is a flip side here to protecting the user. If a user does not know what role is required, and she wants to create a delegation to some other user, she cannot safely do that; she has to provide the full set of roles she has in that delegation. Without telling people which key opens the door, they have to try every key they own.

by Adam Young at April 15, 2017 02:41 AM

April 14, 2017

Emilien Macchi

My Journey As An OpenStack PTL

This story explains why I stopped working in an anarchistic, multi-tasking, schedule-driven way and learnt how to become a good team leader.

How it started

March 2015, Puppet OpenStack project just moved under the Big Tent. What a success for our group!

One of the first steps was to elect a Project Team Lead. Our group was pretty small (~10 active contributors), so we thought that the PTL would be just a facilitator for the group, and the liaison with other projects that interact with us.
I mean, easy, right?

At that time, I was clearly an unconsciously incompetent PTL. I thought I knew what I was doing to drive the project to success.

But the situation evolved. I started to deal with things that I didn’t expect to deal with, like making sure our team works together in a way that is efficient and consistent. I also realized nobody knew what a PTL was really supposed to do (at least in our group), so I took care of more tasks, like release management, organizing Summit design sessions, promoting core reviewers, and welcoming newcomers.
That was when I realized I had become a consciously incompetent PTL. I was doing things that nobody had taught me before.

In fact, there is no book telling you how to lead an OpenStack project, so I decided to jump into this black hole, hoping to make mistakes I could learn from.

 

Set your own expectations

I made the mistake of taking on a role where expectations had not been cleared with the team. The PTL guide is not enough to establish what your team will expect from you. This is something you have to figure out with the folks you’re working with. You would be surprised by the diversity of expectations that project contributors have for their PTL.
Talk with your team and ask them what they want you to be and how they see you as a team lead.
I don’t think there is a single rule that works for all projects, because of the different cultures in OpenStack community.

 

Embrace changes

… and accept failures.
There is no project in OpenStack that hasn’t had outstanding issues (technical and human).
The first step as a PTL is to acknowledge the problem and share it with your team. Most conflicts resolve themselves once everyone agrees that yes, there is a problem. It can be a code design issue or any other technical disagreement, but also human complaints, like the difficulty of starting to contribute or the lack of reward for very active contributors who aren’t core yet.
Once a problem is resolved, discuss with your team how to avoid the same situation in the future.
Hold a retrospective if needed, but talk and document the output.

I continuously encourage welcoming all kinds of changes in TripleO so we can adopt new technologies that will make our project better.

Keep in mind it has a cost. Some people will disagree but that’s fine: you might have to pick a rate of acceptance to consider that your team is ready to make this change.

 

Delegate

We are humans and have limits. We can’t be everywhere and do everything.
We have to accept that PTLs are not supposed to be online 24/7. They don’t always have the best ideas and don’t always make the right decisions.
This is fine. Your project will survive.

I learnt that when I started to be PTL of TripleO in 2016.
The TripleO team has become so big that I didn’t realize how many interruptions I would have every day.
So I decided to learn how to delegate.
We worked together and created TripleO Squads where each squad focus on a specific area of TripleO.
Each squad would be autonomous enough to propose their own core reviewers or do their own meetings when needed.
I wanted small teams working together, failing fast and making quick iterations so we could scale the project, accept and share the work load and increase the trust inside the TripleO team.

This is where I started to be a Consciously Competent PTL.

 

Where am I now

I have reached a point where I think that projects wouldn’t need a PTL to run well if they really wanted to.
Instead, I have started to believe in some essential things that would actually help get rid of this role:

  • As a team, define the vision of the project and document it. It will really help to know where we want to
    go and clear all expectations about the project.
  • Establish trust in each individual by default and welcome newcomers.
  • Encourage collective and distributed leadership.
  • Try, Do, Fail, Learn, Teach, and start again. Don’t go stale.

This long journey helped me to learn many things in both technical and human areas. It has been awesome to work with such groups so far.
I would like to spend more time on technical work (aka coding) but also in teaching and mentoring new contributors in OpenStack.
Therefore, I won’t be PTL during the next cycle and my hope is to see new leaders in TripleO, who would come up with fresh ideas and help us to keep TripleO rocking.

 

Thanks for reading so far, and also thanks for your trust.

by Emilien at April 14, 2017 08:56 PM

OpenStack Superuser

Kuryr and Kubernetes will knock your socks off

Seeing Kuryr-Kubernetes in action in my “Dr. Octagon NFV laboratory” has got me feeling that barefoot feeling – it completely knocked my socks off. Kuryr-Kubernetes provides Kubernetes integration with OpenStack networking, and today we’ll walk through the steps to get your own instance up and running so you can check it out for yourself. We’ll spin up Kuryr-Kubernetes with Devstack, create some pods and a VM, inspect Neutron and verify that the networking is working like a charm.

As usual with these blog posts, I’m kind of standing on the shoulders of giants. I was able to get some great exposure to Kuryr-Kubernetes through Luis Tomas’ blog post. And then a lot of the steps here you’ll find familiar from this OpenStack superuser blog post. Additionally, I always wind up finding a good show-stopper or two and Antoni Segura Puimedon was a huge help in diagnosing my setup, which I greatly appreciated.

Requirements

You might be able to do this with a VM, but, you’ll need some kind of nested virtualization because we’re going to spin up a VM, too. In my case, I used bare metal and the machine is likely overpowered (48 GB RAM, 16 cores, 1TB spinning disk). I’d recommend no less than 4-8 GB of RAM and at least a few cores, with maybe 20-40 GB free (which is still overkill).

One requirement is a CentOS 7.3 (or later) install somewhere. I assume you’ve got this setup. Also, make sure it’s pretty fresh, because I’ve run into problems with Devstack when I tried to put it on an existing machine and it fought with, say, an existing Docker install.

That box needs git, and maybe your favorite text editor.

Get your devstack up and kickin’

The gist here is that we’ll clone Devstack, setup the stack user, create a local.conf file and then kick off the stack.sh.

So here’s where we clone Devstack, use it to create a stack user, and move the Devstack clone into the stack user’s home and then assume that user.

[root@droctagon3 ~]# git clone https://git.openstack.org/openstack-dev/devstack
[root@droctagon3 ~]# cd devstack/
[root@droctagon3 devstack]# ./tools/create-stack-user.sh 
[root@droctagon3 devstack]# cd ../
[root@droctagon3 ~]# mv devstack/ /opt/stack/
[root@droctagon3 ~]# chown -R stack:stack /opt/stack/
[root@droctagon3 ~]# su - stack
[stack@droctagon3 ~]$ pwd
/opt/stack

Ok, now that we’re there, let’s create a local.conf to parameterize our Devstack deploy. You’ll note that my configuration is a portmanteau of Luis’ and the one from the Superuser blog post. I’ve left in my comments so you can check it out and compare against the references. Go ahead and put this in with an echo heredoc or your favorite editor; here’s mine:

[stack@droctagon3 ~]$ cd devstack/
[stack@droctagon3 devstack]$ pwd
/opt/stack/devstack
[stack@droctagon3 devstack]$ cat local.conf 
[[local|localrc]]

LOGFILE=devstack.log
LOG_COLOR=False

# HOST_IP=CHANGEME
# Credentials
ADMIN_PASSWORD=pass
MYSQL_PASSWORD=pass
RABBIT_PASSWORD=pass
SERVICE_PASSWORD=pass
SERVICE_TOKEN=pass
# Enable Keystone v3
IDENTITY_API_VERSION=3

# Q_PLUGIN=ml2
# Q_ML2_TENANT_NETWORK_TYPE=vxlan

# LBaaSv2 service and Haproxy agent
enable_plugin neutron-lbaas \
 git://git.openstack.org/openstack/neutron-lbaas
enable_service q-lbaasv2
NEUTRON_LBAAS_SERVICE_PROVIDERV2="LOADBALANCERV2:Haproxy:neutron_lbaas.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default"

enable_plugin kuryr-kubernetes \
 https://git.openstack.org/openstack/kuryr-kubernetes refs/changes/45/376045/12

enable_service docker
enable_service etcd
enable_service kubernetes-api
enable_service kubernetes-controller-manager
enable_service kubernetes-scheduler
enable_service kubelet
enable_service kuryr-kubernetes

# [[post-config|/$Q_PLUGIN_CONF_FILE]]
# [securitygroup]
# firewall_driver = openvswitch
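If you go the heredoc route mentioned above, the pattern is a one-liner (a small sketch with just a couple of the lines from the file; the quoted 'EOF' keeps the shell from expanding anything inside, so the config lands verbatim):

```shell
# Sketch of the heredoc approach -- quoting 'EOF' prevents variable
# expansion, so the config is written to the file exactly as typed.
# /tmp/local.conf is a stand-in path; use devstack/local.conf for real.
cat > /tmp/local.conf <<'EOF'
[[local|localrc]]
ADMIN_PASSWORD=pass
LOG_COLOR=False
EOF
grep -q 'ADMIN_PASSWORD=pass' /tmp/local.conf && echo "local.conf written"
```

You’d paste the full file shown above between the two EOF markers.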

Now that we’ve got that set, let’s take a look at one parameter in particular. The one in question is:

enable_plugin kuryr-kubernetes \
 https://git.openstack.org/openstack/kuryr-kubernetes refs/changes/45/376045/12

You’ll note that this is version pinned. I ran into a bit of a hitch that Toni helped get me out of, and we’ll use that work-around in a bit. There’s a patch coming along that should fix this up; I haven’t had luck with it yet, but it was submitted just the evening before this blog post.

Now, let’s run that Devstack deploy. I run mine in a screen; that’s optional for you, but I don’t want to lose connectivity during it and wonder “what happened?”

[stack@droctagon3 devstack]$ screen -S devstack
[stack@droctagon3 devstack]$ ./stack.sh 

Now, relax… This takes ~50 minutes on my box.

Verify the install and make sure the kubelet is running

Alright, that should finish up and show you some timing stats and some URLs for your Devstack instances.

Let’s just mildly verify that things work.

[stack@droctagon3 devstack]$ source openrc 
[stack@droctagon3 devstack]$ nova list
+----+------+--------+------------+-------------+----------+
| ID | Name | Status | Task State | Power State | Networks |
+----+------+--------+------------+-------------+----------+
+----+------+--------+------------+-------------+----------+

Great, so we have some stuff running at least. But, what about Kubernetes?

It’s likely almost there.

[stack@droctagon3 devstack]$ kubectl get nodes

That’s going to be empty for now. It’s because the kubelet isn’t running. So, open the Devstack “screens” with:

screen -r

Now, tab through those screens: hit Ctrl+a then n, and it will go to the next screen. Keep going until you get to the kubelet screen. It will be at the lower left-hand side and/or have an * next to it.

It will likely be a screen with “just a prompt” and no logging. This is because the kubelet fails to run in this iteration, but, we can work around it.

First off, get your IP address; mine is on my interface enp1s0f1, so I used ip a and got it from there. Now, put that into the below command where I have YOUR_IP_HERE.
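If you’d rather script that lookup than eyeball the ip a output, here’s a small sketch (my own parsing, not from the article; the canned demo line just shows the shape of `ip -4 -o addr` output, and the interface name is whatever yours is):

```shell
# Pull the IPv4 address for an interface out of 'ip -4 -o addr show' output.
# Field 4 is addr/prefix; strip the /prefix to get a bare address.
extract_ip() { awk '$3 == "inet" {sub(/\/.*/,"",$4); print $4; exit}'; }

# Real usage (interface name is from my box -- substitute yours):
#   ip -4 -o addr show enp1s0f1 | extract_ip

# Demo on a canned line so you can see the shape:
echo '2: enp1s0f1  inet 192.0.2.10/24 brd 192.0.2.255 scope global enp1s0f1' | extract_ip
# -> 192.0.2.10
```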

Issue this command to run the kubelet:

sudo /usr/local/bin/hyperkube kubelet\
        --allow-privileged=true \
        --api-servers=http://YOUR_IP_HERE:8080 \
        --v=2 \
        --address='0.0.0.0' \
        --enable-server \
        --network-plugin=cni \
        --cni-bin-dir=/opt/stack/cni/bin \
        --cni-conf-dir=/opt/stack/cni/conf \
        --cert-dir=/var/lib/hyperkube/kubelet.cert \
        --root-dir=/var/lib/hyperkube/kubelet

Now you can detach from the screen by hitting Ctrl+a then d. You’ll be back to your regular old prompt.

Let’s list the nodes…

[stack@droctagon3 demo]$ kubectl get nodes
NAME         STATUS    AGE
droctagon3   Ready     4s

And you can see it’s ready to rumble.

Build a demo container

So let’s build something to run here. We’ll use the same container in a pod as shown in the Superuser article.

Let’s create a python script that runs an HTTP server and reports the hostname of the node it runs on (in this case, when we’re finished, it will report the name of the pod in which it resides).

So let’s create those two files, we’ll put them in a “demo” dir.

[stack@droctagon3 demo]$ pwd
/opt/stack/devstack/demo

Now make the Dockerfile:

[stack@droctagon3 demo]$ cat Dockerfile 
FROM alpine
RUN apk add --no-cache python bash openssh-client curl
COPY server.py /server.py
ENTRYPOINT ["python", "server.py"]

And the server.py

[stack@droctagon3 demo]$ cat server.py 
import BaseHTTPServer as http
import platform

class Handler(http.BaseHTTPRequestHandler):
  def do_GET(self):
    self.send_response(200)
    self.send_header('Content-Type', 'text/plain')
    self.end_headers()
    self.wfile.write("%s\n" % platform.node())

if __name__ == '__main__':
  httpd = http.HTTPServer(('', 8080), Handler)
  httpd.serve_forever()

And kick off a Docker build.

[stack@droctagon3 demo]$ docker build -t demo:demo .

Kick up a Pod

Now we can launch a pod. Given how simple this is, we’ll even skip the step of making a YAML pod spec.

[stack@droctagon3 demo]$ kubectl run demo --image=demo:demo
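For the curious, the spec we skipped would look roughly like this (a sketch of a bare Pod; note that kubectl run in this Kubernetes era actually wraps it in a Deployment, which is why the pod name comes out with a generated suffix like demo-2945424114-pi2b0):

```shell
# Roughly the YAML that 'kubectl run demo --image=demo:demo' saves us from
# writing (a sketch, not generated by kubectl itself)
cat > /tmp/demo-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: demo
  labels:
    run: demo
spec:
  containers:
  - name: demo
    image: demo:demo
EOF
grep -q 'image: demo:demo' /tmp/demo-pod.yaml && echo "spec written"
```

You’d submit it with kubectl create -f /tmp/demo-pod.yaml instead of kubectl run.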

And in a few seconds you should see it running…

[stack@droctagon3 demo]$ kubectl get pods
NAME                    READY     STATUS    RESTARTS   AGE
demo-2945424114-pi2b0   1/1       Running   0          45s

Kick up a VM

Cool, that’s kind of awesome. Now, let’s create a VM.

So first, download a Cirros image.

[stack@droctagon3 ~]$ curl -o /tmp/cirros.qcow2 http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img

Now, you can upload it to Glance.

glance image-create --name cirros --disk-format qcow2  --container-format bare  --file /tmp/cirros.qcow2 --progress

And we can kick off a pretty basic Nova instance and we’ll look at it a bit.

[stack@droctagon3 ~]$ nova boot --flavor m1.tiny --image cirros testvm
[stack@droctagon3 ~]$ openstack server list -c Name -c Networks -c 'Image Name'
+--------+---------------------------------------------------------+------------+
| Name   | Networks                                                | Image Name |
+--------+---------------------------------------------------------+------------+
| testvm | private=fdae:9098:19bf:0:f816:3eff:fed5:d769, 10.0.0.13 | cirros     |
+--------+---------------------------------------------------------+------------+

Kuryr magic has happened! Let’s see what it did.

So now Kuryr has performed some cool stuff: we can see that it created a Neutron port for us.

[stack@droctagon3 ~]$ openstack port list --device-owner kuryr:container -c Name
+-----------------------+
| Name                  |
+-----------------------+
| demo-2945424114-pi2b0 |
+-----------------------+
[stack@droctagon3 ~]$ kubectl get pods
NAME                    READY     STATUS    RESTARTS   AGE
demo-2945424114-pi2b0   1/1       Running   0          5m

You can see that the port name is the same as the pod name – cool!

And that pod has an IP address on the same subnet as the nova instance. So let’s inspect that.

[stack@droctagon3 ~]$ pod=$(kubectl get pods -l run=demo -o jsonpath='{.items[].metadata.name}')
[stack@droctagon3 ~]$ pod_ip=$(kubectl get pod $pod -o jsonpath='{.status.podIP}')
[stack@droctagon3 ~]$ echo Pod $pod IP is $pod_ip
Pod demo-2945424114-pi2b0 IP is 10.0.0.4

Expose a service for the pod we launched

Ok, let’s go ahead and expose a service for this pod. We’ll expose it and see what the results are.

[stack@droctagon3 ~]$ kubectl expose deployment demo --port=80 --target-port=8080
service "demo" exposed
[stack@droctagon3 ~]$ kubectl get svc demo
NAME      CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
demo      10.0.0.84    <none>        80/TCP    13s
[stack@droctagon3 ~]$ kubectl get endpoints demo
NAME      ENDPOINTS       AGE
demo      10.0.0.4:8080   1m

And we have an LBaaS (load balancer as a service) which we can inspect with neutron…

[stack@droctagon3 ~]$ neutron lbaas-loadbalancer-list -c name -c vip_address -c provider
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
+------------------------+-------------+----------+
| name                   | vip_address | provider |
+------------------------+-------------+----------+
| Endpoints:default/demo | 10.0.0.84   | haproxy  |
+------------------------+-------------+----------+
[stack@droctagon3 ~]$ neutron lbaas-listener-list -c name -c protocol -c protocol_port
[stack@droctagon3 ~]$ neutron lbaas-pool-list -c name -c protocol
[stack@droctagon3 ~]$ neutron lbaas-member-list Endpoints:default/demo:TCP:80 -c name -c address -c protocol_port

Scale up the replicas

You can now scale up the number of replicas of this pod, and Kuryr will follow suit. Let’s do that now.

[stack@droctagon3 ~]$ kubectl scale deployment demo --replicas=2
deployment "demo" scaled
[stack@droctagon3 ~]$ kubectl get pods
NAME                    READY     STATUS              RESTARTS   AGE
demo-2945424114-pi2b0   1/1       Running             0          14m
demo-2945424114-rikrg   0/1       ContainerCreating   0          3s

We can see that more ports were created…

[stack@droctagon3 ~]$ openstack port list --device-owner kuryr:container -c Name -c 'Fixed IP Addresses'
[stack@droctagon3 ~]$ neutron lbaas-member-list Endpoints:default/demo:TCP:80 -c name -c address -c protocol_port

Verify connectivity

Now, as if the earlier goodies weren’t fun enough, this is the REAL fun part. We’re going to enter a pod via kubectl exec and check that we can reach the pod from the pod, the VM from the pod, and the exposed service (and hence both pods) from the VM.

Let’s do it! So go and exec the pod, and we’ll give it a cute prompt so we know where we are since we’re about to enter the rabbit hole.

[stack@droctagon3 ~]$ kubectl get pods
NAME                    READY     STATUS    RESTARTS   AGE
demo-2945424114-pi2b0   1/1       Running   0          21m
demo-2945424114-rikrg   1/1       Running   0          6m
[stack@droctagon3 ~]$ kubectl exec -it demo-2945424114-pi2b0 /bin/bash
bash-4.3# export PS1='[user@pod_a]$ '
[user@pod_a]$ 

Before you continue, you might want to note some of the IP addresses we showed earlier in this process. Collect those or chuck ‘em in a notepad so we can use them here.

Now that we have that, we can verify our service locally.

[user@pod_a]$ curl 127.0.0.1:8080
demo-2945424114-pi2b0

And verify it with the pod IP

[user@pod_a]$ curl 10.0.0.4:8080
demo-2945424114-pi2b0

And verify we can reach the other pod

[user@pod_a]$ curl 10.0.0.11:8080
demo-2945424114-rikrg

Now we can verify the service, note how you get different results from each call, as it’s load balanced between pods.

[user@pod_a]$ curl 10.0.0.84
demo-2945424114-pi2b0
[user@pod_a]$ curl 10.0.0.84
demo-2945424114-rikrg

Cool, how about the VM? We should be able to ssh to it since it uses the default security group which is pretty wide open. Let’s ssh to that (reminder, the password is “cubswin:)") and also set the prompt to look cute.

[user@pod_a]$ ssh cirros@10.0.0.13
The authenticity of host '10.0.0.13 (10.0.0.13)' can't be established.
RSA key fingerprint is SHA256:Mhz/s1XnA+bUiCZxVc5vmD1C6NoeCmOmFOlaJh8g9P8.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.0.0.13' (RSA) to the list of known hosts.
cirros@10.0.0.13's password: 
$ export PS1='[cirros@vm]$ '
[cirros@vm]$ 

Great, so that definitely means we can get to the VM from the pod. But, let’s go and curl that service!

[cirros@vm]$ curl 10.0.0.84
demo-2945424114-pi2b0
[cirros@vm]$ curl 10.0.0.84
demo-2945424114-rikrg

Voila! And that concludes our exploration of kuryr-kubernetes for today. Remember that you can find the Kuryr crew on the OpenStack mailing lists, and also on Freenode in #openstack-kuryr.

 

This post first appeared on Doug Smith’s blog. Superuser is always interested in community content, contact editor AT openstack.org

Cover Photo // CC BY NC

The post Kuryr and Kubernetes will knock your socks off appeared first on OpenStack Superuser.

by Doug Smith at April 14, 2017 11:15 AM

Dean Troyer

Who Are We? Really? Really!

[Most of this originally appeared in a thread on the openstack-dev mailing list but seemed interesting enough to repost here.]

It is TC election season again in OpenStack-land, and this time around a few days’ gap has been included between the self-nomination period and the actual election so that those standing for election can be quizzed on various topics. One of these questions[0] was the usual “what is OpenStack?” debate between One Big Thing and Loosely Familiar Many Things.

I started with a smart-alec response to ttx's comparison of OpenStack to Lego (which I still own more of than I care to admit to my wife): "something something step on them in the dark barefoot". OpenStack really can be a lot like that, you think you are cruising along fine getting it running and BAM, there's that equivalent to a 2x2 brick in the carpet locating your heel in the dark. Why are those the sharpest ones???

Back to the topic at hand: This question comes up over and over, almost like clockwork at election time. This is a signal to me that we (the community overall) still do not have a shared understanding of the answer, or some just don't like the stated answer and their process to change that answer is to repeat the question hoping for a different answer.

In my case, the answer may be changing a bit. We’ve used the term ‘cloud operating system’ in various places, but not in our defining documents.

I've never liked the "cloud operating system" term because I felt it mis-represented how we defined ourself and is too generic and used in other places for other things. But I've come to realize it is an easy-to-understand metaphor for what OpenStack does and where we are today. Going forward it is increasingly apparent that hybrid stacks (constellations, etc) will be common that include significant components that are not OpenStack at layers other than "layer 0" (ie, below all OpenStack components: database, message queue, etc). The example commonly given is of course Kubernetes, but there are others.

UNIX caught on as well as it did partly because of its well-defined interfaces between components at user-visible levels, specifically in userspace. The 'everything is a file' metaphor, for all its faults, was simple to understand and use, until it wasn't, but it still serves us well. There was a LOT of 'differentiation' between the eventual commercial implementations of UNIX which caused a lot of pain for many (including me) but the masses have settled on the highly-interoperable GNU/Linux combination. (I am conveniently ignoring the still-present 'differentiation' that Linux distros insist on because that will never go away).

This is where I see OpenStack today. We are in the role of being the cloud for the masses, used by both large (hi CERN!) and small (hi mtreinish's closet!) clouds and largely interoperable. Just as an OS (operating system) is the enabling glue for applications to function and communicate, our OS (OpenStack) is in position to do that for cloud apps. What we are lacking for guidance is a direct lineage to 20 years of history. We have to have our own discipline to keep our interfaces clean and easy to consume and understand, and present a common foundation for applications to build on, including applications that are themselves higher layers of an OS stack.

Phew! Thank you again for reading this far. I know this is not news to a lot of our community, but the assumption that “everyone knows this” is not true. We need to occasionally repeat ourselves to remind ourselves and inform our newer members what we are, where we are headed and why we are all here in the first place: to enable awesome work to build on our foundation. And if we sell a few boxes or service contracts or chips along the way, our sponsors are happy too.

April 14, 2017 09:14 AM

April 13, 2017

Major Hayden

OpenStack-Ansible networking on CentOS 7 with systemd-networkd

Although OpenStack-Ansible doesn’t fully support CentOS 7 yet, the support is almost ready. I have a four node Ocata cloud deployed on CentOS 7, but I decided to change things around a bit and use systemd-networkd instead of NetworkManager or the old rc scripts.

This post will explain how to configure the network for an OpenStack-Ansible cloud on CentOS 7 with systemd-networkd.

Each one of my OpenStack hosts has four network interfaces and each one has a specific task:

  • enp2s0 – regular network interface, carries inter-host LAN traffic
  • enp3s0 – carries br-mgmt bridge for LXC container communication
  • enp4s0 – carries br-vlan bridge for VM public network connectivity
  • enp5s0 – carries br-vxlan bridge for VM private network connectivity

Adjusting services

First off, we need to get systemd-networkd and systemd-resolved ready to take over networking:

systemctl disable network
systemctl disable NetworkManager
systemctl enable systemd-networkd
systemctl enable systemd-resolved
systemctl start systemd-resolved
rm -f /etc/resolv.conf
ln -s /run/systemd/resolve/resolv.conf /etc/resolv.conf

LAN interface

My enp2s0 network interface carries traffic between hosts and handles regular internal LAN traffic.

[Match]
Name=enp2s0

[Network]
Address=192.168.250.21/24
Gateway=192.168.250.1
DNS=192.168.250.1
DNS=8.8.8.8
DNS=8.8.4.4
IPForward=yes

This one is quite simple, but the rest get a little more complicated.
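One note on where these snippets live (an assumption on naming, since the post doesn’t show the paths): systemd-networkd reads [NetDev] definitions from *.netdev files and [Match]/[Network] sections from *.network files under /etc/systemd/network/, and the base names are arbitrary. A sketch, written to /tmp so it’s harmless to run as-is:

```shell
# Sketch of the file layout -- the real destination is /etc/systemd/network/,
# where systemd-networkd picks up *.network and *.netdev files by extension
mkdir -p /tmp/systemd-network-demo
cat > /tmp/systemd-network-demo/enp2s0.network <<'EOF'
[Match]
Name=enp2s0

[Network]
Address=192.168.250.21/24
Gateway=192.168.250.1
EOF
ls /tmp/systemd-network-demo/
# -> enp2s0.network
```

The bridge and VLAN definitions below would each get a matching *.netdev file (e.g. br-mgmt.netdev, vlan10.netdev) plus a *.network file in the same directory.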

Management bridge

The management bridge (br-mgmt) carries traffic between LXC containers. We start by creating the bridge device itself:

[NetDev]
Name=br-mgmt
Kind=bridge

Now we configure the network on the bridge (I use OpenStack-Ansible’s defaults here):

[Match]
Name=br-mgmt

[Network]
Address=172.29.236.21/22

I run the management network on VLAN 10, so I need a network device and network configuration for the VLAN as well. This step adds the br-mgmt bridge to the VLAN 10 interface:

[NetDev]
Name=vlan10
Kind=vlan

[VLAN]
Id=10

[Match]
Name=vlan10

[Network]
Bridge=br-mgmt

Finally, we add the VLAN 10 interface to enp3s0 to tie it all together:

[Match]
Name=enp3s0

[Network]
VLAN=vlan10

Public instance connectivity

My router offers up a few different VLANs for OpenStack instances to use for their public networks. We start by creating a br-vlan network device and its configuration:

[NetDev]
Name=br-vlan
Kind=bridge

[Match]
Name=br-vlan

[Network]
DHCP=no

We can add this bridge onto the enp4s0 physical interface:

[Match]
Name=enp4s0

[Network]
Bridge=br-vlan

VXLAN private instance connectivity

This step is similar to the previous one. We start by defining our br-vxlan bridge:

[NetDev]
Name=br-vxlan
Kind=bridge

[Match]
Name=br-vxlan

[Network]
Address=172.29.240.21/22

My VXLAN traffic runs over VLAN 11, so we need to define that VLAN interface:

[NetDev]
Name=vlan11
Kind=vlan

[VLAN]
Id=11

[Match]
Name=vlan11

[Network]
Bridge=br-vxlan

We can hook this VLAN interface into the enp5s0 interface now:

[Match]
Name=enp5s0

[Network]
VLAN=vlan11

Checking our work

The cleanest way to apply all of these configurations is to reboot. The Adjusting services step from the beginning of this post will ensure that systemd-networkd and systemd-resolved come up after a reboot.

Run networkctl to get a current status of your network interfaces:

# networkctl
IDX LINK             TYPE               OPERATIONAL SETUP     
  1 lo               loopback           carrier     unmanaged 
  2 enp2s0           ether              routable    configured
  3 enp3s0           ether              degraded    configured
  4 enp4s0           ether              degraded    configured
  5 enp5s0           ether              degraded    configured
  6 lxcbr0           ether              routable    unmanaged 
  7 br-vxlan         ether              routable    configured
  8 br-vlan          ether              degraded    configured
  9 br-mgmt          ether              routable    configured
 10 vlan11           ether              degraded    configured
 11 vlan10           ether              degraded    configured

You should have configured in the SETUP column for all of the interfaces you created. Some interfaces will show as degraded because they are missing an IP address (which is intentional for most of these interfaces).

The post OpenStack-Ansible networking on CentOS 7 with systemd-networkd appeared first on major.io.

by Major Hayden at April 13, 2017 01:18 PM

OpenStack Superuser

Jumpstart your OpenStack know-how at the Upstream Institute

There are plenty of ways to boost your knowledge at the upcoming OpenStack Summit Boston.

The OpenStack Academy is a dedicated learning space offering hands-on workshops, day-long intensive training sessions and professional certification exams.

Here we take a look at the OpenStack Upstream Institute, a free, 1.5-day training course held May 6-7, before the Summit.

It’s an intensive program designed to school newcomers in the tools used to contribute code, add new features, write documentation and participate in working groups to OpenStack as they join a community of thousands of developers from hundreds of companies worldwide.

Students will also learn how to use a prepared development environment to test, prepare and upload new code snippets or documentation for review. Lenovo is sponsoring the course this time around.

Organized and run by people embedded in the community, the fast-track course gives students a more accurate taste of what working in the community is really like and the opportunity to ask experienced contributors questions and gain more insight into their work with OpenStack.  There’s help beyond the classroom, too: attendees can join an ongoing mentoring program.
“As the training evolves, our focus continually shifts towards providing a highly interactive course where students can learn about the social norms, modes of communication and variety of possible contributions through experience as opposed to lectures,” say Ildikó Váncsa and Kendall Nelson, the OpenStack Foundation staffers who run the training.

Are you still debating whether the OpenStack Upstream Institute is for you?

“The answer is always ‘YES’!” say Váncsa and Nelson. “The course is designed to be modular to increase flexibility and provide an invaluable experience for all attendees no matter their role and level: student, junior or established engineer.”

A few reminders: the training is free, but participants need to sign up for the class, register with an OpenStack ID and set up their laptops before diving in Saturday afternoon. Complete instructions here.

Get involved!

Upstream courses are occasionally offered outside OpenStack Summits. If you’d like to follow what’s going on with the training team, which organizes the courses and other OpenStack training materials, you can join one of their weekly meetings here: http://eavesdrop.openstack.org/#Training-guides_Team_Meeting or reach out to Ildiko Vancsa (IRC on Freenode: ildikov, mail: ildiko@openstack.org) or Kendall Nelson (IRC: diablo_rojo, mail: knelson@openstack.org).

The post Jumpstart your OpenStack know-how at the Upstream Institute appeared first on OpenStack Superuser.

by Superuser at April 13, 2017 11:48 AM

April 12, 2017

OpenStack Superuser

Help your boss understand your upstream project

My previous posts on “open source first” have been more strategy than recipe. You need a clear, easy-to-understand plan when making the case for an upstream project to your manager. To help you with your boss, I have rewritten the “How to use Public Projects to Build Products” article into a list of 10 steps. These steps are comprehensive, covering strategy through implementation. A motivated developer, development manager or open-source director should lead these organizational changes over the many months it will take to implement them. I wish you success on your journey to a better, stronger organization.

10 steps to supporting upstream projects

  1. Strategy: When you set out your strategy and objectives for the year, highlight the open source projects you will be working with. This is important for recognizing the risks, dependencies and commitments you are making while working with the external engineering team that is the open source project.
  2. Partners: Treat a public open source project like another engineering organization. If you are going to work with another engineering team, you expect to have a clear understanding of their responsibilities and timelines. You need to have the exact same expectations when working with a public open source project.
  3. Individual contributors: Often, a developer will step forward with the desire to work on an open source project that is not part of the organization’s “plan.” This is exactly what the organization needs. Self-motivated engineers are developing leadership skills. Allow the developer to allocate some of her/his time. Set responsibilities and timelines. This is just as much to protect the developer as it is the company. Encourage open source contributions as technical social good efforts.
  4. Fully support what you start: Middle management needs to make upper management aware of what the upstream open source work’s purpose is and why it’s important to supporting the overall development strategy. Never commit to an open source project that your organization is not willing to fully support.
  5. Reviews: Annual developer performance reviews and headcount updates must include the upstream open source projects. This means that the development managers are ready to defend their open source developers with the facts of their work. Developers that excel at working with downstream and upstream development projects are the people who you want to recognize and promote. These developers are very often your best people and likely team leaders.
  6. Products: Make sure your product managers understand how the public open source work contributes to delivering the product. In your private project management tool, establish an epic for each upstream open source project. Create stories for each upstream feature that is being worked on. Link each upstream feature to a product epic or story that a product manager is responsible for. The upstream developer needs to work with the product manager at least once every month, so that, as the upstream work progresses, there is a tight understanding of how that work will go into the product. A developer advocate like an open source director or development manager can be an alternative to work with the product manager.
  7. Projects: Make sure the scrum master works with the upstream developer just like the rest of the downstream developers. The upstream stories need to be in the backlog along with the rest of the development work. It is likely the upstream schedule of delivering features is different from your downstream product. Adjust your stories for your backlog to match your downstream scrum schedule meaning that, if your downstream team is on a two-week sprint, make sure your upstream stories can also be delivered in that two-week sprint. Treat all of your developers equally for equal results.
  8. Test, build, test, repeat: As the upstream work starts to take shape, get the code into a testable branch on your software pipeline. Make sure you have quality unit and build tests to verify the upstream work, so it can be more easily merged into your code base trunk when the time comes. Get members of your team to comment and help with the CI effort. As interest in what you are working on gets more help and visibility, you probably will get some public commits from your downstream development team.
  9. Report: Track all your development work and report on progress. Highlight the upstream and downstream work as separate, but equal. Update leadership on the upstream project progress as much as the private development. Approach this as updating on the progress of a valued engineering partner.
  10. Schedule: Keep up to date with the public projects release schedule and strategy. Internally, publish both your private release schedule alongside the public projects release schedule. Alignment of schedules is critical for success.

This post first appeared on Sean Robert’s blog. Roberts, director of platform product and open source at WalmartLabs, has been involved with OpenStack since 2012 holding various positions including board member and ambassador. Superuser is always interested in community content, email: editorATsuperuser.org.

Cover Photo // CC BY NC

The post Help your boss understand your upstream project appeared first on OpenStack Superuser.

by Sean Roberts at April 12, 2017 12:34 PM

Aptira

OpenStack Storage Concepts


Storage types

Storage is found in many parts of the OpenStack cloud environment. It is important to understand the distinction between ephemeral storage and persistent storage:

  • Ephemeral storage – If you only deploy OpenStack Compute service (nova), by default your users do not have access to any form of persistent storage. The disks associated with VMs are ephemeral, meaning that from the user’s point of view they disappear when a virtual machine is terminated.
  • Persistent storage – Persistent storage means that the storage resource outlives any other resource and is always available, regardless of the state of a running instance.

OpenStack clouds explicitly support three types of persistent storage: Object Storage, Block Storage, and File-based storage.

Block storage

Block storage is provided in OpenStack by the Block Storage service (cinder). Because these volumes are persistent, they can be detached from one instance and re-attached to another instance, and the data remains intact.
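That detach/re-attach cycle maps to a short CLI sequence (a hedged sketch: the server and volume names are placeholders, and the commands are echoed rather than executed here, since they need a live cloud):

```shell
# Hedged sketch of the persistent-volume lifecycle described above;
# 'vm-a', 'vm-b' and 'data-vol' are placeholder names. Each command is
# printed, not run -- point them at a real cloud to use them.
for cmd in \
    'openstack volume create --size 10 data-vol' \
    'openstack server add volume vm-a data-vol' \
    'openstack server remove volume vm-a data-vol' \
    'openstack server add volume vm-b data-vol'; do
  echo "$cmd"
done
```

The data written while the volume was attached to the first instance is still there when it is attached to the second.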

The Block Storage service supports multiple back ends in the form of drivers. Your choice of a storage back end must be supported by a block storage driver.

Most block storage drivers allow the instance to have direct access to the underlying storage hardware’s block device. This helps increase overall read/write I/O performance. However, support for using files as volumes is also well established, with full support for NFS, GlusterFS and others.

These drivers work a little differently than a traditional block storage driver. On an NFS or GlusterFS file system, a single file is created and then mapped as a virtual volume into the instance. This mapping and translation is similar to how OpenStack utilizes QEMU’s file-based virtual machines stored in /var/lib/nova/instances.

Object Storage

Object storage is implemented in OpenStack by the Object Storage service (swift). Users access binary objects through a REST API. If your intended users need to archive or manage large datasets, you should provide them with Object Storage service. Additional benefits include:

  • OpenStack can store your virtual machine (VM) images inside of an Object Storage system, as an alternative to storing the images on a file system.
  • It integrates with OpenStack Identity and works with the OpenStack Dashboard.
  • Better support for distributed deployments across multiple datacenters through support for asynchronous eventual consistency replication.

You should consider using the OpenStack Object Storage service if you eventually plan on distributing your storage cluster across multiple data centers, if you need unified accounts for your users for both compute and object storage, or if you want to control your object storage with the OpenStack Dashboard. For more information, see the Swift project page.

File-based storage

In a multi-tenant OpenStack cloud environment, the Shared File Systems service (manila) provides a set of services for managing shared file systems. File-based storage supports multiple back ends in the form of drivers and can be configured to provision shares from one or more back ends. Share servers are virtual machines that export file shares using file system protocols such as NFS, CIFS, GlusterFS, or HDFS.

The Shared File Systems service provides persistent storage that can be mounted on any number of client machines. File-based storage can also be detached from one instance and attached to another without data loss. During this process the data are safe unless the Shared File Systems service itself is changed or removed.

Users interact with the Shared File Systems service by mounting remote file systems on their instances and then using those file systems for file storage and exchange. The Shared File Systems service provides shares, which are remote, mountable file systems. You can mount a share and access it from several hosts by several users at a time. With shares, you can also:

  • Create a share specifying its size, shared file system protocol, visibility level.
  • Create a share on either a share server or standalone, depending on the selected back-end mode, with or without using a shared network.
  • Specify access rules and security services for existing shares.
  • Combine several shares in groups to maintain data consistency inside the groups and enable safe group operations.
  • Create a snapshot of a selected share or a share group for storing the existing shares consistently or creating new shares from that snapshot in a consistent way.
  • Create a share from a snapshot.
  • Set rate limits and quotas for specific shares and snapshots.
  • View usage of share resources.
  • Remove shares.
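The first few operations above can be sketched with the manila CLI. This is an illustrative example only; the share name, protocol, size and client subnet are assumptions, and it presumes credentials are exported and a default share type is configured:

```shell
# Hypothetical example with the manila CLI.
manila create NFS 1 --name my-share          # create a 1 GB NFS share
manila access-allow my-share ip 10.0.0.0/24  # grant access to a client subnet
manila list                                  # find the share's export location
# On the client, mount the export location reported above, e.g.:
#   mount -t nfs <export_location> /mnt/my-share
```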

Differences between storage types

Ephemeral storage
  • Application: Run operating system and scratch space
  • Accessed through: A file system
  • Accessible from: Within a VM
  • Managed by: OpenStack Compute (nova)
  • Persists until: VM is terminated
  • Sizing determined by: Administrator configuration of size settings, known as flavors
  • Encryption configuration: Parameter in nova.conf
  • Example of typical usage: 10 GB first disk, 30 GB second disk

Block storage
  • Application: Add additional persistent storage to a virtual machine (VM)
  • Accessed through: A block device that can be partitioned, formatted, and mounted (such as /dev/vdc)
  • Accessible from: Within a VM
  • Managed by: OpenStack Block Storage (cinder)
  • Persists until: Deleted by user
  • Sizing determined by: User specification in initial request
  • Encryption configuration: Admin establishing encrypted volume type, then user selecting encrypted volume
  • Example of typical usage: 1 TB disk

Object storage
  • Application: Store data, including VM images
  • Accessed through: The REST API
  • Accessible from: Anywhere
  • Managed by: OpenStack Object Storage (swift)
  • Persists until: Deleted by user
  • Sizing determined by: Amount of available physical storage
  • Encryption configuration: Not yet available
  • Example of typical usage: 10s of TBs of dataset storage

File-based storage
  • Application: Add additional persistent storage to a virtual machine
  • Accessed through: A Shared File Systems service share (either manila managed or an external one registered in manila) that can be partitioned, formatted, and mounted (such as /dev/vdc)
  • Accessible from: Within a VM
  • Managed by: OpenStack Shared File System Storage (manila)
  • Persists until: Deleted by user
  • Sizing determined by: User specification in initial request; requests for extension; available user-level quotas; limitations applied by the administrator
  • Encryption configuration: The Shared File Systems service does not apply any additional encryption above what the share's back-end storage provides
  • Example of typical usage: Depends completely on the size of back-end storage specified when a share was being created; in case of thin provisioning it can be partial space reservation

File-level storage for live migration

With file-level storage, users access stored data using the operating system’s file system interface. Most users who have used a network storage solution before have encountered this form of networked storage. The most common file system protocol for Unix is NFS, and for Windows, CIFS (previously, SMB).

OpenStack clouds do not present file-level storage to end users. However, it is important to consider file-level storage for storing instances under /var/lib/nova/instances when designing your cloud, since you must have a shared file system if you want to support live migration.
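One common (hypothetical) way to provide that shared file system is to mount the same NFS export at /var/lib/nova/instances on every compute node. The server name and export path below are placeholders, not a recommendation of a specific layout:

```shell
# On each compute node; "nfs-server:/export/nova" is a placeholder.
mount -t nfs nfs-server:/export/nova /var/lib/nova/instances

# Or persist the mount in /etc/fstab:
#   nfs-server:/export/nova  /var/lib/nova/instances  nfs  defaults  0  0
```

With every hypervisor seeing the same instance directory, a live migration only needs to transfer memory state, not the disk files.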

Commodity storage technologies

There are various commodity storage back-end technologies available. Depending on your cloud users' needs, you can implement one or many of these technologies in different combinations.

Ceph

Ceph is a scalable storage solution that replicates data across commodity storage nodes.

Ceph utilises an object storage mechanism for data storage and exposes the data via different types of storage interfaces to the end user. It supports interfaces for:
  • Object storage
  • Block storage
  • File-system access

Ceph provides support for the same Object Storage API as swift and can be used as a back end for the Block Storage service (cinder) as well as back-end storage for glance images.

Ceph supports thin provisioning implemented using copy-on-write. This can be useful when booting from volume because a new volume can be provisioned very quickly. Ceph also supports keystone-based authentication (as of version 0.56), so it can be a seamless swap in for the default OpenStack swift implementation.
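Thin provisioning is easy to illustrate outside Ceph. A sparse file advertises a large apparent size but consumes almost no space until data is written, which is the same reason a copy-on-write RBD volume can be provisioned almost instantly. This is a generic illustration, not Ceph-specific code:

```shell
# Generic illustration of thin provisioning (not Ceph-specific):
# the file claims 1 GiB but allocates almost nothing on disk.
truncate -s 1G thin.img
stat -c %s thin.img   # apparent size: 1073741824 bytes
du -k thin.img        # allocated size: ~0 KiB until data is written
rm thin.img
```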

Ceph’s advantages include:

  • The administrator has more fine-grained control over data distribution and replication strategies.
  • Consolidation of object storage and block storage.
  • Fast provisioning of boot-from-volume instances using thin provisioning.
  • Support for the distributed file-system interface CephFS.

You should consider Ceph if you want to manage your object and block storage within a single system, or if you want to support fast boot-from-volume.

LVM

The Logical Volume Manager (LVM) is a Linux-based system that provides an abstraction layer on top of physical disks to expose logical volumes to the operating system. The LVM back-end implements block storage as LVM logical partitions.

On each host that will house block storage, an administrator must initially create a volume group dedicated to Block Storage volumes. Blocks are created from LVM logical volumes.
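A minimal sketch of that initial setup follows. The device name is a placeholder for an empty disk, and cinder-volumes is the conventional volume group name; check your own cinder.conf before relying on these values:

```shell
# Hypothetical setup on a block storage host; /dev/sdb is a placeholder
# for an empty disk dedicated to Block Storage.
pvcreate /dev/sdb
vgcreate cinder-volumes /dev/sdb

# cinder.conf would then point the LVM driver at that group, e.g.:
#   volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
#   volume_group = cinder-volumes
```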

Note: LVM does not provide any replication. Typically, administrators configure RAID on nodes that use LVM as block storage to protect against failures of individual hard drives. However, RAID does not protect against a failure of the entire host.

ZFS

The Solaris iSCSI driver for OpenStack Block Storage implements blocks as ZFS entities. ZFS is a file system that also has the functionality of a volume manager. This is unlike on a Linux system, where there is a separation of volume manager (LVM) and file system (such as ext3, ext4, xfs, and btrfs). ZFS has a number of advantages over ext4, including improved data-integrity checking.

The ZFS back end for OpenStack Block Storage supports only Solaris-based systems, such as Illumos. While there is a Linux port of ZFS, it is not included in any of the standard Linux distributions, and it has not been tested with OpenStack Block Storage. As with LVM, ZFS does not provide replication across hosts on its own; you need to add a replication solution on top of ZFS if your cloud needs to be able to handle storage-node failures.

Gluster

Gluster is a distributed shared file system. As of Gluster version 3.3, you can use Gluster to consolidate your object storage and file storage into one unified file and object storage solution, which is called Gluster For OpenStack (GFO). GFO uses a customized version of swift that enables Gluster to be used as the back-end storage.

The main reason to use GFO rather than swift is if you also want to support a distributed file system, either to support shared storage live migration or to provide it as a separate service to your end users. If you want to manage your object and file storage within a single system, you should consider GFO.

Sheepdog

Sheepdog is a userspace distributed storage system. Sheepdog scales to several hundred nodes, and has powerful virtual disk management features like snapshot, cloning, rollback and thin provisioning.

It is essentially an object storage system that manages disks and aggregates the space and performance of disks linearly at hyper scale on commodity hardware. On top of its object store, Sheepdog provides an elastic volume service and an HTTP service. Sheepdog does require a specific kernel version and works nicely with xattr-supported file systems.

Choosing storage back ends

Users will indicate different needs for their cloud architecture. Some may need fast access to many objects that do not change often, or want to set a time-to-live (TTL) value on a file. Others may access only storage that is mounted with the file system itself, but want it to be replicated instantly when starting a new instance. For other systems, ephemeral storage is the preferred choice. When you select storage back ends, consider the following questions from the user's perspective:

  • Do I need block storage?
  • Do I need object storage?
  • Do I need to support live migration?
  • Should my persistent storage drives be contained in my compute nodes, or should I use external storage?
  • What is the platter count I can achieve? Do more spindles result in better I/O despite network access?
  • Which one results in the best cost-performance scenario I’m aiming for?
  • How do I manage the storage operationally?
  • How redundant and distributed is the storage? What happens if a storage node fails? To what extent can it mitigate my data-loss disaster scenarios?

A wide variety of operator-specific requirements dictates the nature of the storage back end. Examples of such requirements are as follows:

  • Public, private or a hybrid cloud, and associated SLA requirements
  • The need for encryption-at-rest, for data on storage nodes
  • Whether live migration will be offered

We recommend that data be encrypted both in transit and at rest. If you plan to use live migration, a shared storage configuration is highly recommended.

To deploy your storage by using only commodity hardware, you can use a number of open-source packages, as shown in Persistent file-based storage support.

          Object Storage  Block Storage  File-based Storage
Swift     ✓
LVM                       ✓
Ceph      ✓               ✓              Experimental
Gluster   ✓               ✓              ✓
NFS                       ✓              ✓
ZFS                       ✓
Sheepdog  ✓               ✓

This list of open source file-level shared storage solutions is not exhaustive. Your organization may already have deployed a file-level shared storage solution that you can use.

Note: Storage driver support. In addition to the open source technologies, there are a number of proprietary solutions that are officially supported by OpenStack Block Storage. You can find a matrix of the functionality provided by all of the supported Block Storage drivers on the CinderSupportMatrix wiki.

Software Defined Storage and Disaster Recovery with Aptira provide peace of mind for your data storage. Our storage solutions easily integrate with a wide range of enterprise storage platforms. We can also support live migrations and secure your data as part of a Managed Cloud strategy.

You can also learn more about storage with Aptira training. Our 2 day online Ceph Training covers the main concepts and architecture of Ceph storage, its installation and daily operation as well as using Ceph storage in OpenStack environments.

The post OpenStack Storage Concepts appeared first on Aptira Cloud Solutions.

by Bharat Kumar at April 12, 2017 10:59 AM

Opensource.com

How OpenStack releases get their names

From A to Z, learn where the names for upstream code releases come from and what they mean.

by Jason Baker at April 12, 2017 07:01 AM

April 11, 2017

Mirantis

A whirlwind tour of open cloud trends in China with UMCloud

Recently I had the pleasure of making my first visit to China to visit Mirantis’ partner, UMCloud, and we visited with several large firms. Several trends caught my attention.

by Randy DeFauw at April 11, 2017 05:22 PM

OpenStack Superuser

OpenStack Community Contributor Awards now open for nominations

So many folks work tirelessly behind the scenes to make OpenStack great, whether they are fixing bugs, contributing code, helping newbies on IRC or just making everyone laugh at the right moment.

You can help them get recognized (with a very shiny medal!) by nominating them for the next Contributor Awards given out at the upcoming OpenStack Summit Boston. These are informal, quirky awards — winners in previous editions included the “Duct Tape” award and the “Don’t Stop Believin’ Cup” — that shine a light on the extremely valuable work that makes OpenStack excel.

There are so many different areas worthy of celebration, but there are a few kinds of community members who deserve a little extra love:

  • They are undervalued
  • They don’t know they are appreciated
  • They bind the community together
  • They keep it fun
  • They challenge the norm
  • Other: (write-in)

As before, rather than starting with a defined set of awards, the community is asked to submit names in those broad categories. The OpenStack Foundation community team then has a little bit of fun on the back end, massaging the award titles to devise something worthy of the winners' underappreciated efforts.


The submission form is below, so please nominate anyone you think is deserving of an award! Nominations for this edition of the Community Contributor Awards are open until April 23.

https://openstackfoundation.formstack.com/forms/cca_nominations_boston

Awards will be presented during the feedback session on Thursday at the Summit.

Cover Photo // CC BY NC

The post OpenStack Community Contributor Awards now open for nominations appeared first on OpenStack Superuser.

by Superuser at April 11, 2017 11:15 AM

April 10, 2017

IBM OpenTech Team

Docker Containerized OpenStack Heat Translator

Blog authors:
Sahdev P. Zala (spzala@us.ibm.com @sp_zala)
Eduardo Patrocinio (eduardop@us.ibm.com @patrocinio)

The OpenStack Heat Translator is one of the projects under the main OpenStack Heat orchestration project. It facilitates translation of OASIS TOSCA service templates to Heat Orchestration Templates (HOT) and the deployment of translated templates into an OpenStack cloud. To use it, you simply download a stable release through PyPI or fork the master branch to use the latest source code.

Docker containers are now widely used to consume applications and tools, and such usage is only growing. Building a container is fun, and using it can be even more fun and highly productive.

We have created a container using the latest stable release of the Heat Translator available on PyPI. This blog post will show where the image is located and how it can be used.

Get the image

The Heat Translator Docker container image is available on Docker Hub as patrocinio/h-t-container-stable.

Use the container

You can invoke the Heat Translator at the same time you run the container. The Heat Translator is commonly used to translate a TOSCA service template available on the local file system or provided via a URL. We will walk you through invoking the Heat Translator in both ways.

Translate TOSCA service template from local file system
Let’s say you have a TOSCA service template called tosca_helloworld.yaml on your local machine, located in your /tmp/tosca_testfiles directory. To translate it to HOT, run the container as follows:

$ docker run -v /tmp/tosca_testfiles:/tosca patrocinio/h-t-container-stable --template-file /tosca/tosca_helloworld.yaml

Running the container will pull the patrocinio/h-t-container-stable image if it is not already available in your environment. Also note that the container requires you to provide the /tmp/tosca_testfiles directory as a volume, which is mapped to the /tosca directory inside the container. You can think of /tmp/tosca_testfiles as the host source directory. It can have an arbitrary name, but it must be provided as an absolute path.

The rest is simple: you provide the arguments that are understood by the Heat Translator. In this case, we provided the required --template-file argument with your TOSCA service template as its value. The service template can be a YAML file, as provided above, or a TOSCA CSAR file. If you are new to CSAR, learn more about it from Sahdev Zala’s other blog post, Package Cloud Workloads with TOSCA Cloud Service Archive.

Assuming your tosca_helloworld.yaml looks like this:

 
tosca_definitions_version: tosca_simple_yaml_1_0

description: Template for deploying a single server with predefined properties.

topology_template:
  node_templates:
    my_server:
      type: tosca.nodes.Compute
      capabilities:
        # Host container properties
        host:
         properties:
           num_cpus: 2
           disk_size: 10 GB
           mem_size: 512 MB
        # Guest Operating System properties
        os:
          properties:
            # host Operating System image properties
            architecture: x86_64
            type: Linux
            distribution: RHEL
            version: 6.5

The output of the above command will produce the following HOT template:

heat_template_version: 2013-05-23

description: >
  Template for deploying a single server with predefined properties.
parameters: {}
resources:
  my_server:
    type: OS::Nova::Server
    properties:
      flavor: m1.medium
      image: rhel-6.5-test-image
      user_data_format: SOFTWARE_CONFIG
outputs: {}

Translate TOSCA service template via a URL
Translating by providing the service template or CSAR as a URL is very simple. All you need to do is provide a URL as the value of the --template-file argument. For example, the tosca_helloworld.yaml located in the TOSCA Parser GitHub project can be translated using the following command:

$ docker run patrocinio/h-t-container-stable \
  --template-file https://raw.githubusercontent.com/openstack/tosca-parser/master/toscaparser/tests/data/tosca_helloworld.yaml

Heat Translator help
You can get the Heat Translator help text by simply running:

docker run patrocinio/h-t-container-stable --help

 

Dockerfile

Given below is the content of the Dockerfile used to build the h-t-container-stable image. The file is also available in the Heat Translator project.

FROM ubuntu

MAINTAINER Eduardo Patrocinio and Sahdev P. Zala

RUN apt-get -y update && apt-get install -y \
    python-pip

RUN pip install heat-translator

COPY heat_translator_logging.conf /usr/local/lib/python2.7/dist-packages/translator/conf/

# Have some test TOSCA templates in my_tosca directory to copy to the container as an example.
# This is an optional step and can be removed.
COPY my_tosca /tmp/my_tosca

ENTRYPOINT ["heat-translator"]
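If you want to rebuild the image yourself from this Dockerfile, the standard build workflow applies. The tag name below is just an example:

```shell
# Run from the directory containing the Dockerfile, the
# heat_translator_logging.conf file and the optional my_tosca directory.
docker build -t my-heat-translator .
docker run my-heat-translator --help
```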

Conclusion

In this article, we showed you how to use the OpenStack Heat Translator with a Docker image, passing the source file during the docker run command. We also shared the Dockerfile, which you can use as a base to create a modified Docker container of the Heat Translator.

The post Docker Containerized OpenStack Heat Translator appeared first on IBM OpenTech.

by Sahdev Zala at April 10, 2017 10:50 PM

Rich Bowen

3 ways you find the right type of contributor and where to find them

Another one of Stormy’s questions caught my eye:

“What are 3 ways you find the right type of contributor, and where do you find them?”

Thinking back to the last few years of work on RDO, several answers come to mind:

The people asking the questions

Watch the traffic on your mailing list(s) and on the various support forums. The people that are asking the most questions are often great potential contributors. They are using the project, so they are invested in its success. They are experiencing problems with it, so they know where there are problems that need to be addressed. And they are outspoken enough, or brave enough, to talk about their difficulties publicly, so they are likely to be just as willing to talk about their solutions.

These are usually good people to approach and ask to write about their user experience. This can often be done collaboratively, combining their questions with the eventual answers that they encountered.

If they appear to be the type of person who is implementing solutions to their problems, ask them to bring those solutions back to your community for inclusion in the code.

In RDO, people that participate in the rdo-list mailing list will sometimes end up contributing their solutions to the project, and eventually becoming regular contributors. We probably need to do a better job of encouraging them to do this, rather than just hoping it’s going to happen on its own.

The people answering the questions

In watching the question-and-answer on ask.openstack.org I often see names I don’t recognize, answering questions related to RDO. Sometimes these are Red Hat engineers who have recently joined the project, and I haven’t met yet. But sometimes, it’s people from outside of Red Hat who have developed expertise on RDO and are now starting to pay that back.

These are the people that I then try to approach via the private messaging feature of that website, and ask them what their story is. This occasionally evolves into a conversation that brings them to more active involvement in the project.

Most people like to be asked, and so asking someone specifically if they’d be willing to come hang out in the IRC channel, or answer questions on the mailing list, tends to be fairly effective, and is a good step towards getting them more involved in the project.

The people complaining

This is a tricky one. People complain because something is broken, or doesn’t work as they expect it to. The traditional response to this in the free software world is “Patches Welcome.” This is shorthand for “Fix it yourself and stop bugging me.”

The trick here is to recognize that people usually take the trouble to complain because they want it to work and they’re looking for help. This passion to just make things work can often be harnessed into contributions to the project, if these people are treated with patience and respect.

Patience, because they’re already frustrated, and responding with frustration or defensiveness is just going to make them angry.

Respect, because they are trying to solve an actual real world problem, and they’re not just there to hassle you.

It’s important that you fix their problem. Or at least, try to move them towards that solution.

Once the problem is addressed, ask them to stay. Ask them to write about their situation, and the fix to it. Ask them to stick around and answer other questions, since they have demonstrated that they care about the project, at least to the point of getting it working.

When people complain about your project, you also have a great opportunity to brush them off and persuade them that you are uncaring and unwelcoming, which they will then go tell all of their friends and Twitter followers. This can be a very expensive thing to do, for your community. Don’t do that.

When people come to #rdo on Freenode IRC to ask their RDO and OpenStack questions, I frequently (usually) don’t know the answer myself, but I try to make an effort to connect them with the people that do know the answer. Fortunately, we have an awesome community, and once you bring a question to the attention of the right person, they will usually see it through to the right solution.

by rbowen at April 10, 2017 09:00 PM

OpenStack Superuser

5 OpenStack user sessions you can’t miss at the Boston Summit

OpenStack Summits are a whirl of energy—from session rooms with standing room only, all-day trainings to onboard new Stackers and an expo hall with over 100 companies explaining new products and performing live demos.

The OpenStack Summit Boston will keep up that momentum.

From May 8-11, thousands of attendees will gather at the Hynes Convention Center to hear the latest OpenStack use cases, project updates and cross-community opportunities to collaborate. A few weeks ago, the OpenStack Foundation announced the addition of Open Source Days to the schedule where different open source foundations and projects have the opportunity to program a full day’s content. Ten organizations are participating—you can find sessions from Ceph and Open vSwitch on the schedule now.

This morning, we announced new users that will be sharing their use cases in Boston, including GE Healthcare, Verizon, Bloomberg and Gap Inc. With a four-day schedule packed with over 300 sessions, it can be easy to miss some of the big use cases that will be available to hear in Boston, so here is a handful of my personal picks.

Gap Inc.

Speaking at an OpenStack Summit for the first time, Gap Inc.‘s infrastructure architect, Eli Elliott will discuss how OpenStack can be used to make businesses move faster from online sales, to web point of sale in stores, and even pipeline and manufacturing capabilities. He will explain how this approach enabled dev-ops, user self service and a multi-billion dollar omni-pipeline using OpenStack at Gap.

Paddy Power Betfair

I love data and it looks like Superuser Awards nominee, Paddy Power Betfair does too. Paddy Power Betfair started using OpenStack in 2015. Fast forward two years and the team has migrated 25 percent of its production applications onto OpenStack, with an end goal of 1,300 KVM compute nodes split over two data centers, making up 100,000 cores and 2.08 petabytes of storage. Now that is scale. Want to learn how? Join one of their four Boston Summit sessions to meet the team and hear their story.

The U.S. Army Cyber School

The U.S. Army Cyber School (USACYS) will have a brief appearance in Monday’s keynotes before going in-depth with Summit attendees in a 40-minute breakout session. Major Julianna Rodriguez and Chris Apsey, director and deputy director at USACYS, will explain how they went from three borrowed servers connected to users via CAT6 cables running through a drop ceiling to a 2,000-core cluster backed by a 4PB Ceph array that is 100 percent code-driven. They will also discuss their continuous integration pipeline that integrates their Blackboard LMS, Heat and their AsciiDoc-based curriculum-as-code repository.

UK Civil Service

The UK government has recognized that shared services and micro-services on a common open cloud infrastructure have a wide role to play in how individual departments innovate. James Curran and Justin Cook from the UK Civil Service will explain how this approach, on Red Hat OpenStack hosted by UKCloud, is being used to streamline the process for UK companies to import and export licenses.

Adobe Advertising Cloud

After analyzing the cost effectiveness of cloud bursting, Nicolas Brousse, the director of engineering and operations at Adobe Advertising Cloud will discuss how to burst workloads from an OpenStack private cloud to AWS public cloud, the network constraints and challenges while doing cloud bursting and the realities of a multi-cloud environment.

Stay tuned as we continue to preview OpenStack Summit Boston talks.

If you have a talk you would like to recommend, we’d love to hear from you: email editor@openstack.org.

Haven’t registered yet? Hurry, prices increase this Friday, April 14 at 11:59 p.m. Pacific Time Zone.

Cover photo courtesy of the OpenStack Foundation. 

The post 5 OpenStack user sessions you can’t miss at the Boston Summit appeared first on OpenStack Superuser.

by Allison Price at April 10, 2017 03:08 PM

RDO

Recent blog posts

I haven't done an update in a few weeks. Here are some of the blog posts from our community in the last few weeks.

Red Hat joins the DPDK Project by Marcos Garcia - Principal Technical Marketing Manager

Today, the DPDK community announced during the Open Networking Summit that they are moving the project to the Linux Foundation, and creating a new governance structure to enable companies to engage with the project, and pool resources to promote the DPDK community. As a long-time contributor to DPDK, Red Hat is proud to be a founding Gold member of the new DPDK Project initiative under the Linux Foundation.

Read more at http://redhatstackblog.redhat.com/2017/04/06/red-hat-joins-the-dpdk-project/

What’s new in OpenStack Ocata by rbowen

OpenStack Ocata has now been out for a little over a month – https://releases.openstack.org/ – and we’re about to see the first milestone of the Pike release. Past cycles show that now’s about the time when people start looking at the new release to see if they should consider moving to it. So here’s a quick overview of what’s new in this release.

Read more at http://drbacchus.com/whats-new-in-openstack-ocata/

Steve Hardy: OpenStack TripleO in Ocata, from the OpenStack PTG in Atlanta by Rich Bowen

Steve Hardy talks about TripleO in the Ocata release, at the Openstack PTG in Atlanta.

Read more at http://rdoproject.org/blog/2017/04/steve-hardy-openstack-tripido-in-ocata-from-the-openstack-ptg-in-atlanta/

Using a standalone Nodepool service to manage cloud instances by tristanC

Nodepool is a service used by the OpenStack CI team to deploy and manage a pool of devstack images on a cloud server for use in OpenStack project testing.

Read more at http://rdoproject.org/blog/2017/03/standalone-nodepool/

Red Hat Summit 2017 – Planning your OpenStack labs by Eric D. Schabell

This year in Boston, MA you can attend the Red Hat Summit 2017, the event to get your updates on open source technologies and meet with all the experts you follow throughout the year.

Read more at http://redhatstackblog.redhat.com/2017/04/04/red-hat-summit-2017-planning-your-openstack-labs/

Stephen Finucane - OpenStack Nova - What's new in Ocata by Rich Bowen

At the OpenStack PTG in February, Stephen Finucane speaks about what's new in Nova in the Ocata release of OpenStack.

Read more at http://rdoproject.org/blog/2017/03/stephen-finucane-openstack-nova-whats-new-in-ocata/

Zane Bitter - OpenStack Heat, OpenStack PTG, Atlanta by Rich Bowen

At the OpenStack PTG last month, Zane Bitter speaks about his work on OpenStack Heat in the Ocata cycle, and what comes next.

Read more at http://rdoproject.org/blog/2017/03/zane-bitter-openstack-heat-openstack-ptg-atlanta/

The journey of a new OpenStack service in RDO by amoralej

When new contributors join RDO, they ask for recommendations about how to add new services and help RDO users to adopt it. This post is not a official policy document nor a detailed description about how to carry out some activities, but provides some high level recommendations to newcomers based on what I have learned and observed in the last year working in RDO.

Read more at http://rdoproject.org/blog/2017/03/the-journey-of-a-service-in-rdo/

InfraRed: Deploying and Testing Openstack just made easier! by bregman

Deploying and testing OpenStack is very easy. If you read the headline and your eyebrows raised, you are at the right place. I believe that most of us, who experienced at least one deployment of OpenStack, will agree that deploying OpenStack can be a quite frustrating experience. It doesn’t matter if you are using it for […]

Read more at http://abregman.com/2017/03/20/infrared-deploying-and-testing-openstack-just-made-easier/

by Rich Bowen at April 10, 2017 02:45 PM

April 07, 2017

ICCLab

15th OpenStack Meetup

On the 21st of March we held the 15th OpenStack meetup. As ever, the talks were interesting, relevant and entertaining. It was kindly sponsored by Rackspace and held at their offices in Zürich. Much thanks goes to them and to previous sponsors!

At this meetup there were 2 talks and an interactive and impromptu panel discussion on the recent operator’s meetup in Milan.

The first talk was by Giuseppe Paterno, who shared eBay’s experience with the workloads running on OpenStack there.

Next up was Geoff Higginbottom from Rackspace who showed how to use Nagios and StackStorm to automate the recovery of OpenStack services. This was interesting from the lab’s perspective as much of what Geoff talked about was related to our Cloud Incident Management initiative. You can see almost the same talk that Geoff gave at the OpenStack Nordic Days.

The two presentations were followed by a panel discussion involving those who attended, including our own Seán Murphy, and moderated by Andy Edmonds. Finally, as is now almost a tradition, we had a very nice apero!

Looking forward to the next and 16th OpenStack meetup!

by Andy Edmonds at April 07, 2017 03:30 PM

OpenStack Superuser

Let’s heat things up with Cinder and Heat

So let’s get back to the exciting stuff. If you have no clue about what I’m talking about, then you’ve most likely missed Part 1: The Neighborhood. If you have some idea about the series but are feeling a little lost, then you probably skipped Part 2: Getting to know OpenStack better. I suggest that you skim through them to get a better perspective on what we’re about to do.

Recap of first two posts:

  • You were introduced to OpenStack.
  • You learned the basic survival elements of OpenStack and how to set them up for a basic OpenStack install.
  • You got the keys, took it for a spin, saw money-making potential and figured that it handles well and possesses some serious networking chops.
  • You are revved up!

Confused? Don’t be. The diagram below depicts the current state of OpenStack at the end of the second tutorial.

ose2-7

At this point, we’ve configured the following:

Function                         Module
User authentication              Keystone
Image service                    Glance
Compute                          Nova
Networking                       Neutron
GUI (graphical user interface)   Horizon

Is it time to heat things up? No, not yet! Before we can do that, we have got to work our way around the block. So let’s look at one more piece in this puzzle.

What we’re going to set up next is block storage. You may ask, “what’s that?” Well, you’re going to be running some cloud instances for your customers (or tenants) and these customers will need to store the data in some sort of persistent storage (fancy name for storage that can persist beyond reboots and beyond the life of the cloud instance itself.)

Cinder is the module that allows you to provide additional persistent storage to your cloud instances or other cloud services. In simpler terms, it provides additional disks to your customer machines in the cloud.

F. Cinder

ose3-1

In the lab environment, we’re going to set up the Cinder service on the “controller” node. In a production environment, you’d want to have independent Cinder Nodes. Note that this will host the disks for your customers, so the workload on these nodes will be I/O intensive (Disk I/O).

There are multiple ways to handle the back-end storage for the Cinder nodes. For our lab environment, we’re using a local disk on the controller. In a production environment, this could be a disk mounted on your storage node from a storage area network, network attached storage, or distributed storage like Ceph. (The specifics of the back-end storage are beyond the scope of this tutorial, however.)

For now, in order to configure Cinder, please perform the following configuration:

Note on the syntax: these are covered in the previous posts, but for the benefit of the new readers please note the following:

  • @<servername> means that you need to do the configuration that follows on that server.
  • Whenever you see sudo vi <filename> that means that you need to edit that file and the indented text that follows is what needs to be edited in that file.
  • OS means OpenStack

@controller (note that even if your storage node is separate, this configuration still needs to be done on the controller and NOT on the storage node)

Create the database and assign the user appropriate rights:

sudo mysql -u root -p
  CREATE DATABASE cinder;
  GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
  IDENTIFIED BY 'MINE_PASS';
  GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
  IDENTIFIED BY 'MINE_PASS';
  exit

Source the keystone_admin file to get OS command-line access

source ~/keystone_admin

Create the Cinder user

openstack user create --domain default --password-prompt cinder

Add the role for the Cinder user:

openstack role add --project service --user cinder admin

Cinder requires two services for operation. Create the required services:

openstack service create --name cinder   --description "OpenStack Block Storage" volume
openstack service create --name cinderv2   --description "OpenStack Block Storage" volumev2

Create the respective endpoints for each service:

openstack endpoint create --region RegionOne   volume public http://controller:8776/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne   volume internal http://controller:8776/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne   volume admin http://controller:8776/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne   volumev2 public http://controller:8776/v2/%\(tenant_id\)s
openstack endpoint create --region RegionOne   volumev2 internal http://controller:8776/v2/%\(tenant_id\)s
openstack endpoint create --region RegionOne   volumev2 admin http://controller:8776/v2/%\(tenant_id\)s

Perform the Cinder software installation

sudo apt install cinder-api cinder-scheduler

Perform the edit on the Cinder configuration file:

sudo vi /etc/cinder/cinder.conf

 [DEFAULT]
 auth_strategy = keystone
 #Define the URL/credentials to connect to RabbitMQ
 transport_url = rabbit://openstack:MINE_PASS@controller
 #This is the IP of the storage node (in our case it is the controller node)
 my_ip = 10.30.100.215

 [database]
 # Tell cinder how to connect to the database. Comment out any existing connection lines.
 connection = mysql+pymysql://cinder:MINE_PASS@controller/cinder

 # Tell cinder how to connect to keystone
 [keystone_authtoken]
 auth_uri = http://controller:5000
 auth_url = http://controller:35357
 memcached_servers = controller:11211
 auth_type = password
 project_domain_name = default
 user_domain_name = default
 project_name = service
 username = cinder
 password = MINE_PASS

 [oslo_concurrency]
 lock_path = /var/lib/cinder/tmp

Populate the Cinder database

sudo su -s /bin/sh -c "cinder-manage db sync" cinder

Configure the compute service to use cinder for block storage.

sudo vi /etc/nova/nova.conf
 [cinder]
 os_region_name = RegionOne

Restart Nova for the config changes to take effect.

sudo service nova-api restart

Start the Cinder services on the controller.

sudo service cinder-scheduler restart
sudo service cinder-api restart

Now, there are certain items that warrant some explanation. First of all we are going to use the LVM driver to manage logical volumes for our disks. For our lab environment, note that we have an empty partition on a disk on /dev/vda3. This is the partition that will host all the Cinder volumes that we will provide to our customers. For your environment, please substitute this with the respective name/path of the empty disk/partition you want to use.

@controller (or your storage node if your storage node is a separate one)

First we install the supporting utility for lvm.

sudo apt install lvm2

Now we set up the disk. The first command initializes the partition on our disk (or the whole disk if you are using a separate disk). As stated above, please replace the disk name with the appropriate one in your case.

sudo pvcreate /dev/vda3

The below command creates a volume group on the disk/partition that we initialized above. We use the name ‘cinder-volumes’ for the volume group. This volume group will contain all the cinder volumes (disks for the customer cloud instances).

sudo vgcreate cinder-volumes /dev/vda3
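Before moving on, it’s worth sanity-checking what we just created. The commands below only read LVM state; they assume the device name used above, so adjust it if yours differs:

```shell
# Confirm the physical volume was initialized on our partition
sudo pvs
# Confirm the 'cinder-volumes' volume group exists and shows free space
sudo vgs cinder-volumes
```

If `vgs` reports the volume group with its expected size, Cinder will be able to carve logical volumes out of it later.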

Below is a filter that needs to be defined in order to avoid performance issues and other complications on the storage node (according to official documentation). What happens is that, by default, the LVM scanning tool scans the /dev directory for block storage devices that contain volumes. We only want it to scan the devices that contain the cinder-volumes group (since that contains the volumes for OS).

Configure the LVM configuration file as follows:

sudo vi /etc/lvm/lvm.conf
 devices {
 ...
 filter = [ "a/vda2/", "a/vda3/", "r/.*/"]
 ...
 }

The a in the filter stands for accept, and what follows it is a regular expression. The line ends with “r/.*/”, which rejects all remaining devices. But wait a minute: the filter shows vda3, which is fine (since that contains the cinder-volumes), but what is vda2 doing there? According to the OS documentation, if the storage node uses LVM on the operating system disk then we must add the associated device to the filter. For the lab, /dev/vda2 is the partition that contains the operating system.

ose3-4
Logical depiction of cinder-volumes

Install the volume service software:

sudo apt install cinder-volume

Edit the configuration file for Cinder:

sudo vi /etc/cinder/cinder.conf

 [DEFAULT]
 auth_strategy = keystone
 #Tell cinder how to connect to RabbitMQ
 transport_url = rabbit://openstack:MINE_PASS@controller
 #This is the IP of the storage node (controller for the lab)
 my_ip = 10.30.100.215
 #We are using the lvm backend (This is the name of the section we will define later in the file)
 enabled_backends = lvm
 glance_api_servers = http://controller:9292
 lock_path = /var/lib/cinder/tmp

 #Cinder DB connection. Comment out any existing connection entries.
 [database]
 connection = mysql+pymysql://cinder:MINE_PASS@controller/cinder

 #Tell cinder how to connect to keystone
 [keystone_authtoken]
 auth_uri = http://controller:5000
 auth_url = http://controller:35357
 memcached_servers = controller:11211
 auth_type = password
 project_domain_name = default
 user_domain_name = default
 project_name = service
 username = cinder
 password = MINE_PASS

 #This is the backend subsection
 [lvm]
 #Use the LVM driver
 volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
 #This is the name of the volume group we created using the vgcreate command. If you changed the name use the changed name here.
 volume_group = cinder-volumes
 #The volumes are provided to the instances using the ISCSI protocol
 iscsi_protocol = iscsi
 iscsi_helper = tgtadm

Start the block storage service and the dependencies:

sudo service tgt restart
sudo service cinder-volume restart

Verify Operation

@controller

Source the OS command line:

source ~/keystone_admin

List the cinder services and ensure the status is up:

openstack volume service list
+------------------+---------------------------+------+---------+-------+----------------------------+
| Binary           | Host                      | Zone | Status  | State | Updated At                 |
+------------------+---------------------------+------+---------+-------+----------------------------+
| cinder-scheduler | controller                | nova | enabled | up    | 2016-12-14T07:24:22.000000 |
| cinder-volume    | controller@lvm            | nova | enabled | up    | 2016-12-14T07:24:22.000000 |
+------------------+---------------------------+------+---------+-------+----------------------------+

Since we worked so hard on this, let’s do further verification. Let’s try and create a 1GB test volume.

openstack volume create --size 1 test-vol
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| attachments         | []                                   |
| availability_zone   | nova                                 |
| bootable            | false                                |
| consistencygroup_id | None                                 |
| created_at          | 2016-12-14T07:28:59.491675           |
| description         | None                                 |
| encrypted           | False                                |
| id                  | f6ec46ca-9ccf-47fb-aaea-cdde4ad9644e |
| migration_status    | None                                 |
| multiattach         | False                                |
| name                | test-vol                             |
| properties          |                                      |
| replication_status  | disabled                             |
| size                | 1                                    |
| snapshot_id         | None                                 |
| source_volid        | None                                 |
| status              | creating                             |
| type                | None                                 |
| updated_at          | None                                 |
| user_id             | 97b1b7d8cb0d473c83094c795282b5cb     |
+---------------------+--------------------------------------+

Now let’s list the volumes in the environment and ensure that the volume you just created appears with Status = available.

openstack volume list
+--------------------------------------+--------------+-----------+------+------------------------------+
| ID                                   | Display Name | Status    | Size | Attached to                  |
+--------------------------------------+--------------+-----------+------+------------------------------+
| f6ec46ca-9ccf-47fb-aaea-cdde4ad9644e | test-vol     | available |    1 |                              |
+--------------------------------------+--------------+-----------+------+------------------------------+
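As a final (optional) check, you can hand the new volume to a running instance. This is only a sketch: it assumes you already have an instance up, and the instance name vm1 below is my own placeholder, not something created in this post.

```shell
# Attach the test volume to an existing instance (vm1 is a hypothetical name)
openstack server add volume vm1 test-vol
# The volume should now appear with Status = in-use and the attached instance
openstack volume list
# Detach and delete it when you're done experimenting
openstack server remove volume vm1 test-vol
openstack volume delete test-vol
```

Behind the scenes this exercises the whole chain we just built: the API, the scheduler, the LVM back end and the iSCSI export to the compute node.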

Congratulations! You’ve reached a significant milestone. You have a fully functioning OpenStack all to yourself. If you’re following along these episodes and have successfully verified the operations of the respective modules, then give yourself a pat on the back.

When we started this series, I explained that OpenStack is complex, not complicated. It is comprised of a number of parts (services) that work together to bring us the whole OpenStack experience. We have been introduced to the main characters of OpenStack, learned which ones like to work with which, and what the basic functions of each are.

By now you’re wondering, “Where’s the heat?” Finally, let me introduce you to one of my favorite characters of OpenStack, the orchestration service Heat.

G. Heat

ose3-2

Heat is a service that manages orchestration. What’s that? Let me take you through some examples. Let’s say you have a new customer. This customer will require certain networks, cloud instances, routers, firewall rules, etc. One way to achieve this is to use the OpenStack command line tool or the Horizon GUI.

Both of these are good methods; however, they are time consuming, require manual intervention and are prone to human error. What if I told you that there is a way to automate most of these things and standardize them using templates so you can reuse them across customers? That’s what Heat does. It automates the provisioning of services for your guests (cloud customers). Please perform the following configuration to set up Heat on the controller node:

@controller

Create the heat database and assign full user rights:

sudo mysql -u root -p
 CREATE DATABASE heat;
 GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' \
 IDENTIFIED BY 'MINE_PASS';
 GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' \
 IDENTIFIED BY 'MINE_PASS';
 exit

Source the OS command line:

source ~/keystone_admin

Create the Heat user:

openstack user create --domain default --password-prompt heat

Assign the role to the Heat user:

openstack role add --project service --user heat admin

Heat requires two services. Create them both:

openstack service create --name heat --description "Orchestration" orchestration
openstack service create --name heat-cfn --description "Orchestration"  cloudformation

Create the respective service endpoints:

openstack endpoint create --region RegionOne orchestration public http://controller:8004/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne orchestration internal http://controller:8004/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne orchestration admin http://controller:8004/v1/%\(tenant_id\)s

openstack endpoint create --region RegionOne cloudformation public http://controller:8000/v1
openstack endpoint create --region RegionOne cloudformation internal http://controller:8000/v1
openstack endpoint create --region RegionOne cloudformation admin http://controller:8000/v1

Heat requires a special domain in OpenStack to operate. Create the respective domain:

openstack domain create --description "Stack projects and users" heat

Create the domain admin for the special heat domain:

openstack user create --domain heat --password-prompt heat_domain_admin

Add the role for the heat_domain_admin:

openstack role add --domain heat --user-domain heat --user heat_domain_admin admin

Create this new role. You can add this role to any user in OpenStack who needs to manage Heat stacks. (Stacks are a way to represent the application of Heat templates and what they are doing in a certain scenario. More on this later.)

openstack role create heat_stack_owner

(Optional) Let’s say you have a user customer1_admin in a project Customer1. You can use the following command to allow this user to manage Heat stacks.

openstack role add --project Customer1 --user customer1_admin heat_stack_owner

Create the heat_stack_user role:

openstack role create heat_stack_user

NOTE: The Orchestration service automatically assigns the heat_stack_user role to users that it creates during the stack deployment. By default, this role restricts API operations. To avoid conflicts, do not add this role to users with the heat_stack_owner role.

Install the Heat software:

sudo apt-get install heat-api heat-api-cfn heat-engine

Configure the Heat config file:

sudo vi  /etc/heat/heat.conf
 [DEFAULT]
 rpc_backend = rabbit
 heat_metadata_server_url = http://controller:8000
 heat_waitcondition_server_url = http://controller:8000/v1/waitcondition
 #This is the domain admin we defined above
 stack_domain_admin = heat_domain_admin
 stack_domain_admin_password = MINE_PASS
 #This is the name of the special domain we defined for heat
 stack_user_domain_name = heat
 #Tell heat how to connect to RabbitMQ
 transport_url = rabbit://openstack:MINE_PASS@controller

 #Heat DB connection. Comment out any existing connection entries
 [database]
 connection = mysql+pymysql://heat:MINE_PASS@controller/heat

 #Tell heat how to connect to keystone
 [keystone_authtoken]
 auth_uri = http://controller:5000
 auth_url = http://controller:35357
 memcached_servers = controller:11211
 auth_type = password
 project_domain_name = default
 user_domain_name = default
 project_name = service
 username = heat
 password = MINE_PASS

 #This section is required for identity service access
 [trustee]
 auth_type = password
 auth_url = http://controller:35357
 username = heat
 password = MINE_PASS
 user_domain_name = default

 #This section is required for identity service access
 [clients_keystone]
 auth_uri = http://controller:35357

 #This section is required for identity service access
 [ec2authtoken]
 auth_uri = http://controller:5000

Initialize the Heat DB:

sudo su -s /bin/sh -c "heat-manage db_sync" heat

Start the Heat services:

sudo service heat-api restart
sudo service heat-api-cfn restart
sudo service heat-engine restart

Verify Operation:

Source the OpenStack command line:

source ~/keystone_admin

List the Heat services and ensure that the status is set to up as shown below:

openstack orchestration service list
+-----------------------+-------------+--------------------------------------+-----------------------+--------+----------------------------+--------+
| hostname              | binary      | engine_id                            | host                  | topic  | updated_at                 | status |
+-----------------------+-------------+--------------------------------------+-----------------------+--------+----------------------------+--------+
| controller            | heat-engine | de08860a-8d30-483a-acd5-6cfef8cb7d77 | controller            | engine | 2016-12-14T07:53:42.000000 | up     |
| controller            | heat-engine | 859475c8-9b2a-4793-b877-e89a4f0920f8 | controller            | engine | 2016-12-14T07:53:42.000000 | up     |
| controller            | heat-engine | 4ca0a3bb-7c2b-4fe1-8233-82b7e0548b9a | controller            | engine | 2016-12-14T07:53:42.000000 | up     |
| controller            | heat-engine | d22b36b1-1467-4987-aa30-0ac9787450e1 | controller            | engine | 2016-12-14T07:53:42.000000 | up     |
+-----------------------+-------------+--------------------------------------+-----------------------+--------+----------------------------+--------+
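If you want to exercise Heat end to end, you can feed it a minimal template. The sketch below is my own illustration (the file name, stack name and resource names are hypothetical); it simply asks Heat to create a 1GB Cinder volume using the block storage service we configured earlier:

```shell
# Write a minimal HOT template that creates a single 1GB Cinder volume
cat > test-stack.yaml <<'EOF'
heat_template_version: 2016-10-14

description: Minimal test stack that creates a 1GB Cinder volume

resources:
  test_volume:
    type: OS::Cinder::Volume
    properties:
      name: heat-test-vol
      size: 1
EOF
```

Launch it with `openstack stack create -t test-stack.yaml test-stack`, watch its status with `openstack stack list` (it should reach CREATE_COMPLETE), and clean up with `openstack stack delete test-stack`.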

Fantastic!

Look at how far you and OpenStack have come in only three posts. Take a look at the diagram below:

ose3-3
OpenStack base services (+Heat)

So at the end of Post 3 we have successfully set up most of the common OpenStack base services and, more importantly, understood the various configurations that allow these services to interconnect and work together. As always, I sincerely thank you for reading and for your patience. If you have any questions or comments, please feel free to share them in the comments section below so that everyone can benefit from the discussion.

This post first appeared as part of a Series of posts called “An OpenStack Series” on the WhatCloud blog. Superuser is always interested in community content, email: editor@superuser.org.

Cover Photo // CC BY NC

The post Let’s heat things up with Cinder and Heat appeared first on OpenStack Superuser.

by Nooruddin Abbas at April 07, 2017 11:29 AM

April 06, 2017

Mirantis

Mirantis Cloud Platform: The light at the end of the OpenStack tunnel (that’s not another train)

There's no denying that the last year has seen a great deal of turmoil in the OpenStack world, and here at Mirantis we're not immune to it. In fact, some would say that we're part of that turmoil.

by Nick Chase at April 06, 2017 09:00 PM

What happens when you inject AppController into the Helm?

The post What happens when you inject AppController into the Helm? appeared first on Mirantis | Pure Play Open Cloud.

Helm is emerging as a standard for Kubernetes application packaging. While researching it we discovered that its orchestration part can be improved. We did just that by injecting AppController right into the Helm orchestration engine. Check out our video from KubeCon EU to get insights into the advanced orchestration capabilities that AppController aims to introduce in Helm.


by Diana Dvorkin at April 06, 2017 08:15 PM

Red Hat Stack

Red Hat joins the DPDK Project

Today, the DPDK community announced during the Open Networking Summit that they are moving the project to the Linux Foundation, and creating a new governance structure to enable companies to engage with the project, and pool resources to promote the DPDK community. As a long-time contributor to DPDK, Red Hat is proud to be a founding Gold member of the new DPDK Project initiative under the Linux Foundation.

“Open source communities continue to be a driving force behind technology innovation, and open networking and NFV are great examples of that. Red Hat believes deeply in the power of open source to help transform the telecommunications industry, enabling service providers to build next generation efficient, flexible and agile networks,” said Chris Wright, Vice President and Chief Technologist, Office of Technology at Red Hat. “DPDK has played an important role in this network transformation, and our contributions to the DPDK community are aimed at helping to continue this innovation.”

DPDK, the Data Plane Development Kit, is a set of libraries and drivers which enable very fast processing of network packets, by handling traffic in user space or on specialized hardware to provide greater throughput and processing performance. The ability to do this is vital to get the maximum performance out of network hardware under dataplane intensive workloads. For this reason, DPDK has become key to the telecommunications industry as part of Network Functions Virtualization (NFV) infrastructure, to enable applications like wireless and wireline packet core, deep packet inspection, video streaming, and voice services.

Open source projects like DPDK have taken a leadership role in driving the transition to NFV and enabling technology innovation in the field of networking by accelerating the datapath for network traffic across virtual switching and routing infrastructure.
It is opportune that this move is announced during the Open Networking Summit, an event which celebrates the role of open source projects and open standards in the networking industry. DPDK is a critical component in enabling projects like OPNFV, Open vSwitch and fd.io to deliver that accelerated datapath and provide the necessary performance to network operators.

by Marcos Garcia - Principal Technical Marketing Manager at April 06, 2017 05:13 PM

OpenStack Superuser

How to balance corporate and community to build better open source

Open source is playing a larger and larger role in companies but that doesn’t mean they understand how to play nice with outside contributors.
They face the challenge of connecting internal company goals with the external dynamics of an open source community  — a clash that can leave both sides complaining “They just don’t get it!” says veteran community manager Jono Bacon. Bacon outlined a series of recommendations and pitfalls at the Linux Foundation’s Open Source Leadership Summit.
Bacon, also the founder of the upcoming Community Leadership Summit, kicked off the session with a few numbers from Black Duck’s 2016 Future of Open Source Survey: 65 percent of companies polled are contributing to open source projects, 67 percent actively encourage developers to contribute or participate, and 78 percent run some open source — twice as many as in 2010.

jono
Jono Bacon at the Linux Foundation Community Leadership Summit.

That creates an interesting problem when corporate logic meets the open source community ethic, he says. If companies get it right, in return they see great engineering, community relations and growth. Get it wrong and they end up alienating both employees and community members.

“It’s a big change for companies to go from being a traditional waterfall-style command and control environment to an open source one,” Bacon says. “It’s all about being permissive; you have to allow people to make mistakes. This is the open source way – no one is perfect, we all fail at times, but let’s talk about what we want to do to improve it…”

Bacon outlined a number of recommendations as well as pitfalls to watch out for.

Treat community as part of the product

Companies have engineering, sales and marketing teams; they’ll build the thing and then leave the forum or GitHub project over there – there you go. “To me that’s a mistake,” Bacon says. “If you want to embrace open source, it has to be something that everyone in the company plays some kind of role in.”
He cites the example of one of his consulting clients, HackerOne, where the CEO Mårten Mickos gave everyone at the company the “assignment” to know at least one hacker. “Everyone in open source should know at least one community member. The only way to do this is if the community is treated as a strategic component of the product development. It shouldn’t be that you hire someone to just ‘take care of it.’”

Give that “product” an owner

But someone has to be responsible for community just the same, he says. “Ultimately, there should be a single point of contact for ownership of all these ideas and concepts and how we work with communities, someone should be responsible for owning that and getting things done,” Bacon notes. “In a lot of companies it’s a side thing – we set this thing up in the community and when we have time, we’ll do something with it.”
A good example is when companies say they want to get the community involved by writing a bunch of blog posts —then people get busy and no blog posts get written. Companies need a plan in place with someone responsible for executing that plan. He believes that person should have some seniority — not necessarily reporting to the CEO but someone with the clout to have a frank conversation with senior management team to lay down a strategy.

Engage in extreme clarity

This can be as simple as writing a weekly report about what’s going on and sharing that in the community and within the company, Bacon says, adding that general anxiety levels increase in the absence of feedback. The preventative cure for that is openness. “When the company has differences in strategic work, share that with the community. It doesn’t have to be every single detail, but just be open about it,” Bacon says. “And, at the risk of sounding like Tony Robbins, failure really should be embraced. It’s how we learn. It needs to start with the top because the leaders shape the culture, if you have a CEO who is not a nice person, you often get a not-so-nice culture in the company.”

Earn trust

It works both ways. Companies need to earn trust, but it’s a partnership, a collaboration. “Get to know each other – have a drink, eat lunch together, be sociable.” That’s the kind of community interaction that helps members find jobs, fosters partnerships between companies and makes things all around more pleasant, he adds. That includes regular in-person meetings. “It’s amazing how valuable meeting in person is, having regular conversations. Developing friendships really matters. That’s something you can’t put a price tag on,” Bacon adds.

Avoid private development and code dumps

Fear that someone will steal your idea is not a good enough reason to keep your code to yourself, he says. “A lot of companies think they’re going to build this great piece of software and then provide it as a code dump, but don’t do it.” Bacon says he thinks this closed, one-off approach doesn’t work outside communities where there are very modular projects — like a driver, or a plug-in for example. “Try to be part of the process.”

Ignore the community (or companies) at your own peril

What Bacon hears from both companies and open source communities may sound familiar:
“They just don’t get it!”
“They have an unrealistic view of the world.”
“They are constantly complaining!”

And here is the response both sides most likely adopt: ignore the other side and hope it will go away.
“The community is not just a bunch of single-origin tea drinking weirdo hippies,” he says. “These people are part of how we build great things, so build a relationship don’t just write them off.”
It’s a simple concept that can be hard to put into practice, Bacon admits. If you’ve been working in business development in the airline industry, for example, the open source ethos is going to seem “weird” – so it helps to acknowledge that and help people adjust. Companies invest a lot of money in feature development in many open source projects, so it’s important to treat those people with the respect they deserve. There’s a group of people in the open source/free software world who are anti-company – viewing companies as both a risk and a threat. “That’s wrong. Companies can play a really valuable role, but they need to be members of the process, not dictators.”

Emotive decision making

This is another major pitfall, Bacon says. “When I see conflicts happening, invariably we’re not talking about the problem but the perception of the other person. This is the worst thing that can happen.”
He says the challenges mirror the currently supercharged political climate where it’s difficult to have an objective conversation. What happens often is that people immediately take sides and start firing shots — not a way to improve things, he says.
“The only way we can do this – as communities and companies – is to step back and look at the objectively defined things that we can talk about. There’s a lot of interpretive data and perception going on,” Bacon says. “But I think in those situations where you get that kind of conflict you must step back and focus on outcomes. Then we can solve it.”
Stefano Maffulli is an open-source marketer and community manager currently at DreamHost.

Cover Photo // CC BY NC

The post How to balance corporate and community to build better open source appeared first on OpenStack Superuser.

by Stefano Maffulli at April 06, 2017 11:15 AM

April 05, 2017

Chris Dent

OpenStack Pike TC Candidacy

This is my candidacy statement for the OpenStack Technical Committee, sent as email to the developers list.

I'm once again nominating myself to be your representative on the Technical Committee. I've been around OpenStack for about three years, most recently visible as the guy who writes those weekly updates about the placement API service and talks about the API-WG.

In the past several months we've seen the TC starting to take a more active role in describing and shaping the technical and cultural environment of OpenStack. Initiatives like release goals, TC and OpenStack vision exercises, discussions on how to reasonably constrain growth and increased attention to writing things down are all positives.

Meanwhile the economic environment for cloud technology and for technical contributors has been a roller coaster. Lots of things are changing in the world of OpenStack.

OpenStack must adapt. Doing so without losing the progress that's been made will be hard and requires input from a diversity of voices; people who are willing and able to critique and investigate the status quo but also understand the importance of consensus and value of compromise.

Voting for the TC is weird: people nominate themselves and then a small segment of the electorate places their votes based on some combination of "have I heard of this person before", "have I witnessed some of their work and liked it", and, sometimes, discussion that happens as a result of these candidacy statements. I hope you'll ask me some questions in the week before the election, but in an effort to illustrate the biases and concerns I would bring to the TC here are some opinions I have related to governance:

  • Telling stories that explain what and why are more useful in the long run than listing rules of how because they lead to a more complete understanding.

  • It is always better to over communicate than under communicate and it is best to do so in a written and discoverable fashion. Not just because this helps to keep everyone already involved up to date but because it also enables connections with new people and other communities.

  • The OpenStack ecosystem needs to open up to allow and encourage those connections. Open ecosystems can evolve and benefit from exchange of ideas. So yes, of course, we should use some golang. Of course we should party with kubernetes and trade ideas with them.

  • OpenStack is better when its people and its projects have opinions about lots of things, share those opinions widely, and use them to make better stuff and make better decisions.

  • There are too many boundaries (some real, some perceived) between developers of OpenStack, developers using OpenStack, users, and operators. We're all in this together. All of those people should be encouraged and able to be contributors and all of those people should be users.

  • OpenStack can and should do a lot of complicated stuff for big enterprises (things like NFV and high performance VMs) but the changes required to satisfy those use cases must always be balanced and measured against providing a useful and usable cloud for individual humans.

  • As we move forward on the idea of OpenStack as one platform made with many pieces, we have an opportunity to re-evaluate and refactor our architecture and project structure to make it easier for improvement to happen. We need to ask ourselves if the boundaries we currently maintain, technical and social, are the right ones, and change the ones that are not.

  • For a lot of people, contributing to OpenStack is a job. Working on OpenStack should be a good experience for everyone. I think of being a TC member as something akin to a union representative: striving to keep things sane and positive for the individual contributor in the face of change and conflict.

With the TC positioning itself to take a more active role, these elections could be more important than you've come to expect. The people you choose, the attitudes they have, will shape that new activism. If you feel like I'm talking some sense above, I'd appreciate your vote. If you need some clarification, please ask me some questions. After that, if you're still not convinced, please vote for someone else. But please vote.

by Chris Dent at April 05, 2017 06:00 PM

Major Hayden

RHEL 7 STIG v1 updates for openstack-ansible-security

DISA’s final release of the Red Hat Enterprise Linux (RHEL) 7 Security Technical Implementation Guide (STIG) came out a few weeks ago, and it has plenty of improvements and changes. The openstack-ansible-security role has already been updated with these changes.

Quite a few duplicated STIG controls were removed and a few new ones were added. Some of the controls in the pre-release were difficult to implement, especially those that changed parameters for PKI-based authentication.

The biggest challenge overall was the renumbering. The pre-release STIG used an unusual numbering convention: RHEL-07-123456. The final version used the more standardized “V” numbers, such as V-72225. This change required a substantial patch to bring the Ansible role in line with the new STIG release.

All of the role’s documentation is now updated to reflect the new numbering scheme and STIG changes. The key thing to remember is that you’ll need to use --skip-tags with the new STIG numbers if you need to skip certain tasks.
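As a hedged illustration (the playbook path and inventory file below are placeholders, not from the post; only the V-72225 ID appears in the STIG itself), skipping a control under the new scheme looks roughly like this, with a runnable sanity check of the ID format:

```shell
# Hypothetical invocation of the security role, skipping one control by its
# new "V" number (playbook path and inventory name are placeholders):
#
#   ansible-playbook -i inventory security-playbook.yml --skip-tags V-72225
#
# The new IDs follow a simple V-NNNNN pattern, which is easy to sanity-check:
stig_id="V-72225"
if echo "$stig_id" | grep -Eq '^V-[0-9]{5}$'; then
  echo "valid STIG id: $stig_id"
fi
```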

Note: These changes won’t be backported to the stable/ocata branch, so you need to use the master branch to get these changes.

Have feedback? Found a bug? Let us know!

The post RHEL 7 STIG v1 updates for openstack-ansible-security appeared first on major.io.

by Major Hayden at April 05, 2017 05:46 PM

NFVPE @ Red Hat

How-to use GlusterFS to back persistent volumes in Kubernetes

Storing persistent data is a mountain I keep walking around instead of climbing in my Kubernetes lab. Sure – in a lab, I can just throw it all out most of the time. But what about when we really need it? I decided I would use GlusterFS to back my persistent volumes, and I’ve got to say… my experience with GlusterFS was great. I really enjoyed using it, and it seems rather resilient – and best of all? It was pretty easy to get going and to operate. Today we’ll spin up a Kubernetes cluster using my kube-centos-ansible playbooks, and use some newly included plays that also set up a GlusterFS cluster. With that in hand, our goal will be to set up the persistent volumes and claims to those volumes, and we’ll spin up a MariaDB pod that stores data in a persistent volume – important data that we want to keep – so we’ll make some data about Vermont beer, as it’s very, very important.
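Before diving in, here is a hedged sketch of what a GlusterFS-backed PersistentVolume manifest looks like (the endpoints name, volume path, and capacity are placeholders, not values from this walkthrough; the kubectl step is commented out so the snippet stands on its own):

```shell
# Write a minimal, hypothetical PersistentVolume manifest backed by GlusterFS.
cat > gluster-pv.yaml <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gluster-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: glusterfs-cluster   # an Endpoints object pointing at the gluster nodes
    path: myvolume                 # the name of an existing gluster volume
EOF
# Against a live cluster you would then create it and claim it:
#   kubectl create -f gluster-pv.yaml
echo "wrote gluster-pv.yaml"
```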

by Doug Smith at April 05, 2017 05:05 PM

koko – Connect Containers together with virtual ethernet connections

Let’s dig into koko, created by Tomofumi Hayashi. koko (the name comes from “COntainer COnnector”) is a utility written in Go that gives us a way to connect containers together with “veth” (virtual ethernet) devices – a feature available in the Linux kernel. This allows us to specify the interfaces that the containers use and link them together – all without using Linux bridges. koko has become a cornerstone of the zebra-pen project, an effort I’m involved in to analyze gaps in containerized NFV workloads; specifically, it routes traffic using Quagga, and we set up all the interfaces using koko. The project really took a turn for the better when Tomo came up with koko and we implemented it in zebra-pen. Ready to see koko in action? Let’s jump in the pool!
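To give a flavor of the mechanism (the interface and container names below are invented for illustration; this is not koko’s actual code), the plumbing koko automates looks roughly like this with plain iproute2 and Docker:

```shell
# Sketch of the veth plumbing koko automates (all names are examples):
#
#   ip link add veth-host type veth peer name veth-cont      # create the pair
#   pid=$(docker inspect -f '{{.State.Pid}}' mycontainer)    # container's netns PID
#   sudo ip link set veth-cont netns "$pid"                  # move one end inside
#   sudo ip link set veth-host up
#
# The commands above need root and a running container, so they are shown
# as comments only.
msg="a veth pair links two network namespaces directly, no bridge required"
echo "$msg"
```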

by Doug Smith at April 05, 2017 05:05 PM

vethcon – Connect Docker containers together with virtual ethernet connections

Let’s dig into vethcon, created by Tomofumi Hayashi. vethcon is a utility written in Go that gives us a way to connect containers together with “veth” (virtual ethernet) devices – a feature available in the Linux kernel. This allows us to specify the interfaces that the containers use and link them together – all without using Linux bridges. vethcon has become a cornerstone of the zebra-pen project, an effort I’m involved in to analyze gaps in containerized NFV workloads; specifically, it routes traffic using Quagga, and we set up all the interfaces using vethcon. The project really took a turn for the better when Tomo came up with vethcon and we implemented it in zebra-pen. Ready to see vethcon in action? Let’s jump in the pool!

by Doug Smith at April 05, 2017 05:04 PM

Matt Dorn

Certified OpenStack Administrator Exam Book

My new book, Preparing for the Certified OpenStack Administrator Exam , will be out in July 2017 on Packt Publishing.

The book prepares students for the upcoming Newton version of the exam on Ubuntu 16.04. The exam currently tests on Liberty but will be upgraded to Newton in May 2017.

Here is an excerpt from the first chapter:

Benefits of Passing the Exam

Ask anyone about getting started in the IT world and they may suggest looking into industry-recognized technical certifications.  IT certifications measure competency in a number of areas and are a great way to open doors to opportunities.  While they certainly should not be the only determining factor in the hiring process, achieving them can be a measure of your competence and commitment to facing challenges.

If You Pass...

Upon receiving a passing grade, you will receive your certificate. Laminate it, frame it, or pin it to your home office wall or work cubicle! It’s proof that you have met all the requirements to become an official OpenStack Administrator. The certification is valid for three years from the pass date, so don't forget to renew!

In addition to the certification, a COA badge will appear next to your name in the OpenStack Foundation Member Directory.

Note

The OpenStack Foundation has put together a great tool for helping employers verify the validity of COA certifications.  Check out the Certified OpenStack Administrator Verification Tool.

7 Steps to Becoming a Certified OpenStack Administrator!

Let's begin by walking through some steps to become a Certified OpenStack Administrator!

Step 1 - Study!

Practice. Practice. Practice. Use this book and the included OpenStack All-in-One Virtual Appliance as a resource as you begin your Certified OpenStack Administrator journey. If you still find yourself struggling with the concepts and objectives in this book, you can always refer to the Official OpenStack Documentation or even seek out a live training class at the OpenStack Training Marketplace.

Step 2 - Purchase!

Once you feel that you’re ready to conquer the exam, head to the Official Certified OpenStack Administrator Homepage and click on Get Started. After signing in, you will be directed to checkout to purchase your exam. The OpenStack Foundation accepts all major credit cards, and as of April 2017 the exam costs $300.00 USD; the price is subject to change, so keep an eye on the website. You can also get a FREE retake within 12 months of the original exam purchase date if you do not pass on the first attempt.

Note

To encourage students in academia to get their feet wet with OpenStack technologies, the OpenStack Foundation is offering the exam for $150.00 (50% off the retail price) with a valid student ID. Check out https://www.openstack.org/coa/student/ for more info.

Step 3 - COA Portal Page

Once your order is processed, you will receive an email with access to the COA Portal. Think of the portal as your personal COA website where you can download your exam receipt and keep track of your certification efforts. Once you take the exam, you can come back to the COA Portal to check your exam status, exam score, and even download certificates and badges for displaying on business cards or websites!

Step 4 - Hardware Compatibility Check

The COA exam can be taken from your personal laptop or desktop, but you must ensure that your system meets the exam’s minimum system requirements. A link on the COA Portal page will present you with the Compatibility Check Tool, which will run a series of tests to ensure you meet the requirements. It will also assist you in downloading a Chrome plugin for taking the exam. At this time, you must use the Chrome or Chromium browser and have access to reliable internet, a webcam, and a microphone. Here is a current list of requirements:

Step 5 - Identification

You must be at least 18 years old and have proper identification to take the exam!

Any of the following pieces of identification are acceptable:

  • Passport
  • Government-issued driver's license or permit
  • National identity card
  • State or province-issued identity card

Step 6 - Schedule the Exam

I personally recommend scheduling your exam a few months ahead of time to give yourself a realistic goal. Click on the Schedule Exam link on the COA Portal to be directed and automatically logged into the Exam Proctor Partner website. Once logged into the site, type OpenStack Foundation in the search box and select the COA exam. You will then choose from available dates and times. The latest possible exam date you can schedule will be 30 days out from the current date. Once you have scheduled it, you can cancel or reschedule up to 24 hours before the start time of the exam.

Step 7 - Take the Exam!

Your day has arrived! You've used this book and have practiced day and night to master all of the covered objectives! It's finally time to take the exam!

One of the most important factors determining your success on the exam is the location. You cannot be in a crowded place! This means no coffee shops, work desks, or football games! The testing location policy is very strict, so please consider taking the exam from home or perhaps a private room in the office.

Log into the COA Portal fifteen minutes before your scheduled exam time. You should now see a Take Exam link which will connect to the Exam Proctor Partner website so you can connect to the testing environment.

Once in the exam environment, an Exam Proctor chat window will appear and assist you with starting your exam. You must allow sharing of your entire operating system screen (this includes all applications), webcam, and microphone. It’s time to begin! You have two and a half hours to complete all exam objectives. You're almost on your way to becoming a Certified OpenStack Administrator!

About the Exam Environment

The exam expects its test-takers to be proficient in interacting with OpenStack via the Horizon dashboard and command-line interface. Here is a visual representation of the exam console as outlined in the COA Candidate Handbook:

The exam console is embedded into the browser. It is composed of two primary parts: The Content Panel and the Dashboard/Terminal Panel.

The Content Panel is the section that displays the exam timer and objectives. As per the COA Handbook, exam objectives can only be navigated linearly. You can use the next and back buttons to move to each objective.

The Dashboard/Terminal Panel gives you full access to an OpenStack environment. Chapter 2 of this book will assist you with getting your practice OpenStack environment up and running so you can work through all the objectives.

Note

The exam console terminal is embedded in a browser and you cannot SCP (secure copy) to it from your local system. Within the terminal environment, you are permitted to install a multiplexor such as screen, tmux, or byobu if you think these will assist you but are not necessary for successful completion of all objectives.

You are not permitted to browse websites, email, or notes during the exam, but you are free to access the Official OpenStack Documentation. Relying on the documentation, however, can be a major waste of exam time and shouldn’t be necessary after working through the exam objectives in this book. You can also easily copy and paste from the objective window into the Horizon dashboard or terminal.

If you struggle with a question, move on! Hit the next button and try the next objective. You can always come back and tackle it before time is up.

The exam is scored automatically within 24 hours and you should receive the results via email within 72 hours after exam completion. At this time, the results will be made available on the COA Portal. Please review the Professional Code of Conduct on the OpenStack Foundation Certification Handbook.

The Exam Objectives

Let's now take a look at the objectives you will be responsible for performing on the exam. As of March 2017, these are all the exam objectives published on the Official COA website. These objectives will test your competence and proficiency in the domains listed below. These domains cover multiple core OpenStack services as well as general OpenStack troubleshooting. Together, all of these domains make up 100% of the exam.

Note

Because some of the objectives on the official COA Requirements list overlap, this book utilizes its own convenient strategy to ensure you can fulfill all objectives within all content areas.

Getting to Know OpenStack - 3% - Chapter 1

  • Understand the components that make up the cloud
  • Use the OpenStack API/CLI

Keystone - Identity management - 12% - Chapter 3

  • Manage Keystone catalogue services and endpoints
  • Manage/Create domains, groups, projects, users, and roles
  • Create roles for the environment
  • Manage the identity service
  • Verify operation of the Identity service

Glance - Image management - 10% - Chapter 4

  • Deploy a new image to an OpenStack instance
  • Manage image types and backends
  • Manage images (e.g. add, update, remove)
  • Verify operation of the Image Service

Nova - Compute - 15% - Chapter 5

  • Manage flavors
  • Manage compute instance actions (e.g. launch, shutdown, terminate)
  • Manage Nova user keypairs
  • Launch a new instance
  • Shutdown an Instance
  • Terminate Instance
  • Configure an Instance with a Floating IP address
  • Manage project security group rules
  • Assign security group to Instance
  • Assign floating IP address to Instance
  • Detach floating IP address from Instance
  • Manage Nova host consoles (rdp, spice, tty)
  • Access an Instance using a keypair
  • Manage instance snapshots
  • Manage Nova compute servers
  • Manage quotas
  • Get Nova stats (hosts, services, tenants)
  • Verify operation of the Compute service
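For a sense of what the floating-IP objectives above involve on the command line, here is a hedged sketch (the image, flavor, network, and address values are placeholders, not from the exam; the commands are shown as comments because they need a live cloud):

```shell
# Hedged sketch of the floating-IP workflow (all names are placeholders):
#
#   openstack server create --image cirros --flavor m1.tiny --key-name mykey vm1
#   openstack floating ip create public
#   openstack server add floating ip vm1 172.24.4.10
#   openstack server remove floating ip vm1 172.24.4.10
#
msg="floating IPs are allocated from an external network, then attached to the instance"
echo "$msg"
```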

Neutron - Networking - 16% - Chapter 6

  • Manage network resources (e.g., routers, subnets)
  • Create external networks
  • Create project networks
  • Create project routers
  • Manage network services for a virtual environment
  • Manage project security group rules
  • Manage quotas
  • Verify operation of network service
  • Manage network interfaces on compute instances
  • Troubleshoot network issues for a tenant network (enter namespace, run tcpdump, etc)
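The namespace/tcpdump objective above boils down to something like the following hedged sketch (the qrouter UUID is a placeholder; the commands need a live network node, so they are shown as comments):

```shell
# Hedged sketch of tenant-network troubleshooting (UUID is a placeholder):
#
#   ip netns list                                       # find the router namespace
#   sudo ip netns exec qrouter-<uuid> ip addr           # inspect its interfaces
#   sudo ip netns exec qrouter-<uuid> tcpdump -i any -n icmp
#
msg="tcpdump must run inside the router namespace, not the host default netns"
echo "$msg"
```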

Cinder - Block Storage - 10% - Chapter 7

  • Manage volume
  • Create volume group for block storage
  • Create a new Block Storage Volume and mount it to a Nova Instance
  • Manage quotas
  • Manage volumes quotas
  • Manage volumes backups
  • Backup and restore volumes
  • Manage volume snapshots (e.g., take, list, recover)
  • Verify that block storage can perform snapshotting function
  • Snapshot volume
  • Manage volumes encryption
  • Set up storage pools
  • Monitor reserve capacity of block storage devices
  • Analyze discrepancies in reported volume sizes

Swift - Object Storage - 10% - Chapter 8

  • Manage access to object storage
  • Manage expiring objects
  • Manage storage policies
  • Monitor space available for object store
  • Verify operation of Object Storage
  • Manage permissions on a container in object storage

Heat - Orchestration - 8% - Chapter 9

  • Launch a stack using a Heat/Orchestration template (e.g., storage, network, and compute)
  • Use Heat/Orchestration CLI and Dashboard
  • Verify Heat/Orchestration stack is working
  • Verify operation of Heat/Orchestration
  • Create a Heat/Orchestration template that matches a specific scenario
  • Update a stack
  • Obtain detailed information about a stack

Horizon - Dashboard - 3% - Chapters 3 through 9

  • Verify operation of the Dashboard

Troubleshooting - 13% - Chapter 10

  • Analyze log files
  • Backup the database(s) used by an OpenStack instance
  • Centralize and analyze logs (e.g., /var/log/COMPONENT_NAME, Database Server, Messaging Server, Web Server, syslog)
  • Analyze database servers
  • Analyze Host/Guest OS and Instance status
  • Analyze messaging servers
  • Analyze metadata servers
  • Analyze network status (physical & virtual)
  • Analyze storage status (local, block & object)
  • Manage OpenStack Services
  • Diagnose service incidents
  • Digest OpenStack environment (Controller, Compute, Storage and Network nodes)
  • Direct logging files through centralized logging system
  • Backup and restore an OpenStack instance
  • Troubleshoot network performance
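As a quick sanity check that the domain weights listed above cover the whole exam (the values are copied from the lists; the arithmetic is mine, not the book's):

```shell
# Domain weights in percent, in the order listed above:
# Getting to Know OpenStack, Keystone, Glance, Nova, Neutron,
# Cinder, Swift, Heat, Horizon, Troubleshooting.
weights=(3 12 10 15 16 10 10 8 3 13)
total=0
for w in "${weights[@]}"; do
  total=$((total + w))
done
echo "total: ${total}%"   # prints: total: 100%
```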

by Matt Dorn at April 05, 2017 05:00 PM

Rich Bowen

What’s new in OpenStack Ocata

OpenStack Ocata has now been out for a little over a month – https://releases.openstack.org/ – and we’re about to see the first milestone of the Pike release. Past cycles show that now’s about the time when people start looking at the new release to see if they should consider moving to it. So here’s a quick overview of what’s new in this release.

First, it’s important to remember that the Ocata cycle was very short. We usually do a release every six months, but with the rescheduling of the OpenStack Summit and OpenStack PTG (Project Team Gathering) events, Ocata was squeezed into 4 months to realign the releases with these events. So, while some projects squeezed a surprising amount of work into that time, most projects spent the time on smaller features, and finishing up tasks leftover from the previous release.

At a high level, the Ocata release was all about upgrades and containers – themes that I heard from almost every team I interviewed. (For the full interview series, see https://goo.gl/3aCQ2d.) How can we make upgrades smoother, and how can we deploy bits of the infrastructure in containers? These two things are closely related, and there seems to be more cross-project collaboration this time around than I’ve noticed in the past.

And the themes of upgrades and containers will continue to be prominent in the Pike cycle.

Highlights in the Ocata cycle include:

Auto-healing: Work was done in Heat to make it easier to recover from service failure. When an outage is detected, you can have Heat automatically spin up a replacement service, and swap it out without any intervention on the part of the operator.

Composability: Composable roles are a feature whereby you can specify details of how things are deployed, rather than allowing OpenStack to choose. You can, for example, specify that a particular hardware configuration be used for particular services. Work was done in Ocata to expand this to composable upgrades, so that these roles are respected across upgrades as well.

Multi-factor auth in Keystone: Work was done in Keystone to improve support of MFA, including OTP (One Time Password) support, and per-user token expiration rules.

NFV: Network Function Virtualization continues to be an area where we’re seeing a lot of activity, and so a lot of the work in Nova, Neutron, and various other projects focuses on these developments. NFV has become more stable in this release, and is more fully integrated into TripleO for ease of deployment. This effort is happening under the Apex project – https://wiki.opnfv.org/display/apex/Apex

Upgrades: Upgrades were a common theme across all projects, with the emphasis being the ability to upgrade from one release to the next with as close to zero downtime as possible. Much of this work centers around TripleO, Heat, and Mistral, for orchestration and automation of the process.

Containers: While centered around the Kolla project, containerization was a theme in many of the projects this cycle. The eventual goal, at least according to some, is that OpenStack services will be deployed in containers by default by the Pike release. This of course poses a real challenge for the Ocata -> Pike upgrade path (migrating from non-container to container in the course of the upgrade), and that’s something that the TripleO people are working hard on.

Security: TLS-everywhere made strides forward in Ocata, with connections between services moving to TLS. This involves changes to Barbican as well, for key management for the shared keys between services, to ensure that your traffic is secure between components of your cloud, which may be located in different data centers around the world.

Collaboration: Something I heard more this year than in previous years was talk of collaboration between projects. This has, of course, always been happening. However at the PTG in Atlanta, it was a major focus, with time set aside for cross-project meetings focusing on the interface between one service and another. I also heard from several people that the PTG allowed a focus, and a camaraderie, that was not possible when the design summit was part of OpenStack Summit. This resulted in fewer interpersonal tensions, and a lot more work getting done.

Everything else: The difficulty with OpenStack is that it’s just so big. While these are the things that stood out to me, someone else is likely to pull out different highlights, depending on their interests. So I encourage you to look at some of the other “What’s new in Ocata” articles out there, especially “53 new things to look for in OpenStack Ocata” – https://www.mirantis.com/blog/53-new-things-to-look-for-in-openstack-ocata/ – and, if you have a lot of time or an interest in a particular project, check out the official release notes – https://releases.openstack.org/ocata/. And take a moment to watch my interviews with various Red Hat OpenStack engineers from the Atlanta PTG, here: https://goo.gl/3aCQ2d.

by rbowen at April 05, 2017 03:59 PM

Eran Rom

Machine Learning with Storlets

While working together with the IOStack folks on a paper that shows how storlets can be used to boost Spark SQL workloads, two of my colleagues, Raul Gracia Tinedo and Yosef Moatti, brought up the idea of doing machine learning with Storlets. Out of sheer ignorance, I was against the idea. I was wrong, and in more than one way. Now that I am a little less ignorant about machine learning, I can say that storlets can be useful for machine learning in several ways, which I describe in this blog.

by eran at April 05, 2017 01:24 PM

OpenStack Superuser

Tips for surviving the OpenStack review process

Ihar Hrachyshka is a seasoned OpenStack Neutron contributor. He joined the community back in Havana (2013) and is proud to be part of Red Hat OpenStack Networking team.

In the OpenStack community, we often emphasize that reviews are very important and that the best way to get up to speed with the community and to understand how it operates, as well as to get things done, is to join the hordes of reviewers for the project of your interest.

Indeed, reviews from newcomers are of enormous importance: doing reviews gives you a broader picture of what happens beyond the small thing that is most dear to your heart; it naturally aligns team members along parallel efforts, it’s a good learning experience both for authors as well as reviewers themselves and of course it helps to catch bugs before buggy code merges.

Last but not least, good reviewers are eventually promoted to core reviewers, and it’s hard to know if you are good at something without trying it, so it makes sense that prospective core reviewers are expected to prove their review record before getting access to the +Workflow button. And so we emphasize the need for everyone to take part in reviews, but we don’t tell people how to do it right.

Of course, people make mistakes, and that’s OK. Core reviewers are no different. More so, falling into the whirlpool of day-to-day reviews sometimes blurs the whole picture of why we do them in the first place (to merge code), with reviewers competing over how many -1s they can put on others’ patches, however tiny the spotted mistake is. The nit-picking and yak-shaving traditions in the OpenStack community are well known and can frustrate and alienate some contributors, especially newcomers.

And so I posted a thread of tweets to remind people who share our love for OpenStack that reviews are not the end goal: quality, modern code shipped on time is.

Superuser is always interested in community content. Get in touch at editorATopenstack.org

The post Tips for surviving the OpenStack review process appeared first on OpenStack Superuser.

by Ihar Hrachyshka at April 05, 2017 11:08 AM

April 04, 2017

Mirantis

Let’s Meet At OpenStack Summit In Boston!

The citizens of Cloud City are suffering — Mirantis is here to help! We’re planning to have a super time at summit, and hope that you can join us in the fight against vendor lock-in. Come to booth C1 to power up on the latest technology and our revolutionary Mirantis Cloud Platform. If you’d like … Continued

by Dave Van Everen at April 04, 2017 11:30 PM

RDO

Steve Hardy: OpenStack TripleO in Ocata, from the OpenStack PTG in Atlanta

Steve Hardy talks about TripleO in the Ocata release, at the Openstack PTG in Atlanta.

Steve: My name is Steve Hardy. I work primarily on the TripleO project, which is an OpenStack deployment project. What makes TripleO interesting is that it uses OpenStack components primarily in order to deploy a production OpenStack cloud. It uses OpenStack Ironic to do bare metal provisioning. It uses Heat orchestration in order to drive the configuration workflow. And we also recently started using Mistral, which is an OpenStack workflow component.

So it's kind of different from some of the other deployment initiatives. And it's a nice feedback loop where we're making use of the OpenStack services in the deployment story, as well as in the deployed cloud.

This last couple of cycles we've been working towards more composability. That basically means allowing operators more flexibility with service placement, and also allowing them to define groups of nodes in a more flexible way, so that you could either specify different configurations - perhaps you have multiple types of hardware for different compute configurations for Nova - or perhaps you want to scale services into particular groups of clusters for particular services.

It's basically about giving operators more choice and flexibility in how they deploy their architecture.

Rich: Upgrades have long been a pain point. I understand there's some improvement in this cycle there as well?

Steve: Yes. Having delivered composable services and composable roles for the Newton OpenStack release, the next big challenge was upgrades: once operators have the flexibility to deploy services on arbitrary nodes in their OpenStack environment, you need some way to upgrade, and you can't necessarily make assumptions about which service is running on which group of nodes. So we've implemented the new feature, which is called composable upgrades. That uses some Heat functionality combined with Ansible tasks, in order to allow very flexible, dynamic definition of what upgrade actions need to take place when you're upgrading some specific group of nodes within your environment. That's part of the new Ocata release. It's hopefully going to provide a better upgrade experience for end-to-end upgrades of all the OpenStack services that TripleO supports.

Rich: It was a very short cycle. Did you get done what you wanted to get done, or are things pushed off to Pike now?

Steve: I think there's a few remaining improvements around operator-driven upgrades, which we'll be looking at during the Pike cycle. It certainly has been a bit of a challenge with the short development timeframe during Ocata. But the architecture has landed, and we've got composable upgrade support for all the services in Heat upstream, so I feel like we've done what we set out to do in this cycle, and there will be further improvements around operator-drive upgrade workflow and also containerization during the Pike timeframe.

Rich: This week we're at the PTG. Have you already had your team meetings, or are they still to come?

Steve: The TripleO team meetings start tomorrow, which is Wednesday. The previous two days have mostly been cross-project discussion. Some of which related to collaborations which may impact TripleO features, some of which was very interesting. But the TripleO schedule starts tomorrow - Wednesday and Thursday. We've got a fairly packed agenda, which is going to focus around - primarily the next steps for upgrades, containerization, and ways that we can potentially collaborate more closely with some of the other deployment projects within the OpenStack community.

Rich: Is Kolla something that TripleO uses to deploy, or is that completely unrelated?

Steve: The two projects are collaborating. Kolla provides a number of components, one of which is container definitions for the OpenStack services themselves, and the containerized TripleO architecture actually consumes those. There are some other pieces which are different between the two projects. We use Heat to orchestrate container deployment, and there's an emphasis on Ansible and Kubernetes on the Kolla side, where we're having discussions around future collaboration.

There's a session planned on our agenda for a meeting between the Kolla Kubernetes folks and TripleO folks to figure out if there's long-term collaboration there. But at the moment there's good collaboration around the container definitions and we just orchestrate deploying those containers.

We'll see what happens in the next couple of days of sessions, and getting on with the work we have planned for Pike.

Rich: Thank you very much.

by Rich Bowen at April 04, 2017 08:45 PM

Mirantis

We installed an OpenStack cluster with close to 1000 nodes on Kubernetes. Here’s what we found out.

We did tests deploying close to 1000 OpenStack nodes on a pre-installed Kubernetes cluster as a way of finding out what problems you might run into, and fixing them, if at all possible.

by Yury Taraday at April 04, 2017 06:19 PM

RDO

Using a standalone Nodepool service to manage cloud instances

Nodepool is a service used by the OpenStack CI team to deploy and manage a pool of devstack images on a cloud server for use in OpenStack project testing.

This article presents how to use Nodepool to manage cloud instances.

Requirements

For the purpose of this demonstration, we'll use a CentOS system and the Software Factory distribution to get all the requirements:

sudo yum install -y --nogpgcheck https://softwarefactory-project.io/repos/sf-release-2.5.rpm
sudo yum install -y nodepoold nodepool-builder gearmand
sudo -u nodepool ssh-keygen -N '' -f /var/lib/nodepool/.ssh/id_rsa

Note that this installs nodepool version 0.4.0, which relies on Gearman and still supports snapshot-based images. More recent versions of Nodepool require a Zookeeper service and only support diskimage-builder images; the usage is similar, though, and easy to adapt.

Configuration

Configure a cloud provider

Nodepool uses os-client-config to define cloud providers and it needs a clouds.yaml file like this:

cat > /var/lib/nodepool/.config/openstack/clouds.yaml <<EOF
clouds:
  le-cloud:
    auth:
      username: "${OS_USERNAME}"
      password: "${OS_PASSWORD}"
      auth_url: "${OS_AUTH_URL}"
    project_name: "${OS_PROJECT_NAME}"
    regions:
      - "${OS_REGION_NAME}"
EOF

Using the OpenStack client, we can verify that the configuration is correct and get the available network names:

sudo -u nodepool env OS_CLOUD=le-cloud openstack network list
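Note that the heredoc delimiter in the snippet above is unquoted, so the OS_* variables are substituted from your shell environment at the moment the file is written; export them first. A self-contained illustration of that behavior, using throwaway values and a temporary path:

```shell
# Unquoted heredocs expand variables at write time, so the OS_* values
# must already be set in the environment. Throwaway values for the demo:
export OS_USERNAME=demo OS_PASSWORD=secret \
       OS_AUTH_URL=http://keystone.example.com:5000/v3 \
       OS_PROJECT_NAME=demo-project OS_REGION_NAME=RegionOne
cat > /tmp/clouds-demo.yaml <<EOF
clouds:
  le-cloud:
    auth:
      username: "${OS_USERNAME}"
      password: "${OS_PASSWORD}"
      auth_url: "${OS_AUTH_URL}"
    project_name: "${OS_PROJECT_NAME}"
    regions:
      - "${OS_REGION_NAME}"
EOF
# The expanded value, not the variable name, ends up in the file:
grep 'username:' /tmp/clouds-demo.yaml
```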

Diskimage builder elements

Nodepool uses disk-image-builder to create images locally so that the exact same image can be used across multiple clouds. For this demonstration we'll use a minimal element to set up basic ssh access:

mkdir -p /etc/nodepool/elements/nodepool-minimal/{extra-data.d,install.d}

In extra-data.d, scripts are executed outside of the image; the one below is used to authorize ssh access:

cat > /etc/nodepool/elements/nodepool-minimal/extra-data.d/01-user-key <<'EOF'
#!/bin/sh
set -ex
cat /var/lib/nodepool/.ssh/id_rsa.pub > $TMP_HOOKS_PATH/id_rsa.pub
EOF
chmod +x /etc/nodepool/elements/nodepool-minimal/extra-data.d/01-user-key

In install.d, scripts are executed inside the image; the following is used to create a user and install the authorized_keys file:

cat > /etc/nodepool/elements/nodepool-minimal/install.d/50-jenkins <<'EOF'
#!/bin/sh
set -ex
useradd -m -d /home/jenkins jenkins
mkdir /home/jenkins/.ssh
mv /tmp/in_target.d/id_rsa.pub /home/jenkins/.ssh/authorized_keys
chown -R jenkins:jenkins /home/jenkins

# Nodepool expects this dir to exist when it boots slaves.
mkdir /etc/nodepool
chmod 0777 /etc/nodepool
EOF
chmod +x /etc/nodepool/elements/nodepool-minimal/install.d/50-jenkins

Note: all the examples in this article are available in this repository: sf-elements. More information on creating elements is available here.

Nodepool configuration

Nodepool main configuration is /etc/nodepool/nodepool.yaml:

elements-dir: /etc/nodepool/elements
images-dir: /var/lib/nodepool/dib

cron:
  cleanup: '*/30 * * * *'
  check: '*/15 * * * *'

targets:
  - name: default

gearman-servers:
  - host: localhost

diskimages:
  - name: dib-centos-7
    elements:
      - centos-minimal
      - vm
      - dhcp-all-interfaces
      - growroot
      - openssh-server
      - nodepool-minimal

providers:
  - name: default
    cloud: le-cloud
    images:
      - name: centos-7
        diskimage: dib-centos-7
        username: jenkins
        private-key: /var/lib/nodepool/.ssh/id_rsa
        min-ram: 2048
    networks:
      - name: defaultnet
    max-servers: 10
    boot-timeout: 120
    clean-floating-ips: true
    image-type: raw
    pool: nova
    rate: 10.0

labels:
  - name: centos-7
    image: centos-7
    min-ready: 1
    providers:
      - name: default

Nodepool uses a gearman server to get node requests and to dispatch image rebuild jobs. We'll use a local gearmand server on localhost. In this standalone setup, Nodepool will only respect the min-ready value and won't dynamically start nodes.

Diskimages define image names and dib elements. All the elements provided by dib, such as centos-minimal, are available; here is the full list.

Providers define specific cloud provider settings such as the network name or boot timeout. Lastly, labels define generic names for cloud images to be used by jobs definition.

To sum up, labels reference images in providers that are constructed with disk-image-builder.
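That indirection can be condensed into a minimal sketch (an illustration only, not a working configuration; it strips the settings shown above down to just the names involved):

```yaml
# label "centos-7" -> provider image "centos-7" -> diskimage "dib-centos-7"
labels:
  - name: centos-7        # generic name used by job definitions
    image: centos-7       # refers to a provider image...
providers:
  - name: default
    images:
      - name: centos-7
        diskimage: dib-centos-7   # ...which refers to a dib-built image
diskimages:
  - name: dib-centos-7    # built locally by nodepool-builder
```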

Create the first node

Start the services:

sudo systemctl start gearmand nodepool nodepool-builder

Nodepool will automatically initiate the image build, as shown in /var/log/nodepool/nodepool.log: WARNING nodepool.NodePool: Missing disk image centos-7. Image building logs are available in /var/log/nodepool/builder-image.log.

Check the building process:

# nodepool dib-image-list
+----+--------------+-----------------------------------------------+------------+----------+-------------+
| ID | Image        | Filename                                      | Version    | State    | Age         |
+----+--------------+-----------------------------------------------+------------+----------+-------------+
| 1  | dib-centos-7 | /var/lib/nodepool/dib/dib-centos-7-1490688700 | 1490702806 | building | 00:00:00:05 |
+----+--------------+-----------------------------------------------+------------+----------+-------------+

Once the dib image is ready, nodepool will upload it to the provider (nodepool.NodePool: Missing image centos-7 on default). If the image fails to build, nodepool will retry indefinitely; look for "after-error" in builder-image.log.
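To see why a build failed, grep the builder log around that marker. A sketch (it falls back to a fabricated sample line so the snippet runs even on a machine without a real log; the log path is the one used earlier in this article):

```shell
log=/var/log/nodepool/builder-image.log
if [ ! -r "$log" ]; then
  # No real builder log on this machine: fabricate one so the grep is demonstrable.
  log=/tmp/builder-image-sample.log
  printf 'dib-centos-7 build failed\nimage moved to after-error\n' > "$log"
fi
# Show the failure marker with one line of leading context
grep -B 1 "after-error" "$log"
```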

Check the upload process:

# nodepool image-list
+----+----------+----------+----------+------------+----------+-----------+----------+-------------+
| ID | Provider | Image    | Hostname | Version    | Image ID | Server ID | State    | Age         |
+----+----------+----------+----------+------------+----------+-----------+----------+-------------+
| 1  | default  | centos-7 | centos-7 | 1490703207 | None     | None      | building | 00:00:00:43 |
+----+----------+----------+----------+------------+----------+-----------+----------+-------------+

Once the image is ready, nodepool will create an instance (nodepool.NodePool: Need to launch 1 centos-7 nodes for default on default):

# nodepool list
+----+----------+------+----------+---------+---------+--------------------+--------------------+-----------+------+----------+-------------+
| ID | Provider | AZ   | Label    | Target  | Manager | Hostname           | NodeName           | Server ID | IP   | State    | Age         |
+----+----------+------+----------+---------+---------+--------------------+--------------------+-----------+------+----------+-------------+
| 1  | default  | None | centos-7 | default | None    | centos-7-default-1 | centos-7-default-1 | XXX       | None | building | 00:00:01:37 |
+----+----------+------+----------+---------+---------+--------------------+--------------------+-----------+------+----------+-------------+

Once the node is ready, you have completed the first part of the process described in this article and the Nodepool service should be working properly. If the node goes directly from the building to the delete state, Nodepool will try to recreate the node indefinitely. Look for errors in nodepool.log. One common mistake is an incorrect provider network configuration; you need to set a valid network name in nodepool.yaml.

Nodepool operations

Here is a summary of the most common operations:

  • Force the rebuild of an image: nodepool image-build image-name
  • Force the upload of an image: nodepool image-upload provider-name image-name
  • Delete a node: nodepool delete node-id
  • Delete a local dib image: nodepool dib-image-delete image-id
  • Delete a glance image: nodepool image-delete image-id

Nodepool's "check" cron periodically verifies that nodes are available. When a node is shut down, Nodepool will automatically recreate it.

Ready to use application deployment with Nodepool

As a cloud developer, it is convenient to always have access to a fresh OpenStack deployment for testing purposes. It's easy to break things, and it takes time to recreate a test environment, so let's use Nodepool.

First we'll add a new element to pre-install the typical RDO requirements:

diskimages:
  - name: dib-rdo-ocata
    elements:
      - centos-minimal
      - nodepool-minimal
      - rdo-requirements
    env-vars:
      RDO_RELEASE: "ocata"

providers:
  - name: default
    images:
      - name: rdo-ocata
        diskimage: dib-rdo-ocata
        username: jenkins
        min-ram: 8192
        private-key: /var/lib/nodepool/.ssh/id_rsa

Then using a ready-script, we can execute packstack to deploy services after the node has been created:

labels:
  - name: rdo-ocata
    image: rdo-ocata
    min-ready: 1
    ready-script: run_packstack.sh
    providers:
      - name: default

Once the node is ready, use nodepool list to get the IP address:

# ssh -i /var/lib/nodepool/.ssh/id_rsa jenkins@node
jenkins$ . keystonerc_admin
jenkins (keystone_admin)$ openstack catalog list
+-----------+-----------+-------------------------------+
| Name      | Type      | Endpoints                     |
+-----------+-----------+-------------------------------+
| keystone  | identity  | RegionOne                     |
|           |           |   public: http://node:5000/v3 |
...

To get a new instance, either terminate the current one, or manually delete it using nodepool delete node-id. A few minutes later you will have a fresh and pristine environment!

by tristanC at April 04, 2017 02:40 PM

Aptira

OpenStack Australia Day Melbourne – 2 months to go!

Aptira - OpenStack Australia Day - Melbourne

We are now just 2 months away from OpenStack Australia Day Melbourne – and we’ve got an exciting day lined up for you so far!

In addition to the unique sponsor and catering area that we mentioned earlier, OpenStack Australia Day Melbourne will have three tracks running: the business track, the technical track and also the new education track:

  • Business track
    This track is suitable for all audiences, and will feature business-style sessions, case studies and strategies from Enterprises, SMEs and Vendors. Speakers are encouraged to share business-oriented user stories to highlight how real-world organisations are implementing Open Source software, as well as to share tips, products & services which would be of value to the community.
  • Technical track
    This track is aimed at intermediate to advanced audiences and will cover more in-depth technical subjects, workshops, demonstrations and hands-on labs. If you’re a Sysadmin, Developer, DevOps or any OpenStack User, share your love (or hate!) for working with Open Source software here. Get the audience involved with a hands-on lab or training session.
  • Educational track
    This track is suitable for all audience levels and will include sessions supporting Education, Research and Innovation. We would highly encourage staff and students working in research institutions to submit sessions highlighting how you are using OpenStack to enable faster innovation.

Speaker submissions are still open. If you’d like to present, please submit your talk at australiaday.openstack.org.au. Speaker submissions are due to close on the 8th of May. However, earlier submissions may be added to the agenda prior to this date if approved, so we recommend submitting your talk as soon as possible.

Also – this is the final week to secure an early bird ticket for OpenStack Australia Day. Your ticket will provide full access to all talks and networking areas, so don’t miss out!

The post OpenStack Australia Day Melbourne – 2 months to go! appeared first on Aptira Cloud Solutions.

by Jessica Field at April 04, 2017 11:33 AM

OpenStack Superuser

Users need advocates to navigate the “cloudy” waters of open-source software

Melvin Hillsman is one of the new members of the OpenStack User Committee, which helps increase operator involvement, collects feedback from the community, works with user groups around the globe and parses through user survey data, to name a few of its efforts. Hillsman was recently elected to the five-member group along with MIT’s Jonathan Proulx and Shamail Tahir of athenaHealth.

Hillsman has worked at Rackspace since 2014 and is currently at the OpenStack Innovation Center as a DevOps technical lead. You’ll also find him giving two presentations and a lightning talk at the upcoming OpenStack Boston Summit.

 He talks to Superuser about what users want now, how OpenStack can live up to being the ‘Linux of cloud computing’ and juggling competing priorities.

You’ve been involved with OpenStack since the beginning – how would you describe its evolution?

I would love to take credit for being “involved” since the very beginning but I actually cannot; so sorry for those who think I have been… But I think it’s a testament to how great the OpenStack community is that I have been welcomed with grace.

I first heard of OpenStack around the time of Grizzly, maybe Folsom and it was basically a black box to me. I was working for a shared hosting provider and cloud computing for me at that time was still not a need or concern for my day to day work. I was interested however in OpenStack as I had heard of it during one of many water cooler talks. I began to read more about cloud in general and OpenStack specifically and have to admit, I was one who decided to stay away for a little bit to see how things panned out. I mean, our customers were not, and honestly most have still not, gotten to the point of taking full advantage of the model cloud computing offers, even though this is rapidly changing every day.

Icehouse was my first foray into the world of OpenStack deployment which I rolled by hand on a single node with multiple VMs using CentOS; my makeshift solution following a type of all-in-one method a lot of newcomers use today.

Since Icehouse, OpenStack has had significant changes and continues to evolve. I would say OpenStack has evolved as you'd expect from an open-source software project that has landed well: driving hard to stabilize the code base and, as popularity increases, working just as hard to implement user feedback.

What are some of the most pressing issues facing OpenStack users now? In the next year?

At the recent Operators Meetup it was clear that users are looking for guidance as OpenStack and technology continue to change very rapidly. Users wanting to adopt OpenStack, and those who already have, face the same challenge: a knowledge gap between not getting enough detail and getting too much. Striking a good balance by providing recommended architectures for deployment, logging and monitoring, and incorporating adjacent technologies were just a few of the things that stood out.

I believe in the next year users will continue to want this because of the increase in open-source software from OpenStack IaaS, up through SaaS and PaaS solutions. Users will continue to need advocates to assist them in navigating the “cloudy” waters of open-source software aimed at addressing their needs; users are always looking for the most efficient, cost effective and beautifully crafted solution.

You’ve also been involved with a lot of “extracurricular” OpenStack activities, organizing meet-ups etc. What advice do you have for people who want to get more involved? Any thoughts on avoiding burnout?

Burnout? What does that mean, hehe. Joking aside, the best advice I can offer is just to start somewhere. You can join an IRC meeting, attend a local user group meeting, get to the OpenStack Summit, or any other event hosted by any other OSS group. I got involved in OpenStack extracurricular activities because I believed I saw a need.

In regards to time management, I have a philosophy, you cannot manage time, but you have to work to not allow it to manage you. I am quite a transparent person and so I honestly am not the best at managing how I use my time; working on it, don’t judge me. If you love what you are doing as I do, burnout is a word that “does not compute.” You will have peaks and valleys, highs and lows, but such is life right? If you want to achieve something, even if it is just being involved in a local meetup, you have as much time as everyone else, adjust your daily activities to allow room for participation.

You mention on your bio your interest in keeping up with cloud trends – what are the most important forces shaping it now?

Open-source software has been disrupting traditional technology across pretty much every sector and cloud computing has been helping this exponentially. Cloud trends are driven by more than just how many resources can be pooled, or whether someone can offer a service in the cloud versus on premise hosting and management of a service. I would say the most important force shaping “the cloud” is that ever elusive feeling of security. I offer maybe a different perspective on it in that it is more about stewardship of developers and those enabling developers. It is important in my opinion to consider the impact of this explosion and how we are able or unable to handle the implications.

Emerging markets, increases in automation, greater accessibility to tools and data, shorter systems development life cycles and other triggers should cause us all to take a step back and consider the result of our mad dash into the future. Too often we consider the benefits of our work, and much less the disadvantages it creates.

Anything else you’d like to add?

I would like to say to the OpenStack community that I truly believe we are in a critical time where OpenStack stands to be the bedrock of a technology world unprecedented in our history; hindsight is 20/20. OpenStack is positioned as the Linux of cloud computing and we should be encouraged as well as increasingly thoughtful of how we can live up to this. What we decide to do, who we decide to listen to and how we decide to respond is extremely important for our future.

Get involved!

If you’re an OpenStack user, take the OpenStack User Survey and join the mailing list to help the User Committee better define user requirements and serve the community. If you’re interested in volunteering to help with User Committee efforts, please complete this form.

Cover Photo // CC BY NC

The post Users need advocates to navigate the “cloudy” waters of open-source software appeared first on OpenStack Superuser.

by Nicole Martinelli at April 04, 2017 11:02 AM

Red Hat Stack

Red Hat Summit 2017 – Planning your OpenStack labs

This year in Boston, MA you can attend the Red Hat Summit 2017, the event to get your updates on open source technologies and meet with all the experts you follow throughout the year.

It’s taking place from May 2-4 and is full of interesting sessions, keynotes, and labs.

This year I was part of the process of selecting the labs you are going to experience at Red Hat Summit and wanted to share here some to help you plan your OpenStack labs experience. These labs are for you to spend time with the experts who will teach you hands-on how to get the most out of your Red Hat OpenStack product.

Each lab is a 2-hour session, so planning is essential to getting the most out of your days at Red Hat Summit.

As you might be struggling to find and plan your sessions together with some lab time, here is an overview of the labs you can find in the session catalog for exact room and times. Each entry includes the lab number, title, abstract, instructors and is linked to the session catalog entry:

L103175 – Deploy Ceph Rados Gateway as a replacement for OpenStack Swift

Come learn about these new features in Red Hat OpenStack Platform 10: There is now full support for Ceph Rados Gateway, and “composable roles” let administrators deploy services in a much more flexible way. Ceph capabilities are no longer limited to block only. With a REST object API, you are now able to store and consume your data through a RESTful interface, just like Amazon S3 and OpenStack Swift. Ceph Rados Gateway has a 99.9% API compliance with Amazon S3, and it can communicate with the Swift API. In this lab, you’ll tackle the REST object API use case, and to get the most of your Ceph cluster, you’ll learn how to use Red Hat OpenStack Platform director to deploy Red Hat OpenStack Platform with dedicated Rados Gateways nodes.

Instructors: Sebastien Han, Gregory Charot, Cyril Lopez

 

L104387 – Hands on for the first time with Red Hat OpenStack Platform

In this lab, an instructor will lead you in configuring and running core OpenStack services in a Red Hat OpenStack Platform environment. We’ll also cover authentication, compute, networking, and storage. If you’re new to Red Hat OpenStack Platform, this session is for you.

Instructors: Rhys Oxenham, Jacob Liberman, Guil Barros

 

L102852 – Hands on with Red Hat OpenStack Platform director

Red Hat OpenStack Platform director is a tool set for installing and managing Infrastructure-as-a-Service (IaaS) clouds. In this two-hour instructor-led lab, you will deploy and configure a Red Hat OpenStack Platform cloud using OpenStack Platform director. This will be a self-paced, hands-on lab, and it’ll include both the command line and graphical user interfaces. You’ll also learn, in an interactive session, about the architecture and approach of Red Hat OpenStack Platform director.

Instructors: Rhys Oxenham, Jacob Liberman

 

L104665 – The Ceph power show—hands on with Ceph

Join our Ceph architects and experts for this guided, hands-on lab with Red Hat Ceph Storage. You’ll get an expert introduction to Ceph concepts and features, followed by a series of live interactive modules to gain some experience. This lab is perfect for users of all skills, from beginners to experienced users who want to explore advanced features of OpenStack storage. You’ll get some credits to the Red Hat Ceph Storage Test Drive portal that can be used later to learn and evaluate Red Hat Ceph Storage and Red Hat Gluster Storage. You’ll leave this session having a better understanding of Ceph architecture and concepts, with experience on Red Hat Ceph Storage, and the confidence to install, set up, and provision Ceph in your own environment.

Instructors: Karan Singh, Kyle Bader, Daniel Messer

As you can see, there is plenty of OpenStack in these hands-on labs to get you through the week and hope to welcome you to one or more of the labs!

by Eric D. Schabell at April 04, 2017 09:00 AM

Mirantis

Intelligent NFV performance with OpenContrail

When it comes to NFV, Packets Per Second (PPS) is often overlooked, but it's vitally important. How do you get the most of it over OpenContrail in OpenStack?

by Guest Post at April 04, 2017 12:25 AM

April 03, 2017

OpenStack Superuser

How to organize an OpenStack Operators Meetup

 Mariano Cunietti is the CTO of Enter, a public cloud provider based in Italy. Cunietti has been involved with OpenStack since 2013 and has been lobbying almost since he started to host an event in his native country. He made it happen with the recent Operators Meetup, hosting it along with sponsors including Bloomberg and Switch in the town where he’s based, Milan. Here are his tips and strategies for making an OpenStack event successful — and local.

When we started organizing this operators mid-cycle meetup we had no idea what it meant to gather so many people — especially operators. This last cycle, the two last standing competitors to host the Operators Meetup were Milan and Tokyo. Tokyo had already hosted the Summit last year so it was finally our opportunity to bring part of the global OpenStack community to Italy.
Since the Manchester Operators Meetup, organized last year by our friends at Datacentred.co.uk, the event has grown a lot. The operators mailing list is now super-active and involves lots of people, especially in Europe. There was no need to advertise it, but the Operators Meetup was not meant to be an EU-centric meeting, so we targeted anyone interested in coming to share operational experience and (why not?) spend a few days in a warm country with great food and plenty of things to see.
Being an absolute beginner at organizing events, I needed a lot of help from the community and  felt a bit lost initially. Matt Jarvis, who organized the Manchester event, helped me get my bearings by kindly sharing all of his experience on venue, printing, catering, evening events etc.  I found his hints so helpful that I want to share mine here for the next Ops Meetup organizer.

Full house

For starters, you need to understand how many people you can host: target between 100 and 150 attendees max. Whether you're renting space or running it at your office, you need to provide the right number of seats and rooms. You'll need a main room that can host all of the people at once (in our case 130 seats), plus some smaller rooms for secondary tracks: at least three rooms in total, including the plenary room.

Since rooms depend upon the agenda, you’ll be happy to know that the Foundation and the community will be in charge of defining topics, submitting proposals, voting and scheduling, according to the room size you are providing. You’ll end up receiving a spreadsheet with all the information you need to be printed and the rooms to be prepared.

Just remember to help the sponsors understand that talks are not sponsored and they must be strictly technology and open-source related. There is no “Demo Theater” at the Ops Meetup. Talks and discussions are subject to open voting and sponsors can propose and apply (and vote) as anyone else.

Room with a view (or better: outlets and Wi-fi)

Provide a projector, a whiteboard, colored markers and at least two microphones per room.
One mike is for the moderator and the other is for the audience; having just one makes it trickier to hear or to move around. Do not forget to provide proper access to Wi-Fi (with a themed SSID) and plenty of power outlets, as scarcity of either tends to make attendees pretty restless during the sessions.
You may want to hire a professional photographer or video maker to record sessions and give an overall feel for the event and to share this material after or even during the event. We did, but you can also rely upon the attendees to provide their pictures. If this is the case, a clear Twitter hashtag helps a lot in collecting them afterwards. Ah, this may seem obvious but size your restrooms accordingly. You definitely don’t want people spending session time nervously in line for the toilets.

Location, location, location

Once you have the overall attendee total and the venue, you need to right-size all the number-dependent facilities: accommodation, catering and evening events. For the accommodation, find hotels close to the meetup venue. A good rule of thumb is no farther than a 15-minute walk. Most people will want to be within walking distance, and on the second morning, after the night event, chances are that hangover/jet lag will make the morning walk to the sponsored coffee feel much longer.

The Milan Operators meetup with organizer Mariano Cunietti waving far left. Photo: courtesy Melvin Hillsman.

Also, the hotels must have decent access to transportation: most attendees come before or stay after to visit the city. You don’t want those folks to get lost in some unknown part of the city and end up wandering in late. Find two or three alternatives, possibly giving at least two pricing options and get a deal on discount codes. We targeted 80 percent of the people coming alone and 20 percent with family, so we arranged single and double room reservations accordingly.

Food matters

We catered two lunches and some local snacks for breakfast and mid-afternoon breaks, and arranged some self-service coffee stations (espresso and American), but we unfortunately didn't properly address tea-drinking attendees. This is easily solved, and the money saved on waiters is better spent on food quality.
Being Italian, we knew that people were expecting us to raise the bar on food and we went for the “wow” effect (pasta with fresh burrata mozzarella made in front of people in the largest pan I’ve ever seen, check it out here). Just remember to plan for veg[etari]ans/kosher/etc. and other dietary restrictions when choosing the menu. OpenStack is all about diversity!

Hard-working OpenStack operators deploy lunch in Milan. Photo: courtesy Enter.eu.

Signing bonus

For the event, the Foundation kindly offered to cover registration fees. You only need to tweak the registration fee accordingly, but remember that Ops Meetups are mainly covered by the sponsors and not by attendees. A maximum of $20 was fine for us (it covered beer, wine or the must-have aperitif, Spritz) plus some food. We definitely wanted to give Stackers a taste of the Italian lifestyle and the Navigli location was a perfect match.

You’ll need some print collateral to provide on-site orientation and copies of the agenda. Also, some attendees requested an online version of the agenda; Sched would suit that need but a simple static page on a website would also suffice.

We gave out some paper folders at the registration desk with the Meetup agenda (with Wi-Fi and Twitter hashtag information on top), a venue map, sponsor leaflets and night event directions. Remember that many people are coming from far away and they need to be told exactly how to move around your town safely. We provided three options for reaching the evening event venue, ranging from the fastest one by taxi (with help on carpooling) to a city sightseeing one with a local tram. Also, don’t overlook pinpointing major local tourist attractions – if attendees travel far to come to your event, make sure they don’t miss out.

You will also need some signage at the meetup venue, especially in front of the meeting rooms, to help people not to get lost. Finally, a couple of roll-ups (ideally one per meeting room plus one at the entrance or in the lunch room) with sponsor logos helps compensate the sponsors for their efforts. The main goal is to have the logos visible in every picture or video shot (and shared on social networks) during the talk.

And always remember: without sponsors the Meetup can’t take place, so pay attention to their needs and show some gratitude!

That’s pretty much it. If you want to hear more about the Milan meetup, check out Robert Starmer’s excellent recap video or read the Etherpad feedback directly from participants!

Get involved!

If you want to bring the next Ops Meetup to your town, pay a visit here. The Ops Meetup Team meets regularly on IRC (Freenode, #openstack-operators channel) to discuss all the Meetup topics there. If you missed previous meetings, don’t worry: you can always eavesdrop on them!

The post How to organize an OpenStack Operators Meetup appeared first on OpenStack Superuser.

by Superuser at April 03, 2017 10:59 AM

March 31, 2017

OpenStack Blog

User Group Newsletter March 2017

User Group Newsletter March 2017

 

BOSTON SUMMIT UPDATE

Exciting news! The schedule for the Boston Summit in May has been released. You can check out all the details on the Summit schedule page.

Travelling to the Summit and need a visa? Follow the steps in this handy guide.

If you haven’t registered, there is still time! Secure your spot today! 

 

HAVE YOUR SAY IN THE SUPERUSER AWARDS!


The OpenStack Summit kicks off in less than six weeks and seven deserving organizations have been nominated to be recognized during the opening keynotes. For this cycle, the community (that means you!) will review the candidates before the Superuser editorial advisors select the finalists and ultimate winner. See the full list of candidates and have your say here. 

 

COMMUNITY LEADERSHIP CHARTS COURSE FOR OPENSTACK

About 40 people from the OpenStack Technical Committee, User Committee, Board of Directors and Foundation Staff convened in Boston to talk about the future of OpenStack. They discussed the challenges we face as a community, but also why our mission to deliver open infrastructure is more important than ever. Read the comprehensive meeting report here.

 

NEW PROJECT MASCOTS

Fantastic new project mascots were released just before the Project Teams Gathering. Read the story behind your favourite OpenStack project mascot in this Superuser post.

 

WELCOME TO OUR NEW USER GROUPS

We have some new user groups which have joined the OpenStack community.

Canary Islands, Spain

Mexico City, Mexico

We wish them all the best with their OpenStack journey and can’t wait to see what they will achieve! Looking for your local group? Are you thinking of starting a user group? Head to the groups portal for more information.

 

LOOK OUT FOR YOUR FELLOW STACKERS AT COMMUNITY EVENTS
OpenStack is participating in a series of upcoming Community events this April.

April 3: Open Networking Summit Santa Clara, CA

  • OpenStack is sponsoring the Monday evening Open Source Community Reception at Levi's Stadium
  • Ildiko Vancsa will be speaking in two sessions:
  • Monday, 9:00-10:30am, "The Interoperability Challenge in Telecom and NFV Environments," with EANTC Director Carsten Rossenhovel and Chris Price, room 207
  • Thursday, 1:40-3:30pm, OpenStack Mini-Summit, on "OpenStack: Networking Roadmap, Collaboration and Contribution," with Armando Migliaccio and Paul Carver from AT&T; Grand Ballroom A&B

 

April 17-19: DockerCon, Austin, TX

  • OpenStack will be in booth #S25

 

April 19-20: Global Cloud Computing Open Source Summit, Beijing, China

  • Mike Perez will be delivering an OpenStack keynote

 

OPENSTACK DAYS: DATES FOR YOUR CALENDAR

We have lots of OpenStack Days coming up:

June 1: Australia

June 5: Israel

June 7: Budapest

June 26: Germany Enterprise (DOST)

Find further information about OpenStack Days on this website: you'll find a FAQ, highlights from previous events and an extensive toolkit for hosting an OpenStack Day in your region.

 

CONTRIBUTING TO UG NEWSLETTER

If you’d like to contribute a news item for next edition, please submit to this etherpad.

Items submitted may be edited down for length, style and suitability.

This newsletter is published on a monthly basis.


by Sonia Ramza at March 31, 2017 01:11 PM

About

Planet OpenStack is a collection of thoughts from the developers and other key players of the OpenStack projects. If you are working on OpenStack technology you should add your OpenStack blog.

Last updated:
April 23, 2017 01:36 PM
All times are UTC.
