We were pleased to come across an assessment of the Obama campaign, already recognized for its opportunistic use of big data, from a different angle. In a two-part interview with Dylan Richard, Director of Engineering at Obama for America, Logicworks’ Gathering Clouds blog explains the details of the campaign’s automated operational environment and its use of DevOps.
Automation was essential to success. The campaign’s infrastructure spanned three environments. The production environment ran in Amazon Web Services (AWS) Elastic Compute Cloud (EC2). Testing and build validation happened in a smaller-scale Amazon-hosted staging environment as well as in an internal testing environment. Release management and mediation between the three environments were handled through Puppet and an internally built application repository.
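To make the idea concrete, here is a minimal sketch of how Puppet can pin a different application version per environment. The class name, package name, versions, and repository URL are all hypothetical illustrations, not taken from the campaign’s actual manifests:

```puppet
# Hypothetical sketch: one manifest, three environments, each pinned
# to its own release. Names and versions below are illustrative only.
class campaign::app (
  $app_env = $::environment,  # 'production', 'staging', or 'testing'
) {
  $versions = {
    'production' => '1.4.2',
    'staging'    => '1.5.0-rc1',
    'testing'    => 'latest',
  }

  package { 'campaign-app':
    ensure => $versions[$app_env],
  }
}
```

Driving the version pin from `$::environment` means promoting a release from staging to production is a one-line data change rather than a manual deploy.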
DevOps thinking also helped with managing risk. The team ran three availability zones in AWS in the eastern US and had the foresight to bring on a fourth “warm” failover in the western US as Hurricane Sandy approached. The western location was a read-only facility, so the campaign could continue uninterrupted even if all three eastern zones went down. Within each application there were multiple failover levels, not just for the applications, but for separate functions within the applications, so that core features for basic operations could be isolated from risk factors.
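One way to express that kind of read-only degradation in Puppet is to gate write-dependent services on the node’s region. Everything here — the fact name, class names, and parameter — is an assumption for illustration, not the campaign’s actual code:

```puppet
# Hypothetical illustration: a western replica stays read-only while
# eastern nodes run the full stack. Fact and class names are assumed.
$readonly = $::ec2_region ? {
  'us-west-1' => true,
  default     => false,
}

class { 'app::frontend':
  readonly => $readonly,   # core read paths stay up everywhere
}

if ! $readonly {
  include app::writers     # write-path services run in the east only
}
```

The point of the pattern is that failover behavior lives in version-controlled configuration rather than in a runbook someone has to execute during an emergency.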
A true understanding of the interdependencies and a well-thought-out plan not only made the campaign electorally successful; they set a new bar for scalability, redundancy, and tech team cohesion.
Here at Puppet Labs, we love Spotify (as evidenced by our PuppetConf playlists). A few weeks back, we had the pleasure of hosting the “PuppetDB at Spotify” webinar with Spotify Site Reliability Engineer Erik Dalén. Puppet Labs engineers Nick Lewis and Deepak Giridharagopal walked through how and why they created PuppetDB, and Erik described how he’s using it in production, and the custom queries he’s written for it.
For those who might not know, PuppetDB is a storage service for your Puppet-produced data. The webinar covers many frequently asked questions about PuppetDB’s databases, how to write custom queries on top of the storage service, and how to get the most out of your PuppetDB deployment.
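As a taste of what custom queries can look like, here is a sketch using the `puppetdbquery` module (written by Erik Dalén) to turn a PuppetDB query into a list of nodes from within a manifest. The class and fact values are examples of ours, not queries from the webinar:

```puppet
# Sketch: find matching nodes via the dalen/puppetdbquery module's
# query_nodes function. The Haproxy class and fact are example values.
$load_balancers = query_nodes('Class[Haproxy] and kernel=Linux')

# PuppetDB also exposes an HTTP query API directly; for example
# (endpoint version and port depend on your PuppetDB release):
#   curl 'http://puppetdb:8080/v2/nodes' \
#     --data-urlencode 'query=["=", ["fact", "operatingsystem"], "Debian"]'
```

Queries like these let manifests react to the state of the whole fleet — for instance, wiring every known load balancer into a config file — instead of hardcoding node lists.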
CERN, the European Organization for Nuclear Research, has been in the news lately for its recent observation of the Higgs boson. Like any research facility dealing with big data, CERN faces infrastructure automation challenges in scaling its computing power—and is looking to move beyond homegrown scripts. Enter: Puppet.
A few of our lucky employees got the chance to tour CERN and talk with Gavin McCance of their IT department about their computing needs and plans to build out their infrastructure. Hear what he has to say in this short video (<2 minutes) below:
I’m Gavin McCance. I work in the CERN IT department, where I look after the grid compute services and the batch compute services that we use to analyze the data from the Large Hadron Collider experiment. We’re here at CERN with the Puppet Labs guys today. So what does CERN do? It’s involved in fundamental research: we have a large underground collider looking for new fundamental particles. In fact, last week we discovered the Higgs boson, which has been searched for over the last fifty years.
So, the problem we have today: for the last 10 years at CERN we’ve been managing our compute infrastructure with our own scripts and our own home-developed tools. We’re now in the process of expanding our computing resources, because LHC physics requirements are quite heavy on compute time, so we need to buy a lot more computers, and we’re also moving to virtualization. This is making our environment much more dynamic, and basically our own homegrown scripts are becoming a maintenance problem for us.
Six months ago we started looking outside for open source tools to help manage our infrastructure, and we settled on Puppet. We’ve been having really great experiences with Puppet, so we’ve been discussing with the Puppet Labs guys here today, and we’re very, very happy with Puppet. What we’re doing now: in 2013 we’re getting a new computing center as we expand our computing resources, and that center will be entirely Puppet-managed. Over the next two years we’ll be migrating our production infrastructure, the computers here, to Puppet as well. So we’re really looking forward to working with Puppet. Thank you.