In Valid Logic

Endlessly expanding technology

OpenSSL Heartbleed patches for Ubuntu 13.04

The Heartbleed bug has certainly taken things by storm this week as everyone is patching systems, generating new certs, revoking old ones, invalidating user tokens, and likely resetting passwords.

At Apcera, we were certainly hard at work getting updates rolled out to ensure we weren’t vulnerable. Phil Pennock even put together an nginx module to log the version of OpenSSL that nginx was using and ensure it errors out if it doesn’t have the correct version.

In the course of rolling out updates though, we still had a need for patching OpenSSL on Ubuntu 13.04 (raring). Raring has reached end of life, but Continuum can have multiple base OSes loaded into a cluster, and we still had customer clusters that had the old release and were using it. We also have some systems on it still, as we were holding out for 14.04, which is only a week away.

So we rolled a patched version of OpenSSL for raring using the existing build, grabbed the patches from saucy, and applied them. We’d seen other people asking how to patch raring systems on Stack Overflow and elsewhere, and decided it wouldn’t hurt to simply share the packages we’d put up.

These packages were built using brew2deb by Aman Gupta, with these patches added in.

The SHA1 checksums for them are (sorry for the line wrapping):

714fcb1d7944e60c2c44a103bd851f51607b1c56  libssl1.0.0_1.0.1c-4ubuntu8.2apcera1_amd64.deb
6ae5c0041e86829cc301dcead06efd01fb7d9b0c  libssl1.0.0-dbg_1.0.1c-4ubuntu8.2apcera1_amd64.deb
c5e3947832b54a1d1e5d8e8c0163e83b3eb21523  libssl-dev_1.0.1c-4ubuntu8.2apcera1_amd64.deb
0b62f240323ce02d83e84734dadde0bf5fb1b850  libssl-doc_1.0.1c-4ubuntu8.2apcera1_all.deb
0c135bd9e9370d4532c8c2beb51fefffc1a51231  openssl_1.0.1c-4ubuntu8.2apcera1_amd64.deb

We have also made them available from a simple apt repository. Add the following to your /etc/apt/sources.list, run an apt-get update, and grab the packages.

deb https://apcera-apt.s3.amazonaws.com public raring-openssl

The apt repo is signed by my own Apcera GPG key. You’ll likely see a message about not trusting the signing key. You can retrieve my key using the following command:

# apt-key adv --recv-keys --keyserver keyserver.ubuntu.com DB4363B3
Executing: gpg --ignore-time-conflict --no-options --no-default-keyring
--secret-keyring /tmp/tmp.nWjupL3SrM --trustdb-name /etc/apt//trustdb.gpg
--keyring /etc/apt/trusted.gpg --primary-keyring /etc/apt/trusted.gpg
--recv-keys --keyserver keyserver.ubuntu.com DB4363B3
gpg: requesting key DB4363B3 from hkp server keyserver.ubuntu.com
gpg: key DB4363B3: public key "Ken Robertson <ken@apcera.com>" imported
gpg: no ultimately trusted keys found
gpg: Total number processed: 1
gpg:               imported: 1  (RSA: 1)

I don’t necessarily expect people to keep my key around or want the apt repo for anything long term, so you can remove the apt repo afterwards and remove my key with:

# apt-key del DB4363B3
OK

Thursday, April 10, 2014

 

Desktop Environment Makeover

I recently decided to do a complete makeover of my development environment. Developers get pretty comfortable with their local development environment, which can sometimes mean living with bad habits simply because they are habits.

It had dawned on me that when I was working, I generally had too much crap open. On average, I’d have 2 terminal windows open with a total of 7 tabs. Often it would be 3 windows and 10+ tabs. I was using Sublime Text 3, and overall loved it, but would often get a number of windows open and a plethora of tabs. At times, it felt too scattered. I was wasting time tracking what I had open and where rather than getting shit done.

There was another nagging thing. Some months earlier, I had set up Emacs and was trying to make a move to it. I had tried out Emacs a couple times before, but could never get over the productivity slump with it.

I didn’t succeed and ended up back in Sublime Text. Sometime later, Derek was at my desk looking over some stuff and I made a comment that I had gone back to Sublime. He jokingly said “yeah, I knew that wasn’t going to work out.” While it was a simple poke, it stuck in my mind as a challenge.

I am not really one for New Year’s resolutions, but last month I was thinking it may be worth challenging myself… and what better way to break any habits you have than doing a complete makeover at once. Make it hard to fall back by changing the whole environment.

My complete environment change has taken me from running OS X to now using Arch Linux. To help undo the clutter and remain focused, I’ve moved to the tiling window manager AwesomeWM.

[screenshot of the AwesomeWM setup]

Awesome is incredibly customizable. I had tried out Xmonad as well, however I felt more comfortable customizing things in Lua than Haskell, and Awesome felt like it had a clearer path to my optimal setup.

My layout when coding is now one large, centered Emacs window with 4 small terminals flanking the corners. Everything I am working with is right there. No tabs. No other windows.

Other distractions are out of my way, but they’re also quickly accessible when I need them. When hooked up at my desk, my laptop screen is tiled with HipChat there in case I see something pop up, and I’m a keystroke away from grabbing the window, moving it to my main monitor, and another keystroke from putting it back where it was.

The move to Emacs has actually gone fairly well. I still have Sublime Text installed, however with also moving to Linux, I couldn’t really fall back to it and be in my comfort zone. All the key changes between OS X and Linux made it so it wasn’t the same feeling as before. I’ve gotten pretty used to how to do things, and each previous attempt to switch had gotten me gradually closer by finding packages/tools for various things.

Overall, moving to Linux has gone fairly smoothly. Setting up Arch wasn’t all that difficult. I ran Linux on my desktop several years ago and have used Debian, Ubuntu, Gentoo, Slackware, and even Red Hat (pre-RHEL). It took some tweaking to get everything on my MacBook Pro working, but pretty much all the drivers work. Some kinks remain, mostly with the video card (thanks Nvidia). Otherwise I still have Chrome and zsh, and have found equivalents for most other needs.

Some side benefits came up too. I generally find things more responsive in Linux. My whole system is set up in Chef. Linux has a lot of config files, but it’s easier to manage, track, and version them than to automate GUIs that change internal attributes.

I’ve been finding Vagrant with NFS mounts for mounting in my code works far better. The NFS server on OS X must suck, since I used to constantly get stale mounts where, inside the VM, it would see an outdated version of a file.

Friday, February 14, 2014

 

Golang Tidbit: Defer

A while ago, I did a post on Golang Oddities. I only made one post in what I intended to be a series, but at any rate, I’d realized “oddity” wasn’t really the right word. I intended it more as an interesting bit to be aware of than as a knock against the language.

One interesting bit to be aware of is how defer works within the language. An article on how defer, panic, and recover work briefly mentions something:

A deferred function’s arguments are evaluated when the defer statement is evaluated.

They offer up a simple code snippet to highlight the fact:

// try at http://play.golang.org/p/PYaIIMAqHj
package main

import "fmt"

func main() {
	i := 0
	defer fmt.Println(i)
	i++
	return
}

When run, this will print out 0 even though i++ is executed before the call to print. The way defer works is that it does everything it needs to do to get ready to execute an expression, except it delays the actual execution. So anything that is an argument to the call is evaluated at the point in the function where the defer statement appears, and the actual call executes after the return.

The behavior looks innocuous, but can manifest itself in some surprising ways. For instance:

// try at http://play.golang.org/p/DaJvfDRJ3Y
package main

import "fmt"

type printer struct {
	message string
}

func (p *printer) SetMessage(msg string) {
	p.message = msg
}

func (p *printer) Print() string {
	return p.message
}

func main() {
	p := printer{}
	p.SetMessage("Starting")
	defer fmt.Println(p.Print())
	p.SetMessage("Done")
}

This seems normal enough, but now instead of passing in a variable, the argument comes from a method call on a struct. The same behavior results: it prints "Starting" instead of "Done".

However, you also have to be aware of what is being passed into anything being evaluated. In the above examples, simple non-pointer types were being passed in. So essentially a copy of the variable was being created and passed to the call that was being deferred.

On the other hand, take the following example:

// try at http://play.golang.org/p/rIFL-dPBrW
package main

import "fmt"

func printStr(str *string) {
	fmt.Println(*str)
}

func main() {
	s := "Starting"
	defer printStr(&s)
	s = "Done"
}

In this example, a pointer to a string is being passed to the printStr function. Because a pointer is being passed in, assignments that happen after the defer statement are carried over.

So how can this be worked around? The simple way is through an inline function. Instead of calling what you want to call directly, create an inline function around it. Evaluating the function at defer time is simple, since there are usually no parameters, but when it runs, it still has the enclosing function’s variables in scope.

// try at http://play.golang.org/p/BXHnCikUQj
package main

import "fmt"

func main() {
	i := 0
	defer func() { fmt.Println(i) }()
	i++
	return
}

It is important to note the () at the end. You can’t defer a bare function value; you need to defer a function call, so the inline function needs to actually be invoked. The same is true with the go keyword when executing a statement in another goroutine.
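For example, a quick sketch of both cases:

package main

import "fmt"

func main() {
	done := make(chan bool)

	// note the trailing (): defer needs a call expression, not a bare closure
	defer func() { fmt.Println("deferred at exit") }()

	// the same rule applies to go: the closure must be invoked
	go func() {
		fmt.Println("from the goroutine")
		done <- true
	}()

	<-done
}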

Despite some of the gotchas with how defer works, it is definitely one of my favorite parts of Go. Instead of needing to scatter cleanup code around a function, it allows you to put cleanup right after dirtying. Say you have to do 5 different things which involve opening files, sockets, etc. Instead of mucking with cleaning up if the function fails at step 3 and cleaning up #1 and #2, you simply defer the cleanup after each step.

For example, take the following snippet. This is pseudo code rather than any of our actual code, but in it, we can use a local variable to track whether we succeeded, and check it on the way out to see if everything was successful.

func createUserAndDatabase(name string) error {
	// understand if we succeeded
	success := false

	// connect
	db, err := connectToDB()
	if err != nil {
		return err
	}
	defer db.Close()

	// create user
	user, err := db.CreateUser(name)
	if err != nil {
		return err
	}
	defer func() {
		if !success {
			db.DeleteUser(user)
		}
	}()

	// create database
	_, err = db.CreateDatabase(name)
	if err != nil {
		return err
	}

	// we're done, mark success
	success = true
	return nil
}

Another way it could be done is with a named return variable. In the function definition, give the error object a name and it can be accessed in the deferred call. If no error is being returned, then the function succeeded.
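A rough sketch of that approach, reusing the same hypothetical connectToDB, CreateUser, DeleteUser, and CreateDatabase helpers from the pseudo code above:

func createUserAndDatabase(name string) (err error) {
	db, err := connectToDB()
	if err != nil {
		return err
	}
	defer db.Close()

	user, err := db.CreateUser(name)
	if err != nil {
		return err
	}
	defer func() {
		// err is the named return value, so whatever the function
		// ultimately returns is visible here
		if err != nil {
			db.DeleteUser(user)
		}
	}()

	// if creating the database fails, err is non-nil on the way out
	// and the deferred cleanup above removes the user
	_, err = db.CreateDatabase(name)
	return err
}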

Overall, defer is excellent to work with, and I hope you find it awesome too.

Monday, July 01, 2013

 

One year at Apcera

We recently passed the one year mark at Apcera and I thought I’d take a brief moment to look back on how far we’ve come and just how fun the past year has been.

My actual one year mark at Apcera was two weeks ago, however Tuesday of last week is what we consider our official one year mark, since June 18th of last year was when we had our “kickoff” with everyone from the initial team.

I had started one week earlier, but my first week pretty much consisted of ordering my MacBook (I started the day the retina Macs came out too), attending True University, some small planning, and getting my new MacBook at the end of the week.

In the year since, we’ve had a lot of ups and downs. We spent the first couple months in what I called double stealth mode – nobody knew what we were doing, and we didn’t even know what we were doing. Derek essentially told everyone to design the majority of the product – without coding it – and then sell it to him. What? No coding? I can’t code the cool stuff we talk about? Nope. In the long run though, it paid off. We all learned Go in the process and it allowed us to vet some of the early, fundamental assumptions.

The biggest down came in March. Just as we were having our first Apcera family weekend skiing in Tahoe, we all came home to find out that Kirill, one of our team members who hadn’t gone up that weekend, had passed away. It was a pretty big shock to the team. We are a very small and close-knit team, and Kirill had amazing talent; he was integral to the team. It isn’t like when you come in and find that someone has given notice, where you have two weeks to process the transition. In this case, it was going home on Friday and him not being there on Monday.

But over the past year, Apcera’s culture has truly grown into something amazing. Early on, Derek was asking us what kind of culture we wanted for the company – what are the traits and how do we implement them. I don’t think culture is something you can really define and implement though. It simply happens. It is formed out of the qualities of the initial team, and cemented by the traits they look for in others as the team grows. We didn’t choose our culture, our culture just happened.

And in my opinion, we have an amazing culture. We strive very hard to be a meritocracy. In our technical discussions, we all bring varying backgrounds and, though sometimes rocky, we grind through to come up with technical designs everyone agrees on. We are strict in our code reviews. Yes, we’ll nit-pick the grammar in your log lines and your usage of indexes vs pointers in ranges, but that is because we’re all passionate about our product and about bettering each other.

But outside of our technical culture, we value one another. At lunch, our main conference room becomes our cafeteria. Our Off-Topic room in HipChat is quite off-topic (but not off-color, as the room topic emphasizes). We’re a startup and busy, but we still value balance in our lives: time off, team lunches, team dinners, and also extended friends and family. We love having friends and spouses drop by the office, and we try to plan a couple of events that involve our spouses and families as well.

One of the important aspects to me is that after a year in, I feel more connected to everyone I work with than at most previous employers. My wife knows everyone I work with, as well as their spouses and their kids (though the team is growing fast too).

A few months ago we were talking about the one year mark coming up, and that was when I realized that at about this same point at my last job, I was actively talking to Derek about joining. I left my last job after only being there 11 months. I was 10 months in at Apcera, and so far the thought of leaving had never even crossed my mind.

Now we’re just starting to unveil what we’ve been building for the past year. We haven’t really said a whole lot yet, since we’re all still pretty busy writing code to move it even further ahead, but we’ve built some pretty cool stuff so far and are looking forward to doing so for a long time to come.

Monday, June 24, 2013

 

Managing APT repositories on S3 with ease

S3 is a great place to host an APT repository… cheap storage, easily available, no server to run or anything. The annoying thing is that most ways to set it up involve using something like reprepro on a local box and using s3cmd to then sync the local files up to S3. For me, the annoyance is that I use throwaway VMs. I don’t want to store them, or I’ll remove the VM without thinking and then need to redownload everything just to add one package.

I wanted just a simple tool that would upload the new package, regenerate the Packages file with the new file added, and update the relevant hashes in the Release file. It seems simple, but I couldn’t find anything for lazy people like me.

I finally decided to scratch my own itch. Over the weekend I wrote deb-s3. Need to upload a package? No problem.

$ deb-s3 upload my-deb-package-1.0.0_amd64.deb --bucket my-bucket
>> Examining package file my-deb-package-1.0.0_amd64.deb
>> Retrieving existing package manifest
>> Uploading package and new manifests to S3
   -- Transferring pool/m/my/my-deb-package-1.0.0_amd64.deb
   -- Transferring dists/stable/main/binary-amd64/Packages
   -- Transferring dists/stable/main/binary-amd64/Packages.gz
   -- Transferring dists/stable/Release
>> Update complete.

If it is your first package, it creates all the necessary files from scratch. On your server, you can then plug it into your /etc/apt/sources.list as:

deb https://my-bucket.s3.amazonaws.com stable main

It features:

  • Specify different components or codenames as you wish. It’ll maintain other components and architectures in the Release file.
  • Options to specify the visibility of the file. You can set it to private or authenticated, and then use apt-s3 to support authenticated requests against S3.
  • Ability to sign the Release file (GPG will prompt for the passphrase)

Check it out. Use it, love it, fork it, extend it, send a pull request.

If you simply want to use it, it’s available as a gem as well.

$ gem install deb-s3

And you’re set.

Tuesday, February 26, 2013

 

Golang Oddity #1

Every language in existence has its own set of oddities. Since I’ve been working in Go full time for a couple of months now, I have run into some of its nuances and wanted to chronicle some of them.

While I am being critical in these kinds of posts, the intent isn’t to bash Go; it is more about educating others. Go isn’t breaking existing convention (too much), however it is an emerging language and there isn’t as much out there to familiarize a newbie with things to be aware of.

So to start off with, here’s something simple but very annoying:

Strings cannot be null, only empty

In Go, null (or nil) isn’t supported as broadly as I wish it were. Not every type is nilable, and this can lead to some annoyances.

All strings upon creation are simply an empty string (""). On the surface this doesn’t sound bad, but it can cause a lot of busy work when dealing with other things that allow strings to be null, or where the difference between null and empty is very important.
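For example, a string’s zero value is always the empty string, and there is no nil to distinguish “never set” from “set to empty”:

package main

import "fmt"

func main() {
  var s string         // zero value is "", not nil
  fmt.Println(s == "") // prints: true
  // s = nil           // would not compile: cannot use nil as type string
}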

Most databases have understood for a long time that a null string and an empty string are completely different. Another case is user supplied input. Go is excellent for writing servers and APIs, and a common case with an API is CRUD functionality. You might want to support a partial update, where omitted values (essentially null) are not altered while supplied values (which may be a blank string) are updated.

Take the case of user supplied input, such as over a JSON API:

// try at http://play.golang.org/p/1A7XZva4C1
package main

import (
  "encoding/json"
  "fmt"
)

type Person struct {
  Name string `json:"name"`
  Location string `json:"location"`
}

func main() {
  var p Person;
  json.Unmarshal([]byte(`{"name":"John"}`), &p)
  fmt.Printf("Name: %q\nLocation: %q\n", p.Name, p.Location)
}
Name: "John"
Location: ""

In this case I define a struct and unmarshal some JSON that only specified the name. But then you can see Location is set to "". If they already have Name and Location set, and are doing an update with only Name, I don’t want to blank out Location. Now you have to jump through hoops.

Pick up that hula hoop… welcome to casting

// try at http://play.golang.org/p/VlAJ4N9uGY
package main

import (
  "encoding/json"
  "fmt"
)

type Person struct {
  Name interface{} `json:"name"`
  Location interface{} `json:"location"`
}

func main() {
  var p Person;
  json.Unmarshal([]byte(`{"name":"John","location":"Gotham"}`), &p)

  fmt.Printf("Your zipcode is %d\n", lookupZipcode(p.Location.(string)))
}

func lookupZipcode(l string) int {
  switch l {
  case "Gotham": return 1
  case "Metropolis": return 2
  }
  return 3
}

In this mock example, the location is passed to another function to look up the zip code, but that function expects the location as a string, so now you need to cast it.

Uh oh, with interface{}, type enforcement isn’t inherent

  json.Unmarshal([]byte(`{"name":"John","location":1234}`), &p)
  ...
  fmt.Printf("Your zipcode is %d\n", lookupZipcode(p.Location.(string)))
panic: interface conversion: interface is float64, not string

goroutine 1 [running]:
main.main()
  /tmpfs/gosandbox-9aac7f9a_0c33fe58_a998effb_2a4a973a_458fb2a3/prog.go:17 +0xcd

However, when using interface{} as our type we lose the inherent type handling within json.Unmarshal, and this results in a panic rather than a returned error. In this case, the panic is on the print line rather than when unmarshaling. So now we need to do our own type validation, which has annoyances of its own.

Type checking #1

  switch p.Location.(type) {
  case string:
    fmt.Printf("Your zipcode is %d\n", lookupZipcode(p.Location.(string)))
  default:
    fmt.Println("OMG you didn't enter the right value")
  }

Type checking #2

  if s, ok := p.Location.(string); ok {
    fmt.Printf("Your zipcode is %d\n", lookupZipcode(s))
  } else {
    fmt.Println("OMG you didn't enter the right value")
  }

For type checking you can either go the switch route or check the second return value of the type assertion. If you’re morbid you could write your own typeof() using reflection perhaps.
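A rough sketch of that reflection route might look like this (using the float64 that json.Unmarshal produces for a JSON number):

package main

import (
  "fmt"
  "reflect"
)

func main() {
  var loc interface{} = 1234.0 // what json.Unmarshal yields for a JSON number

  if reflect.TypeOf(loc).Kind() == reflect.String {
    fmt.Println("location is a string")
  } else {
    fmt.Printf("expected a string, got %s\n", reflect.TypeOf(loc).Kind())
  }
}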

For me, it is annoying because I need to actually do it and care. Perhaps I’m simply too spoiled by Ruby, however plenty of other languages support null strings as well. And so what if I am spoiled by Ruby… it focuses on developer happiness rather than hoops for performance. There are trade offs to all things, and I’ve accepted Go’s empty strings. I just miss my null strings.

Tuesday, October 16, 2012

 

Talking on the Food Fight Show

Earlier this week I had the pleasure of talking with Bryan and Nathan on the Food Fight Show, a devops-focused podcast, about PaaS, Cloud Foundry, where PaaS is heading, and also about Go and the influence it will have on operations.

Be sure to check it out or look it up on iTunes.

It isn’t really a secret that Apcera has been really looking at Go… just follow me or Derek on Twitter. We think Go will be growing wildly in cloud services and high scale environments in the future. It is a very elegant language for server/systems development, has a solid base and growing ecosystem, and it will continue to grow and mature in the space. In some ways, I see it as solving the problems people were saying Node solved, but in a better way.

I’m starting to put together a few little mini posts about some of my adventures (or misadventures) with Go. I’ve been really starting to like it a lot, and there are some powerful things you can do with it that are encapsulated in a simple way. But it also has some idiosyncrasies, some of which are being ironed out as the language/ecosystem matures, and others are design decisions.

Wednesday, October 03, 2012

 

Joining Apcera - Building Something From Scratch

This morning I realized I never really updated my blog after changing jobs a few months ago. Back in June, I posted to the PaaS.io blog about joining Apcera, though never actually made a personal post about the change.

Earlier in the year, I had the opportunity to meet Derek Collison. You can read the PaaS.io blog post about why it made sense to merge the two. From a personal point of view, I was also drawn to the opportunity to get back to an early stage, highly motivated, build-something-fresh kind of startup.

My first professional gig right after college was at Telligent, and I started when the company was just a month old. The early years at Telligent are among the most memorable of my career, and a lot of that stemmed from the energy at the company: a combination of the team growing to feel like a family, building a great product out of nothing, and truly being energetic and dedicated to the goal. All of us ate, slept, and drank the product in those early years.

Since then, I’ve grown and learned a lot, however most of my work since has been on already established products and later-stage startups. Typically when you start needing Operations people, or people to do scaling/performance work, you already have something that is generating that need.

Working with Derek presented itself as a really unique chance. The team of people he was gathering together was truly awesome, with strong backgrounds. It would be a tremendous chance to learn from my peers. On top of that, he had the financial backing to be able to carry it through. He had plenty of experience from building Cloud Foundry and talking with clients, and his goals for Apcera’s products were pushing the status quo and appealing as a challenge.

It was almost one of those “if I could do that again knowing what I know now” type chances. Do an early stage startup again, build something incredible from scratch. Two of the biggest motivators for me are fulfillment from building something with my own hands and tackling the kind of challenging problem that your first reaction is “yeah, I think I could do that” followed by you staring into space while starting to think about how.

It has been a little over two and a half months at Apcera and it has been amazing so far. We’re deep in code building what we’ve been envisioning. I am definitely eager to spill the beans, but until then I can only offer a slight preview.

Tuesday, August 28, 2012