In Valid Logic

Endlessly expanding technology

Dell XPS 13 9350 on Linux

As you can see, I’m quite the prolific blogger: my last post was almost 11 months ago, and it was supposed to mark my move off of Linux. Yeah, that didn’t even last a week.


To summarize my past year in brief:

  • I had a Thinkpad X240. I fell out of love with it.
  • In December, I bought an MBP 13”. I tried going back to OS X, it lasted a few days.
  • In January, my backpack was stolen with both my MBP and X240 in it.
  • I bought another MBP 13”.
  • Around May, I started eyeing the Thinkpad X250, which had reports of working with 16gb ram.
  • In May, I dropped my MBP a few inches and it tapped the bottom corner of the display and cracked it. I bought an X250.
  • In late August, my X250 began slowly frying ram modules. This showed up as the system freezing randomly, often multiple times per day.
  • In October, I was on my 3rd set of RAM, and decided to be an early guinea pig with the Dell XPS 13 9350.

The XPS 13 9350 is a refresh of the 9343, released earlier in the year. It was updated to the new Skylake processors, moved to USB-C/USB 3.1, and most importantly, added 16gb as a factory option. The XPS 13’s ram is soldered in, so it was never upgradable the way the X240/X250’s was.

Dell has the XPS 13 Developer Edition, which is the XPS 13 with Ubuntu installed on it from the factory, and with a tested/supported hardware configuration. However, updates always seem to lag behind when the Windows model is updated. So when the new model was announced, there was little information on the timeline for the Developer Edition.

I began watching to see if others had tried Ubuntu on it yet, but the orders hadn’t started shipping. So I braved it and ordered one, not knowing whether things would work.

I ordered the model with 16gb ram, QHD screen, and 512gb drive. At the time, Dell didn’t have any wireless options other than a “Dell Wireless” (aka, rebranded Broadcom). I’ve had to deal with Broadcom chips in Macs long enough to know you don’t use Broadcom if you have other options, so I went ahead and ordered an Intel 7265 off Amazon, the same card that was in my X240/X250.


The packaging of the laptop is quite nice. Dell has come a long way here. Like with Apple, it makes the unboxing a part of the experience.


Once opened up, I booted up Windows only in order to check if there were any BIOS or firmware updates to install. Everything looked good, so I shut it down.

First task was to crack it open and swap out the Broadcom wifi for a far better Intel 7265. Seriously, everyone should just get the Intel. It was only $30 off Amazon, and it easily beats the reception and driver-support issues of Broadcom. I don’t even know why people bother asking when the driver will be supported, or trying out dev builds, or whatnot. If time is money, getting the Intel card is cheaper.

For opening it up, the iFixit teardown from the 9343 still matches.


The laptop has one of the new NVMe based SSD drives. These are nice since the drive is a PCIe device, and the software stack for it is far simpler and removes decades of legacy cruft. The laptop comes with a Samsung PM951 512gb, but I took the liberty of swapping it out for a Samsung SM951 512gb. The PM is a bit more of the mid-range model, while the SM is the top dog. One benefit is I basically still have the full Windows install, I guess.

Installing Linux

After checking for BIOS updates and finding everything was good, I set right to installing Arch.

There were a few BIOS settings to adjust. Namely:

  • Disable Secure Boot
  • Change the “SATA Operation” setting to disabled. With the NVMe drive, there is no SATA/AHCI stack in play.

During installation, with NVMe you don’t have /dev/sdX anymore; those names come from the SCSI/SATA stack. The drive shows up as /dev/nvme0n1. Note, this is the whole drive, not a partition. You create partitions on /dev/nvme0n1 and get /dev/nvme0n1p1 as your first partition.


Arch installed smoothly, and I then ran my Chef process and got everything installed. I then fired up X and found it errored out saying no display devices were found. Uh oh. Luckily, beforehand I had seen a link on Twitter about needing a boot parameter for the Intel graphics. I applied the setting, rebooted, and had a completely blank display. The system was still responsive over ping/ssh, but the display was blank with just the backlight on.

From what I can tell, the blanking is only with QHD screens. The original guy on Twitter didn’t have any issues, but had an FHD screen.

I ended up tracking down a thread on the Dell Sputnik forum saying it was fixed in the latest nightly builds from the Intel DRM team, with links to the Ubuntu PPA DEBs. I found a similar linux-drm-intel-nightly package in the AUR and installed it, but had no luck. Upon rebooting, it couldn’t find the NVMe disk at all, and when I got to the rescue console, no keyboard input worked, even over USB.

Luckily, I had very helpful assistance while debugging all this.


I decided on a whim to just boot the Ubuntu 15.10 LiveCD and see how far it got. To my surprise, it booted straight into Unity, at full resolution, without any issues. Poking around, it didn’t even need the boot parameter at all.

In the end, I decided to install Ubuntu and go forth with it. I couldn’t find a sane path with Arch, and didn’t want to burn a whole ton of time considering I had work to get done, and my alternative was the RAM eater.

Ubuntu worked pretty smoothly. I took the chance to completely rewrite my Chef scripts, but that went pretty quickly; it was mostly a matter of “what is this package named in Ubuntu?” I did end up installing the nightly drm-intel kernel, since I found that without it, suspend/resume was broken.

Outstanding issues

There are a few remaining issues on the laptop:

  • Suspend/resume works fine; hibernation, however, doesn’t seem to. When resuming, it simply boots fresh. And for some reason, the system feels sluggish after booting if it wasn’t plugged in. Once plugged in, it gets much snappier.
  • VMware Workstation doesn’t seem to work at all. The modules compile fine on the 4.3 kernel, but the UI fails to come up. I haven’t looked into this much, since I’ve mostly been using vSphere for my VMs lately, or I’ll just use VirtualBox.


Overall, I am very pleased with the XPS 13. In my opinion, the biggest improvement over my Lenovo is the screen. The screen is crisp. Font rendering is so much smoother. I’m unsure if it is partly from the distro switch, or from the much higher DPI, but code looks even more beautiful on this machine.

Otherwise, the move to the NVMe SSD is certainly nice. It feels much quicker at times. Definitely no regrets there.

Aesthetically, I always liked the matte black look of the Thinkpads and will miss that. The XPS is beautiful, but the palm rest and whatnot seem to smudge and hold fingerprints. It’ll work though…

The one negative is the battery life. I’ve seen a lot of people say how the XPS has great battery life. Not quite, when coming from the X240/X250. Those things had amazing battery life, largely due to the massive extended battery. I could pull 10-12 hours on them no problem. I have the power companion for the XPS, but it isn’t quite the same.

The keyboard is taking some getting used to. Mostly, the Ctrl and Fn keys are swapped compared to the X250 and the MBP, which affects text selection and almost every Emacs shortcut. The pipe character is also a fairly narrow key, I’ve noticed.

But in the end, I have no regrets. I look forward to USB-C docks (which charge too) finally becoming available. The screen will have me drooling for quite a while.

Soon, I’ll follow up by posting my Chef setup on Github. It includes setting various module configs and what not. Some stuff I’m still experimenting with.

Friday, November 13, 2015


Linux Desktop Coming to an End

It has been almost a year since I moved to using Linux as my full-time desktop, however it looks like that period may be coming to an end. Yesterday I ordered a new 13” MacBook Pro and it should be here early next week.

June was my hardware refresh, and I decided to abandon Apple’s hardware and its vendor choices that were not very Linux friendly. This was mainly Broadcom, since I had regular wifi issues, and Nvidia, where I was plagued with a driver bug breaking X for a while. So, I decided to get a Thinkpad X240. I’ve had a number of Thinkpads before and always loved them. The X240 really is a great laptop. The FHD (1920x1080) screen is amazing on a 12” laptop. The battery life is absolutely amazing. The keyboard is great, even though some people hate it. And its Linux compatibility is far better than Apple’s. The install was relatively pain free. But it isn’t without its faults.

The impetus for moving back is really two things:

First, some Arch update nearly 2 months ago completely broke my ability to use my 4K monitors. This affects the one I use at home and the one in the office. I can get below 4K, but cannot get actual 4K. And yes, 2 months ago. I’ve been too busy to really dive in deep and get it fixed, and when I have tried to dive in, I’ve come up empty. I miss full resolution.

Second, I simply cannot survive on 8gb of RAM. Using virtual machines is simply a necessity, and it just plain sucks to try and use a VM with a decent memory allocation on 8gb.

My hardware wish list was 3 things: 13” form factor, 1920x1080 base resolution, and 16gb of RAM. Most manufacturers only hit two of the three. Apple is the only one that hits all 3 with the 13” MBP. Lenovo and Dell have missed the mark with this one. There are a growing number of thin+light+power users, in my opinion.

So recently, with those two issues in mind and while trying to find ways to resolve them, I decided to sit down and make a list of my current pain points:


  1. Limited to only 8gb RAM
  2. 4K resolution no longer working
  3. Frequent bluetooth connectivity issues (mouse losing connectivity for 5 seconds)
  4. Touchscreen layer on the FHD screen has ghosting issues
  5. I can’t say I haven’t had wifi issues still on this machine
  6. Keyboard shortcuts for volume, backlight, etc don’t work. They’re too new for the Linux driver.

So in the end I had 6 issues directly related to this system.

But then I found I was also noting general system annoyances:


  1. Printing and scanning issues. My Canon printer at home used to work, but now I try to print and CUPS says “printing…done” and the printer doesn’t even move. And it can sort of scan. Sort of. I’m currently going through the mortgage process and have been using my wife’s MacBook Air to print/scan and using Dropbox to transfer.
  2. Sometimes the clipboard in X, or something, just decides to break copy/paste in whatever app I’m using, and I end up needing to reboot.
  3. My terminal (urxvt) seems to have issues sometimes and just crashes.
  4. Photo processing in RAW is rather weak compared to OS X and even just iPhoto. It can be done, but the apps leave much to be desired.
  5. Chrome crashes all the time if I have to bring up a file dialog to upload a file. This sucks combined with #1, since I’m frequently attaching scans to email or even uploading them to Slack.
  6. Scrolling in Chrome is unintuitive. It is some interaction between X and Chrome: scrolling has acceleration and whatnot (which I’m very used to on OS X), but if I scroll some, then hit Control to use Page Up/Page Down to switch tabs, my hand is no longer on the trackpad while the scroll motion is still coasting, and hitting Control causes the page to be zoomed.
  7. I have never gotten disabling the trackpad while typing, and re-enabling it afterwards, to work as a smooth process.
  8. Even though this isn’t all that terrible, I cannot use WebEx to hop on a sales call and help out. WebEx on Linux needs a 32-bit environment.
  9. Presenting on Linux is full of “please work, please work”. Plugging in the external display, configuring the resolution and layout, and running the presentation isn’t a process that inspires confidence. LibreOffice and Google Present are options. Keynote compatibility isn’t there. PowerPoint files will open, but not always correctly.

And I could keep finding more if I kept thinking about it.

But to sum up the Linux desktop: stuff just doesn’t work out of the box, and you have limited external support. Who can I actually go to for help getting 4K back after an update? For printing issues? For Chrome crashing?

As I was typing up the list, it started to click that all of these issues were ones I was choosing to live with. I’d use workarounds, like needing my wife’s laptop, or I’d just hide from the problem, like sitting at my desk with a big-ass monitor in front of me rather than using just my laptop.

On the positive side though, I have really become accustomed to using a tiling window manager. I find it is just so much more productive to see everything you have open. It forces being less cluttered and more concise. And from what I have tried before, the OS X options just pale in comparison to a true tiling window manager. Emacs is definitely now my home. I am not busting out any Lisp, but I know my way around and it is entirely natural now.

So when my new MacBook Pro arrives, I will more than likely not be installing Linux on it. However, I may see what I can do to bring back some of what I liked to OS X, whether through XQuartz and trying to go full UNIX, or trying some of the OS X tricks to make it feel more like a tiled system.

Wednesday, December 24, 2014


OpenSSL Heartbleed patches for Ubuntu 13.04

The Heartbleed bug has certainly taken things by storm this week, as everyone is patching systems, generating new certs, revoking old ones, invalidating user tokens, and likely resetting passwords.

At Apcera, we were certainly hard at work getting updates rolled out to ensure we weren’t vulnerable. Phil Pennock even put together an nginx module to log the version of OpenSSL that nginx was using and ensure it errors out if it doesn’t have the correct version.

In the course of rolling out updates though, we still had a need for patching OpenSSL on Ubuntu 13.04 (raring). Raring has been end-of-lifed; however, Continuum can have multiple base OSes loaded into a cluster, and we still had customer clusters that had the old release and were using it. We also have some systems on it still, as we were holding out for 14.04, which is only a week away.

So we rolled a patched version of OpenSSL for raring, using the existing build and grabbing the patches from saucy. We’d seen other people asking how to patch raring systems on Stack Overflow and elsewhere, and decided it wouldn’t hurt to simply share the packages we’d put up.

These packages were built using brew2deb by Aman Gupta, with these patches added in.

The SHA1 checksums for them are (sorry for the line wrapping):

714fcb1d7944e60c2c44a103bd851f51607b1c56  libssl1.0.0_1.0.1c-4ubuntu8.2apcera1_amd64.deb
6ae5c0041e86829cc301dcead06efd01fb7d9b0c  libssl1.0.0-dbg_1.0.1c-4ubuntu8.2apcera1_amd64.deb
c5e3947832b54a1d1e5d8e8c0163e83b3eb21523  libssl-dev_1.0.1c-4ubuntu8.2apcera1_amd64.deb
0b62f240323ce02d83e84734dadde0bf5fb1b850  libssl-doc_1.0.1c-4ubuntu8.2apcera1_all.deb
0c135bd9e9370d4532c8c2beb51fefffc1a51231  openssl_1.0.1c-4ubuntu8.2apcera1_amd64.deb

Also, we have made them available from a simple apt repository. You can simply add the following to your /etc/apt/sources.list, do an apt-get update, and grab the packages.

deb public raring-openssl

The apt repo is signed by my own Apcera GPG key. You’d likely see a message about not trusting the signing key. You can retrieve my key using the following command:

# apt-key adv --recv-keys --keyserver DB4363B3
Executing: gpg --ignore-time-conflict --no-options --no-default-keyring
--secret-keyring /tmp/tmp.nWjupL3SrM --trustdb-name /etc/apt//trustdb.gpg
--keyring /etc/apt/trusted.gpg --primary-keyring /etc/apt/trusted.gpg
--recv-keys --keyserver DB4363B3
gpg: requesting key DB4363B3 from hkp server
gpg: key DB4363B3: public key "Ken Robertson <>" imported
gpg: no ultimately trusted keys found
gpg: Total number processed: 1
gpg:               imported: 1  (RSA: 1)

I don’t necessarily expect people to keep my key around, or to want the apt repo for anything long term, so you can remove the apt repo afterwards and remove my key with:

# apt-key del DB4363B3

Thursday, April 10, 2014


Desktop Environment Makeover

I recently decided to do a complete makeover of my development environment. Developers get pretty comfortable with their local development environment, which can sometimes mean living with bad habits simply because they are habits.

It had dawned on me that when I was working, I generally had too much crap open. On average, I’d have 2 terminal windows open with a total of 7 tabs. Often it would be 3 windows and 10+ tabs. I was using Sublime Text 3, and overall loved it, but would often get a number of windows open and a plethora of tabs. At times, it felt too scattered. I was wasting time tracking what I had open and where rather than getting shit done.

There was another nagging thing. Some months earlier, I had set up Emacs and was trying to make a move to it. I had tried out Emacs a couple times before, but could never get over the productivity slump with it.

I ended up not succeeding and ended up back in Sublime Text. Sometime later, Derek was at my desk looking over some stuff and I made a comment that I had gone back to Sublime. He jokingly said “yeah, I knew that wasn’t going to work out.” While it was a simple poke, it stuck in my mind as a challenge.

I am not really one for New Years resolutions, but last month I was thinking it may be worth challenging myself… and what better way to break your habits than a complete makeover, all at once. Make it hard to fall back by changing the whole environment.

My complete environment change has taken me from running OS X to now using Arch Linux. To help undo the clutter and remain focused, I’ve moved to the tiling window manager AwesomeWM.


Awesome is incredibly customizable. I had tried out Xmonad as well, however I felt more comfortable customizing things in Lua than Haskell, and Awesome felt like it had a clearer path to my optimal setup.

My layout when coding is now one large, centered Emacs window with 4 small terminals flanking the corners. Everything I am working with is right there. No tabs. No other windows.

Other distractions are out of my way, however they’re also quickly accessible when I need them. When hooked up at my desk, my laptop screen is tiled with HipChat there in case I see something pop up, and I’m a keystroke away from grabbing the window, moving it to my main monitor, and another keystroke from putting it back where it was.

The move to Emacs has actually gone fairly well. I still have Sublime Text installed, however with also moving to Linux, I couldn’t really fall back to it and be in my comfort zone. All the key changes between OS X and Linux made it so it wasn’t the same feeling as before. I’ve gotten pretty used to how to do things, and with previous attempts to switch I kept getting gradually closer with finding packages/tools for various things.

Overall, moving to Linux has gone fairly smoothly. Setting up Arch wasn’t all that difficult. I ran Linux on my desktop several years ago and have used Debian, Ubuntu, Gentoo, Slackware, and even Red Hat (pre-RHEL). It took some tweaking to get everything on my MacBook Pro working, but pretty much all the drivers work. Some kinks remain, mostly with the video card (thanks Nvidia). Otherwise, I still have Chrome, zsh, and have found equivalents for most other needs.

Some side benefits came up too. I generally find things more responsive in Linux. My whole system is set up in Chef. Linux has a lot of config files, but it’s easier to manage, track, and version them than GUIs that change internal attributes you can’t automate.

I’ve been finding Vagrant with NFS mounts for mounting in my code works far better. The NFS server on OS X must suck, since I used to constantly get stale mounts where, inside the VM, it would see an outdated version of a file.

Friday, February 14, 2014


Golang Tidbit: Defer

A while ago, I did a post on Golang Oddities. I only made one post in what I intended to be a series, but at any rate, I realized “oddity” wasn’t really the right word. I intended it more as an interesting bit to be aware of than as a knock on the language.

One interesting thing to be aware of is how defer works within the language. An article on how defer, panic, and recover work briefly mentions something:

A deferred function’s arguments are evaluated when the defer statement is evaluated.

They offer up a simple code snippet to highlight the fact:

// try at
package main

import "fmt"

func main() {
	i := 0
	defer fmt.Println(i)
	i++
}

When run, this will print 0 even though i++ executes before the call to print. The way defer works is it does everything it needs to do to get ready to execute an expression, except it delays the actual execution. So anything that is an argument to the call is evaluated at the point in the function where the defer appears, and the actual expression executes after the return.

The behavior looks innocuous, but can manifest itself in some surprising ways. For instance:

// try at
package main

import "fmt"

type printer struct {
	message string
}

func (p *printer) SetMessage(msg string) {
	p.message = msg
}

func (p *printer) Print() string {
	return p.message
}

func main() {
	p := printer{}
	p.SetMessage("Starting")
	defer fmt.Println(p.Print())
	p.SetMessage("Done")
}

This seems normal enough, but now instead of passing in a variable, the argument comes from a method call on a struct. The same behavior results: it prints "Starting" instead of "Done".

However, you also have to be aware of what is being passed into anything being evaluated. In the above examples, simple non-pointer types were passed in, so essentially a copy of the variable was created and passed to the deferred call.

On the other hand, take the following example:

// try at
package main

import "fmt"

func printStr(str *string) {
	fmt.Println(*str)
}

func main() {
	s := "Starting"
	defer printStr(&s)
	s = "Done"
}

In this example, a pointer to a string is passed to the printStr function. Because a pointer is passed in, assignments that happen after the defer statement are carried over, so this prints "Done".

So how can this be worked around? The simple way is with an inline function. Instead of calling what you want to call directly, wrap it in an inline function. Evaluating the function at defer time is simple, since there are usually no parameters. But when it runs, it is still in scope of the variables within the main function.

// try at
package main

import "fmt"

func main() {
	i := 0
	defer func() { fmt.Println(i) }()
	i++
}

It is important to note the () at the end. You can’t defer a function value; you need to defer a call expression. So the inline function needs to actually be called. The same is true of the go keyword for executing a statement in another goroutine.

Despite some of the gotchas with how defer works, it is definitely one of my favorite parts of Go. Instead of needing to scatter cleanup code around a function, it allows you to put cleanup right after the code that dirties things. Say you have to do 5 different things that involve opening files, sockets, etc. Instead of mucking with cleaning up #1 and #2 if the function fails at step 3, you simply defer the cleanup after each step.

For example, take the following snippet. This is more pseudo code than any of our actual code, but in it, we use a local variable to track whether we succeeded, and check it on the way out to see if everything was successful.

func createUserAndDatabase(name string) error {
	// track whether we succeeded
	success := false

	// connect
	db, err := connectToDB()
	if err != nil {
		return err
	}
	defer db.Close()

	// create user
	user, err := db.CreateUser(name)
	if err != nil {
		return err
	}
	defer func() {
		// if a later step failed, roll back the user
		if !success {
			db.DeleteUser(user)
		}
	}()

	// create database
	if _, err := db.CreateDatabase(name); err != nil {
		return err
	}

	// we're done, mark success
	success = true
	return nil
}

Another way it could be done is with a named return value. In the function definition, give the error result a name, and it can be accessed in the deferred call: if no error is being returned, the function succeeded.

Overall, defer is excellent to work with, and I hope you find it awesome too.

Monday, July 01, 2013


One year at Apcera

We recently passed the one year mark at Apcera and I thought I’d take a brief moment to look back on how far we’ve come and just how fun the past year has been.

My actual one year mark at Apcera was two weeks ago, however Tuesday of last week is what we consider our official one year mark, since June 18th of last year was when we had our “kickoff” with everyone from the initial team.

I had started one week earlier, but my first week pretty much consisted of ordering my MacBook (I started the day the retina Macs came out too), attending True University, some small planning, and getting my new MacBook at the end of the week.

In the year since, we’ve had a lot of ups and downs. We spent the first couple months in what I called double stealth mode – nobody knew what we were doing, and we didn’t even know what we were doing. Derek essentially told everyone to design the majority of the product – without coding it – and then sell it to him. What? No coding? I can’t code the cool stuff we talk about? Nope. In the long run though, it paid off. We all learned Go in the process and it allowed us to vet some of the early, fundamental assumptions.

There have been a lot of ups and downs though. In March, just as we were having our first Apcera family weekend skiing in Tahoe, we all came home to find out that Kirill, one of our team members who hadn’t gone up that weekend, had passed away. It was a pretty big shock to the team. We are a very small and close knit team, and Kirill had an amazing talent. He was integral to the team. It isn’t like when you come in and find that someone has given notice; in that case, you have 2 weeks to process the transition. In this case, it was going home on Friday and him not being there on Monday.

But over the past year, Apcera’s culture has truly grown into something amazing. Early on, Derek was asking us what kind of culture we wanted for the company – what are the traits and how do we implement them. I don’t think culture is something you can really define and implement though. It simply happens. It is formed out of the qualities of the initial team, and cemented by the traits they look for in others as the team grows. We didn’t choose our culture, our culture just happened.

And in my opinion, we have an amazing culture. We strive very hard to be a meritocracy. In our technical discussions, we all bring varying backgrounds and, though sometimes rocky, we grit through to come up with the technical designs everyone agrees on. We are strict in our code reviews. Yes, we’ll nit-pick the grammar in your log lines and your usage of indexes vs pointers in ranges, but that is because we’re all passionate about our product and bettering each other.

But outside of our technical culture, we value one another. At lunch, our main conference room becomes our cafeteria. Our Off-Topic room in HipChat is quite off-topic (but not off-color, as the room topic emphasizes). We’re a startup and busy, but still value balance in our lives. Time off, team lunches, team dinners, and also extended friends and family. We love having friends and spouses drop by the office, and we try to plan a couple of events that involve our spouses and families as well.

One of the important aspects to me is that after a year in, I feel more connected to everyone I work with than at most previous employers. My wife knows everyone I work with, as well as their spouses and their kids (though the team is growing fast too).

A few months ago we were talking about the one year mark coming up, and that was when I realized it was at about this same point at my last job that I was actively talking to Derek about joining. I left my last job after only being there 11 months. I was 10 months in at Apcera, and so far the thought of leaving had never even crossed my mind.

Now we’re just starting to unveil what we’ve been building for the past year. We haven’t really said a whole lot yet, since we’re all still pretty busy writing code to move it even further ahead, but we’ve built some pretty cool stuff so far and are looking forward to doing so for a long time to come.

Monday, June 24, 2013


Managing APT repositories on S3 with ease

S3 is a great place to host an APT repository… cheap storage, easily available, no server to run or anything. The annoying thing is that most ways to set it up involve using something like reprepro on a local box and using s3cmd to then sync the local files up to S3. For me, the annoyance is that I use throwaway VMs. I don’t want to store them, or I’ll remove the VM without thinking, and then need to redownload everything just to add one package.

I wanted just a simple tool that would upload the new package, generate the Packages file with the new file added in, and update the relevant hashes in the Release file. It seems simple, but I couldn’t find anything for lazy people like me.

Finally, I decided to scratch my own itch. Over the weekend I wrote deb-s3. Have to upload a package? No problem.

$ deb-s3 upload my-deb-package-1.0.0_amd64.deb --bucket my-bucket
>> Examining package file my-deb-package-1.0.0_amd64.deb
>> Retrieving existing package manifest
>> Uploading package and new manifests to S3
   -- Transferring pool/m/my/my-deb-package-1.0.0_amd64.deb
   -- Transferring dists/stable/main/binary-amd64/Packages
   -- Transferring dists/stable/main/binary-amd64/Packages.gz
   -- Transferring dists/stable/Release
>> Update complete.

If it is your first package, it creates all the necessary files from scratch. On your server, you can then plug it in your /etc/apt/sources.list as:

deb stable main

It features:

  • Specify different components or codenames as you wish. It’ll maintain other components and architectures in the Release file.
  • Options to specify the visibility of the files. They can be set to private or authenticated, and you can then use apt-s3 to make authenticated requests against S3.
  • Ability to sign the Release file (GPG will prompt for the passphrase)

Check it out. Use it, love it, fork it, extend it, send a pull request.

If you simply want to use it, it’s available as a gem as well.

$ gem install deb-s3

And you’re set.

Tuesday, February 26, 2013


Golang Oddity #1

Every language in existence has its own set of oddities. Since I’ve been working in Go full time for a couple of months now, I have run into some of its nuances and wanted to chronicle some of them.

While I am being critical in these kinds of posts, the intent isn’t to bash Go; it is more about educating others. Go isn’t breaking existing conventions (too much), but it is an emerging language and there isn’t as much out there to familiarize a newbie with the things to be aware of.

So to start off, I’ll give you something simple but very annoying:

Strings cannot be null, only empty

In Go, null (or nil) isn’t covered as heavily as I wish it was. Not every type is nilable, and this can lead to some annoyances.

All strings upon creation are simply the empty string (""). On the surface this doesn’t sound bad, but it causes a lot of busy work when dealing with other things that do allow strings to be null, or where the difference between null and empty is very important.

Most databases have understood for a long time that a null string and an empty string are completely different. Another case is user supplied input. Go is excellent for writing servers and APIs, and a common case with an API is CRUD functionality. You might want to support a partial update, where omitted values (essentially null) are not altered while supplied values (which may be a blank string) are updated.

Take the case of user supplied input, such as over a JSON API:

// try at
package main

import (
  "encoding/json"
  "fmt"
)

type Person struct {
  Name     string `json:"name"`
  Location string `json:"location"`
}

func main() {
  var p Person
  json.Unmarshal([]byte(`{"name":"John"}`), &p)
  fmt.Printf("Name: %q\nLocation: %q\n", p.Name, p.Location)
}

Name: "John"
Location: ""

In this case I define a struct and unmarshal some JSON that only specifies the name. But then you can see Location is set to "". If someone already has Name and Location set and is doing an update with only Name, I don’t want to blank out Location. Now you have to jump through hoops.

Pick up that hula hoop… welcome casting

// try at
package main

import (
  "encoding/json"
  "fmt"
)

type Person struct {
  Name     interface{} `json:"name"`
  Location interface{} `json:"location"`
}

func main() {
  var p Person
  json.Unmarshal([]byte(`{"name":"John","location":"Gotham"}`), &p)

  fmt.Printf("Your zipcode is %d\n", lookupZipcode(p.Location.(string)))
}

func lookupZipcode(l string) int {
  switch l {
  case "Gotham":
    return 1
  case "Metropolis":
    return 2
  }
  return 3
}

In this mock example, the location is passed to another function to look up the zip code, but that function expects the location as a string, so now you need to cast it.

Uh oh, with interface{}, type enforcement isn’t inherent

  json.Unmarshal([]byte(`{"name":"John","location":1234}`), &p)
  fmt.Printf("Your zipcode is %d\n", lookupZipcode(p.Location.(string)))
panic: interface conversion: interface is float64, not string

goroutine 1 [running]:
  /tmpfs/gosandbox-9aac7f9a_0c33fe58_a998effb_2a4a973a_458fb2a3/prog.go:17 +0xcd

However, when using interface{} as our type, we lose the inherent type handling within json.Unmarshal, and this results in a panic rather than a returned error. In this case, the panic is on the print line rather than during unmarshaling. So now we need to do our own type validation, which has annoyances of its own.

Type checking #1

  switch p.Location.(type) {
  case string:
    fmt.Printf("Your zipcode is %d\n", lookupZipcode(p.Location.(string)))
  default:
    fmt.Println("OMG you didn't enter the right value")
  }

Type checking #2

  if s, ok := p.Location.(string); ok {
    fmt.Printf("Your zipcode is %d\n", lookupZipcode(s))
  } else {
    fmt.Println("OMG you didn't enter the right value")
  }

For type checking you can either go the switch route or check the second return value of the type assertion. If you’re morbid, you could write your own typeof() using reflection perhaps.

For me, it is annoying because I need to actually do it and care. Perhaps I’m simply too spoiled by Ruby, however plenty of other languages support null strings as well. And so what if I am spoiled by Ruby… it focuses on developer happiness rather than hoops for performance. There are trade-offs to all things, and I’ve accepted Go’s empty strings. I just miss my null strings.

Tuesday, October 16, 2012