Scott W. Bradley

in which scottwb thinks out loud


On Retirement


Over the holidays, I had a conversation with my extended family about planning for retirement. We looked at standard retirement savings formulas and some aggressive monthly savings numbers. We computed what starting an aggressive savings plan at age 40 and retiring at 65 would look like, when you hope to live to 85. That’s 25 years of working and saving enough to live off of for another 20 years. It turns out that, even assuming good investment rates of return, it’s pretty hard to do much better than living for 20 years on a fixed income of half your age-40 salary. That’s pretty scary…
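To make that concrete, here’s a rough back-of-the-envelope sketch in Ruby. The numbers are illustrative assumptions, not figures from our actual conversation: a 20% savings rate, a 4% inflation-adjusted return while saving, and a 2% inflation-adjusted return while withdrawing.

```ruby
# Rough retirement arithmetic: save for 25 years, then draw down for 20.
# All rates are assumed inflation-adjusted ("real") returns.
salary       = 100_000.0    # age-40 annual salary (illustrative)
savings_rate = 0.20         # aggressive: 20% of salary
r_saving     = 0.04 / 12    # real monthly return while accumulating
r_retired    = 0.02 / 12    # real monthly return while withdrawing

# Future value of depositing (savings_rate * salary / 12) every month
# for 25 years (300 months) -- the standard annuity FV formula.
monthly_deposit = salary * savings_rate / 12
nest_egg = monthly_deposit * (((1 + r_saving)**300 - 1) / r_saving)

# Level monthly withdrawal that exhausts the nest egg over 20 years
# (240 months) -- the standard annuity payment formula.
monthly_income = nest_egg * r_retired / (1 - (1 + r_retired)**-240)
annual_income  = monthly_income * 12

pct = 100 * annual_income / salary
puts format("Nest egg at 65: $%.0f", nest_egg)
puts format("Annual income:  $%.0f (%.0f%% of age-40 salary)", annual_income, pct)
```

With these assumptions the annual income comes out to roughly half the age-40 salary, which is exactly the scary conclusion above.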

If I Were 22


This is a re-post of an article I published on LinkedIn today.

There’s a trending topic going around on LinkedIn, where people talk about what lessons they would impart to their 22-year-old selves. “10 things I wish I had known,” “10 lessons I’ve learned,” “10 pieces of wisdom,” etc. Well, mine goes to 11…

1. Don’t dream someone else’s dream.

It’s very easy to get sucked into pursuing somebody else’s dream. They have passion and vision and can be very convincing. Silicon Valley is built on 22-year-olds breaking their backs and pulling all-nighters to make someone else’s dream a reality. At 22, I used to think, “I have plenty of time to sacrifice everything at startups and start over from scratch when I’m 30.” It doesn’t work that way. When the founder’s dream turns into a nightmare, you’re now living their nightmare with them. Statistically speaking, this is most likely how it will turn out. Pursue your own dreams.

8 Startup Lessons in 6 Months


The following is an excerpt from a blog post I wrote for my company, Facet Digital, that was published today. We’ve been in business for six months, and we wanted to share what we have learned so far.

So what have we learned?

Stay committed. The first six months are about grit and hustle. Keep your eye on the ball, and know when to walk away from a bad deal. Remember how valuable you really are.

Doing this has taught us a few key lessons that we’d like to share with our 6-month-younger selves…

Wake Surfing With RubyMotion


Supra Boats Swell System

A couple weeks ago, our first Facet Digital client project to go public in 2014 was launched in the Apple App Store: Supra Boats Swell Surf System.

This is a multimedia catalog app designed for reps and resellers on the boat show floor, allowing them to show off their Swell System for wake surfing. It features an interactive look at some of the unique features of this system, such as the ability to change the shape of the wave and even which side of the boat the wave is on! I’m no professional wake surfer, but just watching some of the videos embedded in this app makes me anxious for summer to arrive…and to participate in some of these fun photo shoots with the great folks at Supra Boats for next year’s app.

Jeremy, Leif, and I had a blast building this app. Part of that was because the content was so cool. A bigger part was because we decided, for the first time, to build an iOS app using 100% RubyMotion, instead of going the traditional Xcode and Objective-C route.

So, I thought I’d share a little bit about why we enjoyed using RubyMotion…

Defeating the Infamous CHEF-3694 Warning


TL;DR: I hate the CHEF-3694 warning, so I made a cookbook to get rid of it. YMMV.

Resource cloning in Chef is a bit of a minefield. They have a ticket known as CHEF-3694 saying that the feature should be removed, and indicating that it will be by the time Chef 12.0.0 comes out. However, a lot of their Opscode-developed community cookbooks use (abuse?) resource cloning. The result is that you get tons of warnings about resource cloning that look like this:

[2014-01-24T16:15:55+00:00] WARN: Cloning resource attributes for package[perl] from prior resource (CHEF-3694)
[2014-01-24T16:15:55+00:00] WARN: Previous package[perl]: /tmp/vagrant-chef-1/chef-solo-1/cookbooks/perl/recipes/default.rb:26:in `block in from_file'
[2014-01-24T16:15:55+00:00] WARN: Current  package[perl]: /tmp/vagrant-chef-1/chef-solo-1/cookbooks/iptables/recipes/default.rb:21:in `from_file'

Where I come from, it’s considered an error to have a warning in your output. Ignorable warnings bury important ones. So…for better or worse, I embarked upon a journey to see what I could do to use resources correctly and avoid these warnings…

Optimistic Locking With Couchbase and Ruby


Concurrent modification of shared data can be a problem in any distributed system regardless of what data store you are using. With ACID-compliant relational databases, a common tactic is to use pessimistic locking at the table or row level. Most NoSQL data stores do not have a pessimistic lock operation, and even when they do, it is often considered a performance hazard. So, most applications do not lock objects before writing them to a NoSQL datastore (or they use an external lock of some sort). This can quickly become a problem when you have a distributed system with write contention, as shown in the figure below:

One of the nice features of Couchbase is its “CAS” operation. This provides the ability to do an atomic check-and-set operation. You can set the value of a key, providing the last known version identifier (called a “CAS value”). The write will succeed if the document has not been modified since you read it, or it will fail if it has been modified and now has a different CAS value.

Using this operation, we can easily build a higher-level operation to provide optimistic locking on our documents, using a CAS retry loop. The idea is simple: get the latest version of the document, apply your update(s), and write it back to Couchbase. If there are no conflicts, then all is well, and you can move on. If there is a conflict, you re-get the latest version of the document, fully reapply your modifications, and try again to write the document back to Couchbase. Repeat until the write succeeds.
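That CAS retry loop can be sketched in Ruby. The real thing would use the couchbase gem’s get/set operations with a CAS value; to keep this sketch self-contained and runnable, `FakeBucket` below is an in-memory stand-in whose `get` returns a `[value, cas]` pair and whose `set` raises `KeyExists` on a CAS mismatch. Those names are illustrative, not the gem’s actual API.

```ruby
# Minimal in-memory stand-in for a bucket with check-and-set semantics.
class KeyExists < StandardError; end

class FakeBucket
  def initialize
    @data = {}   # key => [value, cas]
  end

  def get(key)
    value, cas = @data[key]
    [value, cas]
  end

  # Succeeds only if the caller's CAS matches the stored CAS.
  def set(key, value, cas: nil)
    _, current_cas = @data[key]
    raise KeyExists if cas && current_cas && cas != current_cas
    new_cas = (current_cas || 0) + 1
    @data[key] = [value, new_cas]
    new_cas
  end
end

# The optimistic-locking retry loop: get the latest document, apply the
# modification, write back with the CAS value, and on conflict start over.
def cas_update(bucket, key, max_retries: 10)
  max_retries.times do
    begin
      value, cas = bucket.get(key)
      updated = yield(value)                   # reapply the change from scratch
      return bucket.set(key, updated, cas: cas)
    rescue KeyExists
      # Someone else wrote first; loop and reapply against the new version.
    end
  end
  raise "CAS update failed after #{max_retries} attempts"
end

bucket = FakeBucket.new
bucket.set("counter", 0)
5.times { cas_update(bucket, "counter") { |n| n + 1 } }
puts bucket.get("counter").first  # => 5
```

Note that the block passed to `cas_update` must be safe to re-run: it is re-applied in full against the freshly fetched document on every conflict.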

5 Things Your Code Must Be


If you work for me, with me, or near me, there are certain qualities of your code that must be met before I will let you get away with calling it “done”. Developers love to brag about a piece of code being done the second it sort of works for the main “happy path” use case it was intended for…on their development system. Developers are also optimists when it comes to estimation. Some of that has to do with their enthusiasm for achieving the “done” state based on their own notion of just having solved the base problem they set out to solve.

This creates all kinds of problems: schedule slips, and expectation/reality mismatches with management, marketing, sales, etc.

There are tons of details that go into the definition of “done”, and they vary depending on the project and the organization’s software development lifecycle. Rather than try to rigorously enumerate these – which would be the topic of an entire book – I tend to lean on 5 keystone requirements that we can assess about the code before we can call it “done”. There are many tactical rules for developing good software, but at a strategic level, following these keystone requirements generally leads us to cover most of those details.

So…before you can call it “done”, your code module must be:

Defeating the Infamous Mechanize “Too Many Connection Resets” Bug


Have you ever seen this nasty, obnoxious error when using the Mechanize gem to write a screen scraper in Ruby?

Net::HTTP::Persistent::Error: too many connection resets (due to Connection reset by peer - Errno::ECONNRESET) after 2 requests on 14759220

This has plagued Mechanize users for years, and it’s never been properly fixed. There are a lot of voodoo suggestions and incantations rumored to address this, but none of them seem to really work. You can read all about it on Mechanize Issue #123.

I believe the root cause is how the underlying Net::HTTP handles reusing persistent connections after a POST – and there is some evidence on the aforementioned github issue that supports this theory. Based on that assumption, I crafted a solution that has been working 100% of the time for me in production for a few months now.
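My actual patch isn’t shown in this excerpt, but the general shape of the workaround can be sketched: rescue the persistent-connection error, tear the connection pool down so the next request opens a fresh socket, and retry a bounded number of times. `FlakyAgent` below is a self-contained stand-in for a Mechanize agent, so the sketch runs without the gem; its names are illustrative (Mechanize’s real error class is `Net::HTTP::Persistent::Error`).

```ruby
# Stand-in for an agent whose persistent connection has gone stale:
# the first request raises, like the Mechanize bug described above.
class ConnectionResetError < StandardError; end

class FlakyAgent
  attr_reader :shutdowns

  def initialize
    @stale = true
    @shutdowns = 0
  end

  def shutdown
    @shutdowns += 1
    @stale = false      # a fresh socket will work on the next request
  end

  def get(url)
    raise ConnectionResetError, "too many connection resets" if @stale
    "response for #{url}"
  end
end

# The retry pattern: on a connection-reset error, shut down the
# persistent connection pool and retry the request.
def get_with_retry(agent, url, max_retries: 3)
  attempts = 0
  begin
    agent.get(url)
  rescue ConnectionResetError
    attempts += 1
    raise if attempts > max_retries
    agent.shutdown    # drop the stale persistent connections
    retry
  end
end

agent = FlakyAgent.new
result = get_with_retry(agent, "http://example.com")
puts result  # the request succeeded after one shutdown-and-retry
```

The bounded retry matters: if the server is genuinely refusing connections, you want the real error to surface rather than loop forever.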

Easy Papertrail Deployment Using Rake


Papertrail is a great centralized logging service you can use for distributed systems that have numerous processes creating numerous log files, across numerous hosts. Having all your logs in one place, live tail-able, searchable, and archived is key to debugging such systems in production.

There are a few ways to set it up, as documented on their quick start page, such as a system-wide installation via Chef, or configuring your app to use the syslog protocol. They also provide a convenient Ruby gem called remote_syslog that can be configured to read a set of log files and send them to Papertrail.

I’ve found for simple Ruby project structures, it can often be easier to deploy Papertrail by installing this gem with Bundler via your project Gemfile, and then creating a simple set of Rake tasks to manage starting and stopping the service. This way it’s self-contained within your application repository, gets deployed with the same mechanism you deploy your application code, and can be used on your development and staging systems just as easily, without any Chef cookbooks or other configuration hassle.
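A minimal sketch of what those Rake tasks might look like follows. The pid-file and config locations are assumptions about your project layout, not prescriptions, and the `remote_syslog` executable comes from the gem installed via your Gemfile. (The `require`/`extend` lines at the top are only there so the snippet runs standalone; a real `lib/tasks/papertrail.rake` file already has the Rake DSL available.)

```ruby
require "rake"
extend Rake::DSL   # standalone only; unnecessary inside a real .rake file

PID_FILE = "tmp/pids/remote_syslog.pid"   # hypothetical pid-file location
CONFIG   = "config/remote_syslog.yml"     # hypothetical config location

namespace :papertrail do
  desc "Start remote_syslog, recording its pid"
  task :start do
    sh "bundle exec remote_syslog -c #{CONFIG} --pid-file #{PID_FILE}"
  end

  desc "Stop the running remote_syslog daemon"
  task :stop do
    sh "kill $(cat #{PID_FILE})" if File.exist?(PID_FILE)
  end

  desc "Restart remote_syslog"
  task restart: [:stop, :start]
end
```

Then `rake papertrail:start` and `rake papertrail:stop` ride along with whatever deployment mechanism already runs your Rake tasks.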

Always-On HTTPS With Nginx Behind an ELB


A while back, I wrote about configuring a Rails app to always enforce HTTPS behind an ELB. The main problem is that it’s easy to set up a blanket requirement for HTTPS, but when the ELB acts as the HTTPS endpoint and sends only HTTP traffic to your server, that blanket enforcement redirects the ELB’s health check from HTTP to HTTPS – and a redirect is not considered a healthy response, so the ELB never gets the HTTP 200 OK it needs.

The same applies to any server you’re running behind an ELB in this fashion.

This post discusses how to handle the same issue with Nginx.
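The Nginx-side fix usually looks something like the following sketch: answer the ELB’s health-check path directly over plain HTTP, and redirect everything else that arrived without `X-Forwarded-Proto: https`. The `/healthcheck` path and server name here are assumptions; use whatever path your ELB health check is actually configured to hit.

```nginx
server {
    listen 80;
    server_name example.com;

    # Answer the ELB health check directly, over plain HTTP.
    location = /healthcheck {
        access_log off;
        default_type text/plain;
        return 200 'OK';
    }

    # The ELB terminates SSL and sets X-Forwarded-Proto; redirect
    # anything that originally arrived over plain HTTP.
    location / {
        if ($http_x_forwarded_proto != 'https') {
            return 301 https://$host$request_uri;
        }
        # ...normal proxying / serving config here...
    }
}
```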