Over the holidays, I had a conversation with my extended family about planning for retirement. We looked at standard retirement savings formulas and some aggressive monthly savings numbers. We computed what starting an aggressive savings plan at age 40, and retiring at age 65 would look like, when you hope to live to 85 years old. That’s 25 years of working and saving enough to live off of for another 20 years. It turns out that, even assuming good investment rates of return, it’s pretty hard to do much better than living for 20 years on a fixed income of half your age-40 salary. That’s pretty scary…
My fear is that planning for a fixed retirement income of half what I earn now seems really low today, not to mention 25, 35, and 45 years from now…and I don’t want to live with half the means in 25 years. I want to live at double. I want to enjoy a big house where my kids and grandkids can visit, take family vacations, send my kids to college without them having to go $300K into debt, and be able to take care of my aging parents if necessary.
I see people from my parents’ generation who retired on fixed income. While it’s great that they came up in a time when you had pensions and guarantees of a certain amount of fixed income and social security, it still seems really stressful. They have the same income for the rest of their lives, it’s a tight budget from day one, and their spending power decreases dramatically every year under inflation. They can’t afford to fly to a family wedding, go out to eat with friends, etc. It’s a weird thing to look forward to. You have this picture of retirement being a relaxation period where you can do all the things you always wanted to but couldn’t because you were working…and now you have the time but you can’t afford those things. Bleak picture of the twilight years, if you ask me.
The conclusion that I’ve come to accept is that retiring at age 65 is ridiculous. Not only because of what our quick-and-dirty retirement spreadsheet shows, but because of the increases in life expectancy over the last few decades and how the mind and body thrive on having a sense of purpose. I remember when my dad’s dad retired (I was younger than 10, so my memory may be inaccurate) and I heard the adults saying that statistically, men who retire before age 62 die within 5 years. He didn’t, but we saw how he decayed shortly after retirement. Maybe the memories of WWII had something to do with it. Maybe that’s just how everyone is at age 80. I don’t know. But I see a history of US presidents starting a presidency beyond our “normal” retirement age (Reagan was 69) and doing one of the highest-stress, most-demanding jobs on Earth for 8 more years. If the leader of the free world can work until 77 years old…
I think a retirement age of 65 was designed for when we had a life expectancy of roughly 65, and was meant for when you were no longer able to be a productive member of society, to give you a year or two of rest before you die. But that time span has grown from 2 years to 20 years with increased life expectancy, and our financial planning hasn’t. Trying to save enough money to live comfortably for 20 years with no income seems unachievable without a windfall.
Scary indeed. Most industries don’t hire 70-year-olds who expect higher paychecks, better health care benefits, and more vacation than kids just out of college do, especially when those twenty-somethings are seen as better investments for companies in the long run. Our employment rate doesn’t seem like it could keep up with extending everyone’s working years by 20 years. That leaves me with the only conclusion being that I have to OWN something — a business, investments, real-estate, something — that makes money.
Over my Christmas break I read Peter Thiel’s new book Zero to One: Notes on Startups, or How to Build the Future. There’s a chapter about the difference between Definite Optimism and Indefinite Optimism. Indefinite optimism assumes that things will generally always get better, but that we just don’t know how or when, so let’s just invest in things across the board and wander into the future. Definite optimism picks a goal and figures out how to make it happen, assuming we will always find a way. One example of indefinite optimism is the US economy since the 1980s, where all our growth comes from the financial sector, speculation, Wall Street, etc. Contrast that with the 1940s through the 1960s, when our definite optimism saw problems and came up with answers — invent a nuke, put a man on the moon, etc. It’s amazing to think how small the budget and timeline were for putting a man on the moon compared to the indefinite expenditures and loose goals of today — decade-long oil wars for some nebulous positive outcome that may eventually happen, $19B acquisitions of messaging apps for sending each other cat pictures with no real value.
This all makes me think that the new way to think about retirement is to just find a problem and fix it. Make it your life’s goal. It’s the only thing I can actually grasp at. I just don’t know yet what that problem is, and it still seems like a gamble…
There’s a trending topic going around on LinkedIn, where people talk about what lessons they would impart to their 22-year-old selves. “10 things I wish I had known,” “10 lessons I’ve learned,” “10 pieces of wisdom,” etc. Well, mine goes to 11…
It’s very easy to get sucked into pursuing somebody else’s dream. They have passion and vision and can be very convincing. Silicon Valley is built on 22-year-olds breaking their backs and pulling all-nighters to make someone else’s dream a reality. At 22, I used to think, “I have plenty of time to sacrifice everything at startups and start over from scratch when I’m 30.” It doesn’t work that way. When the founder’s dream turns into a nightmare, you’re now living their nightmare with them. Statistically speaking, this is most likely how it will turn out. Pursue your own dreams.
I wrote a letter to my grandfather when I was 16. In it, I described how I wanted to write my own story, make an impact, do something daring, and not just be a 9-5 employee the rest of my life. It seems by 22, I had forgotten that…or at least tricked myself into thinking that working for a nice salary at a startup somehow fulfilled that ambition. If it’s comfortable and the only risk is your company going out of business, you’re not really daring to do anything extraordinary with your career. Having started my own company now, I can tell you with certainty that I had all the tools I needed to do it when I was 22…and no wife and kids to make me think twice about the risks.
If you have nothing to risk, you probably have nothing to gain. Wake up young 22-year-old Scott! Just because you are working at a startup that may make it big, or may crash and burn, does not mean you are risking anything as long as you’re getting a comfy salary and good benefits. Sure, you’ll learn a lot and gain great experiences, but you will do that wherever you go. The only real way to elevate to the next level is because you have to. Stay hungry. And more importantly, realize that failure is not the end of the story.
Young women often grow up with body image and self-esteem issues because they compare themselves to the models on the cover of Cosmopolitan magazine. As we all know, those models are all airbrushed and photoshopped. In real life even those models cannot compare to those pictures, because those pictures are not real. The same thing applies to entrepreneurs trying to build the next big product or business. When I was 22, everyone was talking about being “the next Microsoft”. Today it’s Facebook, Instagram, and WhatsApp. These are outliers, and their successes are touted in the media until we come to believe they were overnight successes. Their stories suffer from survivor bias, and they are all “airbrushed” for public consumption. Don’t get star-struck by Gates, Jobs, and Zuckerberg. Don’t longingly dream of working at the perfect workplace with awesome cultures like GitHub, 37signals, Heroku, and Dropbox. Everyone’s public persona is airbrushed. Blindly comparing your behind-the-scenes to their highlight reels is a surefire way to breed self-doubt and anxiety.
At 22, I hadn’t seen Fight Club yet. And when I did, I missed much of the point. Watch it again 20 years later and you’ll see what I mean. One of the many great quotes from this movie goes like this:
“You’re not your job. You’re not how much money you have in the bank. You’re not the car you drive. You’re not the contents of your wallet.” –Tyler Durden
Is Charles Barkley measured by how many championships he won? No. He’s measured by his performance, not by his teams’ outcomes. If he were measured by his championships, he’d be a big fat zero. Instead, he’s a hall-of-fame legend who’s considered to be one of the best to ever play the game. And he was able to parlay that into many other successes in life.
You’re not your job. You’re not your startup acquisition. You’re not your IPO, your products, or your company. You’re the sum of your experiences, the connections you make, the ethos you uphold, and the way you treat other people.
It’s been said time and time again, “It’s all about who you know.” There’s some truth to that. The saying carries a negative stigma. Perhaps instead it should be, “It’s all about who knows you.” Make your boss look good, network with your peers, mentor your subordinates. Don’t be the superhero, making sure you’re always the one who comes to save the day. Let other people be heroes sometimes. Help them be the hero. Get out there and network, see how everyone else does it – and respect and learn from it. Don’t stay in Plato’s Cave thinking that you’re so smart you can figure out everything for yourself. There are smart people out there that you can learn from. This will pay off tenfold down the road.
If the first conversation with a prospective boss involves bragging about how much of a hard-ass he was when he fired the last wave of employees because they were idiots, that’s a big red flag. He’s either constantly hiring idiots, or he’s the idiot. I’ve known people who have never held a job for more than 6 months, and their reason for leaving is always something about how the company was mismanaged, the boss was a jerk, the team wasn’t smart enough, etc.
Likewise, don’t be that guy. It turns out Mom was right when she said, “If you don’t have anything nice to say, don’t say anything at all.” Ignore this advice at your own peril.
When buying a home, they tell you the three most important factors are location, location, and location. For building your career, think “lifestyle, lifestyle, lifestyle.” It’s not to say that mastery, autonomy, purpose, money, titles, upside, coworkers, etc. aren’t important…in fact, those are all factors in your lifestyle. But consider things like commuting and vacation time. How much better is your life when you can have flex time to eat breakfast with your kids, vacation whenever you want, work from anywhere in the world, and not have to waste an average of 33 hours per month sitting in traffic? Studies show that people give up 20-30% in salary for these kinds of benefits. I’ve made those kinds of decisions, and I believe those studies are bang on.
The older you get, the harder habits are to build, and the harder they are to break. Decide, while you’re young, what habits you want to develop (and which you don’t!) and deliberately design your trigger-routine-reward habit loops to set yourself up for success. In my youth habits formed unintentionally. Then, they were much harder to break when I got older. It seems the negative side-effects of bad habits are masked by youth…but they will catch up with you. Address them while it’s easy.
My generation of software developers pioneered an odd machismo culture of pulling all-nighters, drinking energy drinks, and coding 18 hours a day while cranking tunes and shooting each other with nerf guns. We sat in cushy high-back chairs with our feet up, and ate potato chips and fast food all day long from our limitless supplies of free sodas and snacks. The generation before me went to work at Initech 9-to-5, and lived normal work-lives like everyone else. The generation after me has their stand-up treadmill desks and exercise ball chairs. Their ergonomic keyboards, and company-sponsored gym memberships. Be more like the next generation. Bring a healthy lunch. Drink your coffee black. Drink more water. Sit up with better posture. Skip the sugary drinks and snacks. Get at least 8 hours of sleep.
Embrace The Eight-Hour Burn philosophy from the Extreme Programming movement. Work so hard and focused in 8 hours that you could not possibly output any more. Then enjoy the other two thirds of your day. You’ll be WAY more productive than the 18-hour-a-day folks, whose big unspoken secret is that they still only get about 4 hours of real work done in a day. And if you work for a boss who’s offended by you only working 40 hours a week, no matter how productive you are, then quit. He doesn’t get it. Follow this advice and you’ll be more productive, more creative, and have a longer career.
When you have your first kid and start thinking about how all those get-rich-quick schemes never panned out, and those one-in-a-million startup dreams never hit it big, it’ll largely be too late to start saving money for college and retirement. The beauty of compound interest is that the variable with the biggest impact is time – and you can control it. Start early. This is one of those habits you want to form early.
“Compound interest is the eighth wonder of the world. He who understands it, earns it … he who doesn’t … pays it.” –Albert Einstein
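To make that concrete, here’s a quick illustrative calculation (the numbers are my assumptions, not advice): the variable that matters most really is time.

```ruby
# Hedged, back-of-the-envelope arithmetic only (made-up numbers, not
# financial advice): future value of fixed monthly savings, compounded
# monthly at a constant annual return.
def future_value(monthly, annual_rate, years)
  r = annual_rate / 12.0
  n = years * 12
  monthly * (((1 + r)**n - 1) / r) # ordinary-annuity future value
end

start_at_22 = future_value(500, 0.07, 65 - 22) # 43 years of contributions
start_at_32 = future_value(500, 0.07, 65 - 32) # 33 years of contributions

# Starting ten years earlier roughly doubles the balance at 65, even
# though the extra contributions themselves total only $60,000 more.
```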
The odds of retiring off of your startup win are like the odds of winning the lottery. You need a better game plan. My dad’s generation retired with millions of dollars in retirement funds because they saved from day one with pension plans. They played it safe, worked hard, earned money, saved regularly, and lived within their means. Emulating that doesn’t mean not playing for the big win. It just means not being ignorant about it. Having a plan to build wealth will free your mind in ways that just might enable you to go get those big wins without worrying about having no Plan B.
(Photo credits: morgueFile, Despair, Inc.)
Stay committed. The first six months are about grit and hustle. Keep your eye on the ball, and know when to walk away from a bad deal. Remember how valuable you really are.
Doing this has taught us a few key lessons that we’d like to share with our 6-month-younger selves…
Know what your customer needs. At first we struggled to succinctly describe what we do as a technology consulting firm. We build apps. We write code. We design things. We know how to manage projects. We handcraft delightful user experiences that…blah blah blah. Those things are not what our customers need. Our customers need their problems solved. You have technology problems. We solve them for you so that you can focus on your business.
Don’t sacrifice quality. One of my favorite quotes as a consultant:
“If you think it’s expensive to hire a professional to do the job, wait until you hire an amateur.” –Red Adair
We’ve seen the reality of this time and time again. In fact, we’ve made a good living cleaning up after amateurs. The cost of hiring an amateur is not immediately realized. It comes back to haunt you again and again over the lifetime of your project. We don’t want our names on that. When potential customers ask us to drop our rates to match some half-priced bargain firm, we politely decline. We know they’ll be back when the excrement hits the proverbial whirling blades.
Never lock in a client. I never sign up for an online service that won’t let me quit on a moment’s notice and export all my data. Why shouldn’t we give our customers the same benefit? In software consulting this means providing full transparency into everything we do. Written records of deliverables, wireframes, ERDs, blueprints, API docs, wiki instructions, etc. We strive to make it 100% easy to replace us, and we let our customers know this. You’re never locked into us. Our clients stick with us because of the value we provide, not because we enact dubious practices to try to force them to be dependent upon us.
Work for payers. Be a payer. “Payers value their time more than their money.” Amy Hoy nails it with this simple piece. A client that will pay you for your expertise, to get the job done, while they are off solving their own bigger and better problems, is like GOLD. They value their time and will pay you to solve problems and save them time. A client that doesn’t value their time will spend six hours researching the solutions to a problem you’re hired to be the expert in. They’ll impede your ability to provide them value, and they’ll nickel-and-dime you to death. Run away.
By the same token, when running a small business, you should be a payer. We’re way more productive and profitable when we focus on what we do best, and pay someone else to save us time and money on the rest. That’s why after learning the ropes of business accounting, we hired an awesome accountant.
Invest in tools, platforms, and processes. I always take notice of a professional carpenter’s tools and processes, and how those serve as multipliers for their skill. A pro has the right tools and he knows which tools to apply to which problems. Building software is no different. Small teams often try to get by only using free tools, cobbling together pieces of custom code, and trying to build everything in-house to save cash, when buying a solution would move them so much farther ahead. At Facet, we’re not afraid to pay for platforms like AWS and Heroku, buy tools like Adobe Creative Cloud, and outsource commodity technology to SaaS platforms like SendGrid. Likewise, we’re in a great position to identify common problems across multiple clients, and solve them with one solution that we can sell over and over again.
Give back. So much of what our industry builds today is based on the free and open-source software world. When we started Facet Digital, we gave ourselves a goal of contributing back an average of one open-source project per month, no matter how big or small, and we’re right on track. Not only is it rewarding to be a part of that community, it generates real leads and sparks valuable networking.
Trust your co-founders. We’ve all seen the stereotypical co-founder feuds. One founder manipulating the other. Bad-mouthing behind each others’ backs. Countering each others’ every move. Why even go into business together if that’s how you’re going to act? One of Facet Digital’s core principles is to let each other fail and learn. We try things outside of our comfort zones. We make the bold moves, and we trust each other’s motivation and intellect. If we’re always worrying about each other, we’re not putting our energy in the right place.
Audentes fortuna iuvat. Fortune favors the bold. Starting a company is great exposure therapy for fear of failure. The easy, comfortable path is not fulfilling. The quickest way to stunt the growth of a fledgling company is to play it safe. Babies learn to walk by trying to stand up and falling over thousands of times. Their egos don’t get hurt by failure. They just get up and keep trying. At Facet, we have a ton of bold moves on deck. Baby steps…
A couple weeks ago, our first Facet Digital client project to go public in 2014 was launched in the Apple App Store: Supra Boats Swell Surf System.
This is a multimedia catalog app designed for reps and resellers on the boat show floor, allowing them to show off their Swell System for wake surfing. It features an interactive look at some of the unique features of this system, such as the ability to change the shape of the wave and even which side of the boat the wave is on! I’m no professional wake surfer, but just watching some of the videos embedded in this app makes me anxious for summer to arrive…and to participate in some of these fun photo shoots with the great folks at Supra Boats for next year’s app.
Jeremy, Leif, and I had a blast building this app. Part of that was because the content was so cool. A bigger part was because we decided, for the first time, to build an iOS app using 100% RubyMotion, instead of going the traditional Xcode and Objective-C route.
So, I thought I’d share a little bit about why we enjoyed using RubyMotion…
Most of the code we’ve written in the last 5 - 10 years has been in higher-level, programmer-pleasing, dynamic languages like Ruby, Python, and JavaScript. While I do have an extensive background in C and C++, sometimes I cringe at the thought of going back there. That’s what Objective-C represents to me. Sure, it has message passing like Smalltalk, and the last few years of improvements like ARC and some of the excellent static analysis tools are great, but for my programmer productivity buck, Objective-C in all its verbosity doesn’t even compare to the expressiveness of Ruby.
Plus, I hate Xcode. I’m an Emacs, Vim, and zsh guy. Keep my hands on the keyboard.
Here are a few of the highlights we experienced:
If you ask me, the Objective-C syntax is ugly and cumbersome. Maybe that’s just the view through my Ruby-colored glasses, but put even the simplest Objective-C snippet next to its RubyMotion equivalent and the difference jumps out.
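The original side-by-side snippets didn’t survive this archive. Here’s an illustrative reconstruction in the same spirit (hedged: the `UIAlertView` example is my assumption, and the class is stubbed below so the Ruby half runs outside RubyMotion):

```ruby
# Illustrative only: the original snippet is lost. UIAlertView here is a
# tiny stand-in so this sketch runs in plain Ruby; under RubyMotion the
# same calls hit the real UIKit class.
class UIAlertView
  attr_accessor :title, :message

  def show
    "#{title}: #{message}"
  end
end

# The Objective-C flavor, for comparison:
#   UIAlertView *alert = [[UIAlertView alloc] init];
#   alert.title   = @"Hello";
#   alert.message = @"RubyMotion is fun";
#   [alert show];

# The RubyMotion flavor:
alert = UIAlertView.new
alert.title   = "Hello"
alert.message = "RubyMotion is fun"
alert.show
```

In real RubyMotion, `new` maps to `alloc.init`, so the Ruby half reads like ordinary Ruby while driving real UIKit objects.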
Or how about native types, list comprehension, and block syntax? Arrays, hashes, and blocks all read far more naturally in Ruby than NSArray, NSDictionary, and Objective-C blocks do.
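The original comparison snippets are gone from this archive; this plain-Ruby sketch (an assumed reconstruction) shows the flavor, with Objective-C equivalents in comments:

```ruby
# Assumed reconstruction; plain Ruby, runnable as-is. In RubyMotion these
# literals bridge to NSArray/NSDictionary automatically.

# Objective-C:
#   NSArray *sizes = @[@"S", @"M", @"L"];
#   NSMutableArray *lower = [NSMutableArray array];
#   for (NSString *s in sizes) { [lower addObject:[s lowercaseString]]; }
sizes = ["S", "M", "L"]
lower = sizes.map { |s| s.downcase } # => ["s", "m", "l"]

# Objective-C:
#   NSDictionary *boat = @{@"model": @"Supra", @"waves": @2};
boat = { model: "Supra", waves: 2 }

# Select with a block: one line instead of an enumeration loop.
big = [10, 42, 7].select { |n| n > 9 } # => [10, 42]
```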
Even though RubyMotion is compiled to LLVM just like the Objective-C code is, I am sure one could make a performance argument in favor of Objective-C in examples like this. However, I prefer to optimize for developer productivity, time to market, and ease of correctness. The beauty of RubyMotion is that you can call anything natively written in Objective-C or C anyway, so you always have the option to factor out performance sensitive pieces if you feel it is necessary.
If you’re used to TDD in a language like Ruby, using tools like RSpec, you’ll find the iOS/Objective-C ecosystem lacking in this department. RubyMotion integrates Bacon, a mini RSpec of sorts, that provides a great DSL for unit and integration testing.
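The original spec didn’t survive the archive, but a Bacon-style spec reads like this sketch (hedged: a minimal describe/it/should shim is defined inline so it runs in plain Ruby, and the spec contents are invented; in a RubyMotion project the real Bacon gem supplies the DSL):

```ruby
# Minimal shim standing in for Bacon so this sketch runs in plain Ruby.
$passed = []

def describe(_name)
  yield
end

def it(name)
  yield
  $passed << name
end

class Should
  def initialize(value)
    @value = value
  end

  def ==(other)
    raise "expected #{other.inspect}, got #{@value.inspect}" unless @value == other
    true
  end
end

class Object
  def should
    Should.new(self)
  end
end

# An invented spec in the Bacon style (not the app's actual tests):
describe "Wave settings" do
  it "defaults the wave to the port side" do
    wave_side = :port
    wave_side.should == :port
  end

  it "can flip the wave to starboard" do
    wave_side = :starboard
    wave_side.should == :starboard
  end
end
```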
Running these tests is as easy as running the `rake spec` command. This builds all your app and library code and your test code, downloads a test executable to the simulator, runs it, and prints typical RSpec-style output right to the terminal. The build/test cycle is almost identical to what you’d do for a Ruby on Rails app. You can even run the test suite on the device with `rake spec:device`, and you can filter to specific test cases when you want to focus on one part of the app.

The workflow is all based around `rake`. Everything is done via the command line. No Xcode required. For an old-school guy like me, this is the way it should be. My workflow goes like this:

- `rake spec` to build the test suite, download it to the simulator, and run it, seeing the output in my terminal.
- `rake` to build the app, download it to the simulator, and run it (with a REPL attached to the running app).
- `rake device` to build the app for my target device, download it there, and run it, so I can play with it on a real iPhone or iPad.
- `rake testflight notes="Added new features!"` to build an AdHoc distribution, upload it to TestFlight, and push it out to all of my beta-testers.
- `rake archive` to build the whole submission to the app store. Then I feed the `.ipa` it outputs into the Application Loader, and I’m off to the App Store!

Note the part about the REPL! This is one of my favorite parts. I can’t stand working in a language without a good REPL. It’s one of the reasons I love working in Ruby and Python. Not only does RubyMotion provide a nice REPL, it is run in the context of the running app. That means you can call methods on live objects, print out their values, etc. This can come in super handy when you just want to tweak the positioning of something until you like it, and then read out its final values.
You can also use `puts` to print to `stdout` right there in the REPL in your terminal, like you’re used to with your usual Ruby apps.
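To give a feel for it, here’s that tweak-and-read-back loop sketched in plain Ruby with a stand-in view object (hedged: in the real RubyMotion REPL you’d be manipulating the live UIView instances in the simulator):

```ruby
# FakeView stands in for a live UIView so this sketch runs anywhere; in
# the RubyMotion REPL you would poke at the real on-screen object.
class FakeView
  attr_accessor :alpha, :origin

  def initialize
    @alpha  = 1.0
    @origin = [0, 0]
  end
end

label = FakeView.new

# Nudge values until the layout looks right, then read off the finals
# to paste back into your view code:
label.origin = [20, 44]
label.alpha  = 0.85

puts "origin=#{label.origin.inspect} alpha=#{label.alpha}"
```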
It’s very cool that out of the box, RubyMotion comes with TestFlight integration. That’s hands-down the best way to get your pre-release app builds in the hands of beta-testers and stakeholders.
If you’re not into Emacs (why wouldn’t you be?), RubyMine by JetBrains is a great IDE for RubyMotion (as well as Ruby on Rails). It integrates with auto-completion of the iOS SDK API, and the rake-based test runner too. And it has support for Emacs key-bindings. ;)
You can’t talk about RubyMotion without mentioning BubbleWrap and SugarCube. Both of these libraries add tremendous value and ease of use to RubyMotion. They provide two different perspectives on wrapping CocoaTouch/iOS APIs with more idiomatic Ruby interfaces – with some amount of overlap. These go a long way to making your code even less verbose – particularly when dealing with the direct one-to-one nature of calling Objective-C APIs from RubyMotion.
Consider SugarCube, for example, and what its sugar does to an everyday UIKit call.
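The exact snippet from the original post is lost; this plain-Ruby sketch mimics the SugarCube pattern of patching conveniences onto core types (the method bodies are invented stand-ins, not SugarCube’s real implementation):

```ruby
# Sketch of the SugarCube idea: monkey-patch small conveniences onto core
# types so common UIKit chores become one-liners. These method bodies are
# invented stand-ins, runnable in plain Ruby; SugarCube's real API differs.
class Symbol
  def uicolor
    # In SugarCube, :red.uicolor returns a UIColor; here, a plain RGB triple.
    { red: [1.0, 0.0, 0.0], white: [1.0, 1.0, 1.0] }.fetch(self)
  end
end

class Numeric
  def seconds
    self # sugar like this feeds animation delays and timers
  end
end

:red.uicolor # reads like English, instead of UIColor.redColor
2.seconds    # ditto, at the call site of a delay
```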
You don’t even want to know what that code looks like in raw RubyMotion, let alone Objective-C. SugarCube adds some useful tools to the REPL as well, like the `tree` command, which outputs information about your view hierarchy and lets you gain access to any of those views easily.
BubbleWrap provides a JSON interface that is identical to the native Ruby JSON API, making the web service consumers from your Rails apps easy to port over. It also gives you some great one-line high-level wrappers for common operations like playing a movie or taking a picture with the camera.
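Because the interface matches Ruby’s stdlib JSON, the same parsing code ports straight over. This runnable sketch uses the stdlib `JSON` module with an invented payload; under RubyMotion you’d swap in `BW::JSON` and keep the rest:

```ruby
require "json"

# Parsing a (made-up) payload exactly as a Rails-side consumer would;
# BubbleWrap's BW::JSON mirrors this parse/generate interface.
payload = '{"model":"Supra","videos":[{"title":"Swell Surf"},{"title":"Wake 101"}]}'
data = JSON.parse(payload)

titles = data["videos"].map { |v| v["title"] }
# titles == ["Swell Surf", "Wake 101"]
```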
Before you re-invent anything in RubyMotion…look in BubbleWrap and SugarCube first. They’re open source, of course, so even if you need to customize their behavior, you can learn a lot by looking at their source code.
Unlike other non-Objective-C iOS app frameworks like Sencha Touch or PhoneGap, RubyMotion compiles to the same LLVM code that Objective-C does. It uses the same compilers. The secret is that RubyMotion is not Ruby. It is a subset of Ruby, based on MacRuby. There are a few features of Ruby that do not exist, but most of them do. This is because Objective-C and Ruby are very similar, and both attribute their message-passing and object-oriented nature to a Smalltalk heritage.
You can program slick animations with RubyMotion that perform as well as those made in Objective-C, without dealing with the sub-par performance of a JavaScript engine running a bloated jQuery library inside a `UIWebView`, like you would with PhoneGap.
Memory management: you don’t have to. The RubyMotion team has basically built their own ARC-style memory management baked into the language constructs. You use Ruby like you always do, and the RubyMotion runtime takes care of auto-releasing unreferenced objects, similar to Apple’s ARC. You rarely need to worry about references, but in the rare case that you do (e.g., for cyclical references), the `WeakRef` class gives you what you need. The RubyMotion docs are pretty good too, and they generally point out very explicitly when you are in danger of needing a `WeakRef`.
Developing an iOS app with RubyMotion was fast and fun. I believe programming should be fun as much as possible. Anything tedious should be automated or abstracted – and that’s exactly what RubyMotion has done for iOS programming. We’ll definitely be using RubyMotion at Facet Digital for a few more iOS apps coming out in the next few months…
Resource cloning in Chef is a bit of a minefield. They have a ticket known as CHEF-3694 saying that the feature should be removed, and indicating that it will be by the time Chef 12.0.0 comes out. However, a lot of their Opscode-developed community cookbooks use (abuse?) resource cloning. The result is that you get tons of resource-cloning warnings, each one pointing you back at CHEF-3694 and at the previous and current definitions of the cloned resource.
Where I come from, it’s considered an error to have a warning in your output. Ignorable warnings bury important ones. So…for better or worse, I embarked upon a journey to see what I could do to use resources correctly and avoid these warnings…
The discussion about this issue is an interesting read. You should be able to, for example, declare a service resource in one spot of your recipe, and later start it. You should also be able to have multiple recipes install the same package resource and have it be idempotent, without having to worry about coordinating between cookbooks. That’s Chef’s job. To support this, Chef uses a technique they call resource cloning, which spews out warning messages because they plan to get rid of it. Proponents of the warning messages argue that if your cookbook relies on resource cloning, then you are doing something incorrectly and you have bigger problems. However, there are popular community cookbooks that won’t work without it.
Stock `perl` and `iptables` cookbooks, for example, cause this problem in combination.
I really wouldn’t want `perl` and `iptables` to have to coordinate between each other in order to avoid this warning. Perhaps there is a way to re-order them? Not that I could figure out…at least not without either making dangerous assumptions or editing stock community cookbook code.
Even so…there are cookbooks that have this problem by themselves, without the help of other cookbooks. For example, one of the most popular cookbooks, `apache2`, triggers it all on its own.
Since it was well-argued that cookbooks shouldn’t rely on resource cloning, and that it would be removed in a future version of Chef, I decided to replace it myself with resource duplication. Resource cloning and its associated warning messages are handled in a method called `Chef::Resource::load_prior_resources`, so I monkey-patched out that method to allow the duplicate resource without copying over any of the existing resource’s attributes.
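The original patch is gone from the archive; here’s a minimal sketch of the idea (hedged: the exact method name and arity vary across Chef 11.x releases, and `Chef::Resource` is stubbed here so the sketch stands alone; in a real cookbook library you’d be reopening Chef’s own class):

```ruby
# Stub Chef::Resource so this sketch runs standalone; in a real cookbook
# library you would be reopening the class Chef already defines.
class Chef
  class Resource
    def load_prior_resources
      # Original behavior: copy attributes from a prior resource with the
      # same name and emit the CHEF-3694 cloning warning. Patched behavior:
      # do nothing, keeping the new resource exactly as declared.
      true
    end
  end
end
```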
NOPE! That doesn’t work. While this works for some of my cookbooks and certain resources, the community `apache2` recipes clearly rely on the soon-to-be-deprecated resource cloning behavior. These recipes define the `service[apache2]` resource a number of times to do things like enable/start/restart after config changes. Without resource cloning, the `apache2::logrotate` recipe, for example, fails to process a restart of the apache2 service, because it didn’t inherit the necessary attributes that needed to be cloned from the original service definition, and the Chef run dies with a long stack trace.
I think resource reuse is probably the intention in 99% of the use cases. Some commenters on this discussion have suggested making all their recipes look up the resource in the resources collection first, and using the existing one if possible, otherwise handling the not-found exception and creating the new resource. Not a bad suggestion…but there’s no way I’m going to modify every community cookbook to do that.
As an experiment, I tried simply overriding the `service` DSL method (which is actually implemented in `method_missing`) to test this theory with some monkey-patching: look up an existing `service` resource in the collection and `instance_eval` the new block against it, falling back to normal resource creation when it isn’t found.
That’s close, but it doesn’t quite work. The most noticeable failure is that only the last `action` will be run when the same resource is declared more than once.
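The original example is lost; here’s a reconstruction (assumed) of the sort of recipe in question, with a tiny `service` shim included so the sketch runs outside Chef:

```ruby
# Reconstruction (assumed) of the kind of recipe in question: the same
# service declared twice with different actions. The `service` shim below
# only records declarations; in a recipe, Chef's DSL does the real work.
$declared = []
def service(name, &block)
  $declared << name
  # In Chef, the block would configure a Chef::Resource::Service instance.
end

service "apache2" do
  # first declaration: enable on boot
  action :enable
end

service "apache2" do
  # later declaration (e.g., after config templates are written): start it
  action :start
end
```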
Normally, declaring the same service twice creates two `service[apache2]` resources, each copying its configuration from the previous definition and overriding the action(s). When executed, you’d end up with both actions being executed (but with a bunch of warnings that you’re using the dreaded resource cloning).

With the reuse technique above, the problem is that a later `action :start` overwrites an earlier `action :enable`. In the end, you have your service started…but `chkconfig` shows that it was never enabled. This can obviously be much worse in more complex scenarios.
My workaround for this takes advantage of internal knowledge of how the action
DSL method works…and it only applies to that one method. We’re in dark magic territory, so this could potentially break somebody’s cookbooks.
Building on the resource reuse attempt above, I made it so that instead of letting the action
of a resource stomp over the pre-existing resource’s action, it would merge the actions together. In the over-simplified version, this looks like replacing the single instance_eval
line from above with code like this:
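Here is a self-contained toy illustrating the merge idea (StubResource and merge_actions are hypothetical stand-ins; the real code manipulates Chef::Resource objects in place of the single instance_eval call):

```ruby
# Minimal stand-in for a resource with a settable/readable action.
class StubResource
  def initialize(action = :nothing)
    @action = action
  end

  def action(arg = nil)
    arg.nil? ? @action : (@action = arg)
  end
end

# Remember the existing action(s), evaluate the new block, then merge the
# two sets rather than letting the new action stomp the old one.
def merge_actions(existing, &block)
  previous = Array(existing.action)
  existing.instance_eval(&block) if block
  merged = (previous + Array(existing.action)).uniq - [:nothing]
  existing.action(merged)
end

svc = StubResource.new(:enable)
merge_actions(svc) { action :start }
svc.action # => [:enable, :start]
```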
There are a few details I glossed over, such as managing the :nothing
action, different default actions for different types of resources, actions that are Arrays vs Symbols, etc. My final solution was to extend Chef::DSL::Recipe
with a reusable_resource
method that could be used by specific resource DSL overrides as much or as little as you want. Here’s what that looks like:
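A hedged sketch of such a reusable_resource helper (the edge-case handling for :nothing, default actions, and Arrays vs Symbols mentioned above is only roughly approximated here):

```ruby
class Chef
  module DSL
    module Recipe
      # Reuse an existing resource of the given type/name if one exists,
      # merging actions rather than letting the new block's action stomp
      # the old one; otherwise declare the resource normally.
      def reusable_resource(type, name, &block)
        existing = begin
          run_context.resource_collection.find(type => name)
        rescue Chef::Exceptions::ResourceNotFound
          nil
        end
        return declare_resource(type, name, caller[0], &block) unless existing

        previous_actions = Array(existing.action)
        existing.instance_eval(&block) if block
        merged = (previous_actions + Array(existing.action)).uniq - [:nothing]
        merged = [:nothing] if merged.empty?
        existing.action(merged)
        existing
      end
    end
  end
end
```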
With that, if you only wanted to override the default behavior for package
and service
resources, you could monkey-patch those in like this:
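A sketch of what those overrides might look like, assuming the reusable_resource helper described above is already mixed into Chef::DSL::Recipe:

```ruby
class Chef
  module DSL
    module Recipe
      # Route package and service declarations through reusable_resource
      # so duplicate definitions merge instead of cloning.
      def package(name, &block)
        reusable_resource(:package, name, &block)
      end

      def service(name, &block)
        reusable_resource(:service, name, &block)
      end
    end
  end
end
```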
Now all those warnings are gone. My complete initial install works great without complaint. So do my subsequent re-runs.
I’ve packaged this all up as a cookbook that has nothing but a library applying these monkey-patches. You can grab it from GitHub and put it at the front of your run_list with recipe[chef_resource_merging]
.
This technique will probably fail in scenarios where you want to have multiple resources with the same name that have differing attributes other than action
. For example, two different bash
resources in two different places, with two different command
scripts, with the same name. Either resource cloning or resource duplication would work…but resource merging the way I’ve implemented it is going to crash and burn. Of course you can simply name these resources differently, but given that resources share a global namespace, there’s always a risk unless you make sure to prefix your resource names with something uniquely yours.
This is why I factored this technique into a reusable_resource
DSL method. You can use it directly in custom cookbooks if you want. You can override specific types of resources as I have shown in the example, only touching package
and service
. Or, you can override those with additional logic to only do so in narrower cases (e.g., only if there is no block given). That’s up to you. Your Mileage May Vary.
I welcome any and all discussion on this. Especially from someone who knows the internals of Chef much more deeply than I do, who can tell me if I’m getting myself into too much trouble here.
I’m hoping that some day there is a proper mechanism for resource reuse, when that is what is intended, or perhaps some way to detect if two resources’ internals are the same and make a smart decision about whether to reuse or duplicate. Maybe a real resource-merging solution could happen, where the entire blocks are chained and executed? Or perhaps we’ll see some resource namespace solution (though that would not have solved any of the issues I’ve had).
One of the nice features of Couchbase is its “CAS” operation. This provides the ability to do an atomic check-and-set operation. You can set the value of a key, providing the last known version identifier (called a “CAS value”). The write will succeed if the document has not been modified since you read it, or it will fail if it has been modified and now has a different CAS value.
Using this operation, we can easily build a higher-level operation to provide optimistic locking on our documents, using a CAS retry loop. The idea is simple: get the latest version of the document, apply your update(s), and write it back to Couchbase. If there are no conflicts, then all is well, and you can move on. If there is a conflict, you re-get the latest version of the document, fully reapply your modifications, and try again to write the document back to Couchbase. Repeat until the write succeeds.
With this, the figure above would look like this:
There are a few things that are important to note about this technique.
I have created a GitHub repository that implements this technique by extending the Couchbase Ruby client’s Couchbase::Bucket
class on which you normally call get
and set
methods. You can, of course, put this elsewhere so that you don’t need to monkey-patch someone else’s library. Here is a look at the code:
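A hedged sketch of such an update_with_retry extension, using the 1.x-era couchbase-ruby-client conventions (the :extended get and :cas set options; treat this as an approximation, not the repository’s exact code):

```ruby
module Couchbase
  class Bucket
    # Get the document, yield it to the caller's modification block, and
    # try to write it back with the CAS value we read. On a CAS mismatch
    # (someone else wrote first), re-get and re-apply until it sticks.
    def update_with_retry(key, max_retries = 10)
      retries = 0
      begin
        value, _flags, cas = get(key, :extended => true)
        set(key, yield(value), :cas => cas)
      rescue Couchbase::Error::KeyExists
        retries += 1
        raise if retries > max_retries
        retry
      end
    end
  end
end
```

Note that the block is re-invoked from scratch on every retry, which is exactly why your modifications must be fully reapplied each time.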
With this monkey-patch loaded, you can now do the following:
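A toy demonstration of the calling convention (key and field names are hypothetical), using an in-memory stand-in for the bucket so the retry loop can be exercised without a Couchbase server:

```ruby
class FakeBucket
  def initialize
    @store = {}
    @cas = {}
  end

  def set(key, value, opts = {})
    if opts[:cas] && opts[:cas] != @cas[key]
      raise "CAS mismatch" # the real client raises Couchbase::Error::KeyExists
    end
    @store[key] = value
    @cas[key] = (@cas[key] || 0) + 1
  end

  def get(key, opts = {})
    opts[:extended] ? [@store[key], 0, @cas[key]] : @store[key]
  end

  def update_with_retry(key, max_retries = 10)
    retries = 0
    begin
      value, _flags, cas = get(key, :extended => true)
      set(key, yield(value), :cas => cas)
    rescue
      (retries += 1) <= max_retries ? retry : raise
    end
  end
end

bucket = FakeBucket.new
bucket.set("counter", 0)
bucket.update_with_retry("counter") { |doc| doc + 1 }
bucket.get("counter") # => 1
```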
It is important to note that, if your changes are not commutative (unlike our simple increment example), the code in your modification block will probably want to be smart enough to do some kind of merge logic for conflict resolution. It must recognize that the state of the document before calling update_with_retry
may not actually be the same state that the successful block operates on.
Test code for this method can be seen in the GitHub repository.
Also note: My colleague Jeremy Groh has a similar post with sample code for doing optimistic locking on Couchbase using C#.
UPDATED Nov. 15, 2013: As Sergey Avseyev pointed out, there is a very similar method Couchbase::Bucket#cas
that already exists in the couchbase-ruby-client. The only thing it doesn’t do that I described above is the retry upon collision. At his suggestion, I’ve extended that method to take a retry
option. This is probably a better solution anyway, since it handles both synchronous and asynchronous modes. Look for it in an upcoming release of the couchbase-ruby-client gem.
This creates all kinds of schedule problems, expectation/reality mismatch problems with management, marketing, sales, etc.
There are tons of details that go into the definition of “done”, and they vary depending on the project and the organization’s software development lifecycle. Rather than try to rigorously enumerate these – which would be the topic of an entire book – I tend to lean on 5 keystone requirements that we can assess before we call the code “done”. There are many tactical rules for developing good software, but at a strategic level, following these keystone requirements generally leads us to cover most of those details.
So…before you can call it “done”, your code module must be:
This is fairly obvious if you follow TDD/BDD methodologies. But this is also important when it comes to things like daemon services that produce/consume messages via queues. We need to keep an eye on making those services fully self-contained such that we can mock the queue system, as well as the collaborator queue producers/consumers, and still fully integration test the service under test. Enforcing this often eliminates other classes of problems such as code coupling. If your code is not testable (and covered with tests!), it is not done.
Maybe small features don’t need this day one, but we often find ourselves with things like constants defined very early on. If the overall system has a configuration framework of some sort, we should be using that from day one, and we should be especially aware of config params that vary between test/dev/staging/prod environments, and have these extracted to a config file. This makes it so that your automated deployment tools can manage environment configuration as well. If your code module cannot be externally configured for different environments as necessary, it is not done.
Deployability often piggybacks on existing deployment mechanisms for small features, but in a larger sense, I mean “your new image upload processing service isn’t done until it has the worker process wrapper, config file, rake/capistrano tasks as necessary, chef cookbook/recipe/role as necessary, etc. that are necessary to deploy and run it in development, staging, and production”. You don’t get away with throwing code over the fence to the operations team these days. One of the core tenets of the “DevOps” culture is that developers are responsible for the operations aspects of their code. If your code module cannot be deployed by the automated continuous deployment system, it is not done.
For some features this just may be logging. Even just at that, it’s important this integrates day one with your centralized logging infrastructure (rsyslog, Papertrail, Loggly, etc). I believe that every unit should emit timing and workload information to a stats collector like statsd
or Cube. In a service-oriented architecture, for example, at the very least, each queue and service should be recording message queue wait times, throughput, busy/idle percentage, processing times, etc. Throw that over to something like nagios, zabbix, Scout, or the new custom monitoring charts at NewRelic, and you should be able to answer those hard ops questions about where something went wrong in your distributed system much more quickly. If we cannot measure the operating parameters of your code, it is not done.
Maybe this isn’t that much different than #4, or maybe this should really be called “alertable”. For the most part, when we release a piece of code, we want to know if it is working, and when it stops working. At the simplest level, this is aimed at letting us sleep at night - ideally this includes “restartable”. We don’t need to manually check if the async mailer workers are down if something like god or monit is watching our processes, and when they die, alerting us and restarting them. (This applies at the cloud instance level too.) Being monitorable doesn’t just have to apply at the process level though. Going hand-in-hand with measurability, this comes down to defining early the operating bands for our measurements from #4, and when to alert that we’ve gone outside of them. For example, your “sign up form” feature could be measured and monitored for signups-per-hour. If you have a steady enough visitor rate, there is some low threshold (perhaps zero) that you don’t think you should ever reach if your sign up form is functioning properly. Alerting on this proxy variable can serve as a canary in the coal mine, letting you know that an underlying problem is developing. If we cannot tell when your code is in trouble, and ideally be able to kill and restart it, then it is not done.
This “too many connection resets” error has plagued Mechanize users for years, and it’s never been properly fixed. There are a lot of voodoo suggestions and incantations rumored to address it, but none of them seem to really work. You can read all about it on Mechanize Issue #123.
I believe the root cause is how the underlying Net::HTTP handles reusing persistent connections after a POST – and there is some evidence on the aforementioned github issue that supports this theory. Based on that assumption, I crafted a solution that has been working 100% of the time for me in production for a few months now.
This is not really a fix for Mechanize or Net::HTTP::Persistent, and there are sure to be corner cases where you legitimately want this error to bubble up. In practice, though, a simple strategy has worked 100% of the time in high-volume production for scrapers that suffered this problem intermittently: catch the “too many connection resets” error, force the persistent connection to be shut down and recreated, and try the request again.
This is done by creating a wrapper for Mechanize::HTTP::Agent#fetch
, the low level HTTP request method that is used to do GETs, PUTs, POSTs, HEADs, etc. This wrapper catches this annoying little exception, and uses the shutdown
method to effectively create a new HTTP connection, and then tries the fetch
again.
Loading the following monkey-patch somewhere in your application ought to shut up this annoying error for most use cases:
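A hedged sketch of the idea (it assumes Mechanize 2.x internals, where Mechanize::HTTP::Agent#fetch performs all requests and the http attribute holds the Net::HTTP::Persistent connection; it also uses Module#prepend where the original likely used the alias-method dance):

```ruby
module FetchWithConnectionResetRetry
  MAX_RETRIES = 2

  def fetch(*args, &block)
    retries = MAX_RETRIES
    begin
      super
    rescue Net::HTTP::Persistent::Error => e
      raise unless e.message =~ /too many connection resets/
      raise if (retries -= 1) < 0
      # Force the persistent connection to be torn down and recreated,
      # then try the request again.
      http.shutdown
      retry
    end
  end
end

# Only patch when Mechanize is actually loaded.
if defined?(Mechanize::HTTP::Agent)
  Mechanize::HTTP::Agent.send(:prepend, FetchWithConnectionResetRetry)
end
```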
There are a few ways to set up Papertrail, as documented on their quick start page, such as a system-wide installation via Chef, or configuring your app to use the syslog protocol. They also provide a convenient Ruby gem called remote_syslog that can be configured to read a given set of log files and send them to Papertrail.
I’ve found for simple Ruby project structures, it can often be easier to deploy Papertrail by installing this gem with Bundler via your project Gemfile, and then creating a simple set of Rake tasks to manage starting and stopping the service. This way it’s self-contained within your application repository, gets deployed with the same mechanism you deploy your application code, and can be used on your development and staging systems just as easily, without any Chef cookbooks or other configuration hassle.
I typically build most of my production deployment with modular Rake tasks. This way your Capistrano/OpsWorks/Chef/whatever deployment tools can invoke Rake tasks – and you can use these same tasks manually on production and development systems alike.
I have a papertrail.rake
in my rake-tasks repository on GitHub that demonstrates how I use this. The contents are shown below, but the rest of the repository demonstrates the other required ingredients, such as the Papertrail config file. With this file in your tasks
directory, and the remote_syslog
gem in your Gemfile
, you now have access to three simple tasks:
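The task list looks something like this (the stop/restart names are assumptions based on the start/stop description):

```
rake papertrail:start    # Start remote_syslog to ship logs to Papertrail
rake papertrail:stop     # Stop remote_syslog
rake papertrail:restart  # Restart remote_syslog
```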
You can now manually start logging to Papertrail with rake papertrail:start
…or you can hook up the start/stop tasks to your automated deployment tools.
Here are the contents of the main rakefile for this. See the rake-tasks repository for example config file, Gemfile, and directory structure.
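A hedged sketch of what tasks/papertrail.rake might contain (the remote_syslog CLI flags and file paths are assumptions; check your gem version and project layout):

```ruby
# tasks/papertrail.rake
require "rake"
include Rake::DSL # not needed when this file is loaded by rake itself

namespace :papertrail do
  CONFIG_FILE = File.expand_path("../../config/papertrail.yml", __FILE__)
  PID_FILE    = File.expand_path("../../tmp/pids/remote_syslog.pid", __FILE__)

  desc "Start remote_syslog to ship logs to Papertrail"
  task :start do
    sh "bundle exec remote_syslog -c #{CONFIG_FILE} --pid-file #{PID_FILE}"
  end

  desc "Stop remote_syslog"
  task :stop do
    sh "kill $(cat #{PID_FILE})" if File.exist?(PID_FILE)
  end

  desc "Restart remote_syslog"
  task :restart => [:stop, :start]
end
```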
HTTP 200 OK
response for the health check that the ELB needs. This is because your blanket HTTPS enforcement will redirect the ELB’s health check from HTTP to HTTPS – and that redirection is not considered to be a healthy response by the ELB.
The same applies to any server you’re running behind an ELB in this fashion.
This post discusses how to handle the same issue with Nginx.
In this scenario, we have an ELB accepting HTTPS traffic and proxying it over HTTP in the clear to an Nginx server listening on port 80. We want Nginx to force all requests that were not originally made with HTTPS to redirect to the same URL on HTTPS, except requests for the health check, which the ELB will make directly over HTTP. For this example, we are using Nginx as a reverse proxy to upstream server processes on the same instance, such as a unicorn webserver hosting a Sinatra app. (This would work well for Rails, too).
There are two main components that make up this solution:
- A location directive for the health check URL that does not do any HTTPS enforcement.
- A blanket redirect to HTTPS whenever the X-Forwarded-Proto: https header does not exist.

For best practice, we can add HTTP Strict Transport Security with the add_header directive here too. Below is an example of a simplified nginx config file demonstrating these.
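A hedged sketch of the relevant nginx configuration (the upstream name, socket path, and health check path are placeholders for illustration):

```nginx
upstream app_server {
  server unix:/tmp/unicorn.sock fail_timeout=0;
}

server {
  listen 80;
  server_name example.com;

  # HSTS: tell browsers to stick to HTTPS once they have seen it.
  add_header Strict-Transport-Security "max-age=31536000";

  # 1) The ELB health check comes in over plain HTTP; never redirect it.
  location = /health_check {
    proxy_set_header Host $http_host;
    proxy_pass http://app_server;
  }

  # 2) Everything else: if the ELB did not mark the original request as
  #    HTTPS, redirect to the HTTPS version of the same URL.
  location / {
    if ($http_x_forwarded_proto != "https") {
      return 301 https://$host$request_uri;
    }
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;
    proxy_pass http://app_server;
  }
}
```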
CampaignMonitor provides the ability to create Webhooks that will drive an HTTP POST callback to your app when subscribe/unsubscribe events happen. Once you dive into this, you’ll realize that you also need a way to deploy and update your webhooks. They only allow you to do this through their API – there is no GUI for it.
I threw together a quick-and-dirty Rakefile using the createsend
gem. First make sure you have either done gem install createsend
or have added the createsend
gem to your Gemfile. Then, you can create a Rakefile that looks something like this:
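A hedged sketch of such a Rakefile (the createsend calls follow the createsend-ruby README of that era; the API key, list ID, webhook URL, task names, and event names are placeholders, not the original file’s contents):

```ruby
# campaign_monitor.rake
require "rake"
include Rake::DSL # not needed when this file is loaded by rake itself
begin
  require "createsend"
rescue LoadError
  # Tasks will fail at runtime without the createsend gem installed.
end

CM_API_KEY     = ENV["CM_API_KEY"]
CM_LIST_ID     = ENV["CM_LIST_ID"]
CM_WEBHOOK_URL = "https://example.com/campaign_monitor/events"

def cm_list
  CreateSend::List.new({ :api_key => CM_API_KEY }, CM_LIST_ID)
end

namespace :campaign_monitor do
  desc "List the webhooks currently configured on the list"
  task :list_webhooks do
    cm_list.webhooks.each { |wh| puts wh.inspect }
  end

  desc "Create the subscribe/deactivate webhook"
  task :create_webhook do
    id = cm_list.create_webhook(%w[Subscribe Deactivate], CM_WEBHOOK_URL, "json")
    puts "Created webhook #{id}"
  end

  desc "Delete a webhook by ID"
  task :delete_webhook, [:id] do |_t, args|
    cm_list.delete_webhook(args[:id])
  end
end
```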
With this saved as campaign_monitor.rake
and loaded by rake, you will now have the following tasks you can integrate into your deployment system:
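The resulting task list looks something like this (task names are assumptions based on the description above):

```
rake campaign_monitor:list_webhooks       # List the webhooks on the list
rake campaign_monitor:create_webhook      # Create the subscribe/deactivate webhook
rake campaign_monitor:delete_webhook[id]  # Delete a webhook by ID
```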
This recipe makes two drinks (why would you make just one?).
I use a stainless steel martini shaker, but you can use a regular cup or pitcher.
If you like it spicy like I do, make sure those jalapeños get smashed up and release their juice.
Either shake with your shaker, or pour back and forth a few times with another cup.
But what if you have your own custom background worker mechanism?
It’s fairly simple to get NewRelic working to report your custom background workers, but finding the right combination of setup calls in their docs can be a little tricky. The biggest issue is dealing with background tasks that daemonize and fork child worker processes. This is because the NewRelic agent needs to do unique instrumenting, monitoring, and reporting per process. Setting it up that way can be tricky if you’re using Bundler or another mechanism to load the newrelic_rpm
gem before the child processes are forked.
Assuming you are already familiar with the mechanics of Ruby-based daemon processes, here are the key ingredients you need to integrate the NewRelic Ruby Agent:
- Put a newrelic.yml config file somewhere and make a place for its log file to be written.
- Use the environment variables RUBY_ENV, NRCONFIG, and NEW_RELIC_LOG to take the place of RAILS_ENV and the default config and log paths you may be used to in Rails.
- Install the newrelic_rpm gem or add it to your Gemfile and require it via Bundler.
- In your main job class, include ::NewRelic::Agent::Instrumentation::ControllerInstrumentation and add add_transaction_tracer :execute, :category => :task.
- After forking each child worker process, call ::NewRelic::Agent.manual_start.
- Then call ::NewRelic::Agent.after_fork(:force_reconnection => true).

This will now make sure that the NewRelic Agent is started correctly for each child process and will report metrics on the execute method of your job class.
While it’s not my intention to go into detail on how to build out a daemonized forking worker mechanism, below is a very simple worker script that demonstrates all of these pieces together. It assumes the use of Bundler and a directory structure like this:
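A plausible minimal layout matching that description (the exact structure is an assumption):

```
worker_app/
├── Gemfile
├── Gemfile.lock
├── config/
│   └── newrelic.yml
├── log/
│   └── newrelic_agent.log
├── tmp/
│   └── pids/
└── worker.rb
```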
This example worker.rb
script forks 4 worker daemon processes, each of which will report timing metrics to NewRelic for the jobs it runs. Note the comments corresponding to the bullet points above.
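A hedged sketch of such a worker.rb (the job class, worker count, and env-var handling are placeholders; the NewRelic calls mirror the ingredient list above):

```ruby
# worker.rb
ENV["RUBY_ENV"]      ||= "development"
ENV["NRCONFIG"]      ||= File.expand_path("../config/newrelic.yml", __FILE__)
ENV["NEW_RELIC_LOG"] ||= File.expand_path("../log/newrelic_agent.log", __FILE__)

require "rubygems"
require "bundler/setup"
require "newrelic_rpm" # loaded once, before forking

class Job
  include ::NewRelic::Agent::Instrumentation::ControllerInstrumentation

  def execute
    sleep rand # placeholder for real work
  end

  add_transaction_tracer :execute, :category => :task
end

WORKER_COUNT = 4

Process.daemon(true) # detach the parent from the terminal

WORKER_COUNT.times do
  fork do
    # Each forked child needs its own agent connection.
    ::NewRelic::Agent.manual_start
    ::NewRelic::Agent.after_fork(:force_reconnection => true)
    loop { Job.new.execute }
  end
end

Process.waitall
```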
config.force_ssl = true
in your initializer, or you use force_ssl
in your controllers. For various reasons having to do with late-binding configuration, I have typically not been able to use the config.force_ssl
method. This means the easiest way to force the whole site to use HTTPS has been to use force_ssl
on the base ApplicationController
, like this:
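That presumably looks something like:

```ruby
class ApplicationController < ActionController::Base
  force_ssl
end
```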
However…when you deploy this to Amazon EC2 behind an ELB (Elastic Load Balancer), you can run into problems.
Even if you have your ELB configured with your SSL certificate and you have it proxying port 443 to port 80 on your Rails app, you may still have trouble getting the ELB to accept your instance as an upstream server if it cannot get an HTTP 200 OK
from the health check action.
Once you have your Rails app using a global force_ssl
, the ELB HealthCheck will hit your server over HTTP (because you don’t actually have your Rails server set up as an SSL endpoint), and your server will return a 301 redirect. This causes the ELB to think your instance is unhealthy, so it won’t proxy any requests to it.
I’ve found the easiest way to deal with this is to create a special action that you use for the health check, and override the force_ssl
for that action. Unfortunately, the stock implementation of ActionController::Base.force_ssl
, when applied globally in the ApplicationController
, does not allow other controllers to override that setting. That means we have to tackle this in two steps.
First, re-implement the force_ssl
method to allow controllers to override it:
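Reproduced from memory, the Rails 3.2 implementation with the added escape clause looks approximately like this (treat the body as an approximation of the Rails source, not an exact copy):

```ruby
class ApplicationController < ActionController::Base
  # Approximation of ActionController::Base.force_ssl from Rails 3.2,
  # plus the allow_http? escape clause.
  def self.force_ssl(options = {})
    host = options.delete(:host)
    before_filter(options) do
      if !request.ssl? && !(respond_to?(:allow_http?) && allow_http?)
        redirect_options = { :protocol => 'https://', :status => :moved_permanently }
        redirect_options.merge!(:host => host) if host
        redirect_options.merge!(:params => request.query_parameters)
        redirect_to redirect_options
      end
    end
  end

  force_ssl
end
```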
The above is a direct copy of this method from Rails 3.2, with the additional clause: && !(respond_to?(:allow_http?) && allow_http?)
. That clause allows any controller to implement an allow_http?
method, which is executed in the context of a request’s before_filter
. If this method exists and returns true
for a given request, then it will be allowed to continue over HTTP without being redirected to HTTPS.
For the second part, we need to create an unprotected action that can be used for the health check. The easiest way to do this is with a new controller (and matching route, if necessary):
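For example (controller and action names are illustrative; the render body is arbitrary):

```ruby
class HealthCheckController < ApplicationController
  # Returning true here lets the force_ssl override skip the HTTPS
  # redirect for this controller's actions.
  def allow_http?
    true
  end

  def index
    render :text => 'OK', :layout => false
  end
end
```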
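And the matching route, in Rails 3.2 style:

```ruby
# config/routes.rb
match '/health_check' => 'health_check#index', :via => :get
```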
Now, all you need to do is change your ELB Health Check to use /health_check
instead of /index.html
. This way the ELB will check that your Rails app is responding using HTTP (since that is the appropriate protocol between the ELB and Rails if you are using the ELB as your SSL endpoint). Your instance will register as healthy as long as your Rails app is up, and Rails will redirect all other HTTP traffic to HTTPS.
UPDATED Oct. 28, 2013: If you run your own reverse proxy in front of Rails, you can do this in the reverse proxy without having to modify your Rails app. See my post on doing this with nginx.
You can read a nice explanation of this attack, and how HSTS helps to prevent it here.
The Wikipedia page on HSTS provides some examples of how to enable this in your web server (Apache, Nginx, etc). However, when running behind an ELB in Amazon Web Services, where you cannot configure this at the reverse proxy, you may wish to do this in your application.
Here is how to achieve that in Ruby on Rails, using a before_filter
in your base ApplicationController
:
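A sketch of such a filter (the method name and max-age value are illustrative choices):

```ruby
class ApplicationController < ActionController::Base
  before_filter :set_hsts_header

  private

  # Tell browsers to use HTTPS for all future requests to this domain.
  # Rack honors X-Forwarded-Proto, so request.ssl? works behind the ELB.
  def set_hsts_header
    if request.ssl?
      response.headers['Strict-Transport-Security'] = 'max-age=31536000'
    end
  end
end
```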
My goal is to provide a little background on Actionable Metrics and how they differ from Vanity Metrics. Understanding this is a central theme of the book The Lean Startup, by Eric Ries, and of the writings and teachings of a number of other prominent startup/entrepreneur/lean proponents such as Steve Blank, Dave McClure, and Ash Maurya. Fred Wilson of Union Square Ventures has been quoted saying, “one of our firm’s favorite measurements is the cohort analysis”.
In this post, Ash Maurya, author of Running Lean and creator of Lean Canvas starts right out with the definitions of Actionable Metrics and Vanity Metrics. The key summary being:
Actionable Metric: ties specific and repeatable actions to observed results
Vanity Metric: only serves to document the current state of the product but offers no insight into how we got here or what to do next.
In the Tracking Long LifeCycle Events section of this post, where Maurya talks about cohort analysis, his recommendation is that the first report you implement – the canary in the coal mine – is exactly the kind of reporting we put first on our internal statistics console:
The first report I recommend implementing is a “Weekly Cohort Report by Join Date”. This report functions like a canary in the coal mine and is a great alerting tool for picking up on actions that had overall positive or negative impact.
This is a nice short blog post, by Martin Thomas, founder of Purlem, about using cohort analysis with a simple real-world example. Importantly, he calls out the definition of a cohort:
A cohort is a group of people who share a common characteristic or experience within a defined period
Note the focus on measuring “within a defined period”. As with Maurya’s post, his example is to group cohorts by signup date, and then track what those cohorts have as their initial experience (his is over a month, ours is currently over a 24-hour period – we really want to measure that “Day One Aha!” experience). He quotes Eric Reis on the issue of using vanity metrics:
Before using cohort analysis, I was tracking the cumulative number of paying users. Eric Reis calls this vanity metrics as they give the “rosiest possible picture” of a startup’s progress, but does not track how people are actually interacting with the application.
At the end of the day, using cohort analysis helps you to track the numbers that matter to the progress of your company.
This is a great slide deck about metrics and analytics in a startup. It’s a bit long but I think it is worth looking at the slides and understanding them. If you don’t go through them all, at least check out what I consider to be the highlights:
Slides 4-8: The perfect picture of vanity metrics. I love that Slide 7 calls Google Analytics Realtime Overview a “drug that can kill you”. So true.
Slides 14-15: Good definitions of actionable metrics
Slide 28: Nice visualization of the conversion funnel
Slide 33: Good progression of online marketing. We need to work toward getting solidly into the 3rd Generation territory.
Slides 49-56: Good summary of metrics surrounding user acquisition. This is exactly why I’ve been so adamant about getting the people sharing links to use tracking codes properly: “Use unique urls (tracking parameters) on every url you create/give-out/pay for”
Slides 71-77: Good overview of the “Viral Coefficient”. In our app we track this using AddThis analytics (which is one of the reasons we chose to use AddThis instead of custom Facebook integration)
Slides 78-81: Good overview of the “Net Promoter Score” (NPS). We intend to measure this using Qualaroo (formerly KISSinsights), but we are not doing that yet.
Slide 98: I love this slide – it applies the “OODA Loop” to the Lean Startup. The OODA Loop is a term used in martial arts, combatives, military, and law-enforcement circles. It stands for Observe, Orient, Decide, Act. Then repeat. Eric Ries talks about the “Build-Measure-Learn” cycle. They really are the same thing. I love the idea of applying the OODA Loop to startups because it imbues a sense of urgency. This slide has a nice picture merging the two.
Slides 99-100: Innovation Accounting. I’d call it a wake-up call. “Everything you do should attempt to change a metric”. I.e.: anything we are doing that is not specifically aimed at changing an actionable metric is something we need to stop doing.
Slide 106: Kanban board. Looks like my Trello boards!
knife-ec2 gem. I’ll assume the reader:

- has knife configured
- has knife-ec2 installed and configured with AWS API credentials

This is fairly easy to do with Linux instances. Using knife ec2 server create
and a bunch of parameters, a single command provisions a new Linux instance in EC2, waits for it to come up, connects to it over SSH using the specified key pair, installs chef-client
, and bootstraps the node using the specified run_list. Done.
However, things are not so simple for Windows Server instances.
Working with Windows instances in EC2 using Chef presents a few hurdles:
- The knife ec2 server create command waits for the instance to accept SSH connections. There is no option to circumvent this.
- The knife-windows gem provides a knife bootstrap windows winrm command that can bootstrap an existing Windows instance with Chef, but cannot provision a new instance.
- The knife bootstrap windows winrm command requires WinRM to be configured on the instance (which it isn’t by default), requires the Administrator password of the instance (which defaults to a random value), and requires the public IP address of the instance (which we don’t know until the instance is up).
and all the other pre-requisites mentioned above, you’ll need to make sure you have the following Ruby gems installed:
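The two gems are presumably the WinRM client and knife-windows (an assumption based on the commands used below):

```shell
gem install winrm
gem install knife-windows
```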
The Ruby script below uses a few nasty tricks to make this all work:
First, we write a temporary “user data” file to pass to the EC2 API. This gets executed by the new instance when it is first provisioned. There are two tricks we need to stick into the user data file:
- A <script> block that configures WinRM (Windows Remote Management), which is what we’ll use to connect to and bootstrap the instance.
- A <powershell> script that sets the Administrator password to a value we define. This makes it so we don’t have to wait 15+ minutes for EC2 to generate a password for us, and retrieve it manually through the GUI.
to provision the Windows instance to specification, passing in that user data file. This works great for provisioning the instance, but since it was not really designed for Windows and WinRM, there are two tricks we have to employ here:
- We run the knife command in a sub-process and read its STDOUT until we see it output the new instance’s public IP address. We’ll grab that and save it for the next step.
- We then have to kill the sub-process running knife ec2 server create. If you were doing this manually, you’d hit CTRL-C here, while knife is saying “Waiting for sshd” (which is never going to come up). We do that by sending the sub-process a SIGTERM signal.

Now, we can’t just move on to bootstrapping the node, because it is still booting up, and WinRM may not be configured yet. The trick here is to create a TCP socket to the WinRM port, using the IP address we acquired in the previous step, and wait for it to connect. If it fails to connect, try again until it does. By the time this succeeds, we know WinRM is up and accepting connections. However, we don’t know if the rest of the system is ready. Moving on to the next bootstrapping step immediately will run into intermittent errors. I’ve seen this manifest as an authentication error, presumably because we tried to bootstrap over WinRM before the PowerShell script set the password. There may be other mysteries of the Windows universe lurking here as well. My solution: sleep for two minutes. Lame, I know…but so far it is the only thing that has reliably worked.
Finally, we can bootstrap the new running Windows instance with the knife bootstrap windows winrm
command, using the IP address we acquired, the password we specified in the user data, and the other knife
params we want to use such as the run_list and environment.
Here is a stripped down version of this script demonstrating all these tricks. As you can see, all the custom configuration is hard-coded in constants at the top of the script. You would obviously fill in your own information however you like – via command-line params, interactive prompts, config files, etc.
Big thanks to my colleague Jeremy Groh who paired through this with me and did the bulk of the heavy lifting on the Windows side, especially with the WinRM and password-reset parts.
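A hedged, condensed sketch of the flow described above (the AMI, flavor, password, run_list, knife output regex, and timings are all placeholders; the real script was longer and handled more edge cases):

```ruby
#!/usr/bin/env ruby
require "socket"
require "tempfile"
require "timeout"

IMAGE          = "ami-xxxxxxxx" # a Windows Server AMI
FLAVOR         = "m1.large"
ADMIN_PASSWORD = "SuperSecret1!"
RUN_LIST       = "role[windows_base]"
CHEF_ENV       = "production"
WINRM_PORT     = 5985

# 1) User data: a <script> block to enable WinRM, and a <powershell>
#    block to set the Administrator password to a known value.
user_data = Tempfile.new("user-data")
user_data.write(<<-EOH)
<script>
winrm quickconfig -q
winrm set winrm/config/service/auth @{Basic="true"}
winrm set winrm/config/service @{AllowUnencrypted="true"}
netsh advfirewall firewall add rule name="WinRM" dir=in action=allow protocol=TCP localport=#{WINRM_PORT}
</script>
<powershell>
net user Administrator "#{ADMIN_PASSWORD}"
</powershell>
EOH
user_data.close

# 2) Provision with knife ec2 server create, scraping STDOUT for the
#    public IP, then SIGTERM the sub-process while it waits for sshd.
cmd = "knife ec2 server create --image #{IMAGE} --flavor #{FLAVOR} " \
      "--user-data #{user_data.path}"
public_ip = nil
IO.popen(cmd) do |io|
  io.each_line do |line|
    puts line
    if line =~ /Public IP Address:\s*([\d\.]+)/ # adjust to your knife-ec2 output
      public_ip = $1
      Process.kill("TERM", io.pid)
      break
    end
  end
end
abort "Never saw a public IP address" unless public_ip

# 3) Wait for WinRM to accept TCP connections.
begin
  Timeout.timeout(5) { TCPSocket.new(public_ip, WINRM_PORT).close }
rescue StandardError, Timeout::Error
  sleep 10
  retry
end

# 4) Give the instance time to finish running the user data scripts
#    (in particular, the password reset). Lame, but reliable.
sleep 120

# 5) Bootstrap the node over WinRM.
system("knife bootstrap windows winrm #{public_ip} " \
       "--winrm-user Administrator --winrm-password '#{ADMIN_PASSWORD}' " \
       "--run-list '#{RUN_LIST}' --environment #{CHEF_ENV}")
```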
I work in a team that is distributed around the country, where everyone works from their home offices. Sounds great, doesn’t it? Well, it’s not for everyone. First, you have to be seriously self-motivated. Personally, I get more done at home than I ever have at an office, because I get to choose when to be distracted and I’m good at staying focused. For some people, choosing when to be distracted is a curse. They choose to be distracted all the time. Facebook, twitter, bathroom, kitchen, twitter, kids, etc.
Still, some of those home distractions can plague even the most focused of us. One of the keys I’ve found to successfully working from home is to have a dedicated work space. I am lucky enough to have an extra room to use as an office, and the family knows that when I’m in there, “Daddy’s at work”.
But sometimes that’s not enough…
Even when the house is empty, no co-workers are interrupting you, and you have a clear path ahead of you, the familiarity of the home office can be a distraction in itself. Who’s that driving down my street? I should really fix that door. I wonder if there’s anything in the fridge. What time are the wife and kids coming home?
In times like these, I like to just get out and go work from somewhere else.
Everyone has wifi. Go to a coffee shop. Go to a pub. Go to the beach (hurray for MiFi!). Even though there is more noise and more movement, I find that sometimes just varying my surroundings can actually help me to focus. I block out the unfamiliar sounds. I expect the noise, so I don’t key in on every little movement. I’ve done some of my best out-of-the-box thinking in these kinds of environments.
This actually turns into a great perk of the distributed team environment. I can go work from the coffee-shop-with-a-view and it makes no difference to my co-workers. I’m still on Skype and email. I’m still checking in code and answering questions.
So, if I can work from a coffee shop overlooking Lake Washington…what’s the difference if I make it Lake Chelan? Or Miami Beach for that matter?
In July, I worked for 4 days from a remote part of the Olympic Peninsula. In March, to get away from the terrible Seattle weather, I worked for almost a month from South Florida. Add to that the fact that my team likes to travel to get together, and “work from home” actually becomes “work from everywhere”.
I rather enjoy that lifestyle. The bulk of my time I’m cranking away in my home office where I am extremely productive. But every now and then, at just the right intervals, I go somewhere else to mix it up. I’ve even started collecting photos of all these places. It makes for a fun reminder of one of the reasons why I like this job and working in a distributed team.
And now for the shameless plug. My team at Validas is ready to grow. The beauty of this is that we can recruit from anywhere - no moving expenses. Just a generous budget to deck out your home/mobile setup with the latest tech goodies, paid internet and mobile communications, and away we go.
So, if you’re a seasoned architect/engineer, who fits our job description and wants to thrive in this work/life style, send us your résumé today. We’d love to talk to you!
Here is my scenario: I have two repositories. I want to make a new empty repository and move the other two into it as subdirectories. I also want to preserve all the commit history of the original repositories.
Here are the steps involved for the first repository:

1. Clone the source repository locally.
2. Remove its origin remote, so you can’t accidentally push your changes back.
3. Create a subdirectory in that clone, named for the source repository.
4. Move all of the repository’s files into that subdirectory and commit.
5. In a clone of the destination repository, add the source clone as a remote, fetch it, and merge it in, history and all.
6. Remove the temporary remote.
You can now delete the clone of the source repository, so those file moves never have to land in the original source repository if you don’t want them to.
Here’s what those steps might look like:
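Here is a runnable sketch of that generic flow. The repo names are placeholders, and a throwaway local `source-origin` repo stands in for your real remote so the example is self-contained (`git init -b` needs git 2.28+):

```shell
set -e
# Sandbox; in real life "source-origin" would be your existing remote repo.
work=$(mktemp -d)
cd "$work"

# Stand-in for the existing source repository.
git init -q -b master source-origin
(cd source-origin \
  && git config user.email demo@example.com && git config user.name demo \
  && echo hello > file.txt && git add . && git commit -qm "initial")

# Steps 1-2: clone the source and detach it from its origin.
git clone -q source-origin source
cd source
git config user.email demo@example.com && git config user.name demo
git remote rm origin

# Steps 3-4: create a subdirectory and move everything into it.
mkdir source-subdir
git mv file.txt source-subdir/
git commit -qm "Move files into source-subdir"

# Steps 5-6: merge the moved history into the destination repository.
cd "$work"
git init -q -b master destination
cd destination
git config user.email demo@example.com && git config user.name demo
echo "# destination" > README.md && git add . && git commit -qm "destination initial"
git remote add source ../source
git fetch -q source
git merge -q --allow-unrelated-histories -m "Merge source repo" source/master
git remote rm source
```

The `--allow-unrelated-histories` flag is required by git 2.9+ when merging two repos that share no common ancestor.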
Now you’re done and can delete the source repository clone, and push the destination repository clone upstream. Check the `git log` to be sure.
Say I have two repositories on GitHub named `homer` and `bart`, and I want to combine them into a new repository called `simpsons`. Here is how that looks:
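A sketch of that flow follows. Local throwaway repos stand in for the GitHub remotes so the example runs anywhere; in reality you would clone `git@github.com:you/homer.git` and friends:

```shell
set -e
work=$(mktemp -d)
cd "$work"

# Local stand-ins for the homer and bart GitHub repos.
for repo in homer bart; do
  git init -q -b master "$repo-src"
  (cd "$repo-src" \
    && git config user.email demo@example.com && git config user.name demo \
    && echo "$repo" > "$repo.txt" && git add . && git commit -qm "$repo initial")
done

# The new combined repository.
git init -q -b master simpsons
cd simpsons
git config user.email demo@example.com && git config user.name demo
echo "# simpsons" > README.md && git add . && git commit -qm "simpsons initial"

# Fold each repo into its own subdirectory, history and all.
for repo in homer bart; do
  git clone -q "../$repo-src" "../$repo"
  (cd "../$repo" \
    && git remote rm origin \
    && git config user.email demo@example.com && git config user.name demo \
    && mkdir "$repo" && git mv "$repo.txt" "$repo/" \
    && git commit -qm "Move $repo files into $repo/")
  git remote add "$repo" "../$repo"
  git fetch -q "$repo"
  git merge -q --allow-unrelated-histories -m "Merge $repo" "$repo/master"
  git remote rm "$repo"
done
```

Afterward, `git log` in `simpsons` shows the full commit history of both original repos.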
What if you only want to move a subdirectory of the original source repository into the destination repository? All you need to do is filter what you move into that new subdirectory in step 4, and make sure everything else gets removed from that source repository. (Don’t worry: remember, you’re just working with a local clone of that source repo, which you’re going to discard after this operation. You won’t harm anything irreversibly here.)
One way to perform that filtering is by using the `git filter-branch` command. For example, to copy only the `pranks` subdir from the `bart` repo, just before step 4 you would do something like this:
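The one-liner was presumably a `git filter-branch --subdirectory-filter` invocation along these lines. The sandbox setup below just makes the sketch runnable; the filtering command itself is the last line:

```shell
set -e
# Sandbox: a stand-in "bart" repo with a pranks/ subdir plus another file.
work=$(mktemp -d)
cd "$work"
git init -q -b master bart
cd bart
git config user.email demo@example.com && git config user.name demo
mkdir pranks
echo slingshot > pranks/slingshot.txt
echo "eat my shorts" > chalkboard.txt
git add . && git commit -qm "bart initial"

# Keep only the history under pranks/, promoting its contents to the top level.
FILTER_BRANCH_SQUELCH_WARNING=1 git filter-branch --subdirectory-filter pranks -- --all
```

Note that modern git deprecates `filter-branch` in favor of the separate `git filter-repo` tool, which does the same job faster.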
That dumps all the contents of the `pranks` dir into the top-level dir, where you can proceed to move them into the new subdir that you created in step 3.
For example, when using ActiveRecord for your Rails models, you can provide custom attribute accessors, say to serialize a Hash to JSON, using the `read_attribute` and `write_attribute` methods like this:
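A sketch of that pattern (the `User`/`stuff` names are assumptions; a tiny stand-in class plays the role of `ActiveRecord::Base` here so the sketch runs without a database, but in a real model you would inherit from `ActiveRecord::Base` directly):

```ruby
require 'json'

# Stand-in for ActiveRecord::Base: just enough read_attribute/write_attribute
# to demonstrate the accessor pattern without a database.
class RecordStandIn
  def initialize
    @attributes = {}
  end

  def read_attribute(name)
    @attributes[name.to_s]
  end

  def write_attribute(name, value)
    @attributes[name.to_s] = value
  end
end

class User < RecordStandIn
  # `stuff` is stored as a JSON string but exposed as a Hash.
  def stuff
    JSON.parse(read_attribute(:stuff))
  end

  def stuff=(hash)
    write_attribute(:stuff, hash.to_json)
  end
end
```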
With this, you can assign a Hash to the `stuff` attribute of `User`, and when you access it via `User#stuff`, you get a Hash back. All the while, it’s read from and written to the database as a JSON string.
Ripple is the Ruby modeling layer for the distributed NoSQL store, Riak. It tries very hard to provide a lot of the same interfaces as ActiveRecord. However, this is one of the areas where it diverges: `Ripple::Document` objects do not support the `read_attribute` and `write_attribute` methods.
Instead, they implement the `[]` and `[]=` methods. Translating the code above to work with Ripple is pretty easy:
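The Ripple translation likely looked something like this (again, a small stand-in supplies `[]`/`[]=` the way `Ripple::Document` does, so the sketch runs standalone):

```ruby
require 'json'

# Stand-in exposing []/[]= the way Ripple::Document's attribute access does.
class DocumentStandIn
  def initialize
    @attributes = {}
  end

  def [](name)
    @attributes[name.to_s]
  end

  def []=(name, value)
    @attributes[name.to_s] = value
  end
end

class User < DocumentStandIn
  def stuff
    JSON.parse(self[:stuff])
  end

  def stuff=(hash)
    self[:stuff] = hash.to_json
  end
end
```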
Using this tactic, you can easily add some memoization so that your getter doesn’t need to parse the JSON text on every access. To do this, we’ll use an instance variable as a cache that we’ll invalidate in the setter, like so:
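A sketch of that memoized version, using the same kind of stand-in: the `@stuff` instance variable caches the parsed Hash, and the setter clears it.

```ruby
require 'json'

# Stand-in exposing []/[]= the way Ripple::Document's attribute access does.
class DocumentStandIn
  def initialize
    @attributes = {}
  end

  def [](name)
    @attributes[name.to_s]
  end

  def []=(name, value)
    @attributes[name.to_s] = value
  end
end

class User < DocumentStandIn
  # Cache the parsed Hash so repeated reads skip JSON.parse.
  def stuff
    @stuff ||= JSON.parse(self[:stuff])
  end

  # Invalidate the cache whenever the attribute is reassigned.
  def stuff=(hash)
    @stuff = nil
    self[:stuff] = hash.to_json
  end
end
```

One caveat with `||=` memoization: if the parsed value could legitimately be `nil` or `false`, the cache would never hold, so a `defined?(@stuff)` check would be safer in that case.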