Musings 14

PatternCraft - Visitor Pattern

Design Patterns explained with StarCraft. Very cool!

How We Release So Frequently

We have a lot of tests

Forward-only Migrations

We don’t roll back database migrations. Every database migration we do results in a schema that’s compatible with both the new version of our code and the previous one. If we have to roll back a code release (that does happen sometimes), the previous version is perfectly happy using the new version of the schema.

How we achieve this isn’t with some magical technical solution, but purely by convention. Take dropping a column as an example: how do you release that change? Easy:

  • Release a version of the code that doesn’t use that column; ensure it is stable / won’t be rolled back.

  • Do a second release that has a migration to remove the column (a rough sketch of this two-step process follows).
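
Roughly, the two steps look like the sketch below. This is only an illustration, not our actual tooling: the users table, the legacy_flag column, and the use of sqlite3 are all stand-ins.

    # Hypothetical two-step column drop. Release 1 ships code that no longer
    # touches legacy_flag; only release 2 actually drops it, so either code
    # version can run against either schema.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, legacy_flag INTEGER)")

    # Release 1: application code stops reading or writing legacy_flag, but the
    # column stays, so rolling back to the previous code version is still safe.
    def save_user(conn: sqlite3.Connection, name: str) -> None:
        conn.execute("INSERT INTO users (name) VALUES (?)", (name,))

    # Release 2, only after release 1 has proven stable: the migration drops
    # the now-unused column (DROP COLUMN needs SQLite 3.35+).
    def migration_drop_legacy_flag(conn: sqlite3.Connection) -> None:
        conn.execute("ALTER TABLE users DROP COLUMN legacy_flag")

    save_user(conn, "alice")
    migration_drop_legacy_flag(conn)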

New Code != New Features

Customers should never notice a code release, unless perhaps there’s a dramatic improvement in performance. Every new feature is first released in a hidden state, ready to be turned on with a ‘feature toggle’.
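
A toggle doesn’t have to be anything fancy; the sketch below is a made-up illustration (the toggle name and checkout functions are invented, and a real toggle system would usually read flags from config or a service rather than a dict):

    # Hypothetical feature toggle: the new code path ships with the release,
    # but stays dark until the flag is flipped, decoupling deploys from launches.
    FEATURE_TOGGLES = {
        "new_checkout_flow": False,  # deployed, but hidden from customers
    }

    def is_enabled(name: str) -> bool:
        return FEATURE_TOGGLES.get(name, False)

    def legacy_checkout(cart: list) -> str:
        return f"legacy checkout of {len(cart)} items"

    def new_checkout(cart: list) -> str:
        return f"new checkout of {len(cart)} items"

    def checkout(cart: list) -> str:
        # Releasing the code that contains new_checkout() changes nothing for
        # customers; flipping the toggle later is what turns the feature on.
        if is_enabled("new_checkout_flow"):
            return new_checkout(cart)
        return legacy_checkout(cart)

    print(checkout(["book", "pen"]))  # -> legacy checkout of 2 items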

Small Releases

With fast builds, lots of tests, less risky database migrations, and feature changes decoupled from code releases, there’s not much standing in the way of us releasing our code often. There’s also a feedback loop that helps us even further: the more often we release, the smaller the releases can be, and smaller releases carry less risk, letting us release even more often. Frequent releases don’t necessarily imply small releases, though; keeping them small still takes a bit of convention.

Creating Your Code Review Checklist

If we brainstormed for an hour or two, we could probably flesh this out to be a comprehensive list with maybe 100 items on it. And that’s what a lot of code review checklists look like — large lists of things for reviewers to go through and check.

There are two problems with this.

  1. You can’t keep 100+ items in your head as you look at every method or clause in a code base, so you’re going to have to read the code over and over, looking for different things.

  2. None of the checks I listed above actually require human intervention. They can all be handled via static analysis (a toy example follows).
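
As a toy illustration of what pushing one such check into tooling might look like: the underscore-prefix naming convention enforced here is just an example (it’s the one mentioned in the quote below), and the checker is a sketch built on Python’s ast module, not a real linter.

    # Toy static analysis check: flag attributes assigned on `self` whose names
    # don't start with an underscore. The convention is only an example; the
    # point is that a tool, not a reviewer, enforces it.
    import ast

    def check_field_names(source: str, filename: str = "<string>") -> list[str]:
        problems = []
        for node in ast.walk(ast.parse(source, filename)):
            if (isinstance(node, ast.Attribute)
                    and isinstance(node.ctx, ast.Store)
                    and isinstance(node.value, ast.Name)
                    and node.value.id == "self"
                    and not node.attr.startswith("_")):
                problems.append(f"{filename}:{node.lineno}: field '{node.attr}' has no leading underscore")
        return problems

    sample = "class Account:\n    def __init__(self):\n        self.balance = 0\n"
    print("\n".join(check_field_names(sample)) or "clean")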

That morale boost is empowering, and it leads to an increased sense of professionalism in the approach to software development. “We trust you to handle your business with code formatting” sends a much better message than, “Let’s see here, did you remember to prepend all of your fields with an underscore?”

There should really be two code review checklists: things the reviewee should do prior to submitting for review, and things the reviewer should check.

First, here is an example checklist for a code author:

  1. Does my code compile without errors and run without exceptions in happy path conditions?

  2. Have I checked this code to see if it triggers compiler or static analysis warnings?

  3. Have I covered this code with appropriate tests, and are those tests currently green?

  4. Have I run our performance/load/smoke tests to make sure nothing I’ve introduced is a performance killer?

  5. Have I run our suite of security tests/checks to make sure I’m not opening vulnerabilities?

Here is an example checklist for a code reviewer:

  1. Does this code read like prose?

  2. Do the methods do what their names claim they’ll do? Same for classes?

  3. Can I get an understanding of the desired behavior just by doing quick scans through unit and acceptance tests?

  4. Does the understanding of the desired behavior match the requirements/stories for this work?

  5. Is this code introducing any new dependencies between classes/components/modules and, if so, is it necessary to do that?

  6. Is this code idiomatic, taking full advantage of the language, frameworks, and tools that we use?

  7. Is anything here a re-implementation of existing functionality the developer may not be aware of?

It’s also very important that the lists remain relatively brief so that the reviews don’t turn into mind-numbing procedures. You can always modify the checklist and add new things as they come up, but be sure to cull things that are solved problems as your team grows. No matter how hard you try, it’s not going to be perfect. The aim is to catch what mistakes you can and to get better – not to attempt perfection.

18 Lessons From 13 Years of Tricky Bugs

CODING

  1. Event order. Can the events arrive in a different order? What if we never receive this event? What if this event happens twice in a row? Even if it would normally never happen, bugs in other parts of the system (or interacting systems) could cause it to happen. (See the sketch after this list.)

  2. Silent failures.

  3. If. If-statements with several conditions are a frequent source of subtle bugs.

  4. Else. Several bugs have been caused by not properly considering what should happen if a condition is false.

  5. Changing assumptions. Many of the bugs that were the hardest to prevent in the first place were caused by changing assumptions.

  6. Logging. Make sure to add enough (but not too much) logging, so you can tell why the program does what it does.
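
To illustrate the first lesson, here is a made-up sketch of an event handler that tolerates duplicate and out-of-order delivery instead of assuming a happy path; the order states, event ids, and transitions are all invented for the example.

    # Defensive event handling: duplicates are ignored, impossible or
    # out-of-order transitions are logged and skipped rather than applied.
    VALID_TRANSITIONS = {
        "created": {"paid", "cancelled"},
        "paid": {"shipped", "cancelled"},
        "shipped": set(),
        "cancelled": set(),
    }

    class Order:
        def __init__(self) -> None:
            self.status = "created"
            self.seen_event_ids: set[str] = set()

        def apply(self, event_id: str, new_status: str) -> None:
            if event_id in self.seen_event_ids:
                return  # duplicate delivery: ignore, stay idempotent
            self.seen_event_ids.add(event_id)
            if new_status not in VALID_TRANSITIONS[self.status]:
                # out-of-order or impossible event: log it and move on, don't crash
                print(f"ignoring {new_status!r} while in {self.status!r}")
                return
            self.status = new_status

    order = Order()
    order.apply("e1", "paid")
    order.apply("e1", "paid")       # delivered twice in a row: ignored
    order.apply("e3", "cancelled")
    order.apply("e2", "shipped")    # arrives late, after cancellation: logged and skipped
    print(order.status)             # -> cancelled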

TESTING

  1. Zero and null. Make sure to always test with zero and null (when applicable).

  2. Add and remove.

  3. Error handling. The code that handles errors is often hard to test.

  4. Random input. One way of testing that can often reveal bugs is to use random input.

  5. Check what shouldn’t happen. It’s easy to forget to check that an action that shouldn’t happen actually didn’t happen (see the sketch after this list).

  6. Own tools. Usually I have created my own small tools to make testing easier.
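
As a small sketch of lessons 4 and 5 combined: feed random input to the code under test, and assert both what should happen and what shouldn’t. The deduplicate function here is invented purely for the example.

    # Random-input test: generate many random lists, then check an invariant
    # that must hold and one thing that must never happen.
    import random

    def deduplicate(items: list[int]) -> list[int]:
        """Remove duplicates while keeping the first occurrence of each value."""
        seen, result = set(), []
        for item in items:
            if item not in seen:
                seen.add(item)
                result.append(item)
        return result

    random.seed(42)  # keep the "random" test reproducible when it does fail
    for _ in range(1000):
        data = [random.randint(-5, 5) for _ in range(random.randint(0, 20))]
        out = deduplicate(data)
        assert len(out) == len(set(out)), "duplicates should not survive"     # what shouldn't happen
        assert set(out) == set(data), "no values should be lost or invented"  # what should happen
    print("1000 random cases passed")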

DEBUGGING

  1. Discuss. The debugging technique that has helped me the most in the past is to discuss the problem with a colleague.

  2. Pay close attention. Often when debugging a problem took a long time, it was because I made false assumptions.

  3. Most recent change. When things that used to work stop working, it is often caused by the last thing that was changed.

  4. Believe the user. Sometimes when a user reports a problem, my instinctive reaction is: “That’s impossible. They must have done something wrong.”

  5. Test the fix.

Twitter

"IT is a cost center you say? Ok, let’s shut all the servers down until you figure out what part of revenue we contribute to." - @drunkcod

"Your mgrs want data about agile. You try to find some data. But data reinforces your own confirmation bias." - @RisingLinda

Aug 29, 2016
