Fighting Adult Content with Kafka, Samza, and Google Safe Search by Daniel Ehrman

Originally published on Code Red on August 17, 2018.

Recently, we added the ability to upload photos of your home renovations to Redfin. To get there, we faced the immediate problem of needing to maintain the integrity of our public-facing content — filtering out adult images and the like. Let’s look at how we used Kafka, Samza, and Google Safe Search to put it all together.

First, Why Photos?

Kitchen photo uploaded by a Redfin homeowner

User testing showed that owners want a way to show off their homes and the hard work they've put into recent updates. They also want a more accurate Redfin Estimate, and one limitation we face there is that we can't see inside a home, which keeps us from fully understanding its true value. With enough training data, photos may help us deliver a more accurate Estimate.

System Requirements

We had five goals in designing our photo-filtering system:

  1. Accurate: the photo filter should be accurate enough to catch nearly all inappropriate content.
  2. Affordable: we should be able to validate a large volume of photos without breaking the bank.
  3. Non-blocking: homeowners should be able to upload renovation photos without waiting for them to be validated first.
  4. Handles bursts: because we’re likely to see big batches of photos from a single homeowner spread apart by long stretches of inactivity, the system should be able to handle bursts without backing up.
  5. Testable: we should be able to test the system in its entirety with photos that are accessible only from within our VPN.

Our Design

Given the requirements, we elected to use Kafka and Samza to meet our performance and scalability needs. We chose Google Safe Search because of Google’s established reputation with computer vision, and because we’d only be charged a fraction of a cent per image. Here’s the solution we landed on....
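The full design lives in the linked post, but the non-blocking, burst-tolerant shape of requirements 3 and 4 can be sketched with stdlib stand-ins: a queue in place of the Kafka topic, a worker thread in place of the Samza job, and a placeholder where the Google Safe Search call would go. Everything here is illustrative, not the production code:

```python
import queue
import threading

def is_appropriate(photo_id):
    # Placeholder for the Google Safe Search call; here, even IDs "pass".
    return photo_id % 2 == 0

def run_pipeline(photo_ids):
    """Accept a burst of uploads immediately; validate asynchronously."""
    topic = queue.Queue()              # stands in for the Kafka topic
    verdicts = {}

    def validator():                   # stands in for the Samza job
        while True:
            photo_id = topic.get()
            if photo_id is None:       # sentinel: no more uploads
                break
            verdicts[photo_id] = is_appropriate(photo_id)

    worker = threading.Thread(target=validator)
    worker.start()
    for pid in photo_ids:              # the upload path never blocks on
        topic.put(pid)                 # validation; it just enqueues
    topic.put(None)
    worker.join()
    return verdicts
```

In the real system the verdicts would drive a takedown step; here they're just a dict so the flow is easy to inspect.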

Continue reading on Code Red....

How Hibernate’s Lazy Loading Nearly Killed Our Email by Daniel Ehrman

Originally published on Code Red on May 25, 2016.

Our Data Addiction

On a typical Saturday afternoon in the height of the summer home-buying season, Redfin will send millions of listing update emails and push notifications to our users.

One of our biggest advantages is our ability to send these notifications within moments of an event appearing in the Multiple Listing Service (MLS) feed, the data source real estate agents use to put new homes on the market. In hot markets like San Francisco or Seattle, this timing can be critical: putting in a strong offer within hours of the initial listing can make the difference between securing a dream home and losing it in a bidding war.

Many of our customers (hundreds of thousands in fact) opt to receive daily digests rather than instant notifications throughout the day. And when these daily emails — unique to each user — are generated in the morning, our machines can be kept busy for hours trying to fetch all the required data to send them out.

Under extreme conditions, the implications of this excessive workload are twofold:

  1. Other jobs (listing importers, search indexers, etc.) can get backed up while the daily email job hogs resources.
  2. Users’ emails can get delayed, potentially hours after they would normally expect them.

In early 2015 — at a time when our email jobs relied almost entirely on batch-oriented operations — we began to observe exactly these two frightening scenarios....
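The post continues on Code Red, but the hazard the title alludes to, lazy loading silently turning one logical fetch into a query per record, can be sketched in a few lines. This is a toy stand-in that just counts round trips, not Hibernate itself:

```python
class FakeDb:
    """Toy database that just counts round trips."""
    def __init__(self):
        self.queries = 0

    def listings_for(self, user):
        self.queries += 1                # one round trip per user
        return [f"listing-of-{user}"]

    def listings_for_all(self, users):
        self.queries += 1                # one round trip, total
        return {u: [f"listing-of-{u}"] for u in users}

users = [f"user-{i}" for i in range(100)]

lazy = FakeDb()
for u in users:                          # the lazy-loading shape:
    lazy.listings_for(u)                 # one query per user

batched = FakeDb()
batched.listings_for_all(users)          # one batched query for everyone
```

At a few hundred thousand daily digests, the difference between those two shapes is exactly the kind of hours-long backlog described above.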


Continue reading on Code Red....

How To Give Your Mac Some Mid-80's Style by Daniel Ehrman

Black and white icon pack + tweaked window stylings = 2.7GHz of 1980's awesomeness

I've never been a huge Mac fan (sorry, guys). But I've long been obsessed with vintage computers, and I keep a collection of 4 old (1980-1997) Apple computers in storage—hopefully some day on display.

So when I was given my first MacBook Pro at my new job, I couldn't wait to make it feel a little more familiar. Fortunately, I found a great start with Ben Vessey's "Mac OS (Old School)" icon pack.

Unfortunately, most of Ben's work just covers some basic Mac app icons, so I extended the look by adding 15 more "engineery" icons—Eclipse, Android Studio, Emacs, etc.—and by building my own window and menu theme to yield a (nearly) complete Macintosh immersion.

For those of you interested in giving it a shot, here are the complete steps:

1. Initial setup

2. Install the icons

Navigate to the icon pack you downloaded earlier, and drag and drop the desired icons over their respective applications in LiteIcon. Apply changes when done.

3. Update the desktop

  1. Right click the desktop and select Change Desktop Background...
  2. Click the + icon to add the folder you extracted from the zip file.
  3. Click on the only image that appears in the main window, and select Tile as the display mode.

4. Tweak the windows

  1. Open Flavours and click Get More...
  2. Search for "macintosh classic" (without the quotes); double click on the Macintosh Classic B&W theme; and click Apply.
  3. Uncheck Menu Image and Desktop, and click Apply.
  4. You will be prompted to log out to apply your changes. Do as it says.

How do I undo my changes (revert to defaults)?

  1. To revert back to the default icons, open LiteIcon, select the Tools drop-down menu, and click Restore All System Icons...
  2. (Desktop background—no explanation needed)
  3. To restore the default windows theme, simply open Flavours and click the on/off toggle at the top right of the window.

Issues (cosmetic only)

  • Chrome, and potentially some other strange programs, use their own custom window borders, and thus their windows look...funky with the theme. I've kind of learned to deal with it, but I'd be lying if I said it didn't annoy me.
  • The classic stripe look you see in the screenshots is achieved in part by placing a white rectangle in the middle of the window. Unfortunately, there's no way to resize this rectangle to fit the window text, so I picked a size of roughly half of the window width to try to work for about 99% of the use cases. Occasionally though, you may have a window with a really long title that simply can't fit in that white box. To accommodate this, I've set the opacity of the title bar stripes to 50% so that you can still read the whole title. But yes, it will be annoying if/when you come across a long title (hopefully rarely).
  • There are some inconsistencies between the icons (e.g. some circles are drawn slightly different than others). I wasn't as careful as I should've been, but if anyone has any serious interest in using the icons, I'd be more than happy to clean them up for you. Again, I'd be lying if I said this didn't bother me.

11 Must-Haves for Every Power Programmer by Daniel Ehrman

As developers, programmers, engineers—whatever you want to call us—we spend a lot of time programming. And seeing as we spend more waking hours with our editors than we do with our families, it's important that we take a close look at that experience and make sure that we're getting the most out of it that we can.

DISCLAIMER: This post is targeted at the Emacs crowd. If you're not an Emacs user, I hope that you can at least still learn a thing or two from this list and maybe find a way of achieving the same functionality in your own editor. If this list happens to provoke an interest in switching over, great. But this post is not intended to persuade readers one way or the other—simply to share some heavily used features that can make those hours away from home a little bit happier.

In Emacs, the customization comes from your ~/.emacs configuration file. Emacs users spend years building up these config files, and a senior developer's .emacs file can be a highly sought-after resource. (One of my last correspondences with a departing engineer consisted of "Good luck," and "Can I get the path to your .emacs?") With that in mind, I've listed some useful key bindings to pass on where applicable.

Figure 1: Evidence of generational learning through .emacs configuration files.

 

1. goto-line

Even the superhuman developer likely spends at least half of their time debugging code. And although some IDEs will highlight the failing line, it's not always a compiler that's flagging a problem. Whether it's grepping through a list of files or simply looking for a problematic section of code that someone brought to your attention, the "goto-line" operation is all too frequent.

(define-key global-map (kbd "M-g l") 'goto-line)

2. revert-buffer

Ever re-run a program after making a bug fix? How about 20 times a day? Having a quick way to reload a run-time log (or any frequently changing file) can be a huge time-saver.

(global-set-key (kbd "C-S-r") 'revert-buffer)

3. The macro

Serious developers know well the power of the macro. Non-power programmers cringe at the thought of it. The concept of the macro tends to evoke a sense of heavy up-front cost that pays off only in the rarest, most tedious circumstances. But in a given day, a few good macros may save you an hour of work. (If your managers knew how much time you waste by not using macros, they would insist on it!)

We could spend pages discussing the use cases of macros, but suffice it to say that for pretty much any task which takes more than 10 seconds and will be repeated at least 10 times in one sitting, you're better off using a macro. In Emacs, this couldn't be easier:

  1. Start recording: C-x (
  2. Do whatever you want to repeat, using regular expressions and generic operations wherever possible.
  3. End recording: C-x )
  4. Run the macro n times: C-u n C-x e

This whole process creates a throwaway macro that you can keep until you create a new one. If, however, you decide that it's generic and useful enough to save for later, this bit of config code will help:

(defun save-macro (name)
  "Save a macro. Prompt for a NAME and append the last defined
keyboard macro under that name to the end of your user init file."
  (interactive "SName of the macro: ")  ; ask for the name of the macro
  (name-last-kbd-macro name)            ; use this name for the macro
  (find-file user-init-file)            ; open ~/.emacs or other user init file
  (goto-char (point-max))               ; go to the end of the file
  (newline)                             ; insert a newline
  (insert-kbd-macro name)               ; copy the macro
  (newline)                             ; insert a newline
  (switch-to-buffer nil))               ; return to the initial buffer

4. increment-number-at-point

The true Emacs user will look with enthusiasm at an incredibly tedious task like generating repeated blocks of code with sequentially incrementing numbers! When recording the macro, simply place the cursor over the applicable number and increment-number-at-point:

(defun increment-number-at-point ()
  (interactive)
  (skip-chars-backward "0123456789")
  (or (looking-at "[0123456789]+")
      (error "No number at point"))
  (replace-match (number-to-string (1+ (string-to-number (match-string 0))))))
(global-set-key (kbd "C-c +") 'increment-number-at-point)

Note that the same can be achieved for hexadecimal numbers as well: http://www.emacswiki.org/emacs/IncrementNumber

5. ansi-term

This one is my personal favorite. Couple the macro with a terminal embedded in your editor, and you're poised to be the most powerful developer in the office. Once your terminal is contained within your editor (and once you've spawned several simultaneous terminals in there too), you'll realize you no longer have reason to live outside the editor (though your spouse may disagree).

My typical development environment is a split window with code on one side and a terminal on the other (all within a single Emacs instance). Using the "C-x o" key binding I can then quickly switch between editing and performing the usual command line operations in Linux.

Like I said, this is where the beauty of Emacs truly starts to come together. Ever had a bunch of errors dumped to the terminal that you need to cross reference with your code to apply a series of fixes? Automate that! Suppose that the compiler is complaining about a bunch of variables in our code not being declared yet. Easy:

  1. In the terminal buffer grep for "Error:"
  2. Use a regex to search for the variable name, and copy it.
  3. Switch over to the code buffer with C-x o.
  4. Declare the variable, and make a new line.
  5. Switch back to the terminal.
  6. Rinse and repeat.

Note that in Emacs, there are a variety of terminal types you can use. I prefer the ansi-term mode due to its smooth support of telnet, but I'm sure there are ways of customizing the other terminals (e.g. shell-mode) to behave in the same way as my beloved ansi-term.

6. rename-buffer

Since I'm typically working on five different tasks at once, it helps to have named buffers to keep track of things. This becomes even more useful when I want to switch to a specific terminal with a specific environment in a specific directory:

M-x rename-buffer

Emacs will then prompt you for a new name for the buffer. Then getting to the file/terminal you need doesn't involve sifting through tens of windows and tabs. Simply…

C-x b <name of buffer>

Note also that if you don't provide the name of the buffer, you can simply press RETURN to view a searchable list of open buffers, which helps if you can't remember what a file was called. This is also useful if you want to, say, have a macro quickly close all buffers matching a given regex.

7. Auto indent

Ever carefully align every block of code to paint your file like a Georges Seurat? If so, you're wasting your time. Even with a preexisting poorly formatted file from another developer, there's no need to do any heavy lifting. Kick back and let your editor do it for you. In Emacs simply highlight the lines you want and…

C-M-\

Done.

8. upcase/downcase-region

Ever need to change a bunch of variables to constants or vice versa? In most circumstances, this involves changing the case, and Emacs makes this very easy for us. Simply select the applicable text and "C-x C-u" to convert to uppercase or "C-x C-l" to convert to lowercase. You'll also need to add these lines to your .emacs:

(put 'downcase-region 'disabled nil)
(put 'upcase-region 'disabled nil)

One can also imagine how useful this would be if you wanted to convert an entire file from underscore-based variable names to CamelCase or vice versa. Combining this case-conversion functionality with macros would make the task a trivial operation.
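The conversion itself is simple enough to sketch outside the editor. Here's a small illustrative version (assuming plain lower_snake_case and CamelCase names, nothing fancier):

```python
def snake_to_camel(name):
    """Convert an underscore-based name to CamelCase."""
    return "".join(part.capitalize() for part in name.split("_"))

def camel_to_snake(name):
    """Convert a CamelCase name back to underscores."""
    out = []
    for ch in name:
        if ch.isupper() and out:
            out.append("_")        # start a new word at each capital
        out.append(ch.lower())
    return "".join(out)
```

Inside Emacs, a macro would apply the same per-word capitalize/downcase steps at each match of an identifier regex.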

9. The Mark Ring

Rarely does a file of code fit on a single page. So it should come as no surprise that a reasonable amount of the average developer's time is spent navigating back and forth between pages in a file. Often this operation consists of either paging up or down or grepping for a phrase near the line of interest (not to mention the dreaded journey from keyboard to mouse!).

Luckily, Emacs remembers where we've been. If you know you're going to come back to your current location in a file, simply "C-<SPC> C-<SPC>" to push the current point onto the "mark ring," and "C-u C-<SPC>" to pop it off. Emacs will take you back to where you left off. (Note that if it's a multi-hop trip, you can push multiple points onto the mark ring to retrace each step on your way back home.)

10. Dired (File Explorer)

If you're still browsing for files in your terminal—or worse, a GUI—you're missing out: Emacs serves as an interactive file explorer too. Simply "C-x C-f" to open a file, but press RETURN without supplying a name, and Emacs will open a buffer with a listing of the current directory. From there, you can navigate through the directory with cursor keys, regular expressions, or even (horrified gulp) the mouse to find what you're looking for.

11. The regex

Finally, no list would be complete without mention of regular expressions. Although it may go without saying, the regex search within your editor is such a commonplace necessity that most of us may take it as second nature—almost a cyborgial extension of the developer's motoneural system. I would imagine that very few real editors out there lack regex searching, but for the record, it's achieved in Emacs via "C-M-s" for forward/downward search and "C-M-r" for reverse/upward search.

 

Well, I hope that helped. These were just the few Emacs features that I use on a regular enough basis that I couldn't go without them. Please leave comments with suggestions on items I might have forgotten. As is often the case with these sorts of things, it's the actions we take for granted that, by definition, we are most likely to overlook.

The Pipelined Brain by Daniel Ehrman

While at work today I thought to myself, "How marvelous is it that if we as logic designers want more processing power out of a chip, we can simply add more logic?" It seems simple, but it's really quite remarkable when you think about it. (Imagine choosing to give your child more neurons as needed.) Sure, there is an area and power trade-off to consider, but if we need to do more work in parallel, there's nothing stopping us from building a few extra gates to get what we want.

Note that this approach is fundamentally different from software: extra code means extra time. On the other hand, transistors are a dime a dozen these days, and on chips with more than a billion of them, the added cost—for no added processing time—is truly quite trivial.

So then I thought, "What if we could do this with our brains? What if like hardware, we could think in parallel? What if I could pipeline my brain so that while one block of my consciousness were busy processing Problem A, another block could be working on Problem B? What would that look like?"

But then it occurred to me: we already do this every day. Yes, we know that our brains are busy processing thousands of patterns in parallel far beneath our consciousness, but that's not what interests me here. I'm interested in the collective brain.

Complex "intelligent" behaviors have long been observed in relatively simple species like ants via their larger actions as a community. And in humans, it has been shown that while no one of us may be an expert in a particular field, through the power of numbers, we can together achieve highly accurate results. So it really shouldn't be any great leap to think of our brains as individual workers contributing to massive results on a supercomputer scale.

I use that word intentionally—supercomputer—because that's what we are, all 7 billion of us combined. What else could be capable of achievements as miraculous as flying three people to the moon and back with less than a decade of preparation, while simultaneously managing the concerns of everyday life here on Earth?

In fact, from a computer architect's perspective, there is tremendous insight to be gleaned from the ways in which we work together to achieve results beyond the power of any one brain. Perhaps no company better exemplifies the spirit of pipelined processing than Taiwanese news animators Next Media Animations, who now go from "story conception to a finished product in less than 2 1/2 hours." This remarkable pace is achieved only by treating each worker as a distinct, specialized unit capable of solving a problem in isolation from, and in parallel to, the other members of their team, much in the same way that we design microprocessors.

The success of Next Media Animations is a glimpse at what can be accomplished when a production process is constructed to remove as much serialization and as many dependencies as possible. With advanced tools at our disposal, such as 3D rendering software, which dramatically reduces the pipeline latency of product development, we are poised more and more every day to function like a high-speed computer, moving with microprocessors along Moore's curve rather than watching them pass from the sidelines.

It's now up to the entrepreneurs, the managers, and the creative thinkers of tomorrow to match the evolution of computers with equivalent developments in our workflow, growing in tandem with the technology to ensure that we are as efficient and as productive as we can be.

Guitar Synthesizer by Daniel Ehrman

Since purchasing my function generator, I've been doing a lot of thinking about what kinds of creative projects I can accomplish with this seemingly simple device.

One concept that struck me almost immediately was using the Voltage Controlled Frequency (VCF) input of the function generator to vary the sound via some external source, and since I'm a guitar player, naturally I looked to my guitar as the ideal controller.

This isn't a very new idea. In fact, the Roland GR-55 puts to shame pretty much any hypothetical device I could imagine:

Clearly something of that caliber is out of my league, but I might be able to achieve something simpler.

My idea (again, nothing incredibly novel) is to use a simple Frequency Controlled Voltage (FCV) chip to convert my guitar signal to a specific voltage and then provide that to the function generator's VCF input. The ideal result would be a clean synth sound coming straight from my guitar.

The reality of course is much more complicated.

Unfortunately, the function generator is monophonic: it is only capable of playing one "note" at a time. So it's imperative that the voltage input I provide be a clean representation of a single guitar note rather than the complex assortment of frequencies present in an actual guitar's sound. This is no trivial problem. Even if I focus on plucking only a single string, a single note is actually composed of a variety of overtones that pollute the sound and could potentially distract the circuitry from the fundamental frequency:

Plucking the open D string of a Fender Stratocaster

So the next thought is, "Well, that's OK: I'll just pass the signal through a few band pass filters and choose the band with the maximum output as my fundamental frequency." While that may sound like a perfectly simple solution to the problem, this is where things get much hairier. Let's take a look at the spectrum of frequencies that actually make up this signal:

Ouch. Take a look at that. The fundamental frequency of our D3 note, approximately 147 Hz, isn't even the peak frequency in the spectrum! To be fair, we have to pay attention to the fact that of all of the partial frequencies, the group of tones around D3 clearly has the largest area; therefore, that collection of closely aligned frequencies will together cut through the mess to create what we perceive as a single D.
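To make that concrete, here's a small pure-Python sketch of the same effect, using invented amplitudes rather than ones measured from the recording: a single-bin DFT shows the loudest bin landing on the second harmonic instead of the D3 fundamental.

```python
import math

RATE = 4410  # samples per second (plenty for tones below ~2 kHz)

def tone_mix(t):
    # Synthetic "D3": a 147 Hz fundamental plus a louder 2nd harmonic,
    # loosely mimicking the measured spectrum (amplitudes are invented).
    return (0.6 * math.sin(2 * math.pi * 147 * t)
            + 1.0 * math.sin(2 * math.pi * 294 * t)
            + 0.3 * math.sin(2 * math.pi * 441 * t))

def magnitude_at(freq, samples, rate):
    """Single-bin DFT: correlate the signal against e^(-2*pi*i*f*t)."""
    acc = complex(0.0)
    for i, x in enumerate(samples):
        angle = -2 * math.pi * freq * i / rate
        acc += x * complex(math.cos(angle), math.sin(angle))
    return abs(acc) / len(samples)

samples = [tone_mix(i / RATE) for i in range(RATE)]  # one second of signal
spectrum = {f: magnitude_at(f, samples, RATE) for f in (147, 294, 441)}
peak = max(spectrum, key=spectrum.get)  # the 294 Hz harmonic, not D3
```

A bank of analog band-pass filters faces exactly this trap: the max-output band isn't necessarily the fundamental.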

If only band pass filters worked like human ears…. Look what happens when we try to cut a slice out of the spectrum that should represent a D3 note (144 to 150 Hz):

Top: original complete guitar D note. Bottom: extracted frequencies in the range of 144-150 Hz.

What?? Things certainly aren't looking good here. The extracted range (D3) is of such low volume that it's hard to believe we could even categorize the original signal as a D at all. And if we were to expand the band any wider, we'd start to pick up the surrounding notes (C# and D#) and in turn detract from the cleanliness of our signal.

To put it simply, reducing a raw analog guitar signal—even just one note—to a single frequency that can be cleanly converted by an FCV chip may require some more advanced circuitry.

However, there is a bit of hope: if you look carefully at the figure above, you'll notice that the smaller signal at the bottom representing the D3 note has a period that directly aligns with that of the big spikes in the top signal. This could be very good news: if the FCV chip can trigger on peaks, it will accurately detect the fundamental frequency and provide the single voltage we need.
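Here's a sketch of that peak-triggering idea on a synthetic signal (again with invented amplitudes, arranged in phase so the harmonics pile into one dominant spike per period, like the aligned spikes in the figure):

```python
import math

RATE = 44100
F0 = 147  # D3

def pluck(i):
    # Harmonic-rich tone whose in-phase partials align into one dominant
    # spike per period (amplitudes invented for illustration).
    t = i / RATE
    return (math.cos(2 * math.pi * F0 * t)
            + 0.8 * math.cos(2 * math.pi * 2 * F0 * t)
            + 0.5 * math.cos(2 * math.pi * 3 * F0 * t))

samples = [pluck(i) for i in range(RATE // 10)]  # 0.1 s of signal
threshold = 0.9 * max(samples)

# Trigger on the dominant peaks, then infer the period from their spacing.
peaks = [i for i in range(1, len(samples) - 1)
         if samples[i - 1] <= samples[i] >= samples[i + 1]
         and samples[i] > threshold]
spacings = [b - a for a, b in zip(peaks, peaks[1:])]
estimated_hz = RATE * len(spacings) / sum(spacings)
```

On a real pluck the spike heights decay over time, so a practical trigger would need an adaptive threshold rather than this fixed 90% cut.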

Unfortunately I'll need to do more investigation into the problem before I have a complete answer (for another blog post). But in the meantime, while we're in the business of deconstructing guitar signals, I thought it would be a fun wrap-up to work our way back up to that original dirty signal from scratch.

Referring to the frequency spectrum shown above, I chose the ten loudest frequencies and combined them in Audacity to produce an artificial guitar sound:

Artificial guitar D3 note: 10 simple sine waves and a single additive composite.

Comparing the resulting composite wave to the original actual guitar sample, we can see that the synthesis isn't that far off:

Top: artificial guitar. Bottom: actual guitar

The primary source of difference between the two waves is the magnitude of each of the partial frequencies, which, for the sake of simplicity, I kept nearly the same across the entire spectrum. This shortcut results in "fuller" bands (louder notes), such as the D3, being underrepresented and "narrower" bands (softer notes) being overrepresented.
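The same additive trick is easy to reproduce in code. This sketch sums a handful of sine partials at nearly equal levels, mirroring the shortcut described above; the partials listed are illustrative, not the ten actually measured:

```python
import math

RATE = 44100

def additive(partials, duration):
    """Mix a list of (frequency_hz, amplitude) sine partials into one wave."""
    n = int(RATE * duration)
    return [sum(a * math.sin(2 * math.pi * f * i / RATE)
                for f, a in partials)
            for i in range(n)]

# A few harmonics of D3 at nearly equal levels, as in the post's shortcut.
partials = [(147, 1.0), (294, 0.9), (441, 0.9), (588, 0.8)]
wave = additive(partials, duration=0.1)
```

The stdlib `wave` module can write the result out as a WAV file for a listen; since every partial is a multiple of 147 Hz, the composite repeats every 300 samples at 44.1 kHz.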

Here are the audio samples of the two different waves:

Function Generator Music in 5 Minutes by Daniel Ehrman

Forget oscilloscopes. Ever wondered what it sounds like to play a function generator through a guitar amp?

In my undergrad, while working on the Purdue Solar Racing team back in 2011, I borrowed the team's function generator when it wasn't in use and carried out some "musical experiments" back at home.

I have a Fender Cyber Champ amp, which has a whole host of effects built in: phaser, flange, chorus, various kinds of reverb, and a lot more. So the thought was that if I could combine a box of essentially unlimited sounds with these spacey guitar effects, I could cook up some pretty cool live music, or at the very least, synthesized effects to lay over whatever other music I was working on at the time.

Of course I eventually had to give it back, but I never stopped thinking about all of the unique sounds I could make with that function generator. I'd once seen a documentary on the making of The Dark Side of the Moon, and the idea of crafting an entire composition from little hand-made sonic components truly lit a fire in my engineer's brain.

Back to the present day.

Last week, I finally ordered my own function generator for $25 on eBay and started right where I left off three years ago.

Clockwise from top left: (1) GW GFG-8015G function generator, (2) Boss RC-2 Loop Station, (3) Fender Cyber Champ amp, (4) Presonus AudioBox USB interface, (5) headphones, (6) PC.

The diagram above shows the final setup with all of the required pieces for recording the music. The loop station (top center) is the key: it lets me loop back what I've previously recorded without the help of a computer so I can compose everything live in one shot. The computer in the diagram, and in fact the entire bottom row, is only present for recording purposes.

Also note that technically, you would want any effects—including the amp—placed before the looper so that different effects could be saved with each track rather than the same effects being applied to the entire composition, but I just wanted a quick and simple setup here. The only effects I actually used were very small amounts of reverb, delay, and chorus.

I start with a 2 Hz square wave with a non-50% duty cycle to create a heartbeat-like bass drum. (Listen to the beginning section of Dark Side for comparison.) Typically, this wouldn't be audible, since it's below the 20 Hz human hearing cutoff, but the quick changes in the line level create some residual percussive frequencies that we can hear quite well.
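That duty-cycle trick is easy to sketch digitally: a pulse train sits high for some fraction `duty` of each period, and it's the sharp edges, rich in harmonics of the 2 Hz fundamental, that carry the audible thump. The 25% duty here is just a guess at "non-50%":

```python
RATE = 8000  # samples per second

def pulse_wave(freq_hz, duty, duration_s):
    """Square wave that sits high for `duty` of each period, low otherwise."""
    n = int(RATE * duration_s)
    return [1.0 if (i * freq_hz / RATE) % 1.0 < duty else -1.0
            for i in range(n)]

# A 2 Hz "heartbeat" pulse (duty cycle of 25% chosen for illustration).
beat = pulse_wave(2, 0.25, 1.0)
```

An analog function generator produces the same shape by comparing its internal ramp against an adjustable symmetry threshold.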

Then, in remembering the repetitive, but beautiful, two-chord droning of Pink Floyd's "Breathe," I set the frequency knob to A for four beats and E for four beats. (Note: it seems like these notes all came out half a step lower, which I'll have to investigate further.)

With the basic notes down, I start overlaying more of the notes that comprise the A major and E minor chords, with the pleasant surprise of some phaser effect as I double-record the same, but phase-shifted, notes.

Add on top of that a couple "slides" into notes and some (admittedly atonal) quickly changing frequencies, and we're pretty much done.

So here's the final 15-second composition (that loops ad infinitum):

Ultimately, I have much bigger plans for this function generator, but this is a nice kick-off to what will hopefully become a seriously fun audio engineering project.

What Discrete Math and Lisp Can Teach Us About Good Coding Habits by Daniel Ehrman

When I was in my undergrad, I had the pleasure of reading, and to be honest, writing, countless lines of confusing code written in a variety of languages. To be fair, I had inherited some pretty bad habits from when I taught myself BASIC as a kid, but it seems like college courses never did a great job of pushing clean coding style. Sure, heavy emphasis was always placed on designing scalable, efficient programs, but algorithmic complexity is quite distinct from code complexity, and quality coding style was almost always left as an exercise to the student.

That is until my senior year—Introduction to Artificial Intelligence. Building on a foundation in Discrete Math, where students are educated in the laws and techniques of formal logic, this course sought to remove from our minds the baggage of sequential programming (C, FORTRAN, Python, etc.) and instead see the computer as an executor of logic—returning a decision from a single logical function.

As it turns out, thinking about programs this way has profound effects on the way we write our code: specifically, it enforces a logical structure. Let's take a look at a very simple function in C that computes the length of a linked list:

int length_of_list(t_linked_list *some_list)
{
  int length = 0;
  while(some_list != NULL)
    {
      some_list = some_list->next;
      length++;
    }
  return length;
}

Efficient? Sure. Straightforward? Eh. Honestly, with an algorithm so simple, it would be hard to get lost in this code. But the point here is to think about the result we're trying to achieve and to question whether the structure of the code is representative of that goal. Is it?

We'll come back to that thought in a moment. But for now, let's look at an equivalent function in Lisp:

(defun length-of-list (some-list)
  (if (null some-list)
      0
      (+ 1 (length-of-list (rest some-list)))))

Pay attention to the way this code is structured (like a logical proof):

  1. Base case: Does the list have any elements?
  2. Inductive step: What is the length of the list if we remove an element?

While this is a pretty simple function, thinking about our code this way—as a set of "propositions," if you will—can deeply shape the way we plan and organize our code.

"But wait!" says the code-savvy reader. "That's not fair! You're comparing an iterative algorithm in one language to a recursive (and less efficient) algorithm in another!"

Guilty. The truth is that while recursion can result in some truly beautiful programs, it seldom results in the most space- or time-efficient one. And Lisp is designed to work with lists, so I've intentionally chosen an unfair example.

But the point here is not to start a language war or, as I said earlier, discuss algorithmic complexity; the point is to demonstrate what we can learn from a (less-than-popular) language that tends to enforce good behavior.

For kicks, here is the recursive variant of the original C function:

int length_of_list(t_linked_list *some_list)
{
  if(some_list == NULL)
    {
      return 0;
    }
  else
    {
      return (1 + length_of_list(some_list->next));
    }
}

Alright, so again (as I often ask myself at the end of these posts), what's the point? All I see are two versions of the same function, and the one style that I'm supposedly pushing is actually the least efficient of the two.

Well, the key here is to break out of the algorithm design box and instead think of our code like an expository essay. We've got a lot to say; we can choose a thousand different ways to say it; and we want to find the most effective way to make our point.

So when I begin writing a function, like writing an essay, I develop a plan. I ask myself, "What are we trying to achieve here? What are the possible cases? What are the possible results?" What I've found is that thinking of my programs at this higher organizational level yields a body of code that is easier to follow and perhaps more importantly, easier to update.

Give it a shot sometime; I think you'll like what you see.