How To Give Your Mac Some Mid-80's Style by Daniel Ehrman

Black and white icon pack + tweaked window stylings = 2.7GHz of 1980's awesomeness

I've never been a huge Mac fan (sorry, guys). But I've long been obsessed with vintage computers, and I keep a collection of 4 old (1980-1997) Apple computers in storage—hopefully some day on display.

So when I was given my first MacBook Pro at my new job, I couldn't wait to make it feel a little more familiar. Fortunately, I found a great starting point in Ben Vessey's "Mac OS (Old School)" icon pack.

Unfortunately, most of Ben's work just covers some basic Mac app icons, so I extended the look by adding 15 more "engineery" icons—Eclipse, Android Studio, Emacs, etc.—and by building my own window and menu theme to yield a (nearly) complete Macintosh immersion.

For those of you interested in giving it a shot, here are the complete steps:

1. Initial setup

  1. Download Ben Vessey's "Mac OS (Old School)" icon pack and extract the zip file.
  2. Download and install LiteIcon (for swapping app icons) and Flavours (for theming windows and menus).

2. Install the icons

Navigate to the icon pack you downloaded earlier, and drag and drop the desired icons over their respective applications in LiteIcon. Apply changes when done.

3. Update the desktop

  1. Right click the desktop and select Change Desktop Background...
  2. Click the + icon to add the folder you extracted from the zip file.
  3. Click on the only image that appears in the main window, and select Tile as the display mode.

4. Tweak the windows

  1. Open Flavours and click Get More...
  2. Search for "macintosh classic" (without the quotes); double click on the Macintosh Classic B&W theme; and click Apply.
  3. Uncheck Menu Image and Desktop, and click Apply.
  4. You will be prompted to log out to apply your changes. Do as it says.

How do I undo my changes (revert to defaults)?

  1. To revert back to the default icons, open LiteIcon, select the Tools drop-down menu, and click Restore All System Icons...
  2. (Desktop background—no explanation needed)
  3. To restore the default windows theme, simply open Flavours and click the on/off toggle at the top right of the window.

Issues (cosmetic only)

  • Chrome, and potentially some other strange programs, use their own custom window borders, and thus their windows look...funky with the theme. I've kind of learned to deal with it, but I'd be lying if I said it didn't annoy me.
  • The classic stripe look you see in the screenshots is achieved in part by placing a white rectangle in the middle of the window. Unfortunately, there's no way to resize this rectangle to fit the window text, so I picked a size of roughly half of the window width to try to work for about 99% of the use cases. Occasionally though, you may have a window with a really long title that simply can't fit in that white box. To accommodate this, I've set the opacity of the title bar stripes to 50% so that you can still read the whole title. But yes, it will be annoying if/when you come across a long title (hopefully rarely).
  • There are some inconsistencies between the icons (e.g. some circles are drawn slightly different than others). I wasn't as careful as I should've been, but if anyone has any serious interest in using the icons, I'd be more than happy to clean them up for you. Again, I'd be lying if I said this didn't bother me.

11 Must-Haves for Every Power Programmer by Daniel Ehrman

As developers, programmers, engineers—whatever you want to call us—we spend a lot of time programming. And seeing as we spend more waking hours with our editors than we do with our families, it's important that we take a close look at that experience and make sure that we're getting the most out of it that we can.

DISCLAIMER: This post is targeted at the Emacs crowd. If you're not an Emacs user, I hope that you can at least still learn a thing or two from this list and maybe find a way of achieving the same functionality in your own editor. If this list happens to provoke an interest in switching over, great. But this post is not intended to persuade readers one way or the other—simply to share some heavily used features that can make those hours away from home a little bit happier.

In Emacs, the customization comes from your ~/.emacs configuration file. Emacs users spend years building up these config files, and a senior developer's .emacs file can be a highly sought-after resource. (One of my last correspondences with a departing engineer consisted of "Good luck," and "Can I get the path to your .emacs?") With that in mind, I've listed some useful key bindings to pass on where applicable.

 Figure 1: Evidence of generational learning through .emacs configuration files.



1. goto-line

Even the superhuman developer likely spends at least half of his time debugging code. And although some IDEs will highlight the failing line, it's not always a compiler that's flagging a problem. Whether it's grepping through a list of files or simply looking for a problematic section of code that someone brought to your attention, the "goto-line" operation is all too frequent.

(define-key global-map (kbd "M-g l") 'goto-line)

2. revert-buffer

Ever re-run a program after making a bug fix? How about 20 times a day? Having a quick way to reload a run-time log (or any frequently changing file) can be a huge time-saver.

(global-set-key (kbd "C-S-r") 'revert-buffer)

3. The macro

Serious developers know well the power of the macro. Non-power programmers cringe at the thought of it. The concept of the macro tends to evoke a sense of heavy up-front cost that's only useful in the rarest, most tedious circumstances. But in a given day, a few good macros may save you an hour of work. (If your managers knew how much time you waste by not using macros, they would insist on it!)

We could spend pages discussing the use cases of macros, but suffice it to say that for pretty much any task which takes more than 10 seconds and will be repeated at least 10 times in one sitting, you're better off using a macro. In Emacs, this couldn't be easier:

  1. Start recording: C-x (
  2. Do whatever you want to repeat, using regular expressions and generic operations wherever possible.
  3. End recording: C-x )
  4. Run the macro n times: C-u n C-x e

This whole process creates a throwaway macro that you can keep until you create a new one. If, however, you decide that it's generic and useful enough to save for later, this bit of config code will help:

(defun save-macro (name)                  
    "save a macro. Take a name as argument
     and save the last defined macro under 
     this name at the end of your .emacs"
     (interactive "SName of the macro :")  ; ask for the name of the macro    
     (name-last-kbd-macro name)            ; use this name for the macro    
     (find-file user-init-file)            ; open ~/.emacs or other user init file 
     (goto-char (point-max))               ; go to the end of the .emacs
     (newline)                             ; insert a newline
     (insert-kbd-macro name)               ; copy the macro 
     (newline)                             ; insert a newline
     (switch-to-buffer nil))               ; return to the initial buffer

4. increment-number-at-point

The true Emacs user looks with enthusiasm at even an incredibly tedious task like generating repeated blocks of code with sequentially incrementing numbers! When recording the macro, simply place the cursor over the applicable number and call increment-number-at-point:

(defun increment-number-at-point ()
  (interactive)
  (skip-chars-backward "0123456789")
  (or (looking-at "[0123456789]+")
      (error "No number at point"))
  (replace-match (number-to-string (1+ (string-to-number (match-string 0))))))
(global-set-key (kbd "C-c +") 'increment-number-at-point)

Note that the same can be achieved for hexadecimal numbers as well:
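A minimal sketch of that hexadecimal variant, mirroring the decimal version above (the C-c * binding is my own choice):

```elisp
(defun increment-hex-number-at-point ()
  (interactive)
  (skip-chars-backward "0123456789abcdefABCDEF")
  (or (looking-at "[0-9a-fA-F]+")
      (error "No hex number at point"))
  ;; Parse as base 16, increment, and write back in hex.
  (replace-match (format "%x" (1+ (string-to-number (match-string 0) 16)))))
(global-set-key (kbd "C-c *") 'increment-hex-number-at-point)
```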

5. ansi-term

This one is my personal favorite. Couple the macro with a terminal embedded in your editor, and you're poised to be the most powerful developer in the office. Once your terminal is contained within your editor (and once you've spawned several simultaneous terminals in there too), you'll realize you no longer have reason to live outside the editor (though your spouse may disagree).

My typical development environment is a split window with code on one side and a terminal on the other (all within a single Emacs instance). Using the "C-x o" key binding I can then quickly switch between editing and performing the usual command line operations in Linux.

Like I said, this is where the beauty of Emacs truly starts to come together. Ever had a bunch of errors dumped to the terminal that you need to cross reference with your code to apply a series of fixes? Automate that! Suppose that the compiler is complaining about a bunch of variables in our code not being declared yet. Easy:

  1. In the terminal buffer grep for "Error:"
  2. Use a regex to search for the variable name, and copy it.
  3. Switch over to the code buffer with C-x o.
  4. Declare the variable, and make a new line.
  5. Switch back to the terminal.
  6. Rinse and repeat.

Note that in Emacs, there are a variety of terminal types you can use. I prefer the ansi-term mode due to its smooth support of telnet, but I'm sure there are ways of customizing the other terminals (e.g. shell-mode) to behave in the same way as my beloved ansi-term.

6. rename-buffer

Since I'm typically working on five different tasks at once, it helps to have named buffers to keep track of things. This becomes even more useful when I want to switch to a specific terminal with a specific environment in a specific directory:

M-x rename-buffer

Emacs will then prompt you for a new name for the buffer. Then getting to the file/terminal you need doesn't involve sifting through tens of windows and tabs. Simply…

C-x b <name of buffer>

Note also that if you don't provide the name of the buffer, you can simply press RETURN to view a searchable list of open buffers, which helps if you can't remember what a file was called. This is also useful if you want to, say, have a macro quickly close all buffers matching a given regex.

7. Auto indent

Ever carefully align every block of code to paint your file like a Georges Seurat? If so, you're wasting your time. Even with a preexisting, poorly formatted file from another developer, there's no need to do any heavy lifting. Kick back and let your editor do it for you. In Emacs, simply highlight the lines you want and…

C-M-\

…which runs indent-region on the selection.

8. upcase/downcase-region

Ever need to change a bunch of variables to constants or vice versa? In most circumstances, this involves changing the case, and Emacs makes this very easy for us. Simply select the applicable text and "C-x C-u" to convert to uppercase or "C-x C-l" to convert to lowercase. You'll also need to add these lines to your .emacs:

(put 'downcase-region 'disabled nil)
(put 'upcase-region 'disabled nil)

One can also imagine how useful this would be if you wanted to convert an entire file from underscore-based variable names to CamelCase or vice versa. Combining this case-conversion functionality with macros would make the task a trivial operation.

9. The Mark Ring

Rarely does a file of code fit on a single page. So it should come as no surprise that a reasonable amount of the average developer's time is spent navigating back and forth between pages in a file. Often this operation consists of either paging up or down or grepping for a phrase near the line of interest (not to mention the dreaded journey from keyboard to mouse!).

Luckily, Emacs remembers where we've been. If you know you're going to come back to your current location in a file, simply "C-<SPC> C-<SPC>" to push the current point onto the "mark ring," and "C-u C-<SPC>" to pop it off. Emacs will take you back to where you left off. (Note that if it's a multi-hop trip, you can push multiple points onto the mark ring to retrace each step on your way back home.)

10. Dired (File Explorer)

If you're still browsing for files in your terminal—or worse, a GUI—you're missing out, because Emacs serves as an interactive file explorer too. Simply "C-x C-f" to open a file, but press RETURN without supplying a name, and Emacs will open a buffer with a listing of the current directory. From there, you can navigate through the directory with cursor keys, regular expressions, or even (horrified gulp) the mouse to find what you're looking for.

11. The regex

Finally, no list would be complete without mention of regular expressions. Although it may go without saying, the regex search within your editor is such a commonplace necessity that most of us may take it as second nature—almost a cyborgial extension of the developer's motoneural system. I would imagine that very few real editors out there lack regex searching, but for the record, it's achieved in Emacs via "C-M-s" for forward/downward search and "C-M-r" for reverse/upward search.


Well, I hope that helped. These were just the few Emacs features that I use on a regular enough basis that I couldn't go without them. Please leave comments with suggestions on items I might have forgotten. As is often the case with these sorts of things, it's the actions we take for granted that, by definition, we are most likely to overlook.

The Pipelined Brain by Daniel Ehrman

While at work today I thought to myself, "How marvelous is it that if we as logic designers want more processing power out of a chip, we can simply add more logic?" It seems simple, but it's really quite remarkable when you think about it. (Imagine choosing to give your child more neurons as needed.) Sure, there is an area and power trade-off to consider, but if we need to do more work in parallel, there's nothing stopping us from building a few extra gates to get what we want.

Note that this approach is fundamentally different from software: extra code means extra time. On the other hand, transistors are a dime a dozen these days, and on chips with more than a billion of them, the added cost—for no added processing time—is truly quite trivial.

So then I thought, "What if we could do this with our brains? What if like hardware, we could think in parallel? What if I could pipeline my brain so that while one block of my consciousness were busy processing Problem A, another block could be working on Problem B? What would that look like?"

But then it occurred to me: we already do this every day. Yes, we know that our brains are busy processing thousands of patterns in parallel far beneath our consciousness, but that's not what interests me here. I'm interested in the collective brain.

Complex "intelligent" behaviors have long been observed in relatively simple species like ants via their larger actions as a community. And in humans, it has been shown that while no one of us may be an expert in a particular field, through the power of numbers, we can together achieve highly accurate results. So it really shouldn't be any great leap to think of our brains as individual workers contributing to massive results on a supercomputer scale.

I use that word intentionally—supercomputer—because that's what we are, all 7 billion of us combined. What else could be capable of achievements as miraculous as flying three people to the moon and back with less than a decade of preparation, while simultaneously managing the concerns of everyday life here on Earth?

In fact, from a computer architect's perspective, there is tremendous insight to be gleaned from the ways in which we work together to achieve results beyond the power of any one brain. Perhaps no company better exemplifies the spirit of pipelined processing than Taiwanese news animators Next Media Animations, who now go from "story conception to a finished product in less than 2 1/2 hours." This remarkable pace is achieved only by treating each worker as a distinct, specialized unit capable of solving a problem in isolation from, and in parallel to, the other members of their team, much in the same way that we design microprocessors.

The success of Next Media Animations is a glimpse at what can be accomplished when a production process is constructed to remove as many serialization points and dependencies as possible. With advanced tools at our disposal, such as 3D rendering software, which dramatically reduce the pipeline latency of product development, we are poised more and more every day to function like a high-speed computer, moving with microprocessors along Moore's curve rather than watching them pass from the sidelines.

It's now up to the entrepreneurs, the managers, and the creative thinkers of tomorrow to match the evolution of computers with equivalent developments in our workflow, growing in tandem with the technology to ensure that we are as efficient and as productive as we can be.

Guitar Synthesizer by Daniel Ehrman

Since purchasing my function generator, I've been doing a lot of thinking about what kinds of creative projects I can accomplish with this seemingly simple device.

One concept that struck me almost immediately was using the Voltage Controlled Frequency (VCF) input of the function generator to vary the sound via some external source, and since I'm a guitar player, naturally I looked to my guitar as the ideal controller.

This isn't a very new idea. In fact, the Roland GR-55 puts to shame pretty much any hypothetical device I could imagine:

Clearly something of that caliber is out of my league, but I might be able to achieve something simpler.

My idea (again, nothing incredibly novel) is to use a simple Frequency Controlled Voltage (FCV) chip to convert my guitar signal to a specific voltage and then provide that to the function generator's VCF input. The ideal result would be a clean synth sound coming straight from my guitar.

The reality of course is much more complicated.

Unfortunately, the function generator is monophonic: it is only capable of playing one "note" at a time. So it's imperative that whatever voltage input I provide it is a clean representation of a single guitar note rather than the complex assortment of frequencies that are present in an actual guitar's sound. This is no trivial problem. Even if I focus on plucking only a single string, a single note is actually composed of a variety of overtones that pollute the sound and could potentially distract the circuitry from the fundamental frequency:

 Plucking the open D string of a Fender Stratocaster


So the next thought is, "Well, that's OK: I'll just pass the signal through a few band-pass filters and choose the band with the maximum output as my fundamental frequency." While that may sound like a perfectly simple solution to the problem, this is where things get much hairier. Let's take a look at the spectrum of frequencies that actually comprise this signal:

Ouch. Take a look at that. The fundamental frequency of our D3 note, approximately 147 Hz, isn't even the peak frequency in the spectrum! To be fair, we have to pay attention to the fact that of all of the partial frequencies, the group of tones around the D3 clearly has the largest area; therefore, that collection of closely aligned frequencies will together cut through the mess to create what we perceive as a single D.

If only band-pass filters worked like human ears…. Look what happens when we try to cut a slice out of the spectrum that should represent a D3 note (144 to 150 Hz):

 Top: original complete guitar D note. Bottom: Extracted frequencies in the range of 144-150 Hz.


What?? Things certainly aren't looking good here. The extracted range (D3) is of such a low volume, it's hard to believe that we could even categorize the original signal as a D at all. And if we were to expand the band any wider, we'd start to pick up the surrounding notes (C# and D#) and in turn detract from the cleanliness of our signal.

To put it simply, reducing a raw analog guitar signal—even just one note—to a single frequency that can be cleanly converted by an FCV chip may require some more advanced circuitry.

However, there is a bit of hope: if you look carefully at the figure above, you'll notice that the smaller signal at the bottom representing the D3 note has a period that directly aligns with that of the big spikes in the top signal. This could be very good news: if the FCV chip can trigger on peaks, it will accurately detect the fundamental frequency and provide the single voltage we need.
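The "trigger on peaks" idea amounts to finding the signal's period in the time domain. As a rough illustration only (in Python, with made-up amplitudes; the real FCV chip works in analog hardware), an autocorrelation search over candidate lags recovers the fundamental even when an overtone is louder:

```python
import math

RATE = 14700  # Hz; chosen so a 147 Hz period is exactly 100 samples

def estimate_fundamental(signal, rate, fmin=80, fmax=400):
    """Find the lag at which the signal best matches a shifted copy of itself;
    the sample rate divided by that lag is the fundamental frequency."""
    best_lag, best_score = None, float("-inf")
    for lag in range(rate // fmax, rate // fmin + 1):
        score = sum(signal[i] * signal[i + lag] for i in range(len(signal) - lag))
        if score > best_score:
            best_lag, best_score = lag, score
    return rate / best_lag

# A D3-like test tone whose loudest partial is an overtone, as in the real spectrum:
sig = [0.6 * math.sin(2 * math.pi * 147 * t / RATE) +
       1.0 * math.sin(2 * math.pi * 294 * t / RATE)
       for t in range(3000)]

print(round(estimate_fundamental(sig, RATE)))  # prints 147
```

Because the composite wave repeats at the fundamental's period even when an overtone carries more energy, the search lands on the 147 Hz lag rather than the louder 294 Hz partial.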

Unfortunately I'll need to do more investigation into the problem before I have a complete answer (for another blog post). But in the meantime, while we're in the business of deconstructing guitar signals, I thought it would be a fun wrap-up to work our way back up to that original dirty signal from scratch.

In referring to the frequency spectrum shown above, I chose the ten loudest frequencies and combined them in Audacity to produce an artificial guitar sound:

 Artificial guitar D3 note: 10 simple sine waves and a single additive composite.


Comparing the resulting composite wave to the original actual guitar sample, we can see that the synthesis isn't that far off:

 Top: artificial guitar. Bottom: actual guitar


The primary source of difference between the two waves is merely the magnitude of each of the partial frequencies, which, for the sake of simplicity, I kept nearly the same across the entire spectrum. This shortcut results in "fuller" bands (louder notes), such as the D3, being underrepresented and "narrower" bands (softer notes) being overrepresented.
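That additive reconstruction is easy to sketch in code. The partials below are illustrative stand-ins (not the exact ten frequencies and levels I pulled from the spectrum), but the technique is the same one performed in Audacity: summing sine waves sample by sample.

```python
import math

RATE = 8000  # sample rate in Hz

def additive_tone(partials, seconds):
    """Sum (frequency_hz, amplitude) sine partials into a single waveform."""
    n = int(seconds * RATE)
    return [sum(a * math.sin(2 * math.pi * f * t / RATE) for f, a in partials)
            for t in range(n)]

# Illustrative D3-like partials: a ~147 Hz fundamental plus decaying overtones.
partials = [(147, 1.0), (294, 0.9), (441, 0.7), (588, 0.5), (735, 0.3)]
wave = additive_tone(partials, 0.5)
```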

Here are the audio samples of the two different waves:

Function Generator Music in 5 Minutes by Daniel Ehrman

Forget oscilloscopes. Ever wondered what it sounds like to play a function generator through a guitar amp?

In my undergrad, while working on the Purdue Solar Racing team back in 2011, I borrowed the team's function generator when it wasn't in use and carried out some "musical experiments" back at home.

I have a Fender Cyber Champ amp, which has a whole host of effects built in: phaser, flange, chorus, various kinds of reverb, and a lot more. So the thought was that if I could combine a box of essentially unlimited sounds with these spacey guitar effects, I could cook up some pretty cool live music, or at the very least, synthesized effects to lay over whatever other music I was working on at the time.

Of course I eventually had to give it back, but I never stopped thinking about all of the unique sounds I could make with that function generator. I'd once seen a documentary on the making of The Dark Side of the Moon, and the idea of crafting an entire composition from little hand-made sonic components truly lit a fire in my engineer's brain.

Back to the present day.

Last week, I finally ordered my own function generator for $25 on eBay and started right where I left off three years ago.

 Clockwise from top left: (1) GW GFG-8015G function generator, (2) Boss RC-2 Loop Station, (3) Fender Cyber Champ amp, (4) Presonus AudioBox USB interface, (5) headphones, (6) PC.


The diagram above shows the final setup with all of the required pieces for recording the music. The loop station (top center) is the key: it lets me loop back what I've previously recorded without the help of a computer so I can compose everything live in one shot. The computer in the diagram, and in fact the entire bottom row, is only present for recording purposes.

Also note that technically, you would want any effects—including the amp—placed before the looper so that different effects could be saved with each track rather than the same effects being applied to the entire composition, but I just wanted a quick and simple setup here. The only effects I actually used were very small amounts of reverb, delay, and chorus.

I start with a 2 Hz square wave with a non-50% duty cycle to create a heartbeat-like bass drum. (Listen to this beginning section of Dark Side for comparison.) Typically, this wouldn't be audible due to its being below the 20 Hz human hearing cutoff, but the quick changes in the line level create some residual percussive frequencies that we can hear quite well.
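The heartbeat trick is easy to see in a sketch (Python; the 10% duty cycle here is illustrative, not the actual knob setting). The wave sits flat for most of each period, so the audible energy comes from the abrupt transitions, not from the 2 Hz rate itself:

```python
RATE = 8000  # samples per second

def pulse_train(freq, duty, seconds):
    """Square wave at freq Hz that sits 'high' for the given fraction of each period."""
    period = RATE / freq
    return [1.0 if (t % period) < duty * period else -1.0
            for t in range(int(seconds * RATE))]

beat = pulse_train(2, 0.1, 2.0)  # a 2 Hz heartbeat with a non-50% duty cycle
on_fraction = sum(s > 0 for s in beat) / len(beat)  # fraction of each period spent 'high'
```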

Then, in remembering the repetitive, but beautiful, two-chord droning of Pink Floyd's "Breathe," I set the frequency knob to A for four beats and E for four beats. (Note: it seems like these notes all came out half a step lower, which I'll have to investigate further.)

With the basic notes down, I start overlaying more of the notes that comprise the A major and E minor chords, with the pleasant surprise of some phaser effect as I double-record the same, but phase-shifted, notes.

Add on top of that a couple "slides" into notes and some (admittedly atonal) quickly changing frequencies, and we're pretty much done.

So here's the final 15-second composition (that loops ad infinitum):

Ultimately, I have much bigger plans for this function generator, but this is a nice kick-off to what will hopefully become a seriously fun audio engineering project.

What Discrete Math and Lisp Can Teach Us About Good Coding Habits by Daniel Ehrman

When I was in my undergrad, I had the pleasure of reading, and to be honest, writing, countless lines of confusing code written in a variety of languages. To be fair, I had inherited some pretty bad habits from when I taught myself BASIC as a kid, but it seems like college courses never did a really great job of pushing clean coding style. Sure, heavy emphasis was always placed on designing scalable, efficient programs, but algorithmic complexity is quite distinct from code complexity, and generally a quality coding style was almost always left as an exercise to the student.

That is until my senior year—Introduction to Artificial Intelligence. Building on a foundation in Discrete Math, where students are educated in the laws and techniques of formal logic, this course sought to remove from our minds the baggage of sequential programming (C, FORTRAN, Python, etc.) and instead see the computer as an executor of logic—returning a decision from a single logical function.

As it turns out, thinking about programs this way has profound effects on the way we write our code: specifically, it enforces a logical structure. Let's take a look at a very simple function in C that computes the length of a linked list:

int length_of_list(t_linked_list *some_list)
{
    int length = 0;
    while (some_list != NULL)
    {
        length++;
        some_list = some_list->next;
    }
    return length;
}

Efficient? Sure. Straightforward? Eh. Honestly, with an algorithm so simple, it would be hard to get lost in this code. But the point here is to think about the result we're trying to achieve and to question whether the structure of the code is representative of that goal. Is it?

We'll come back to that thought in a moment. But for now, let's look at an equivalent function in Lisp:

(defun length-of-list (some-list)
  (if (null some-list)
      0
      (+ 1 (length-of-list (rest some-list)))))

Pay attention to the way this code is structured (like a logical proof):

  1. Base case: Does the list have any elements?
  2. Inductive step: What is the length of the list if we remove an element?

While this is a pretty simple function, thinking about our code this way—as a set of "propositions," if you will—can deeply shape the way we plan and organize our code.

"But wait!" says the code-savvy reader. "That's not fair! You're comparing an iterative algorithm in one language to a recursive (and less efficient) algorithm in another!"

Guilty. The truth is that while recursion can result in some truly beautiful programs, it seldom results in the most space- or time-efficient program. And the truth is that Lisp is designed to work with lists, so I've intentionally chosen an unfair example.

But the point here is not to start a language war or, as I said earlier, discuss algorithmic complexity; the point is to demonstrate what we can learn from a (less-than-popular) language that tends to enforce good behavior.

For kicks, here is the recursive variant of the original C function:

int length_of_list(t_linked_list *some_list)
{
    if (some_list == NULL)
        return 0;
    return (1 + length_of_list(some_list->next));
}

Alright, so again (as I often ask myself at the end of these posts), what's the point? All I see are two versions of the same function, and the one style that I'm supposedly pushing is actually the least efficient of the two.

Well, the key here is to break out of the algorithm design box and instead think of our code like an expository essay. We've got a lot to say; we can choose a thousand different ways to say it; and we want to find the most effective way to make our point.

So when I begin writing a function, like writing an essay, I develop a plan. I ask myself, "What are we trying to achieve here? What are the possible cases? What are the possible results?" What I've found is that thinking of my programs at this higher organizational level yields a body of code that is easier to follow and perhaps more importantly, easier to update.

Give it a shot sometime; I think you'll like what you see.

Making Chords with the PC Speaker by Daniel Ehrman

At least once in every computer engineer's life, the question must arise,

"Can I create the illusion of a chord in QuickBASIC by rapidly multiplexing different notes on my internal PC speaker?"

Well, OK, maybe not that question exactly. But a lot of engineers can probably attest to having a craving for solving tough problems with limited tools, and since I have a passion for both music and retro computers, this was my chosen challenge last weekend.

As a little background, QuickBASIC is the simple language/compiler that came bundled with MS-DOS back in the day. It's the distant ancestor of the Visual Basic many of you may know now and Microsoft's replacement of the older GW-BASIC which arguably first exposed programming to the masses (perhaps with the exception of Applesoft BASIC of Apple II fame).

QuickBASIC lets you do a lot of fun stuff with your computer (i.e. graphics and sound) with just a couple lines of code. Like the Arduino of the 80's.

So in my search for the lost chord (yeah, Moody Blues reference), naturally I returned to this glorious language where I first cut my teeth on programming. I won't bore you with the bulk of the code, but here's what the important stuff looks like:

SUB PlayChord (SomeChord AS Chord, Duration, MultiplexingDuration)

' Convert from seconds to clock ticks (the PC timer ticks 18.2 times per second).
LocalDuration = 18.2 * Duration
LocalMultiplexingDuration = 18.2 * MultiplexingDuration

' Compute local values.
NumIterations = LocalDuration / LocalMultiplexingDuration
NumNotes = SomeChord.NumNotes
NoteDuration = LocalMultiplexingDuration / NumNotes

' Play the notes, cycling rapidly through the chord.
FOR i = 1 TO NumIterations
   SOUND SomeChord.Note1, NoteDuration
   IF (SomeChord.NumNotes >= 2) THEN
      SOUND SomeChord.Note2, NoteDuration
      IF (SomeChord.NumNotes >= 3) THEN
         SOUND SomeChord.Note3, NoteDuration
      END IF
   END IF
NEXT i

END SUB


where a chord is defined as below:

TYPE Chord
   NumNotes AS INTEGER
   Note1 AS INTEGER
   Note2 AS INTEGER
   Note3 AS INTEGER
END TYPE

Each integer in the chord represents the frequency of the given note. I also added a NumNotes member to give us the option of experimenting with different numbers of simultaneous notes. Pretty simple.

Sure we could improve the scalability of the code, but this isn't a programming exercise, and frankly, this beginner's language from 1985 doesn't exactly make arrays embedded in custom types easy. Our goal here is to make music, so let's get started….

I start with a simple G major chord:

DIM Gmaj AS Chord
Gmaj.NumNotes = 3
Gmaj.Note1 = 196
Gmaj.Note2 = 247
Gmaj.Note3 = 294

which sounds like this with a "multiplexing duration" of 0.02 seconds:

Note that the "multiplexing duration" effectively defines how long each note in the chord should be played. So if we increase the multiplexing duration, we should expect to hear the individual notes in the chord become clearer and clearer:

While the higher mux durations give us that cool arcade sound we all know and love, the 0.02 mux value clearly masks the multiplexing best, producing a sound closest to anything we could consider a chord.
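A quick back-of-the-envelope check (sketched in Python; not part of the original program) shows why the short slices sound rough: with three notes sharing each window, a 0.02 s mux duration gives the 196 Hz G root barely more than one full cycle per slice, while 0.08 s gives it about five:

```python
def cycles_per_slice(freq_hz, mux_duration_s, num_notes=3):
    """Full cycles of a note that fit in its share of one multiplexing window."""
    return freq_hz * (mux_duration_s / num_notes)

short = cycles_per_slice(196, 0.02)   # ~1.3 cycles: barely a tone at all
longer = cycles_per_slice(196, 0.08)  # ~5.2 cycles: enough to register as a note
```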

However, the short note durations of the first track yield a distinctly "dirty" sound that's not too appealing. Taking a closer look in Audacity's Frequency Analysis, we can see why:


Whoa! That's a lot of extra frequencies that we really don't want. For reference, this is what the chord looks like if we set NumNotes to 1 and just play the G note:


Much cleaner.
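Incidentally, you don't need Audacity for this kind of peek at the spectrum. A naive single-bin DFT in Python (a stdlib-only sketch, unrelated to the original QuickBASIC project) can synthesize a PC-speaker-style square wave and find its dominant frequency:

```python
import math

def square_wave(freq_hz, sample_rate, n_samples):
    """Two-level (+1/-1) square wave, like the PC speaker's output."""
    period = sample_rate / freq_hz  # samples per cycle
    return [1.0 if (i % period) < period / 2 else -1.0 for i in range(n_samples)]

def bin_magnitude(samples, freq_hz, sample_rate):
    """Magnitude of a single DFT bin at freq_hz (naive direct sum)."""
    w = 2 * math.pi * freq_hz / sample_rate
    acc = complex(0.0, 0.0)
    for i, s in enumerate(samples):
        acc += s * complex(math.cos(w * i), -math.sin(w * i))
    return abs(acc)

RATE = 8820  # sample rate chosen so 196 Hz divides it evenly (45 samples/cycle)
wave = square_wave(196, RATE, RATE // 2)  # half a second of a pure 196 Hz "G"
peak = max(range(150, 350), key=lambda f: bin_magnitude(wave, f, RATE))
```

For a clean, isolated note the peak lands right at the fundamental; it's the multiplexed version that scatters energy into the bogus neighbors.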

OK, so what's happening in the first plot (the triad G major chord) that's giving us all of those bogus frequencies, and how bad is it really? Well, let's take a closer look at the magnitudes of each of those frequencies by exporting the data to Excel:

(Table: Frequency (Hz) vs. Level (dB), 0.02 s mux chord)

OK, so while this provides some clarity—note that we can see frequencies near the three expected ones: 196, 247, and 294—it also raises a couple of interesting, perhaps related, questions:

  1. Why does each of the three primary frequencies come out a little flat (slightly lower than it should be)?
  2. What is this unexpected 277-Hz frequency doing at the top of the chart?

Well, to help us better understand what's going on, let's take a look at the data for the 0.08 mux duration (i.e. longer notes):

(Table: Frequency (Hz) vs. Level (dB), 0.08 s mux chord)

Alright, now that looks a little better. This tells us that when the notes are played for a longer period of time, not only do they come out closer to the expected frequencies, but they also appear as the three loudest frequencies in the spectrum.

So it appears from the data (granted, from only two data points) that the accuracy of the notes degrades as their duration decreases.

As it turns out, when the notes are short enough, it doesn't appear that there is enough time to generate a strong consistent tone free of aberrations:


In this waveform view of the 0.02 mux chord, the issue is painfully obvious: the G note—the root note of the chord—is too low a frequency to be played in the small time slice allotted to each note in the chord. The time slice is so small, in fact, that only one cycle of the note can be played, effectively resulting in no audible note.

Upon closer inspection, we can account for the specific reason why 190 Hz appears stronger than the expected frequency of 196 Hz: the transition from the D note to the G note is slightly longer than the pulse width of the G note itself (i.e. one half of 1/196 seconds). This slightly stretched period is what yields the flat G and is likely what flattens the other notes as well.
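The size of that error is easy to put a number on (plain arithmetic, using the measured 190 Hz figure from the analysis above):

```python
# Ideal vs. measured period of the chord's root note, in milliseconds.
expected_period_ms = 1000 / 196   # ideal G: ~5.10 ms per cycle
observed_period_ms = 1000 / 190   # the flat G seen in the frequency analysis
extra_transition_ms = observed_period_ms - expected_period_ms  # ~0.16 ms of slop

# The pulse width that transition has to beat is half the ideal period:
pulse_width_ms = expected_period_ms / 2  # ~2.55 ms
```

So a transition overhead on the order of a tenth of a millisecond per cycle is all it takes to drag the note audibly flat.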

The final big issue is the inconsistency between note transitions. If you choose any note transition (e.g. B to D) and then look for the same transition later in the waveform, you'll likely find that the duration of the transition is slightly different. While this discrepancy might seem trivial on the scale of microseconds, our ears (as well as our software) are sensitive enough to pick up the difference, and it's these little discrepancies that are likely producing the unwanted extra frequencies in our chord.

Unfortunately, the transition consistency problem is far from trivial: because commands sent to the PC speaker are interrupt-driven, using these high-level constructs for generating sounds results in fairly non-deterministic behavior. To gain full control, we need lower-level code (i.e. assembly) that can manipulate the PC speaker directly. This, of course, is a whole other blog entry—and probably an entire project—altogether.

For reference, there is a lot of information and previous work out there for playing full-fledged WAV-like music through the PC speaker that completely blows this effort out of the water. Check out this example if you're interested:

A More Efficient Model For Problem-Solving Across Hierarchies by Daniel Ehrman

We work in organizations where different people have different specialties in different fields of knowledge, all of which we depend on in one way or another. We have some worker in Site A who's an ARM expert, another one in Site B who knows DDR, and a third one in Site C who understands verification IP. All of these people are solving their own problems independently, despite the fact that most problems consist of a variety of contributing knowledge bases; the worker pushes on through the problem, struggling to understand what he doesn't already know—but what someone else does.

We can think of this situation like a Venn diagram with each person having his own circle of knowledge or expertise. Each person approaches his problem by sort of pecking in the dark all around his circle, maybe expanding his search a bit, hoping that he eventually hits the slice of area intersecting with someone else's domain. A real-world example may be, after asking around the office and bouncing along a chain of e-mails for a few days, finally finding the person who can fill the knowledge gap—and even then, often, only partially.

Perhaps more concretely though, it's best to think of the situation as it fits in with the typical hierarchy:


This kind of structure will be as present in a computer engineer's mind as it is in that of a human resources department. And as any computer engineer could tell you: those "leaf nodes" at the bottom of the tree—they're isolated from each other. In fact, it's inherent in this structure that the only way of getting from one leaf to the other is to go up the tree through each node's "parent."
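That leaf-isolation property is easy to demonstrate with a toy tree (a hypothetical org chart sketched in Python; all the names are made up): the only route between two leaves climbs to their lowest common ancestor and back down.

```python
# Toy org chart: child -> parent (hypothetical roles for illustration).
parent = {
    "arm_expert": "site_a_mgr",
    "ddr_expert": "site_b_mgr",
    "site_a_mgr": "director",
    "site_b_mgr": "director",
    "director": None,
}

def path_to_root(node):
    """Chain of nodes from a leaf up to the root."""
    path = []
    while node is not None:
        path.append(node)
        node = parent[node]
    return path

def route_between(a, b):
    """Route from leaf a to leaf b: up to the lowest common ancestor, then down."""
    up = path_to_root(a)
    down = path_to_root(b)
    ancestors = set(up)
    lca = next(n for n in down if n in ancestors)  # first shared ancestor
    return up[:up.index(lca) + 1] + list(reversed(down[:down.index(lca)]))

route = route_between("arm_expert", "ddr_expert")
```

Every exchange between the two experts formally routes through the director, even though a direct phone call would be trivial.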

So what's the point? The fact is that passing information between leaves (i.e. lower-level employees) is remarkably easy these days with e-mail, web conferencing, and other collaboration tools. The key insight, though, is to recognize that the leaves only see what they've been given, and they will only communicate with each other when asked. So although in reality they may have direct connections (phone, e-mail, etc.), they remain isolated by what they're told to work on.

Taking a step back, let's think about how a task makes its way to the individual problem solvers at the bottom of the tree. High-level goals are constructed by upper management; criteria are defined; and tasks designed to meet those criteria are assigned to workers at the next-lowest level, with increasing amounts of detail at each level down.

But at the point at which a problem is assigned to a specific node at the next-lowest level, an important (and potentially costly) decision has been made: the other nodes at that level, and all of the workers beneath them, have just been pruned from solving that problem. And at that moment, the decision-maker must operate with only the limited information he has been provided by the single level beneath him.

As it turns out, this is a classic problem in artificial intelligence: in the searching of decision trees, such as in the game of chess, often there are far too many possibilities to consider in a reasonable amount of time. The solution is to develop a heuristic, search only a few levels below oneself, and make an educated guess about what you think is probably the best move.
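In code, that strategy is just depth-limited minimax (a generic sketch, not tied to chess or to any real organization): search a few plies down, then let the heuristic make the educated guess.

```python
def minimax(state, depth, maximizing, children, heuristic):
    """Depth-limited minimax: below the cutoff, trust the heuristic's guess."""
    moves = children(state)
    if depth == 0 or not moves:
        return heuristic(state)
    values = (minimax(m, depth - 1, not maximizing, children, heuristic) for m in moves)
    return max(values) if maximizing else min(values)

# Toy game tree: integer state s branches into 2s and 2s+1 until s reaches 8.
def children(s):
    return [2 * s, 2 * s + 1] if s < 8 else []

def heuristic(s):
    return s  # the "educated guess": a bigger state looks better

best = minimax(1, 3, True, children, heuristic)
```

The analogy to the hierarchy is direct: the manager sees only a shallow horizon of the tree and prunes everything beyond it on the strength of an estimate.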

This is effectively how the business world is working now, and has been for a long time—managers making big decisions with highly filtered information.

But with the technology available to us these days, there is no reason to keep proceeding with this outdated approach. Think about how you find an answer to a question when you're searching with Google. Do you browse through a hierarchical list of categories, at each step of the way considering all of the links and choosing the one which poses the highest likelihood of containing your answer? No. That model of search has been dead for years now because people understand the power of letting the algorithms choose your results for you (based on their unbeatable knowledge of millions of available options).

So where is the Google of problem-solving in today's businesses? Where are the "page rank" algorithms for telling managers the best resources to assign a task? Imagine indexing everything related to a problem in a centralized database: products affected, people involved in the solution, who knew a lot about it (and who didn't), tools used, etc. And imagine that every time a person contributes to a solution, his information is linked to that problem, effectively creating a "portfolio" of problems that worker has under his belt. Finally, we can pull in live data like individuals' schedules and project timelines to ensure that resources are always properly allocated.

Over time, we have a map of our resources embedded in this database, and we can use "big data" algorithms to intelligently assign workers to a task based on their comparative advantage.
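A first cut at that matching could be as simple as scoring each worker's problem portfolio against a task's tags (a deliberately naive Python sketch; all names and skills here are invented):

```python
# Hypothetical worker portfolios: tags from problems each has already solved.
portfolios = {
    "alice": {"arm", "verification", "linux"},
    "bob": {"ddr", "signal-integrity"},
    "carol": {"ddr", "arm", "firmware"},
}

def rank_workers(task_tags, portfolios):
    """Rank workers by overlap between the task's tags and their portfolios."""
    scores = {w: len(task_tags & tags) for w, tags in portfolios.items()}
    return sorted(scores, key=lambda w: (-scores[w], w))  # ties broken by name

ranking = rank_workers({"ddr", "arm"}, portfolios)
```

A real system would obviously weight recency, availability, and project timelines, but even set overlap beats asking around the office for three days.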

The final result is something like matrix management but with even fewer barriers between teams. The new style dramatically crushes information silos and makes it difficult for tribal-style management to survive at all. But here it is important to be wary of a potential motivational pitfall that may result from these new highly cross-functional resources: in fact, the flattening effect that occurs when all resources become available to all possible problems may actually induce competitive issues at managerial levels. A manager is responsible for the work of his team, and thus if his people are being pulled into other teams' work, it will be difficult for the manager to find reason to support such "outside" activities.

Therefore, if such a system were implemented, it may be most effective to apply the traditional resource-pruning at a moderately high level while still leveraging the power of the algorithm's decision-making based on its in-depth knowledge of all available resources. Even so, a front end for the problem-solvers database could be provided to the lower-level employees to give them a means to get answers more quickly. In this case, employees would have the power to solve their problems efficiently, while the managers would still have the power to decide how those employees should spend their time.