cat /dev/brain > blog

yet another weblog – by Sven A. Schmidt

Asynchronous, lazy initialization with synchronous accessor


I’ve come to love Grand Central Dispatch and blocks for making it so easy to add asynchronous tasks to your application. Without the overhead of instantiating thread classes or defining callback methods, you can send a task to the background and keep your main thread unblocked.
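
For reference, the basic pattern is a single call; here doSlowWork stands in for whatever task you want off the main thread:

dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
  // runs on a background thread, keeping the main thread responsive
  [self doSlowWork];
});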

However, sometimes you need a mix of synchronous and asynchronous behavior; more specifically, you may want to start something asynchronously and then block and wait for its completion elsewhere in your code. One example is a unit test of an asynchronous algorithm, where you need synchronous access to the results for validation.

Another example is a current project of mine which involves plotting medical data parsed from CSV files. There are four CSV files and each takes about a second to parse. That’s not long, but if you parse on demand, just as a plot is about to be displayed on the screen, you find that blocking your main thread for a second can be very annoying. The obvious solution is to do the parsing on a background queue, but that immediately raises the question: how do I then handle the plotting? Do I show an empty plot which populates later, when the data is available? That doesn’t look good. Another alternative would be to make the whole display-plot action asynchronous. But then you’ve decoupled the user interaction (user taps a button to bring up a plot) from the GUI action (plot actually displays) and will probably find that users tap multiple times until the plot shows up.

Ideally then, the data would be loaded early on and the actual plotting would be synchronous. In my application, the data is loaded and parsed asynchronously in the initializer of a singleton which is used throughout the application for global data. As soon as that global object is accessed for the first time, the data starts loading in the background. I can then afford to use blocking access to the data, because there is little to no chance for the user to activate the GUI to display the plot before the data has been parsed. And even if they do, the processing is far along and the delay minimal.
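
The singleton accessor itself isn’t shown in this post; a minimal sketch using the standard dispatch_once pattern might look like this (the method name sharedInstance is my choice; the Globals class is introduced below):

+ (Globals *)sharedInstance {
  static Globals *instance = nil;
  static dispatch_once_t onceToken;
  dispatch_once(&onceToken, ^{
    // the first access triggers -init, which kicks off the background parsing
    instance = [[Globals alloc] init];
  });
  return instance;
}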

So in summary, the requirements for my use case are:

  • Several initialization tasks need processing
  • Processing can happen in parallel
  • Processing must not block the main thread
  • Access to the results should block while processing is in progress

And here is how it’s implemented:

First off, we have an initializer that does our parsing, slowInitForKey: in the example code. The idea here is that each unit of initialization work is based on a key (e.g. a filename) and returns a single result object that can be stored in a results dictionary.
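
In the real application this would be the CSV parser; for the purpose of a sketch, a sleep can stand in for the expensive work, along these lines:

- (NSString *)slowInitForKey:(NSString *)key {
  // stand-in for the real work, e.g. parsing the CSV file named by 'key'
  [NSThread sleepForTimeInterval:1.0];
  return [NSString stringWithFormat:@"value for key %@", key];
}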

Next we define a singleton Globals which is instantiated early on in our code, for example in viewDidLoad, and has the following init method:

- (id)init {
  self = [super init];
  if (self) {
    // serial queue guarding all access to the values dictionary
    valuesSerialQueue = dispatch_queue_create("de.abstracture.valuesSerialQueue", NULL);
    self.values = [NSMutableDictionary dictionary];

    [[NSArray arrayWithObjects:@"A", @"B", @"C", @"D", @"E", @"F", nil]
     enumerateObjectsUsingBlock:^(id obj, NSUInteger idx, BOOL *stop) {
       // run the slow initializers in parallel on a concurrent global queue
       dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
         NSString *value = [self slowInitForKey:obj];
         // funnel the write through the serial queue
         dispatch_async(valuesSerialQueue, ^{
           [self.values setObject:value forKey:obj];
         });
       });
     }];
  }
  return self;
}

What does this do?

  • First, we set up a serial queue and a dictionary for the results. We use a serial queue to make sure that only one thread will access the values dictionary at a time. Think of it as a locking mechanism in GCD terms.
  • Next, we iterate over the initialization keys (A-F in this example; these would be filenames in the CSV parsing case). Each key is sent to a concurrent dispatch queue for parallel processing by slowInitForKey:. When processing finishes, the result is written to the dictionary via an async dispatch to our serial queue valuesSerialQueue. Again, this ensures that no two threads access the values dictionary at the same time.

Now that initialization is on its way, all that’s left is the synchronous access to the results. This is very simple:

// note: this shadows NSObject's KVC method valueForKey: – a name like
// resultForKey: would be safer in production code
- (NSString *)valueForKey:(NSString *)key {
  __block NSString *result = nil;
  do {
    // keep polling until there's a value
    dispatch_sync(valuesSerialQueue, ^{
      result = [self.values objectForKey:key];
    });
  } while (result == nil);
  return result;
}

All we do is poll the results dictionary via the serial queue until there is a value. Of course you need to make sure that your initializer always sets a result – otherwise you would block forever. A safer way would be to set a time limit on how long to block before eventually giving up and returning nil.

If you are worried that there may be a lot of polling going on before a result appears, you could add a little delay after each unsuccessful poll. It’s probably irrelevant, though, because polling only happens until initialization is finished.
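
A sketch of such a timeout variant (the method name and the 10 ms back-off are arbitrary choices):

- (NSString *)valueForKey:(NSString *)key timeout:(NSTimeInterval)timeout {
  __block NSString *result = nil;
  NSDate *deadline = [NSDate dateWithTimeIntervalSinceNow:timeout];
  do {
    dispatch_sync(valuesSerialQueue, ^{
      result = [self.values objectForKey:key];
    });
    if (result == nil) {
      // back off briefly so we don't hammer the serial queue
      [NSThread sleepForTimeInterval:0.01];
    }
  } while (result == nil && [deadline timeIntervalSinceNow] > 0);
  return result; // nil if the timeout expired
}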

An example project is available on GitHub.


Written by sas

April 2, 2012 at 00:04

Posted in dev, ios, iphone, osx


iOS User Accounts


Wouldn’t it be convenient if you could pick up any iPhone or iPad and quickly have it personalized with your settings? This occurred to me last week when my girlfriend had left her iPhone at home and wanted to continue reading her book in iBooks. I had my iPad with me, but of course it is tied to my iTunes account, not hers, and it’s way too much hassle to reconfigure it just for a brief reading session.

But it made me wonder what such a feature could look like on iOS and what it would take to make it happen. Basically, you’d want an extension of something that’s already possible on OSX: signing in with an Apple ID. Once you’re authenticated with your Apple ID, your content and settings are only a few steps away: iCloud, if you’re using it, has them, and in theory that’s all you need to restore your device.

I’ve upgraded quite a few devices in the past and so far backup and restore has worked really well. Now imagine there were an (optional) login screen on iOS devices where you could log in to your iCloud account and immediately get your home screen, with your content and settings trickling in in the background – just as happens now when you restore through iTunes or from iCloud. With future devices having more storage space, the OS could cache multiple user accounts, so that on subsequent logins your data would only need an update rather than a completely fresh pull. You could also imagine some things like big apps being referenced from multiple accounts and therefore needing to be stored only once on a device, not once per account.

If that use case still sounds esoteric to you, because your iPad is yours alone, think about places where iPads could be shared by larger audiences: schools, universities, sales people, etc. For example, if a school wanted to start using iPads in one course only, say their biology class, they’d only need to get enough iPads for their maximum class size, not for the total number of students taking that course. (Caveat: no iPad-based learning at home unless students log in using their private iPad.) Or there could be iPads per classroom that wouldn’t need to be moved: your course material appears at your desk wherever you are – you don’t actually carry it there anymore. It would certainly help reduce the risk of iPads being dropped between classes or on the bus.

Technically, I would assume something like this is being investigated, or even in place already, at Apple. It’s probably just a matter of broadband connections catching up to make this a smooth experience – one that Apple would be willing to ship and tout as a new feature.

Written by sas

February 7, 2012 at 19:02

Posted in apple, iphone


autotm 0.94 supports local backups


As introduced in a previous blog post, autotm is an OSX system daemon that automatically switches Time Machine targets depending on their availability. The initial version of autotm only supported network-based targets, but I’ve recently updated the script to also allow locally connected disks (e.g. USB). This update requires some minor changes to your autotm.conf file: the server section is now called “destinations” and each destination has a “type”, which can be remote or local. For example:

destinations:
 - type: remote
   hostname: myhomeserver.local
   username: jdoe
   password: s3cr3t
 - type: remote
   hostname: myofficeserver.local
   username: john_doe
   password: pa55 
 - type: local
   volume: /Volumes/Time Machine

To learn more about autotm, have a look at the Readme on GitHub, and please file any problems you encounter on the GitHub issue tracker!

Thanks to Andy and Daniel for their help in testing this release!

Written by sas

January 2, 2012 at 10:01

Posted in mac, osx


CouchDb Migrations


A few weeks ago I attended CouchConf in Berlin, and during the sessions (and in between) one topic was raised several times: how to migrate data between “schemas”, i.e. document versions. I described how we are migrating documents, and I want to take a moment to explain the process in more detail. It might sound trivial, but there was interest in the description during the conference, so I’m hoping it may prove helpful to others.

Since CouchDb is inherently unstructured, there’s no global schema with which to control your data’s structure. That’s often a good thing, because it gives you flexibility, but it can also cause problems, for example when you want to access documents without having to handle all sorts of different “versions” of your documents.

For example, say you have started out with an initial player document (we’re sticking with the RPG theme set in the Couchbase examples ;)):

{
  'version' : 1,
  'name' : 'Player A',
  'xp' : 1234
}

but during testing you find that you need to know a player’s level. You’ve decided that the level should always be xp/100 + 1 (using integer division), but you don’t want to recompute this in code all the time; rather, you want to store it in the document directly. For various other reasons you’ve also decided against creating a view, and therefore you want to migrate all your documents to this format:

{
  'version' : 2,
  'name' : 'Player A',
  'xp' : 1234,
  'level' : 13
}

Note that the initial document already included a version attribute that we’re using to keep track of our migrations. Even if that weren’t the case from the start, it would be easy to treat documents without a version attribute as “version 0”, so to speak, and handle them just like the rest of this example.

So how do we migrate from version 1 to version 2 then?

The idea is to create a view that lists all old-version documents and to process them until the view has no more items. The view is defined with the following (trivial) map function:

function(doc) {
  if (doc.version && doc.version == 1) {
    emit(doc._id);
  }
}

Now it’s simply a matter of processing all items in this view, for example with the following python-couchdb function that takes a database object as a parameter:

def migrate_v1_v2(db):
  v1 = db.view('_design/migration/_view/v1')
  for row in v1.rows:
    doc = db[row.key]
    # re-check the version in case the document changed since the view was built
    if doc['version'] == 1:
      doc['version'] = 2
      # we want to add the level stat, which is simply xp/100, starting from 1
      doc['level'] = doc['xp']/100 + 1
      db[doc.id] = doc

Here, “v1” is the name of the view we defined above.

The complete example, in the form of a unit test, is available on GitHub. The only dependency is python-couchdb. It should be trivial to translate this pattern to other client libraries. It might also be useful to extend this concept into a migration framework à la Ruby on Rails.

Written by sas

December 19, 2011 at 14:12

Posted in dev, nosql


Using S/MIME on iOS Devices


The following article explains how to set up your iPhone or iPad to send and receive encrypted emails via S/MIME. The prerequisite is an S/MIME certificate from a certificate authority; some CAs provide them free for personal use. The procedure is not very complicated, even though the description may look lengthy due to the many screenshots. The biggest hurdle is picking the correct file format when exporting your S/MIME key on your Mac. (A description of how to export the correct certificate on Windows will follow.)

Set-up for Receiving Encrypted Emails

1. Export your private key in a format that you can import on your iOS devices.

To do this, open “Keychain Access” and find your certificate. Select it and choose “File” / “Export Items”, as shown below.

[Screenshot: export key]

2. Next, save the certificate in p12 format.

In the process of saving the certificate, as detailed below, you will be asked to provide a password to encrypt your key. This will allow you to send it via email without fear of it being intercepted and used by someone else. Depending on your keychain settings, you may also be asked to provide your administrator password to read the private key for exporting.

[Screenshot: save in p12 format]

3. Now drag this exported file to your Mail.app icon to send it to yourself.

(Make sure you don’t encrypt it 😉)

[Screenshot: send key]

4. Turn to your iOS device to import the certificate.

Open the email you just sent to yourself and tap on the attachment to import your certificate.

[Screenshots: import on iOS / unsigned certificate warning / enter password]

5. Enable S/MIME in advanced mail settings and choose your certificate.

On your iOS device go to “Settings” / “Mail, Contacts, Calendars” / “<Your Account>” / “Advanced” (at the very bottom of your account settings) and activate S/MIME. Important: make sure you leave the account settings by tapping “Done” in the top right of the toolbar. Changes don’t appear to be applied until you do so.

[Screenshots: enable S/MIME / confirm settings]

You can also enable signing and encrypting of messages here, but more on that in a moment. What we’ve achieved so far is being able to read messages that have been encrypted with our public key. Unfortunately, sending encrypted messages involves a few more steps and has a few caveats.

Set-up for Sending Encrypted Emails

In order to send an encrypted message, you need to do the following.

1. Import the recipient’s public key.

This happens automatically in Mail.app on OSX but requires some manual interaction on iOS. You may have noticed when looking at signed messages (like the one you sent yourself earlier) that, once S/MIME has been activated, there’s a new little star icon in the blue email address bubble. This is the UI indicator for signed messages. The address bubble is also a button that you can tap to bring up address – and certificate – information.

[Screenshot: address bubble with star]

Tapping this button will bring up the address info view:

[Screenshot: address info]

Tap “Install” to register this public key, which will allow you to send encrypted emails to the key’s owner. You will need to repeat this procedure once for every recipient.

2. Send email.

There’s not really a step two, other than making sure you’re sending to the recipient’s correct email address and from your correct account, so that the available keys match up with the email addresses used in the process. You can tell that your message is being encrypted by the “Encrypted” string in the title bar of your message:

[Screenshot: encrypted message]

Caveats

What’s a bit unfortunate is that there’s no easy way to selectively send encrypted emails. The encryption setting is global for the account under “Settings”, meaning that you have to go there and enable or disable encryption for all messages from that account. It would be nice if that were only the default, with an option to override it in the message composition view.

It would also be nice if public key importing were automatic, like it is on the Mac.

But all in all, it’s nice to be able to read encrypted emails on iOS devices now.

Written by sas

December 12, 2011 at 09:12

EasyPay in the Apple Store 2.0 app


In the latest 5by5 Talk Show, John Gruber and Dan Benjamin speculate about how EasyPay in the Apple Store 2.0 app works. EasyPay is a feature that allows shoppers to scan an item’s barcode and complete a purchase via their iTunes account without any interaction with the store’s staff.

John and Dan are puzzled by how Apple prevents someone from just walking out without properly scanning and purchasing an item.

My wild guess is that the following happens: the item’s tag contains an RFID chip that allows sensors at the exit to tell when an item is removed from the store. The tag’s barcode encodes an ID that is associated with this RFID chip in the store’s inventory system. When you scan the barcode and purchase an item, the RFID chip associated with that item is cleared to leave the store and will therefore not raise an alarm.

Maybe that’s one reason this only works in US stores right now.

Written by sas

November 10, 2011 at 13:11

Posted in apple


Automatic Time Machine Switching


With the ubiquity of mobile computers, and especially their dominance among Apple’s product offerings, it’s probably a very common set-up for people to use a MacBook both at home and at the office. This gives you a lot of flexibility and avoids having to maintain two installations – which can take a lot of time, depending on the amount of customization you apply to your machine. You bring your machine along and therefore never have to sit down at an out-of-date computer.

There’s one problem though: do you also carry along your Time Machine backup? Because if you don’t, and you spend a significant amount of time at either location, there will be large gaps and opportunities for failure in your backup schedule. (Yes, there’s mobile Time Machine, but I see that as an option for when you’re really on the road. A same-disk backup is not truly a backup; it’s more like “Trash on steroids”.)

So what are the options? You could carry an external disk around and use that for backups. The problem with this, though, is that it takes a lot of discipline to hook it up every time you sit down in one place in order for the hourly Time Machine backups to happen. Part of the beauty of Time Machine is that, if it’s configured to back up to a network volume, you never have to do anything for it to kick in. All you need to do is enter your wifi zone.

Another reason an external disk is not ideal is the fact that it’s not redundant in itself. It’s just a single disk, and single disks fail. Ideally, a Time Machine backup sits on a RAID-5 or some other redundant configuration – none of which is going to be portable.

In my opinion, the ideal solution is to have a Time Machine set-up at each location where you spend a significant amount of time, and to be switched to it automatically on joining the respective network. When I saw the macosxhints article about using two Time Machine backups a few days ago, I knew that all the bits were there to set this up. However, I didn’t want to install extra tools like the article describes (MarcoPolo), and therefore I wrote a little ruby script that does everything automatically.

The script is available on GitHub. The readme file explains most of the details but in a nutshell autotm does the following:

  • autotm looks at your system.log to determine if the last backup failed
  • if it failed, autotm will go through the list of configured servers to look for an alternative
  • if multiple servers respond to pings, autotm will pick the fastest one (your office server may be visible from home via a presumably slower VPN connection, for example, but you want to avoid backing up over that link)
  • if your last backup was successful but that server is no longer available, autotm will check for alternatives and pick the fastest one, as described above

So essentially, all you have to do is set up two (or more) Time Machine backups for your machine and then record their details in the config file. The LaunchDaemon will then trigger autotm every 30 minutes to check whether it needs to switch Time Machine targets, without any action required on your end.

Written by sas

September 22, 2011 at 11:09

Posted in mac


Coffee Disaster


It was bound to happen. I spend countless hours tapping away on my MacBook Pro and I drink lots of coffee while I’m at it. So inevitably, as always when chances are small but opportunities abundant, disaster struck and I managed to pour half a cup of Rosabaya de Colombia onto my MacBook’s keyboard. Don’t ask for details. Let’s leave it at too lazy to walk twice, balancing too many things, and the presence of gravity.

So I ended up with half a cup of coffee on the WASD area of my keyboard, plus some on the trackpad, heading for the edges. What to do? I went for:

  1. Calming down by shouting expletives
  2. Turning the laptop upside down
  3. Shutting it down
  4. Fetching a vacuum cleaner to suck out coffee, while holding the machine upside down (a situation man apparently doesn’t find himself in very often, or evolution would have had us develop a third arm)

Initially that didn’t help much. After rebooting I found that some keys appeared to work while others didn’t. It took me a moment to realize that actually all keys worked except the ‘fn’ key, which was stuck in the ‘on’ position. Or rather, coffee residue had bridged it into a pressed state, and it wouldn’t let go.

In a situation like that you find out things you never would otherwise, like:

  • fn + cursor keys does nothing – even though fn + pretty much any other key sends the key
  • I never missed forward delete, on the contrary I’m a backspace guy (fn pressed will turn backspace into forward delete and drive you mad)
  • keys with state are a nightmare and I love the fact that you can turn off caps lock on Lion
  • speaking of caps lock, why is it even there and why didn’t it break instead of ‘fn’ (ok, I wouldn’t notice even if it had, actually)
  • the keys can be removed rather easily (revealing things better left unseen…)

and finally:

  • the aluminum bluetooth keyboard actually fits perfectly on top of a unibody 15″ MBP

The battery compartment fits nicely into the dent above the function keys and thanks to the identical key size and layout you end up with a nice piggy-back set-up that you can actually work with:

[Photo: strap-on keyboard]

The even better news is that, after a day of drying, the ‘fn’ key has decided to get stuck in the ‘off’ position. That means I can’t control the brightness or the volume from the keyboard right now, but at least the rest of it is back to normal.

-sas

Written by sas

August 17, 2011 at 18:08

Posted in mac


Integrating git version info in iOS/Cocoa apps


This is a quick reminder on how to add version info from git to your Xcode application – iOS or Cocoa – so you can see in the running application which repository state the binary was built from.

There’s nothing new in this post really – others have done the same and blogged about it – but it serves as a note to self on how to quickly go about it and, as such, may be helpful to others. It’s a quite simple two-step process:

  1. Add a script build phase to your build target (at the end, after the other build steps):

    git status  # refreshes the index; see the note below
    version=$(git describe --tags --dirty)
    echo "version: $version"
    info_plist="$BUILT_PRODUCTS_DIR/$PRODUCT_NAME.app/Info.plist"
    /usr/libexec/PlistBuddy -c "Set :CFBundleVersion $version" "$info_plist"

    The extra git status command to refresh the index was added because the repo would sometimes be reported as dirty otherwise. I’m not sure exactly why that happens – maybe it’s some temporary build file – but this is the fix.

  2. Add a UILabel to one of your views (or simply NSLog to the console) and show the version:

    - (void)viewDidLoad {
      ...
      NSString *version = [[[NSBundle mainBundle] infoDictionary]
                           objectForKey:@"CFBundleVersion"];
      self.versionLabel.text = version;
    }

Written by sas

December 29, 2010 at 15:12

Posted in iphone, osx

Thoughts On Unit Testing


In a recent seminar at our company we talked about unit testing, and during the discussion that ensued I found that I had a few things to say about the topic. There’s probably no wrong or right here (apart from the fact that you must test!), but there are things that I have found work well in practice and others that really only look good on paper. A lot of it has to do with how you end up working, especially when under pressure, and, of course, to some extent with personal preference.

With that out of the way, the following are my observations collected over the course of some relatively large projects.

Don’t bloat test numbers

A lot of IDEs these days provide automation for tedious tasks. One of these tasks is setting up your test infrastructure: there are tools that auto-generate tests or test stubs whenever you add a method to a class, and the like. While this is sure to increase your coverage, I’m not convinced it is a good idea in the long run.

What’s going to happen is that you generate a large number of trivial tests that you otherwise wouldn’t have written. Accessors are the kind of trivial tests that come to mind: what’s the point of testing methods for attribute assignment and read-out? Especially if they’re auto-generated anyway.
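
To illustrate, the kind of trivial test I mean might look like this in OCUnit terms (Player is a made-up class with a synthesized name property):

- (void)testName {
  Player *player = [[[Player alloc] init] autorelease];
  player.name = @"Player A";
  // this merely re-tests the synthesized accessors
  STAssertEqualObjects(player.name, @"Player A", @"name should round-trip");
}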

Effectively, what you’ll do is make it harder to manage tests overall. For instance, when your test logs are spammed with meaningless succeeding tests, you’re more likely to miss a problem. Also, updating expectation values should your inputs change is going to generate a lot of work (maybe to the point that you’ll hold off on it, which is never good). But most importantly, your test run time is going to increase and, in consequence, may make you run the full suite less often, especially when you’re in a hurry.

Similarly, if you yourself have written code that auto-generates other code, don’t auto-generate test code as well. Create tests for your generator and make sure it works, but don’t spawn code and tests together – you’re essentially re-testing the same thing with a predictable outcome.

Make sure your tests add value

In the same vein, make sure your tests actually increase the value of your test suite. There’s no point copy-pasting a test and just changing parameters to have yet another test, unless it actually ends up running a new code path or testing an edge case (NULL parameters etc.).

This may sound trivial, but a less obvious case is a wrapper method that calls a complicated one. Is it really worthwhile to add a test for a trivial method that basically only replicates the complicated test? Blindly adding tests for the sake of coverage can lead to the problem of long-running test suites with little or no extra value, as mentioned above. Either use the trivial, higher-level method to test the complicated underlying one, or test the latter directly.

Coverage is great, but don’t over-interpret its value. If full coverage leads to a test suite that is so slow you’ll only run it once a month, you’ll end up testing less, not more.

Adapt your test strategy to your type of software

There are different kinds of software and along with them come different approaches to unit testing.

I believe the main distinctions to make are the following:

  1. library code / API
  2. faceless application
  3. GUI application

1. Library code is probably the easiest to tackle with traditional unit testing and the area where the strictest rules apply. I would maintain that every public API has to be covered by unit tests, typically including edge-case parameters. It’s pretty embarrassing to ship an API and find that the advertised calls into your library don’t work. The best way to ensure they do is to be your own (and first!) client by running your test suite against the full set of APIs. It doesn’t stop at the public API, of course, but it’s really the place to start, not least to get a feeling for whether the interface is good. Test-driven design works great for libraries.

What makes it easier to maintain full or extensive coverage in library unit testing is that there’s typically much less state involved than in application code – it’s much simpler to maintain test data.

2. A ‘faceless application’ (to me) is an application that interfaces with users by some means other than a GUI or code, for example file-based exchange or network sockets.

Testing the interface of a ‘faceless application’ is therefore quite different from library or GUI testing. Where in the case of a library you write code to test your interfaces, here you are going to spend much more of your time setting up test fixtures in the form of files or network services (or mock interfaces, for that matter).

I think in unit testing it really comes down to what the interfaces to your code are. In the case of a library, it’s really other code that connects with yours, and therefore the best way to test is to write calls into your library. With a faceless application you have quite different interfaces. So while internally, i.e. inside your application, you still use test-driven design to cover your internal interfaces to some extent, you also have to address a different test infrastructure for your public interface.

Typically, setting up this kind of test infrastructure is quite a bit more complicated than in the case of library code. You’ll probably end up doing some “test triage”: you can’t be everywhere, so to speak, and I believe the most important coverage is that of your public interface.

3. Finally, GUI applications. I have not really found a good way to unit test GUI applications. Naturally, one will have “back-end” code that can and must be tested in “traditional” ways, but most importantly you really want tests in place for your public interface, i.e. tests that perform common user tasks, so you can be sure that you (or rather your test suite) have successfully clicked all the essential buttons before you ship.

I know there are tools like “Eggplant” that claim to cover this. They may well do so, but I’ve never tried any of them for the few (small) GUI applications I’ve written.

What I did try was AppleScript automation for an Objective-C/Cocoa/OSX application (I’m sure there are similar scripting tools for other platforms and languages), but in the end it was too slow to allow running a big test suite. This approach is also limited in its result checking: one can read out controls (“did the expected text appear when I clicked the ‘Transform’ button?”), but obviously result checking is not always that simple.

Actually, I’m not convinced that automated unit testing is really the proper model for GUI applications. One reason is that GUIs allow so much freedom in how you can chain actions that it’s virtually impossible to instrument all combinations in unit tests. And if you do, you may end up being constrained by your tests not to make GUI changes, for fear of breaking a huge suite of unit tests.

You may argue that you’re expecting the same “breaking with habit” from your users, but GUIs are much more about design and “feel”, and you sometimes need to “force” the direction. Legacy unit tests can make you keep a “stale” GUI when you should really move ahead. For library code it’s the other way around: unit tests ensure that you maintain source compatibility and make you think twice about whether an incompatible change is worthwhile, because you’re the first one who’s going to be hit by it: you’re going to have to update all your tests – the same thing you’re asking your software developer clients to do. From how much work this update turns out to be, you can judge whether the change is really worth it. The defining difference is probably that in one case the behavior is persisted (code) whereas in the other it’s transient – the users’ “muscle memory” or habits.

The solution for GUI applications is probably: rely on good beta testers. (See also: http://www.wilshipley.com/blog/2005/09/unit-testing-is-teh-suck-urr.html)

Written by sas

June 4, 2010 at 10:06

Posted in dev