About a month ago we decided to transition from Django’s test runner to Nose. The main selling point for us was extensibility and the existing ecosystem of plugins. Four weeks later I’m happy to say we’re running (basically) Nose with some minor extensions, and it’s working great.
Getting Django running on Nose is no small feat. Luckily, someone else has already put in a lot of that effort, and packaged it up all nice and neat as django-nose. I won’t go through setting up the package, but it’s pretty straightforward. One thing we quickly noticed, however, was that it didn’t quite fit our approach to testing, which was strictly unittest. After a couple of days of going back and forth with some minor issues, we came up with a few pretty useful extensions to the platform.
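For reference, the basic django-nose wiring is just a couple of settings; something along these lines in your settings.py:

```python
# settings.py

INSTALLED_APPS = (
    # ... your apps ...
    'django_nose',
)

# Hand test running over to nose
TEST_RUNNER = 'django_nose.NoseTestSuiteRunner'
```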
A few of the big highlights for us:
Xunit integration (XML output of test results)
Skipped and deprecated test hooks
The ability to organize tests outside of the Django standards
I wanted to talk a bit about how we solved some of our problems, and the other benefits we’ve seen since adopting it.
The biggest win for us was definitely being able to reorganize our test suite. This took a bit of work, and I’ll walk through it along with some of the plugins we whipped up to solve the problems. We ended up with a nice extensible test structure, similar to Django’s own test suite:
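The exact layout will vary by project; the paths below are hypothetical, but they show the shape we ended up with: a top-level tests package organized by feature, living alongside (not inside) the apps:

```text
project/
    app_one/
        tests.py        # small app-local tests can stay here
    app_two/
tests/
    __init__.py
    integration/
        __init__.py
        test_workflows.py
    selenium/
        __init__.py
        test_signup.py
```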
We retained the ability to keep tests within the common app/tests convention, but we found that we were stuffing so many tests into obscure application paths that it became unmaintainable after a while.
The first issue we hit was with test discovery. Nose has a pretty good default pattern for finding tests, but it had some behavior that didn’t quite fit with all of our existing code. Mostly, it found random functions that were prefixed with test_, or things like start_test_server which weren’t tests by themselves.
After digging a bit into the API, it turned out to be a pretty easy problem to solve, and we came up with the following plugin:
```python
import unittest


class UnitTestPlugin(object):
    """
    Enables unittest compatibility mode (don't test functions, only
    TestCase subclasses, and only methods that start with [Tt]est).
    """
    enabled = True

    def wantClass(self, cls):
        if not issubclass(cls, unittest.TestCase):
            return False

    def wantMethod(self, method):
        if not issubclass(method.im_class, unittest.TestCase):
            return False
        if not method.__name__.lower().startswith('test'):
            return False

    def wantFunction(self, function):
        return False
```
Test Case Selection
To ensure compatibility with our previous unittest extensions, we needed a simple way to filter only selenium tests. We do this with the --selenium and --exclude-selenium flags.
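That plugin isn’t shown here, but the core of it is small. A minimal sketch of the selection logic (the `SeleniumTestCase` base class and the constructor arguments are assumptions for illustration, not our exact code):

```python
import unittest


class SeleniumTestCase(unittest.TestCase):
    """Hypothetical base class that browser-driven tests inherit from."""


class SeleniumSelector(object):
    """
    Nose-style plugin sketch: with --selenium, keep only selenium tests;
    with --exclude-selenium, drop them.
    """
    def __init__(self, selenium=False, exclude_selenium=False):
        self.selenium = selenium
        self.exclude_selenium = exclude_selenium

    def wantClass(self, cls):
        is_selenium = issubclass(cls, SeleniumTestCase)
        if self.selenium and not is_selenium:
            return False  # only selenium tests were requested
        if self.exclude_selenium and is_selenium:
            return False  # selenium tests were explicitly excluded
        # return None so nose's default selection still applies
        return None
```

In a real plugin the two flags would come from `options()`/`configure()`, the same way `BisectTests` below reads `--bisect`.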
One feature I always thought was pretty useful in the Django test suite was its --bisect flag. Basically, given your test suite and a failing test, it can help you find failures caused by executing tests in a specific order. This isn’t actually made available to normal Django applications, but with a large codebase it’s extremely useful for us.
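The idea behind bisection is easiest to see stripped of all the test machinery. A toy version (nothing here is Django or nose API) that narrows down which earlier test poisons the run:

```python
def bisect_labels(labels, causes_failure):
    """Repeatedly halve `labels`, keeping whichever half still makes
    `causes_failure(half)` true, until a single culprit remains."""
    while len(labels) > 1:
        mid = len(labels) // 2
        first, second = labels[:mid], labels[mid:]
        if causes_failure(first):
            labels = first
        elif causes_failure(second):
            labels = second
        else:
            # the failure needs tests from both halves (or is flaky)
            return None
    return labels[0]


# Pretend test_c leaves global state dirty, breaking our failing test
# whenever it runs first. Each "run" here is just a membership check.
culprit = bisect_labels(
    ['test_a', 'test_b', 'test_c', 'test_d'],
    lambda half: 'test_c' in half,
)
# culprit == 'test_c', found in log2(n) rounds rather than n runs
```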
I should note that this one was adapted from Django and is very rough. It doesn’t report a proper TestResult, but it’s pretty close to where we want to get it.
```python
import subprocess
import sys
import unittest
from collections import defaultdict

from nose.plugins import Plugin


class _EmptyClass(object):
    pass


def make_bisect_runner(parent, bisect_label):
    def split_tests(test_labels):
        """
        Split tests in half, but keep children together.
        """
        chunked_tests = defaultdict(list)
        for test_label in test_labels:
            cls_path = test_label.rsplit('.', 1)[0]
            # filter out our bisected test
            if test_label.startswith(bisect_label):
                continue
            chunked_tests[cls_path].append(test_label)

        chunk_a, chunk_b = [], []
        midpoint = len(chunked_tests) // 2
        for n, cls_path in enumerate(chunked_tests):
            if n < midpoint:
                chunk_a.extend(chunked_tests[cls_path])
            else:
                chunk_b.extend(chunked_tests[cls_path])
        return chunk_a, chunk_b

    class BisectTestRunner(parent.__class__):
        """
        Based on Django 1.3's bisect_tests, recursively splits all tests
        that are discovered into a bisect grid, grouped by their parent
        TestCase.
        """
        # TODO: potentially break things down further than class level
        #       based on what's happening
        # TODO: the way we determine "stop" might need some improvement
        def run(self, test):
            # find all test_labels grouped by base class
            test_labels = []
            context_list = list(test._tests)
            while context_list:
                context = context_list.pop()
                if isinstance(context, unittest.TestCase):
                    case = context.test
                    test_labels.append('%s:%s.%s' % (
                        case.__class__.__module__,
                        case.__class__.__name__,
                        case._testMethodName))
                else:
                    context_list.extend(context)

            subprocess_args = [sys.executable, sys.argv[0]] + \
                [x for x in sys.argv[1:]
                 if (x.startswith('-') and not x.startswith('--bisect'))]

            iteration = 1
            result = self._makeResult()
            test_labels_a, test_labels_b = [], []
            while True:
                chunk_a, chunk_b = split_tests(test_labels)
                if test_labels_a[:-1] == chunk_a and test_labels_b[:-1] == chunk_b:
                    print "Failure found somewhere in", test_labels_a + test_labels_b
                    break
                test_labels_a = chunk_a + [bisect_label]
                test_labels_b = chunk_b + [bisect_label]

                print '***** Pass %da: Running the first half of the test suite' % iteration
                print '***** Test labels:', ' '.join(test_labels_a)
                failures_a = subprocess.call(subprocess_args + test_labels_a)

                print '***** Pass %db: Running the second half of the test suite' % iteration
                print '***** Test labels:', ' '.join(test_labels_b)
                print
                failures_b = subprocess.call(subprocess_args + test_labels_b)

                if failures_a and not failures_b:
                    print "***** Problem found in first half. Bisecting again..."
                    iteration += 1
                    test_labels = test_labels_a[:-1]
                elif failures_b and not failures_a:
                    print "***** Problem found in second half. Bisecting again..."
                    iteration += 1
                    test_labels = test_labels_b[:-1]
                elif failures_a and failures_b:
                    print "***** Multiple sources of failure found"
                    print "***** test labels were:", test_labels_a[:-1] + test_labels_b[:-1]
                    result.addError(test, (
                        Exception,
                        'Failures found in multiple sets: %s and %s' % (
                            test_labels_a[:-1], test_labels_b[:-1]),
                        None))
                    break
                else:
                    print "***** No source of failure found..."
                    break
            return result

    inst = _EmptyClass()
    inst.__class__ = BisectTestRunner
    inst.__dict__.update(parent.__dict__)
    return inst


class BisectTests(Plugin):
    def options(self, parser, env):
        parser.add_option("--bisect", dest="bisect_label", default=False)

    def configure(self, options, config):
        self.enabled = bool(options.bisect_label)
        self.bisect_label = options.bisect_label

    def prepareTestRunner(self, test):
        return make_bisect_runner(test, self.bisect_label)
```
Improvements to django-nose
Finally I wanted to talk about some of the things that we’ve been pushing back upstream. The first was support for discovery of models that were in non-app tests. This works the same way as Django in that it looks for appname/models.py, and if it’s found, it adds it to the INSTALLED_APPS automatically.
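The discovery itself is little more than a filesystem check. A rough sketch of the approach (this is not django-nose’s actual code; the function name and arguments are made up for illustration):

```python
import os


def find_test_apps(tests_root, installed_apps):
    """Check the immediate children of `tests_root`; any directory that
    contains a models.py is treated as a pseudo-app and appended to the
    app list, so its models get created in the test database."""
    apps = list(installed_apps)
    for name in sorted(os.listdir(tests_root)):
        models_py = os.path.join(tests_root, name, 'models.py')
        if os.path.exists(models_py) and name not in apps:
            apps.append(name)
    return apps
```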
The second addition we’ve been working on allows you to run selective tests that don’t require the database, and avoids actually building the database. It does this by looking for classes which inherit from TransactionTestCase, and if none are found, it skips database creation.
I’m curious to hear what tips and tricks others have regarding Nose (or maybe just helpful strategies in your own test runner).