Archive for the ‘Testing’ Category

Grails 2 Unit Tests – Undocumented Properties

May 15th, 2012 No comments

GrailsUnitTestMixin (and its descendants DomainClassUnitTestMixin, ServiceUnitTestMixin, etc.) gives your test classes some static and instance variables that aren’t mentioned in the docs yet. You get applicationContext, grailsApplication, config, and messageSource. Previously, if you had code that needed a grailsApplication (such as the JSON configurations), you had to set it up yourself. Now you get it for free. The mixin also implements the shouldFail method from GroovyTestCase, so you can keep that functionality even though your tests are now JUnit 4 based.
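As a quick illustration, a Grails 2 unit test can touch these injected properties and use shouldFail directly (the test class name here is mine, not from the original post):

```groovy
import grails.test.mixin.TestMixin
import grails.test.mixin.support.GrailsUnitTestMixin
import org.junit.Test

@TestMixin(GrailsUnitTestMixin)
class UndocumentedPropsTests {

    @Test
    void injectedPropertiesAreAvailable() {
        // provided by the mixin, even though the docs don't mention them yet
        assert grailsApplication != null
        assert config != null
        assert messageSource != null

        // shouldFail still works, even though the test is JUnit 4 based
        shouldFail(NumberFormatException) {
            Integer.parseInt("not a number")
        }
    }
}
```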

Categories: grails, Testing, unit test

Grails 2 Unit Tests – Using Base Classes

May 15th, 2012 No comments

I’ve been meaning to write up my experiences with upgrading a 1.x app to Grails 2 for two months now. Since I can’t seem to find the time to do a large post, I’ll follow MrHaki’s example and sprinkle some nuggets of wisdom around. Here are a couple of bits on the new unit testing components.

First, read Ted Naleid‘s great writeup. He encountered many of the same issues I found and documented them in great detail. Plus he has a list of other reference pages.

Second, as Ted mentions, there are problems using a base class for tests. This is especially true if you have JUnit annotations in the base class like @Before. Instead, turn your base class into a generic POGO and then use the @TestMixin annotation to include it in your tests.

class MyAppTestSupport {
    //...useful common properties and methods here....
}

Then in your test class:

@TestMixin(MyAppTestSupport)
class MyDomainClassTests {
    //you can access the properties and methods from MyAppTestSupport here just as
    //if they were part of your class
}

This makes it a little harder to jump to routines in the mixed-in class in an IDE (at least IntelliJ gets a bit confused), but the tests will run consistently.

Categories: grails, Testing, unit test

More Parallel Processing in Jenkins Notes

October 31st, 2011 No comments

October must be my month to think about CI testing. Unfortunately, I haven’t had time to do much with it since my last post a year ago, but I did stumble across these rough notes I took while getting a “job pyramid” set up in our Hudson/Jenkins server last year. I never got around to turning them into a full post, but I wanted to record them in a better place for future reference. Maybe they will help someone else as well. Remember, these notes are now a year old, so some bugs may have been fixed or new features added since; I haven’t had time to stay current on happenings with Jenkins. Please forgive the lack of formatting, proofreading, grammar, or anything resembling organization.

Random Tips

  • I installed the HTML Publisher plugin to view the codenarc reports until I could figure out why the Violations plugin doesn’t like the codenarc XML output. Make sure to put your reports in unique directories. I initially tried to just output the report in the target directory and it ended up archiving my whole target directory.
  • The codenarc plugin requires that the full parent path for the file it will output be present, so if you want to write it to, say, the target/test-reports directory, you must make sure that directory is there first. I added another line to do this in the groovy script I already had for creating the fingerprint file.
  • I added the job config history plugin. Very handy when experimenting with different setups so you can easily get back to a config that worked.
  • Turned on the log parsing plugin. It would be handier if it shipped with a default rules parsing file, but it’s nice nonetheless.
  • Downstream builds report plugin – Doesn’t really provide any new info when you only have a single layer of downstream jobs
  • Groovy plugin – I put some shared scripts in a common directory and created a Hudson global property for the path. If I tried to use that variable to list the path to a script (e.g. $SHARED_SCRIPTS/some_script.groovy), the plugin would assume the path was relative to the workspace. I had to insert an extra “/” in the “Groovy script file” space (e.g. /$SHARED_SCRIPTS/some_script.groovy) for the plugin to realize it was an absolute path.
  • Like the codenarc tool, the JsUnit ant tool will fail unless the parent directory of the output directory has already been created.
  • Increase your quiet time if you often check in groups of files using multiple commits
  • If you run functional (e.g. Selenium) tests in a separate build step, you need to output the reports to a different directory or else they wipe out the unit/integration reports
  • There is a time sync problem between servers documented in the Sonatype blog. The article says to use VMware Tools to sync time on the VM with the ESX server. Unfortunately, our own ESX server’s time is about 25 minutes slow, so that causes problems of its own.
  • If you want to have a single build step which runs unit and integration tests and generates coverage stats, you should specify the arguments in this order: -coverage -xml -unit -integration. If you try something like -unit -integration -coverage -xml, mvn/grails will think that you want to only execute an integration test named “-coverageTests” and it will also give you an error because it doesn’t understand the -xml parameter on its own. (Yes, I know this is the old syntax form for running grails tests. I haven’t tried with the newer form.)
  • If you clone a downstream project, the build trigger to run when the upstream project completes is not cloned with it
  • You can’t set a build to a better result than it already has. i.e. if it’s FAILED, you can’t set it to UNSTABLE. This is a pain if mvn tries to fail your build and you want to run a post-build script that checks if it was really a failure.
  • A small problem with having to combine the cobertura output (and thus not using the HTML output directly from the cobertura-grails plugin) is that you don’t get the nice closure-renaming in the file-level stats from the grails plugin. So you see things like MyController$_closure2... You can still see what lines are covered or not right below the stats, so this isn’t a huge issue.
  • Using the clone workspace method makes the config a little cleaner, but there may be a risk that not all the downstream jobs get the archive copied before the next build of the upstream job completes and removes the current copy (since it only keeps one, I think). I’m not sure this could happen unless the downstream job is spun out to a clone and then the next build of the upstream job is started on the master; if the clone doesn’t start copying the workspace archive until after the master completes the next build, you could get the wrong archive. You also can’t fingerprint it. At least, I don’t think you can.

Syncing Builds

I set up the full “job diamond” as I mentioned in my last post on this. One issue I ran into was how to make sure the join-job pulls artifacts from the proper build of the origin job. You can’t use “last successful build” because there could have been additional builds of the originator job while the first-tier sub-jobs were running but prior to the join-job running.

The solution I came up with was to have the join trigger call two sub-triggers:

  1. Aggregate archived artifacts – grab the artifacts you need from the primary downstream jobs. In my case, I wanted to get the test reports.
  2. Parametrized build trigger – Define a sub-parametrized build trigger that invokes the join-job, passing the BUILD_NUMBER of the origin job as the parameter. (Use the Predefined parameters dialog for this.)

Then, in the join-job:

  1. Define a string parameter to accept the passed in build number.
  2. In the build steps, create a “Copy artifacts” step, set it to get a specific build, and then set the build parameter name as the value.
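Concretely, the pair of settings ends up looking something like this (the parameter name SOURCE_BUILD_NUMBER is my own choice for illustration; any name works as long as both sides match):

```text
# Origin job -> Parameterized build trigger -> "Predefined parameters":

# Join-job -> "This build is parameterized" -> String parameter:
Name: SOURCE_BUILD_NUMBER

# Join-job -> build step "Copy artifacts from another project":
Project name:  <origin job name>
Which build:   Specific build
Build number:  $SOURCE_BUILD_NUMBER
```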
Categories: Testing

Running latest Groovy from Maven

April 5th, 2011 2 comments

Say you have a Groovy project that you build with Maven.  You use the org.codehaus.gmaven:gmaven-plugin to compile your Groovy code and run Groovy tests without a problem.  Then you add some features or tests that need Groovy 1.7.  You add the proper dependency and version to the <dependencies> section of your pom, run your test… and watch it blow up because the gmaven-plugin defaults to using Groovy 1.6.  So you dig around on the web and find references for how to use the <providerSelection> tag of the gmaven-plugin to get your code compiled with 1.7 and to use 1.7 when running tests.  Things seem good.  Until…

You add a feature that requires some version of groovy greater than 1.7.4 (the version included with the latest gmaven-plugin, 1.3).  In my case, I used the @Delegate annotation with some inheritance in a test configuration and hit a bug that was fixed in groovy 1.7.6.  No matter what version I used in my dependencies section, my tests were executed under groovy 1.7.4.  I finally came up with the configuration below which let me run with a different groovy.  Note that it made no difference what I included in the dependencies section.  The gmaven-plugin configuration appears to be completely independent of that.

    <!-- This is only used if you want to run a groovy script from the command line using maven -->
    <!-- This block is required in order to make the gmaven plugins use a groovy other than 1.7.4.
         This is independent of the groovy entry in the dependencies section.  This does not affect the class path.

         What is interesting is that there must be both the gmaven.runtime entry with the explicit excludes
         and the additional dependency on whatever version we do want to use.  If you exclude the former,
         it will throw an exception. -->
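The configuration itself didn’t survive in this copy of the post, but based on the comments above it follows the standard gmaven pattern: exclude the Groovy bundled with the gmaven runtime and add an explicit dependency on the version you actually want. A sketch of that shape (artifact IDs and versions are my reconstruction, not the original block):

```xml
<plugin>
  <groupId>org.codehaus.gmaven</groupId>
  <artifactId>gmaven-plugin</artifactId>
  <version>1.3</version>
  <configuration>
    <providerSelection>1.7</providerSelection>
  </configuration>
  <dependencies>
    <!-- the gmaven runtime, with its bundled Groovy (1.7.4) excluded -->
    <dependency>
      <groupId>org.codehaus.gmaven.runtime</groupId>
      <artifactId>gmaven-runtime-1.7</artifactId>
      <version>1.3</version>
      <exclusions>
        <exclusion>
          <groupId>org.codehaus.groovy</groupId>
          <artifactId>groovy-all</artifactId>
        </exclusion>
      </exclusions>
    </dependency>
    <!-- the Groovy we actually want the plugin to compile and test with -->
    <dependency>
      <groupId>org.codehaus.groovy</groupId>
      <artifactId>groovy-all</artifactId>
      <version>1.7.6</version>
    </dependency>
  </dependencies>
</plugin>
```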

You don’t know what you don’t know

October 12th, 2010 No comments

Disclaimer: If you are reading this post, then you probably don’t need to.  It likely doesn’t contain anything that you don’t already know or practice.  But perhaps it will help you convince a co-worker who is still pounding away in the dark.

I used to believe that I didn’t need to keep up with technical RSS feeds or newsgroups. I learn things quickly and know how to use Google. Thus, if I ever wanted to find some tool to accomplish a certain task or if I hit a problem with a library I was using, I could quickly go out and find what I was looking for. Spending time reading the Hibernate user group and the JBoss user groups and the Spring user group and all the rest just took away time from designing and implementing the next feature on the schedule. Who could possibly remember all the other new product, library, and feature announcements unless you had a need for them right then?

I now know that this is an excellent way to regularly reinvent the wheel. Why? Because you don’t know what you don’t know. Sometimes it’s easy to guess when there are tools, libraries, or entire languages that can help you do your job. For example, we all know that there are many ORM, logging, and xml-parsing libraries.  There are many situations, however, where it is unlikely that you will realize you are facing a problem that someone else has already solved.

This happened to me last spring.  We were in crunch time trying to finish up a release and I had fallen behind on my feed/newsgroup reading.  I winced every time I opened up Google Reader and saw the unread count tick higher and higher.  As part of this release, I wrote a handy little utility routine that let me override a single method of a class by using Groovy’s ExpandoMetaClass.  The nice thing about the routine was that it always discarded the modified MetaClass after the closure that was passed to it finished.  I could thus remove the many try…finally blocks that were piling up in my tests as I tried to make sure I didn’t corrupt a MetaClass from one test to another.
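The original routine isn’t shown in the post; a minimal sketch of the pattern it describes might look like this (the helper name and signature are my assumptions):

```groovy
// Override a method on a class only for the duration of the given closure,
// then discard the modified metaclass so other tests aren't affected.
static void withMetaClassOverride(Class clazz, String methodName, Closure impl, Closure body) {
    try {
        // ExpandoMetaClass-style override of a single method
        clazz.metaClass."$methodName" = impl
        body()
    } finally {
        // drop the modified metaclass, restoring default behavior
        GroovySystem.metaClassRegistry.removeMetaClass(clazz)
    }
}
```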

A couple of days later, I was able to whittle down the backlog in Google Reader.  That’s when I saw this entry from MrHaki: built into the GrailsUnitTestCase class was a method that did exactly what I wanted, plus a lot more – registerMetaClass.  Granted, my utility routine can be used outside of a Grails unit test, so it’s not a complete waste.  But I could have saved myself a few hours of effort if I had been up to date on my reading.  Perhaps I could have spent those hours catching up on the rest of my “To Read” list…

Categories: pain, Testing

Using Hudson to run tests in parallel

October 11th, 2010 1 comment

We have been using Hudson to do CI on our grails application for a few months now. We use it to run unit, integration, jsunit, and functional (via selenium) tests, as well as code coverage metrics using Cobertura. Since this was the first project that we used Hudson for, the person who originally set it up just put all of these steps into a single job. It worked fairly well, but a single build was taking between 20-30 minutes, depending on how extensive the changes were. This can be a real pain, especially during very active development cycles when you most need rapid feedback. I decided to look into how I could run the tests in parallel, and hopefully get our build times to under 5 minutes. This is what I have come up with so far. It’s not perfect, but the tests do run more quickly (around 5 1/2 minutes total, primarily limited by the speed of the Selenium tests).

Primary Job – Poll and Build

The primary job is very simple. It polls Subversion and executes clean and test-compile targets. (We use maven with the grails-plugin to compile the app.) It normally only takes 30-40s to run. It doesn’t do a full package/install because that adds another minute to the build time.

Primary Build Steps

Build steps for the primary job

Using the Groovy Plugin

One shortcoming of not running the package step is that the job doesn’t generate a unique file that can be used for fingerprinting like a jar or a war. You need to turn on fingerprinting in order to aggregate test results.  (More on that subject, below.)

To resolve this, I used the Groovy plugin to execute a one-line script to generate a unique file for this purpose. I pass in the path and build information as script parameters rather than environment properties because the Groovy plugin doesn’t resolve Hudson environment variables when they are set in the Properties field. This seems like a big shortcoming, so perhaps I just misunderstand how to pass them correctly as properties.
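The one-liner amounts to something like this (the file name and parameter order are assumptions; the path and build number arrive as script parameters, not properties):

```groovy
// args[0] = workspace path, args[1] = build number (passed as script parameters
// because the Groovy plugin doesn't resolve Hudson variables in the Properties field)
new File(args[0], "build-fingerprint.txt").text = "build ${args[1]} at ${new Date().time}"
```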

A side-effect of using the Groovy command option rather than invoking an existing Groovy script is that you end up with a copy of the script in your workspace directory. In fact, you get a copy of it for each build. I am not certain why the plugin needs to generate the script file at all as opposed to just running it in memory. Hopefully, this will get corrected in a future release. For now, I may put the script in a regular file just so I don’t have to clear out all the temps.

Downstream Jobs – Test, Test, Test

The tests are divided into 3 downstream jobs that are kicked off when the primary job has a successful build. The jobs run the unit and integration tests, the Selenium tests, and the JsUnit tests, respectively. I didn’t bother splitting out the unit and integration tests into separate jobs because they take less time combined than do the Selenium tests. I could easily split them out later.

Cloning the Workspace

In order to speed up the downstream jobs, I didn’t want to pull the code for each and recompile. Instead, I used the Clone Workspace SCM Plugin to make copies of the primary job’s workspace for each downstream job. This adds only a few seconds to the primary job in order to create the zip archive.

Cloning a workspace - upstream project view

Cloning a workspace - downstream project view

I had two small issues with the plugin:

  1. The zip command doesn’t include empty directories by default. This can be an issue if any of your tests in downstream jobs expect some directories to be there already.
  2. In the downstream job, the working directory is set to the workspace directory itself rather than the (sub-)directory pulled from Subversion as it is in the primary job. This makes sense since the Clone Workspace plugin does clone the entire workspace and not just what was pulled from the SCM system. It just threw off some of the tests that expected files at specific paths. (Yes, I know – the tests should be more flexible. It’s “on the list…”)

I need to do more research to see if I can use this approach when I spin up some slave machines. I will post when I tackle that issue.

Downstream Build Steps

The build steps for the downstream jobs look like this:

Unit and Integration Tests Job

Functional Tests Job

JsUnit Tests Job

You can see that the two jobs which run the server-side tests execute a shell command rather than a regular maven target. This was because of the working directory issue I mentioned above. I use the shell to cd into the proper directory first and then execute the tests. A little hackish, but it was a good experiment with using the Execute shell build step.
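The shell step boils down to something like this (the directory name and test arguments are assumptions, not the exact step from my jobs):

```shell
# Work around the Clone Workspace working-directory difference:
# cd into the checkout subdirectory before running the tests.
cd my-app
mvn grails:exec -Dcommand=test-app -Dargs="-unit -integration"
```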

You can also see how many different ports are used by the tests – the JsUnit acceptor servlet, the Selenium server servlet, the app itself when launched for Selenium, etc. I use the Hudson Port Allocator Plug-in to keep tests from stomping on each other.

Aggregating Test Results and Fingerprinting

I turned on test aggregation in the primary job so I could get a single view of test results. In order for this to work, the primary job needs to know how to associate specific downstream builds with specific builds in it. This is done through the fingerprinting feature. None of the docs mention this connection. I didn’t figure it out until I did a build, clicked on the Aggregate Test Results link, and saw this error message:

Generating a unique-per-build file using the Groovy script discussed above let me link the jobs together for aggregating.

NOTE: You do not select the “Publish JUnit test result report” option in the primary job as part of this feature. That option is only used in the jobs that actually run the tests. If you turn it on in the primary job, you will get an error during the build because there are no JUnit files for it to process.

The Aggregate Test Results feature is nice since it provides links through to the test results pages for the downstream builds and a simple table listing the fail/total counts from the downstream builds.

Aggregate results report for a good build

Unfortunately, there appears to be a bug in it where it will not report results for any downstream job that has failures. A failing downstream job is shown in the build details page for the primary job

Aggregate results summary with failing job

but you can see that the Aggregated Test Result link lists “no failures.” Clicking that link shows the summary table, but it is missing a line for the downstream job with failures:

Aggregate results, but failing job is missing

In addition to having this bug, the feature does not show the test result trend chart or the tests by packages in the primary build. This makes it of very limited usefulness.

Next Steps

I was able to accomplish my primary goal of cutting our CI build times, but not without losing some required output.  Most of the shortcomings for this approach are related to one core issue – Multiple jobs means multiple places to configure and view results. For example, I had to configure failure email notifications in all downstream jobs rather than just one. Also, there is no way to get an aggregated code coverage report that spans all the tests (unit, integration, and functional). I could live with not having a single view of all test failures since I can get that info from the downstream jobs, but not having accurate code coverage metrics is not an option. I have to figure out a way around that.

Since most of the problems with this configuration were related to aggregating the test results (both direct results and coverage stats), my next step will be to try a “diamond” build configuration using the Join Plugin. Hopefully, I can pull all of the test results, cobertura.ser, and other such files into the bottom of the diamond to get a single place to view the status of the build.

I also want to get CodeNarc output displaying for the job via the Violations plugin. I can generate the CodeNarc output, and the Violations plugin tries to parse it, but it then crashes with an NPE. I need to narrow down what part of the file is causing the exception so I can report the issue.

Categories: Testing

Hudson + Windows + JsUnit = Boom

October 5th, 2010 No comments

Anyone thinking about upgrading Hudson on a Windows box might want to hold off for a bit if you have any Ant build steps. In Hudson v1.370, they introduced a change for parsing some Ant parameters (HUDSON-7108). However, this caused several other problems, including one that will bust any step you have that runs JsUnit tests (HUDSON-7657). I banged my head against this for a while before I realized that my configuration changes hadn’t caused the problem, so I thought I’d try to save others some pain.

Categories: Testing

Validation differences between Grails 1.1 and 1.2

June 9th, 2010 No comments

…or My Penance For Ignoring Validation Errors…

We recently updated our app from Grails 1.1.1 to 1.2.2. (We wanted to move all the way to 1.3.1 but since we build with maven, we finally decided to wait until a new grails-maven plugin is released. See GRAILS-6327.) During the upgrade, we hit two particularly annoying issues related to persistence and setting up associations.

The first involved a belongsTo relation like this:

class AppUser {
    UserPrefs prefs = new UserPrefs()
}

class UserPrefs {
    static belongsTo = [user: AppUser]
}

This worked under Grails 1.1, but under Grails 1.2 the prefs object was not getting persisted when the user was saved. Other belongsTo relations, both one-to-many and one-to-one worked as expected. We finally discovered that using the short form of the relation notation did work:

static belongsTo = AppUser

This post provided a clue as to where the problem may be. The user property of the Prefs object is not automatically set because the relationship is one-to-one. (At least, it is not automatically set all the time. If we load an AppUser object with Hibernate, it appears that the user property is set in the UserPrefs object. Go figure.) Under Grails 1.1, having a null user was apparently fine. My guess (based on the second error we hit, below) is that any validation errors caused by the child object were not stopping the save of the parent object. Grails 1.2, on the other hand, does care about the child object validation, though I could never get it to report anything in the errors property of either the parent or child object.

In order to work around the issue, I loosened the constraints on UserPrefs a little:

class UserPrefs {
    static constraints = {
        user(nullable: true)
    }
    static belongsTo = [user: AppUser]
}

With this change, cascading persistence works. Granted, it’s not an ideal solution, but it does let us have access to the user property if needed.

What made this extra confusing is that we have a similar relationship in another set of classes that worked without a problem. The only difference there is that the child class is not automatically created with a field initializer as is done for the prefs property in AppUser; it is always explicitly set into the owning object. I’ve run out of time to pursue this one further, so I don’t have a final answer. Anyone else out there have more insight into this?

The second persistence issue related to this section of code in the update method of one of our controllers:

def update = {
    def hubMap = HubMap.get( = params
    if ( {
        ...add success message to response...
    } else {
        ...add failure message to response...
    }
}
Again, the code worked fine under Grails 1.1.1, but the save call failed under 1.2.2. Unfortunately, there was a hole in both our unit and integration tests for this method, so we didn’t catch it until much later in the release cycle.

Was the JSON conversion in the controller’s get method that generated the original data for the browser different? Nope.
Had the behaviour of the properties meta-method changed? Nope.

The difference is in how Grails handles any existing validation errors on a domain object when you call save(). In our case, the JSON that was being sent to the controller (via a Prototype AJAX call) contained two properties that were references to other objects. The javascript object conversion in the browser was not setting these properties in a meaningful way for the AJAX call; they were both coming across with the string value [object Object]. Since these fields are never updated in this particular workflow, we had never checked what was happening to them. Since Grails obviously could not convert from the string values to the proper objects, it ignored the values and set two errors on the domain object to record them. However, we didn’t check for errors after the properties call; we went straight to the save call. Under Grails 1.1, deep within the saving sequence in a random class called AbstractDynamicPersistentMethod, you come across this bit of code:

doInvokeInternal(...) {
    Errors errors = new BeanPropertyBindingResult(target, target.getClass().getName());
    mc.setProperty(target, ERRORS_PROPERTY, errors);

Any existing errors are replaced with a fresh, clean BeanPropertyBindingResult object, wiping out the error information. We never spotted it because we never expected the properties with errors to change anyway. That hole has been closed in Grails 1.2:

doInvokeInternal(...) {
    Errors errors = setupErrorsProperty(target);

The new setupErrorsProperty call will copy out existing errors. We put in a few adjustments to not attempt to update those properties and all is well.

So there you have it. A couple of gotchas in the upgrade path for grails. Hope this saves some folks from banging their head against the wall.

Grails, Maven, packaging, and skipTests fun

April 22nd, 2010 No comments

The grails maven plugin honors the maven.test.skip option (which turns off compiling and executing tests), but not the skipTests option (which only turns off executing tests). This means that you cannot easily create a package with test classes in it without running all of the tests. I finally came up with a close workaround:

mvn clean grails:exec install -Dcommand=test-app -Dargs="-unit DoesNotExist" -Dmaven.test.skip=true

By explicitly listing the test-app command but specifying a test that does not exist, I was able to get the unit tests to be compiled. Since I only needed some common testing files in my package, this works for me. If someone wanted to package both the unit and integration tests, I don’t think this would work. I filed a new issue in the Grails Jira to try to get a real resolution.

Corrections to Grails Documentation

September 5th, 2009 No comments

Here are two quick tips for anyone working with Grails 1.1.1:

1) The User Guide has conflicting info about cascading persistence.  One section says “The default cascading behaviour is to cascade saves and updates, but not deletes unless a belongsTo is also specified.” Section 5.3.3 says, “If you do not define belongsTo then no cascades will happen and you will have to manually save each object.”

The former is correct. Create and update actions are cascaded for one-to-many relations. However, they are not cascaded for one-to-one relations.

2) Section 9.1 (Unit Testing) demonstrates how to pass in a list to the mockDomain(clazz, instanceList) call.  It then says that you can check that list to verify that persistence actions behaved as expected.  This is no longer correct.  The list is copied inside the mockDomain call.  To get access to the list of persisted objects for a class, use the MockUtils.TEST_INSTANCES map with the class name as the key.  For example:

def testInstances = []
mockDomain(Item, testInstances)
def i = new Item(name: "foo").save()
// the original list is untouched; mockDomain copied it internally
assertEquals 0, testInstances.size()
// the copy that mockDomain actually uses:
testInstances = MockUtils.TEST_INSTANCES[Item]
assertEquals 1, testInstances.size()

And in case you try the same thing as I did, the mocking behavior does not support cascades.