Archive

Posts Tagged ‘Testing’

Grails Plugins Presentation

February 17th, 2012

I presented at the DC GUG meeting a couple of days ago, this time on Grails plugins. I wanted to go beyond the normal “these are the basics of Grails” intro to cover the main points of what you need to do to get a real project up and running and how to maintain it. I covered things like authentication, fixtures, tests and test data sets, audit logging, remote debugging, and monitoring. A bit too aggressive for an hour-long presentation (okay, so way too aggressive), but folks seemed to enjoy it. Hopefully I’ll have another opportunity to present it so I can tune it up a bit. For now, the slides are on SlideShare and the basis for the sample project is on GitHub. I’ll try to break out the key points into individual posts soon.

Categories: grails, groovy

More Parallel Processing in Jenkins Notes

October 31st, 2011

October must be my month to think about CI testing. Unfortunately, I haven’t had time to do much with it since my last post a year ago, but I did stumble across these rough notes I took while getting a “job pyramid” set up in our Hudson/Jenkins server last year. I never got around to turning them into a full post, but I wanted to record them in a better place for future reference. Maybe they will help someone else as well. Remember, these are now a year old, so some bugs may be fixed or new features added. I haven’t had time to stay current on happenings with Jenkins since then. Please forgive the lack of formatting, proofreading, grammar, or anything resembling organization.

Random Tips

  • I installed the HTML Publisher plugin to view the codenarc reports until I could figure out why the Violations plugin doesn’t like the codenarc XML output. Make sure to put your reports in unique directories. I initially tried to just output the report in the target directory and it ended up archiving my whole target directory.
  • The codenarc plugin requires that the full parent path for the file it will output be present, so if you want to write it to, say, the target/test-reports directory, you must make sure that directory is there. I added another line to do this in the Groovy script I already had for creating the fingerprint file (see the sketch after this list).
  • I added the job config history plugin. Very handy when experimenting with different setups so you can easily get back to a config that worked.
  • Turned on the log parsing plugin. It would be handy if it shipped with a default parsing rules file, but it’s nice nonetheless.
  • Downstream builds report plugin – doesn’t really provide any new info when you only have a single layer of downstream jobs
  • Groovy plugin – I put some shared scripts in a common directory and created a Hudson global property for the path. If I tried to use that variable to list the path to a script (e.g. $SHARED_SCRIPTS/some_script.groovy), the plugin would assume the path was relative to the workspace. I had to insert an extra “/” in the “Groovy script file” space (e.g. /$SHARED_SCRIPTS/some_script.groovy) for the plugin to realize it was an absolute path.
  • Like the codenarc tool, the JSUnit ant tool will fail if the parent directory of the output directory has not already been created.
  • Increase your quiet period if you often check in groups of files using multiple commits
  • If you run functional (e.g. Selenium) tests in a separate build step, you need to output the reports to a different directory or else they will wipe out the unit/integration reports
  • There is a time sync problem between servers documented on the Sonatype blog. The article says to use VMware Tools to sync time on the VM with the ESX server. Unfortunately, our own ESX server’s time is about 25 minutes slow, so that causes problems of its own.
  • If you want to have a single build step which runs unit and integration tests and generates coverage stats, you should specify the arguments in this order: -coverage -xml -unit -integration. If you try something like this: -unit -integration -coverage -xml, mvn/grails will think that you want to only execute an integration test named “-coverageTests” and it will also give you an error because it doesn’t understand the -xml parameter on its own. (Yes, I know this is the old syntax for running Grails tests. I haven’t tried the newer form.)
  • If you clone a downstream project, the build trigger to run when the upstream project completes is not cloned with it
  • You can’t set a build to a better result than it already has; i.e., if it’s FAILED, you can’t set it to UNSTABLE. This is a pain if mvn tries to fail your build and you want to run a post-build script that checks whether it was really a failure.
  • A small problem with having to combine the cobertura output (and thus not using the HTML output directly from the cobertura-grails plugin) is that you don’t get the nice closure-renaming in the file-level stats from the grails plugin. So you see things like MyController$_closure2... You can still see what lines are covered or not right below the stats, so this isn’t a huge issue.
  • Using the clone workspace method makes the config a little cleaner, but there may be a risk that not all the downstream jobs get the archive copied before the next build of the upstream job completes and removes the current copy (since it only keeps one, I think). I’m not sure this could happen unless the downstream job is spun out to a clone and then the next build of the upstream job is started on the master. If the clone doesn’t start copying the workspace archive until after the master completes the next build, you could get the wrong archive. You also can’t fingerprint it. At least, I don’t think you can.
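
For the directory-creation tips above, the extra line in the Groovy script can be as simple as this (a minimal sketch; the path is just an example):

// Make sure the report's parent directory exists before the codenarc/JSUnit step runs
new File('target/test-reports').mkdirs()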

Syncing Builds

I set up the full “job diamond” as I mentioned in my last post on this. One issue I ran into was how to make sure the join job pulls artifacts from the proper build of the origin job. You can’t use “last successful build” because there could have been additional builds of the origin job while the first-tier sub-jobs were running but before the join-job ran.

The solution I came up with was to have the join trigger call two sub-triggers:

  1. Aggregate archived artifacts – grab the artifacts you need from the primary downstream jobs. In my case, I wanted to get the test reports.
  2. Parameterized build trigger – Define a sub-parameterized build trigger that invokes the join-job, passing the BUILD_NUMBER of the origin job as the parameter. (Use the Predefined parameters dialog for this; see the sketch after the next list.)

Then, in the join-job:

  1. Define a string parameter to accept the passed in build number.
  2. In the build steps, create a “Copy artifacts” step, set it to get a specific build, and then set the build parameter name as the value, as shown below.
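
As a concrete sketch of the wiring (the parameter name ORIGIN_BUILD_NUMBER is my own choice, not anything required by the plugins), the Predefined parameters field in the origin job’s trigger would contain:

ORIGIN_BUILD_NUMBER=$BUILD_NUMBER

and the join-job’s “Copy artifacts” step would be set to “Specific build” with the build number given as:

$ORIGIN_BUILD_NUMBER
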
Categories: Testing

Running latest Groovy from Maven

April 5th, 2011

Say you have a Groovy project that you build with Maven. You use the org.codehaus.gmaven:gmaven-plugin to compile your Groovy code and run Groovy tests without a problem. Then you add some features or tests that need Groovy 1.7. You add the proper dependency and version to the <dependencies> section of your pom, run your tests… and watch them blow up because the gmaven-plugin defaults to using Groovy 1.6. So you dig around on the web and find references for how to use the <providerSelection> tag of the gmaven-plugin to get your code compiled with 1.7 and to use 1.7 when running tests. Things seem good. Until…

You add a feature that requires a version of Groovy greater than 1.7.4 (the version included with the latest gmaven-plugin, 1.3). In my case, I used the @Delegate annotation with some inheritance in a test configuration and hit a bug that was fixed in Groovy 1.7.6. No matter what version I listed in my dependencies section, my tests were executed under Groovy 1.7.4. I finally came up with the configuration below, which let me run with a different Groovy version. Note that it made no difference what I included in the dependencies section; the gmaven-plugin configuration appears to be completely independent of that.

<plugin>
    <groupId>org.codehaus.gmaven</groupId>
    <artifactId>gmaven-plugin</artifactId>
    <version>1.3</version>
    <configuration>
        <providerSelection>1.7</providerSelection>
        <!-- This is only used if you want to run a groovy script from the command line using maven -->
        <source>${groovy.script}</source>
    </configuration>
    <executions>
        <execution>
            <goals>
                <goal>compile</goal>
                <goal>testCompile</goal>
            </goals>
        </execution>
    </executions>
    <!-- This block is required in order to make the gmaven plugins use a groovy other than 1.7.4.
     This is independent of the groovy entry in the dependencies section.  This does not affect the class path.

     What is interesting is that there must be both the gmaven.runtime entry with the explicit excludes
     and the additional dependency on whatever version we do want to use.  If you exclude the former,
     it will throw an exception. -->
    <dependencies>
        <dependency>
            <groupId>org.codehaus.gmaven.runtime</groupId>
            <artifactId>gmaven-runtime-1.7</artifactId>
            <version>1.3</version>
            <exclusions>
                 <exclusion>
                     <groupId>org.codehaus.groovy</groupId>
                     <artifactId>groovy-all</artifactId>
                 </exclusion>
            </exclusions>
        </dependency>
        <dependency>
            <groupId>org.codehaus.groovy</groupId>
            <artifactId>groovy-all</artifactId>
            <version>1.7.6</version>
        </dependency>
    </dependencies>
</plugin>

You don’t know what you don’t know

October 12th, 2010

Disclaimer: If you are reading this post, then you probably don’t need to.  It likely doesn’t contain anything that you don’t already know or practice.  But perhaps it will help you convince a co-worker who is still pounding away in the dark.

I used to believe that I didn’t need to keep up with technical RSS feeds or newsgroups. I learn things quickly and know how to use Google. Thus, if I ever wanted to find some tool to accomplish a certain task or if I hit a problem with a library I was using, I could quickly go out and find what I was looking for. Spending time reading the Hibernate user group and the JBoss user groups and the Spring user group and all the rest just took away time from designing and implementing the next feature on the schedule. Who could possibly remember all the other new product, library, and feature announcements unless you had a need for them right then?

I now know that this is an excellent way to regularly reinvent the wheel. Why? Because you don’t know what you don’t know. Sometimes it’s easy to guess when there are tools, libraries, or entire languages that can help you do your job. For example, we all know that there are many ORM, logging, and xml-parsing libraries.  There are many situations, however, where it is unlikely that you will realize you are facing a problem that someone else has already solved.

This happened to me last spring. We were in crunch time trying to finish up a release and I had fallen behind on my feed/newsgroup reading. I winced every time I opened up Google Reader and saw the unread count tick higher and higher. As part of this release, I wrote a handy little utility routine that let me override a single method of a class by using Groovy’s ExpandoMetaClass. The nice thing about the routine was that it always discarded the modified MetaClass after the closure that was passed to it finished. I could thus remove the many try…finally blocks that were piling up in my tests as I tried to make sure I didn’t corrupt a MetaClass from one test to another.
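
The routine looked something like this (a minimal sketch reconstructed from the description above; the name withReplacedMethod and its signature are my own illustration, not the original code):

// Override a single instance method on a class, run the body closure, and always
// restore the original metaclass afterward (name and signature are illustrative)
def withReplacedMethod(Class clazz, String methodName, Closure replacement, Closure body) {
    def registry = GroovySystem.metaClassRegistry
    def originalMetaClass = registry.getMetaClass(clazz)
    try {
        def emc = new ExpandoMetaClass(clazz, true, true)
        emc."$methodName" = replacement
        emc.initialize()
        registry.setMetaClass(clazz, emc)
        body()
    } finally {
        // Discard the modified metaclass so one test can't corrupt another
        registry.setMetaClass(clazz, originalMetaClass)
    }
}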

A couple of days later, I was able to whittle down the backlog in Google Reader. That’s when I saw this entry from mrhaki: http://mrhaki.blogspot.com/2010/03/grails-goodness-using-metaclass-with.html. Built into the GrailsUnitTestCase class was a method that did exactly what I wanted, plus a lot more – registerMetaClass. Granted, my utility routine can be used outside of a Grails unit test, so it’s not a complete waste. But I could have saved myself a few hours of effort if I had been up to date on my reading. Perhaps I could have spent those hours catching up on the rest of my “To Read” list…
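
For reference, the built-in approach looks roughly like this in a Grails unit test (a sketch; MyService is a hypothetical class under test):

import grails.test.GrailsUnitTestCase

class MyServiceTests extends GrailsUnitTestCase {
    void testLookupIsStubbed() {
        // registerMetaClass snapshots MyService's metaclass so tearDown() restores it
        registerMetaClass(MyService)
        MyService.metaClass.lookup = { id -> 'stubbed' }
        assert new MyService().lookup(42) == 'stubbed'
    }
}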

Categories: pain, Testing

Using Hudson to run tests in parallel

October 11th, 2010

We have been using Hudson to do CI on our Grails application for a few months now. We use it to run unit, integration, JsUnit, and functional (via Selenium) tests, as well as code coverage metrics using Cobertura. Since this was the first project that we used Hudson for, the person who originally set it up just put all of these steps into a single job. It worked fairly well, but a single build was taking between 20 and 30 minutes, depending on how extensive the changes were. This can be a real pain, especially during very active development cycles when you most need rapid feedback. I decided to look into how I could run the tests in parallel, and hopefully get our build times to under 5 minutes. This is what I have come up with so far. It’s not perfect, but the tests do run more quickly (around 5 1/2 minutes total, primarily limited by the speed of the Selenium tests).

Primary Job – Poll and Build

The primary job is very simple. It polls Subversion and executes clean and test-compile targets. (We use maven with the grails-plugin to compile the app.) It normally only takes 30-40s to run. It doesn’t do a full package/install because that adds another minute to the build time.

Primary Build Steps

[Screenshot: build steps for the primary job]

Using the Groovy Plugin

One shortcoming of not running the package step is that the job doesn’t generate a unique file, like a jar or a war, that can be used for fingerprinting. You need to turn on fingerprinting in order to aggregate test results. (More on that subject below.)

To resolve this, I used the Groovy plugin to execute a one-line script to generate a unique file for this purpose. I pass in the path and build information as script parameters rather than environment properties because the Groovy plugin doesn’t resolve Hudson environment variables when they are set in the Properties field. This seems like a big shortcoming, so perhaps I just misunderstand how to pass them correctly as properties.
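
The script itself can be a one-liner along these lines (a sketch; the file name and the parameter order are my own illustration):

// args[0] is the workspace path and args[1] the build number, passed in as script parameters.
// Writes a file whose content is unique per build, giving Hudson something to fingerprint.
new File(args[0], 'build-fingerprint.txt').text = "build ${args[1]}"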

A side-effect of using the Groovy command option rather than invoking an existing Groovy script is that you end up with a copy of the script in your workspace directory. In fact, you get a copy of it for each build. I am not certain why the plugin needs to generate the script file at all as opposed to just running it in memory. Hopefully, this will get corrected in a future release. For now, I may put the script in a regular file just so I don’t have to clear out all the temps.

Downstream Jobs – Test, Test, Test

The tests are divided into 3 downstream jobs that are kicked off when the primary job has a successful build. The jobs run the unit and integration tests, the Selenium tests, and the JsUnit tests, respectively. I didn’t bother splitting out the unit and integration tests into separate jobs because they take less time combined than do the Selenium tests. I could easily split them out later.

Cloning the Workspace

In order to speed up the downstream jobs, I didn’t want to pull the code for each and recompile. Instead, I used the Clone Workspace SCM Plugin to make copies of the primary job’s workspace for each downstream job. This adds only a few seconds to the primary job in order to create the zip archive.

[Screenshot: cloning a workspace - upstream project view]

[Screenshot: cloning a workspace - downstream project view]

I had two small issues with the plugin:

  1. The zip command doesn’t include empty directories by default. This can be an issue if any of your tests in downstream jobs expect some directories to be there already.
  2. In the downstream job, the working directory is set to the workspace directory itself rather than the (sub-)directory pulled from Subversion as it is in the primary job. This makes sense since the Clone Workspace plugin does clone the entire workspace and not just what was pulled from the SCM system. It just threw off some of the tests that expected files at specific paths. (Yes, I know – the tests should be more flexible. It’s “on the list…”)

I need to do more research to see if I can use this approach when I spin up some slave machines. I will post when I tackle that issue.

Downstream Build Steps

The build steps for the downstream jobs look like this:

[Screenshot: build steps for the unit and integration tests job]

[Screenshot: build steps for the functional tests job]

[Screenshot: build steps for the JsUnit tests job]

You can see that the two jobs which run the server-side tests execute a shell command rather than a regular Maven target. This was because of the working directory issue I mentioned above. I use the shell to cd into the proper directory first and then execute the tests. A little hackish, but it was a good experiment with using the Execute shell build step.

You can also see how many different ports are used by the tests – the JsUnit acceptor servlet, the Selenium server servlet, the app itself when launched for Selenium, etc. I use the Hudson Port Allocator Plug-in to keep tests from stomping on each other.

Aggregating Test Results and Fingerprinting

I turned on test aggregation in the primary job so I could get a single view of test results. In order for this to work, the primary job needs to know how to associate specific downstream builds with specific builds of its own. This is done through the fingerprinting feature. None of the docs mention this connection. I didn’t figure it out until I did a build, clicked on the Aggregate Test Results link, and saw this error message:

[Screenshot: error message shown by the Aggregate Test Results link]

Generating a unique-per-build file using the Groovy script discussed above let me link the jobs together for aggregation.

NOTE: You do not select the “Publish JUnit test result report” option in the primary job as part of this feature. That option is only used in the jobs that actually run the tests. If you turn it on in the primary job, you will get an error during the build because there are no JUnit files for it to process.

The Aggregate Test Results feature is nice since it provides links through to the test results pages for the downstream builds and a simple table listing the fail/total counts from the downstream builds.

[Screenshot: aggregate results report for a good build]

Unfortunately, there appears to be a bug in it where it will not report results for any downstream job that has failures. A failing downstream job is shown in the build details page for the primary job

[Screenshot: aggregate results summary with a failing downstream job]

but you can see that the Aggregated Test Result link lists “no failures.” Clicking that link shows the summary table, but it is missing a line for the downstream job with failures:

[Screenshot: aggregate results table missing the failing downstream job]

In addition to having this bug, the feature does not show the test result trend chart or the tests by packages in the primary build. This makes it of very limited usefulness.

Next Steps

I was able to accomplish my primary goal of cutting our CI build times, but not without losing some required output. Most of the shortcomings of this approach are related to one core issue: multiple jobs mean multiple places to configure and view results. For example, I had to configure failure email notifications in all downstream jobs rather than just one. Also, there is no way to get an aggregated code coverage report that spans all the tests (unit, integration, and functional). I could live with not having a single view of all test failures since I can get that info from the downstream jobs, but not having accurate code coverage metrics is not an option. I have to figure out a way around that.

Since most of the problems with this configuration were related to aggregating the test results (both direct results and coverage stats), my next step will be to try a “diamond” build configuration using the Join Plugin. Hopefully, I can pull all of the test results, cobertura.ser, and other such files into the bottom of the diamond to get a single place to view the status of the build.

I also want to get CodeNarc output displaying for the job via the Violations plugin. I can generate the CodeNarc output, and the Violations plugin tries to parse it, but it then crashes with an NPE. I need to narrow down what part of the file is causing the exception so I can report the issue.

Categories: Testing

Hudson + Windows + JsUnit = Boom

October 5th, 2010

Anyone thinking about upgrading Hudson on a Windows box might want to hold off for a bit if they have any Ant build steps. In Hudson v1.370, they introduced a change for parsing some Ant parameters (HUDSON-7108). However, this caused several other problems, including one that will bust any step you have that runs JsUnit tests (HUDSON-7657). I banged my head against this for a while before I realized that my configuration changes hadn’t caused the problem, so I thought I’d try to save others some pain.

Categories: Testing

Grails, Maven, packaging, and skipTests fun

April 22nd, 2010

The Grails Maven plugin honors the maven.test.skip option (which turns off compiling and executing tests), but not the skipTests option (which only turns off executing tests). This means that you cannot easily create a package with test classes in it without running all of the tests. I finally came up with a close workaround:

mvn clean grails:exec install -Dcommand=test-app -Dargs="-unit DoesNotExist" -Dmaven.test.skip=true

By explicitly listing the test-app command but specifying a test that does not exist, I was able to get the unit tests compiled without actually running any. Since I only needed some common testing files in my package, this works for me. If someone wanted to package both the unit and integration tests, I don’t think this would work. I filed a new issue in the Grails JIRA to try to get a real resolution.

Grails JSON converter and transient properties

November 23rd, 2009

I figured out why my domain object was rendering as JSON differently in unit tests vs. when running the application and integration tests. Drum roll, please… No grailsApplication object is set up in unit tests.

I hear a great big, “Huh?”

First, a little background on converters. (You may want to get the Grails source code and have directories open to the org.codehaus.groovy.grails.web.converters and grails.converters packages before diving into this.) When a JSON converter is created, it obtains a ConverterConfiguration from the global ConvertersConfigurationHolder. ConverterConfiguration has several properties which influence the conversion process, but the most interesting for this discussion is its prioritized set of ObjectMarshallers. ObjectMarshallers are what do the actual work of turning an object into a JSON representation. Each ObjectMarshaller handles a certain set of classes, like arrays, maps, beans, Grails domain objects, etc. When the JSON object is ready to convert a data object, it calls config.getMarshaller(data). The config iterates through its list of marshallers, calling marshaller.supports(data) on each. Since multiple marshallers may be able to handle a specific class (like GroovyBeanMarshaller and DomainClassMarshaller can both handle domain classes), whichever marshaller has the highest priority (i.e. is called first) will be invoked.
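
Conceptually, the lookup behaves like this (a simplified sketch, not the actual Grails source):

// Simplified: marshallers are held in priority order, highest priority first
ObjectMarshaller getMarshaller(Object obj) {
    for (ObjectMarshaller m in orderedMarshallers) {
        if (m.supports(obj)) {
            return m
        }
    }
    return null
}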

When the standard ConverterConfiguration is initialized, the DomainClassMarshaller is given higher priority than the GroovyBeanMarshaller. Thus, it handles the conversion of domain objects during normal Grails application execution even though domain objects are also groovy beans. However, the DomainClassMarshaller.supports() method has a slight twist to it. It depends on there being a GrailsApplication object registered in the ConverterUtil class in order to tell it that a particular object is a domain object. Without the GrailsApplication object, ConverterUtil.isDomainClass() always returns false, causing DomainClassMarshaller.supports() to also return false. The ConverterConfiguration next checks the GroovyBeanMarshaller. Its supports() method returns true, so it handles the conversion in this case.

So why is this a big deal? For most domain objects, it probably isn’t. Both marshallers render the primary properties of the domain object. However, the DomainClassMarshaller only renders properties returned by domainClass.getPersistentProperties(), while the GroovyBeanMarshaller renders all properties, including transient ones. Because I needed some transient properties in the JSON representation, my unit tests worked great (since there is no grailsApplication set up, the GroovyBeanMarshaller did the full rendering), but then my code failed when I ran the integration tests and the real app (since the DomainClassMarshaller handled the rendering).

How do you fix this? It’s pretty easy once you know where to look. First, I wanted to make my unit tests behave like the real app, so I had to get them to fail. Unfortunately, the conversion classes are written in Java, not Groovy, so you can’t just do a quick override of the ConverterUtil metaclass to get it to return what you want. You actually need to set up a GrailsApplication object. Rather than trying to create and initialize a whole one, though, I determined that I could create a small stub class that overrode the two methods I needed:


import org.codehaus.groovy.grails.commons.*

class GrailsApplicationStub extends DefaultGrailsApplication {
    def artefacts = [(DomainClassArtefactHandler.TYPE): [:]]

    boolean isArtefactOfType(String artefactType, String className) {
        return getArtefact(artefactType, className) != null
    }

    GrailsClass getArtefact(String artefactType, String name) {
        return artefacts[artefactType] ? artefacts[artefactType][name] : null
    }
}

When initializing the unit test, I created the stub and set in it the particular domain class that I wanted to be recognized. Then I registered the application stub with the ConverterUtil class:


def grailsApp = new GrailsApplicationStub();
grailsApp.artefacts[DomainClassArtefactHandler.TYPE][YourClass.name] = new DefaultGrailsDomainClass(YourClass);
ConverterUtil.setGrailsApplication(grailsApp);

Instances of YourClass will now be handled by the DomainClassMarshaller in the unit test. When the unit test checks for the existence of a transient property in a JSON representation, it should fail.

Next, I needed to change the marshaller prioritization so that the desired domain class was handled by a regular GroovyBeanMarshaller. Since I always wanted the GroovyBeanMarshaller to handle the class, I inserted this code into my unit test setup (and later into BootStrap so the behavior would apply to the full app):


JSON.registerObjectMarshaller(YourClass, { o, c ->
    new GroovyBeanMarshaller().marshalObject(o, c)
})

This creates a ClosureObjectMarshaller that handles YourClass objects and gives the new marshaller the default priority of 1 (which puts it at the top of the priority list). When processing an object, this closure marshaller simply passes it through to a GroovyBeanMarshaller. Since the new marshaller only handles YourClass objects, the rendering behavior for all other domain objects is not affected.

This resolved my original issue. The JSON rendering of my object is now correct when I run my app. I did go down a side path while coming to this final solution. If you only want to override how a YourClass object is rendered some of the time, you could instead register a named configuration for it like this:


JSON.createNamedConfig("FullYourClassRender") { cfg ->
    cfg.registerObjectMarshaller(new ClosureOjectMarshaller(YourClass, { o, c ->
        new GroovyBeanMarshaller().marshalObject(o, c)
    }))
}

Then when you want to use that particular configuration, you wrap the JSON object calls in a use statement:


JSON.use("FullYourClassRender") {
    render(contentType: "application/json", text: myObject as JSON)
}

Other configuration options are also available. Take a look at the JSON class to see them.

I hope this helps document the conversion process a bit.

Categories: Uncategorized