More Parallel Processing in Jenkins Notes
October must be my month to think about CI testing. Unfortunately, I haven’t had time to do much with it since my last post a year ago, but I did stumble across these rough notes I took while getting a “job pyramid” set up in our Hudson/Jenkins server last year. I never got around to turning them into a full post, but I wanted to record them in a better place for future reference. Maybe they will help someone else as well. Remember, these are now a year old, so some bugs may be fixed or new features added. I haven’t had time to stay current on happenings with Jenkins since then. Please forgive the lack of formatting, proofreading, grammar, or anything resembling organization.
- I installed the HTML Publisher plugin to view the codenarc reports until I could figure out why the Violations plugin doesn’t like the codenarc XML output. Make sure to put your reports in unique directories. I initially tried to just output the report in the target directory and it ended up archiving my whole target directory.
- The codenarc plugin requires that the full parent path for the file it will output be present, so if you want to write it to, say, the
target/test-reports directory, you must make sure that directory exists first. I added another line to do this in the groovy script I already had for creating the fingerprint file.
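If you use a shell build step instead of a groovy script, the directory creation is a one-liner (using the target/test-reports path from above as an example):

```shell
# Create the report directory before codenarc runs; -p creates any
# missing parent directories and is a no-op if the path already exists.
mkdir -p target/test-reports
```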
- I added the job config history plugin. Very handy when experimenting with different setups so you can easily get back to a config that worked.
- Turned on the log parsing plugin. Would be handy if it shipped with a default rules parsing file, but nice nonetheless.
- Downstream builds report plugin – Doesn’t really provide any new info when you only have a single layer of downstream jobs.
- Groovy plugin – I put some shared scripts in a common directory and created a Hudson global property for the path. If I tried to use that variable to list the path to a script (e.g.
$SHARED_SCRIPTS/some_script.groovy), the plugin would assume the path was relative to the workspace. I had to insert an extra “/” in the “Groovy script file” space (e.g.
/$SHARED_SCRIPTS/some_script.groovy) for the plugin to realize it was an absolute path.
- Like the codenarc tool, the JSUnit ant tool will fail if the parent directory of the output directory has not already been created.
- Increase your quiet period if you often check in groups of files using multiple commits.
- If you run functional (e.g. Selenium) tests in a separate build step, you need to output the reports to a different directory or else they wipe out the unit/integration reports.
- There is a time sync problem between servers documented on the Sonatype blog. The article says to use VMware Tools to sync time on the VM with the ESX server. Unfortunately, our own ESX server’s time is about 25 minutes slow, so that causes problems of its own.
- If you want to have a single build step which runs unit and integration tests and generates coverage stats, you should specify the arguments in this order:
-coverage -xml -unit -integration. If you try to do something like this:
-unit -integration -coverage -xml, mvn/grails will think that you want to only execute an integration test named “-coverageTests” and it will also give you an error because it doesn’t understand the -xml parameter on its own. (Yes, I know this is the old syntax form for running grails tests. I haven’t tried with the newer form.)
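In other words, using the old grails test syntax from my notes, the working and broken invocations look like this (shown as shell commands; only the argument order differs):

```shell
# Works: coverage flags come first, then the test phases
grails test-app -coverage -xml -unit -integration

# Fails: grails parses "-coverage" as the name of an integration test
# and then errors out on "-xml" because it doesn't understand it alone
grails test-app -unit -integration -coverage -xml
```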
- If you clone a downstream project, the build trigger to run when the upstream project completes is not cloned with it.
- You can’t set a build to a better result than it already has. i.e. if it’s FAILED, you can’t set it to UNSTABLE. This is a pain if mvn tries to fail your build and you want to run a post-build script that checks if it was really a failure.
- A small problem with having to combine the cobertura output (and thus not using the HTML output directly from the cobertura-grails plugin) is that you don’t get the nice closure-renaming in the file-level stats from the grails plugin. So you see things like
MyController$_closure2... You can still see which lines are covered or not right below the stats, so this isn’t a huge issue.
- Using the clone workspace method makes the config a little cleaner, but there may be a risk that not all the downstream jobs get the archive copied before the next build of the upstream job completes and removes the current copy (since it only keeps one, I think). I’m not sure this could happen unless the downstream job is spun out to a clone and then the next build of the upstream job is started on the master. If the clone doesn’t start copying the workspace archive until after the master completes the next build, you could get the wrong archive. You also can’t fingerprint it. At least, I don’t think you can.
I set up the full “job diamond” as I mentioned in my last post on this. One issue I ran into was how to make sure the join job pulls artifacts from the proper build of the origin job. You can’t use “last successful build” because there could have been additional builds of the originator job while the first-tier sub-jobs were running but before the join-job ran.
The solution I came up with was to have the join trigger call two sub-triggers:
- Aggregate archived artifacts – grab the artifacts you need from the primary downstream jobs. In my case, I wanted to get the test reports.
- Parameterized build trigger – Define a sub-parameterized build trigger that invokes the join-job, passing the BUILD_NUMBER of the origin job as a parameter. (Use the Predefined parameters dialog for this.)
Then, in the join-job:
- Define a string parameter to accept the passed in build number.
- In the build steps, create a “Copy artifacts” step, set it to get a specific build, and then set the build parameter name as the value.
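Pulling the pieces together, the configuration looks roughly like this. This is a sketch from memory, not exact UI text, and ORIGIN_BUILD_NUMBER is just a name I made up for the string parameter:

```
# On the origin job, in the join trigger's sub-triggers:
#   Aggregate archived artifacts  (collects reports from the first-tier jobs)
#   Trigger parameterized build on: join-job
#     Predefined parameters:
ORIGIN_BUILD_NUMBER=$BUILD_NUMBER

# On the join-job:
#   String parameter: ORIGIN_BUILD_NUMBER
#   Build step "Copy artifacts from another project":
#     Project:      origin-job
#     Which build:  Specific build
#     Build number: $ORIGIN_BUILD_NUMBER
```

Because the origin job’s own BUILD_NUMBER is captured at trigger time, the join-job copies artifacts from exactly that build, even if the origin job has run again in the meantime.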