Testing AngularJS apps in IntelliJ IDEA with jsTestDriver

Now that’s a messy title… Let’s take a look at what we’re going to weld together:

  • AngularJS 1.1.1
  • Jasmine 1.3.1
  • IntelliJ IDEA 12.0.3 + jsTestDriver plugin 132.2

This tutorial is based on a pet project: a Minesweeper Kata game. The sources are available on GitHub; the following article concerns the state from commit 0bd5e5fbbb. All instructions assume that your project already has some Jasmine tests for AngularJS controllers running smoothly, just not from the IDE.

IntelliJ Plugin

First of all, install the jsTestDriver plugin for IDEA. Next, download the Jasmine Adapter. It’s a single file called jasmineAdapter.js, available here. Now put the adapter code in your project structure, next to where you keep jasmine.js.

Configuration

Create a configuration file called jsTestDriver.conf. I guess the name is not that important, but that’s the convention. You can keep this file with your code in the repository and put it in the root unit testing directory. In my case it was src/test/unit, whereas src/test/unit/specs contains all the necessary test code. Here’s a screenshot of the actual structure:

[Screenshot: project structure showing the location of jsTestDriver.conf]

Don’t worry about the test.dependencies.js file. It’s required for separate execution in SBT. Here’s an example of our configuration file content:

server: http://localhost:9876

load:
- ../lib/jasmine-1.3.1/*.js
- ../../main/webapp/assets/js/angular-1.1.1.js
- ../../main/webapp/assets/js/angular-resource-1.1.1.js
- ../lib/angular/angular-mocks-1.1.1.js
- ../../main/webapp/js/*.js

test:
- specs/*.js

Make sure that the “load” section contains references to Jasmine, Angular and your application code. The “test” section should list all the unit test files for this configuration. Be careful to keep the order correct according to the dependencies. Most of the time I wasted on cryptic errors was caused by exactly this kind of mistake (loading angular-resource before angular).

Executing tests

Your .js file with unit tests should now be decorated with a special icon, as you can see above next to gameController-spec.js. Select it and press Ctrl+Shift+F10 or select “Run ….” from the context menu. You should see a configuration window similar to this one:

[Screenshot: run configuration window]

Select your jsTestDriver.conf file and fire the test. An error message should appear saying that no test execution server is running. Click the ‘start a local server’ link in the message popup. In the “jsTestDriver Server” view, select the browser(s) on which your tests should be executed:

[Screenshot: jsTestDriver Server view with browser selection]

Well, that’s it. Go ahead and run your tests. Experiment a little with passing, failing and debugging with breakpoints. I had some crazy issues with Chrome (Linux Mint Debian 201204; Chrome 23); fortunately, everything works fine in Firefox.

Now the only thing left to wish for is a tool like Mighty Moose to keep the wheel turning continuously. Enjoy insanely productive coding 🙂

We actually build stuff

It’s still almost unbelievable to me that last week I had a chance to meet Greg Young, Udi Dahan and other great names in software engineering at the same conference. Together with my colleagues from CGM, we went to Vilnius and got smashed by this remarkable event 🙂

The Idea

Greg organized this conference to do something that hadn’t been done before: bring together some of the best developers and let them talk about their priceless experiences. No advertising for magic frameworks, no theories and no promises, just pure experience. That’s usually what people miss at ordinary conferences, which are overloaded with talks on great subjects but lacking grounding in any real project. The organizers also made an amazing move: instead of handing out the traditional bags filled with spam fliers and a crappy USB stick, they donated the money to charity! Representatives of the charity foundation came to the keynote and explained exactly how they were going to spend the donation to help disabled children. Let’s hope that the organizers of other conferences will follow this generous example.

The talks

Mantas Klasavičius presented a great case study about how his team adopted various metrics as a standard part of their programming discipline. I was really impressed that they built an environment where every single deployment can be instantly analyzed on graphs, giving immediate feedback about its impact on the infrastructure (memory, CPU, network, etc.), the application (requests/sec, load on different modules) and the business (a time -> money chart, sic!). We also had the opportunity to discuss a deployment with extreme performance requirements which was ultimately successful. Perfect.

Right afterwards we moved on to watch Johannes Brodwall and his live coding session with TDD and pair programming. Johannes also sparked a lively conversation about extreme programming and agile practices. This talk made my fingers itch like crazy for some coding, which I did right after returning from the conference, sources included 🙂

Rob Ashton came to Vilnius to share some hints on JavaScript and HTML performance. Most of his advice was based on painful experiences from creating games, which can be pretty demanding in terms of speed. Although I’m not much into game development (at least for now, though I did co-create a game a few years ago), the presentation was very interesting and entertaining. If I ever need to render thousands of exploding particles in JavaScript, I know who to follow 🙂

“HTTP Caching 101” by Sebastien Lambla was even more loaded with crazy jokes. However, I wish there had been more references to real experiences and actual projects than to babies, ponies and unicorns. Anyway, the audience seemed very amused and I can live with my slight dissatisfaction.

Greg’s Event Store was the main subject of the next presentation, and he showed us some quite impressive parts of this database. I really liked how he referred to the problems his team encountered and how they overcame them. Much of it was similar to what he presented a few months earlier at the Event Store launch presentation.

The last speaker was Udi Dahan, who told us about his adventures during six long years of developing NServiceBus and building a community around it. As he admitted, he had collected all of the most important experiences and bits of advice that he wishes he had known before. It’s difficult to describe how valuable such knowledge is, along with his extremely sincere confessions about the brutal reality of the road to success. This was a powerful ending, but the fun part was just about to come 🙂

Beer party

Who wouldn’t like to grab a beer and chat with all of these great thinkers for a moment? What about a few hours? Did I mention that Johannes was running another TDD session (this time a minesweeper solver) and one could just sit beside him and observe the progress? I tried to use all these opportunities as much as possible and left overwhelmed by the openness of the organizers and attendees. Greetings to all the people I met and talked to! I guess that’s what I like most about conferences 🙂

Enough writing, enough reading, go and build stuff!

Functional Decorator

Recently I’ve been experimenting with functional programming and Scala. As a developer used to the object-oriented paradigm, I was wondering what the functional equivalents of popular design patterns are. One of the best articles summarizing this subject is Mark Seemann’s “Patterns Across Paradigms”. I am currently working on a small project where I had a chance to implement the Decorator pattern using functional constructs in Scala.

The Object-Oriented approach

My example is based on the “DDD CQRS Leaven”, a project presenting some system and domain modeling concepts. The application was originally created by Sławek Sobótka and Rafał Jamróz; you can browse the codebase on GitHub. Here we are going to focus only on a small part of the domain: the Rebate Policy. It’s a simple representation of the Strategy pattern, responsible for calculating eventual rebates for products in an online store. The model of policies can be described in a few boxes:

Dead simple so far, right? We have our RebatePolicy contract with a couple of implementations. Now let’s see how it looks when we add a Decorator:

Here’s the implementation of these components:
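The snippet embedded in the original post isn’t reproduced here, so below is a rough sketch of the same shape, written in OO-style Scala for brevity (the Leaven original is in Java). The Money and Product classes and the rebate formulas are simplified stand-ins for illustration, not the real domain code.

case class Money(amount: BigDecimal) {
  def +(other: Money): Money = Money(amount + other.amount)
}
case class Product(name: String, price: Money)

// The Strategy contract.
trait RebatePolicy {
  def calculateRebate(product: Product, quantity: Int, minimumPrice: Money): Money
}

// A plain policy: some simple rebate algorithm (made up for illustration).
class StandardRebate(rebateRatio: BigDecimal) extends RebatePolicy {
  override def calculateRebate(product: Product, quantity: Int, minimumPrice: Money): Money =
    Money(product.price.amount * quantity * rebateRatio)
}

// The Decorator base: takes the wrapped policy in the constructor
// and exposes it as a protected member to the inheriting class.
abstract class RebateDecorator(protected val innerPolicy: RebatePolicy) extends RebatePolicy

class VipRebate(minimalThreshold: Money, rebateValue: Money, inner: RebatePolicy)
    extends RebateDecorator(inner) {

  // The second way of constructing it: without a decorated policy (null, as in the Java original).
  def this(minimalThreshold: Money, rebateValue: Money) = this(minimalThreshold, rebateValue, null)

  override def calculateRebate(product: Product, quantity: Int, minimumPrice: Money): Money = {
    val innerRebate =
      if (innerPolicy == null) Money(0)
      else innerPolicy.calculateRebate(product, quantity, minimumPrice)
    if (product.price.amount * quantity >= minimalThreshold.amount) innerRebate + rebateValue
    else innerRebate
  }
}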

Such a design allows combining different domain policies (Strategies) in a flexible way to obtain an object which still matches the RebatePolicy interface and represents the composition. The Decorator pattern allows adding new policies and creating various combinations at runtime, without modifying the existing ones or the “client code”, which keeps using the original abstraction. Neat.

Functional implementation

Trying to achieve similar goals with functional code requires remembering that the GoF Decorator pattern is, in fact, a supplementary construct that compensates for the shortcomings of typical OO languages. In the functional world we can leverage currying and first-class functions to get the same effect. Before we explore the functional implementation in Scala, take another look at the RebateDecorator class above. It is an abstract base for all rebates which can wrap other rebates: a RebateDecorator forces our rebate to accept some inner rebate object in the constructor and provides it as a protected member for further use by the inheriting class. The VipRebate class then allows creating an instance in two ways: either with some decorated member or without it. Let’s do something similar with functions in Scala:
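The original Scala gist isn’t reproduced here either, so here is a minimal sketch of what the functional version can look like, based on the description below (the factory the post calls VipRebate appears here as vipRebate). It reuses the simplified Money and Product classes from the sketch above, and the concrete formulas are again made up.

object FunctionalRebates {

  // The contract is now just a function type.
  type RebatePolicy = (Product, Int, Money) => Money

  // A plain policy: a function fulfilling the contract.
  val standardRebate: RebatePolicy =
    (product, quantity, minimumPrice) =>
      Money(product.price.amount * quantity * BigDecimal("0.05"))

  // The "decorator": a curried factory producing RebatePolicy functions.
  // The optional inner policy replaces the constructor/null juggling of the OO version.
  def vipRebate(minimalThreshold: Money, rebateValue: Money)
               (innerPolicy: Option[RebatePolicy]): RebatePolicy =
    (product, quantity, minimumPrice) => {
      val innerRebate = innerPolicy
        .map(inner => inner(product, quantity, minimumPrice))
        .getOrElse(Money(0))
      if (product.price.amount * quantity >= minimalThreshold.amount) innerRebate + rebateValue
      else innerRebate
    }
}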

As you can see, RebatePolicy is now a function type, which means that we state our contract more directly: a RebatePolicy is a function which takes a Product, a quantity and a minimumPrice and returns a rebate value of type Money. The standard policy produces a function fulfilling this contract by calculating the rebate with some simple algorithm. What about VipRebate? It’s also a function, but a bit more complex 🙂 In fact, VipRebate represents a kind of Factory (yes! another pattern that we get for free!) which allows creating a new function of type RebatePolicy from additional parameters: two Money values and an innerPolicy. The Option type in Scala gives us a way to initialize the rebate with no inner policy, which is much more elegant than the null manipulation we saw before. Our goal has been achieved, with some additional bonuses:

  1. Conciseness. The baroque ceremony of Java has been reduced to what’s essential, while readability and comprehensibility stay just as high (or even higher). The superfluous RebateDecorator class is no longer needed and no longer “pollutes” our real domain logic.
  2. Flexibility. Thanks to currying we get “functional dependency injection” for free. The VipRebate signature allows creating the final policy in steps: a first call with the initial parameters (minimalThreshold, rebateValue) produces another function, which we can pass around and eventually call with the optional inner policy argument. This second call finally produces a RebatePolicy, ready to use whenever it is needed (see the usage sketch after this list). In the previous, object-oriented approach we were forced to build our object with all its dependencies right away (in the constructor); to achieve more flexibility we would need setters, which break immutability and stink 😉
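To make point 2 concrete, here is a hypothetical usage of the sketch above, e.g. in a Scala worksheet or the REPL:

import FunctionalRebates._

// Step 1: fix the VIP parameters; the result is a function still waiting for the inner policy.
val vipFactory = vipRebate(Money(1000), Money(50)) _

// Step 2: supply (or skip) the decorated policy; each call yields a ready RebatePolicy.
val vipOnly: RebatePolicy = vipFactory(None)
val vipOverStandard: RebatePolicy = vipFactory(Some(standardRebate))

// Step 3: use it like any other policy.
val rebate: Money = vipOverStandard(Product("laptop", Money(1200)), 2, Money(0))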

Consequences

Exploring the world of functional programming is addictive and changes your mindset forever. If you are interested in learning more, you should definitely check out the free “Functional programming principles” course on Coursera. The first edition has just ended, but the next one will probably launch around spring, and you will be more than satisfied 🙂

Warsjawa: Workshops with SoftwareMill

Recently the Warsaw JUG organized its 100th meeting, offering 10 different workshops to choose from. I signed up for “One day with a difficult client” run by SoftwareMill, and so did a few of my friends. The workshops were supposed to let us experience the typical traps and difficulties set by clients and try to overcome them, so we expected it to be real fun 🙂 Piotrek Buda has already blogged about it. I must also warn you that the following text contains SPOILERS. If you would like to attend such a workshop in the future, maybe jump straight to the “Conclusion” and don’t let the rest spoil the surprises.

The setup
The workshop leaders started with a short and amusing game to help us remember each other’s names and feel more comfortable and relaxed. Then we were divided into three groups of about 5 people each. The leaders gave us a short contract describing the task, and the teams were separated. From that moment on, each group had to fulfill the contract and cooperate with “the client”, who was very well played by people from SoftwareMill 🙂

The task
I haven’t mentioned it yet, but our goal was not related to software at all. In fact, the contract contained only a very short statement: we had to “build a spacecraft capable of transporting an atomic family into space”. The launch was scheduled for 12:30, which was around 1.5 hours after we started. Our group also received a set of tools and materials: cardboard boxes, duct tape, pins, glue, paint, brushes, markers and some other stuff. Our client announced that we were going to work in 15-20 minute iterations and have short meetings with his representatives after each sprint. This was basically the whole briefing, and our team had to start working right away.

Up the garden path

So here we are: a “self-organizing” team of five people willing to build an awesome spacecraft. The specification was intentionally minimal, encouraging us to ask lots of questions and squeeze more information out of our client. Every 20 minutes we had a chance to ask about anything, yet we failed big time to find out what was actually important. After 2-3 iterations we accidentally discovered that the shuttle would be shot into space by a catapult produced by an external company, and that a successful mission meant it had to travel a distance of at least 3 meters. Our group totally missed the goal of getting the essential requirements and focusing on the client’s real value. Instead we fell into the trap of making the shuttle pretty (colors, colors…) and equipping it with bonus content (like a TV or… a pool!). The client did a really great job leading us off track by happily acknowledging every crazy idea we proposed. We also easily got caught up in meaningless chit-chat and almost wasted one whole meeting talking about nothing. In the other teams the client even took away their leader for one iteration to paralyze the work a little 🙂
Before our last iteration we had:

  • A big cardboard box, with no idea whether it would even fit the catapult
  • Zero tests performed
  • A detailed plan of the shuttle interior, including TVs, toilets and a minibar (sic!)
  • Very specific documentation about the colors of every single piece of our shuttle

With a stroke of luck we managed to get some data from tests performed by the owners of the catapult, and with this data we drastically shrank our shuttle literally minutes before launch…
Then came the big moment of truth 🙂
Fortunately, the craft covered the required 3 meters and our mission succeeded. The moment of launch was breathtaking and we had a lot of laughs afterwards 🙂

Retrospection
Another good idea from the workshop leaders was to organize a retrospective meeting afterwards and discuss the whole exercise together. First we learned about the common problems that occur when working with a difficult client (which is basically any client 😉). Then the organizers told us how they had tried to set those traps for us, which worked out very well for them 🙂
Finally, each team presented their insights on the mistakes they had made and how they overcame certain problems. In our group, the most important failures were:
1. Losing focus on what our client requires
The contract stated clearly that we needed to launch a craft and nothing more. Instead of getting details on the launching process and the flight itself, we got bogged down in discussions about how the craft should look, with tons of completely irrelevant information. But hey, the client acted perfectly, pulling us further and further into oblivion 🙂
We learned that it’s extremely important to stay focused on the real needs and values expected by the client, especially because he usually cannot clearly name them himself. In this case our job was to pin down that he needed a craft that fits into a certain catapult and can be shot 3 meters. That’s it, really. All of that could have been established in 1-2 iterations 🙂
2. Neglecting integration with an external system
I have had a chance to participate in quite a few projects involving integration with services hosted (and sometimes created simultaneously) by third-party companies. I know that it is the most vulnerable point of the whole project and, at the same time, usually the most underestimated risk. This case was no different, but I couldn’t manage to convince my team to focus on the catapult and to get as much data and testing as possible. Well, lesson learned once again 🙂 Let me just repeat the mantra: you must test against the external system as early as possible in your project’s life cycle, or you will be screwed and sorry.

Conclusion
Failing is great! Thanks to Ola, Janek and Tomek for letting me “safely” fail this time and learn so much 🙂 There is no conference talk or theoretical discussion that could convey as much knowledge as a workshop like this, with palpable experiences. I’m really surprised by how much we managed to grasp in such a short time. No matter how agile your team thinks it is, you should try it yourself 🙂

Connecting Jenkins on Windows to Git

Setting up a connection from Jenkins/Hudson to your Git server shouldn’t be a problem if the CI server is running on Linux, but achieving the same on Windows seems to be a little trickier. The problem I encountered was that any build I tried to execute hung forever right at the start, while trying to obtain the sources from the repository.
Googling around, I found some suggestions to run the Jenkins server under a dedicated user account and put the private SSH key into the proper directory. Unfortunately, that didn’t work. Without going into more detail, I’d just recommend keeping Jenkins running as a service under the system user.
Assuming that Jenkins has the Git plugin properly installed and your build is configured, there are two key things you need to do:

known_hosts

This file contains fingerprints of external SSH hosts; each entry marks a “trusted” host. Without it, ssh shows a warning during connection and prompts you for action, which is exactly the reason why the Jenkins build hangs while connecting to the Git server.

To create such a file you can, for example, execute “ssh targethost” in your console (or connect using PuTTY), answer “yes” and take the known_hosts file from the .ssh directory in your home folder. Next, put it in your Git client’s .ssh subdirectory (create one if necessary). In my case it was:

C:\Program Files (x86)\Git\.ssh\known_hosts

Private ssh key

I assume that you already know how to generate an RSA public/private key pair and add the public key to your Git server configuration. The important thing you need to do to make the pair work for Jenkins is, similarly to known_hosts, to put your private key in the Git .ssh subdirectory and name it “id_rsa”:

C:\Program Files (x86)\Git\.ssh\id_rsa

Voilà. Now try to execute your build; it should download the sources.

git-svn vs Maven Build Number plugin

Recently I joined a project with sources hosted on an external Subversion server. Migration to Git is out of the question, since the central repository is located in a different country and used by many teams from departments spread all over this international corporation. Fortunately, there’s the “git-svn” tool, which provides many great Git features. After setting it up I initialized my repo, downloaded the latest revision and launched the Maven build. Surprisingly, it exploded with the following error message:

Provider message:
The svn command failed.
Command output:
svn: '.' is not a working copy

[INFO] ------------------------------------------------------------------------
[ERROR] BUILD ERROR
[INFO] ------------------------------------------------------------------------
[INFO] Cannot get the revision information from the scm repository :
Error!

After a quick investigation (mainly scouring through a lengthy POM hierarchy) I found the culprit:

Maven Build Number plugin

This plugin offers many ways of generating build numbers as variables and then using them for any purpose. In my case, the plugin was included only to obtain the latest svn revision number and store it in a variable; the value was then supposed to be put into the .war Manifest file. Since I had downloaded the sources via “git svn”, there was no .svn directory anywhere, so the plugin refused to work and terminated the whole build.

Solution

There are many ways to avoid this problem, for example disabling the plugin in the POM file or removing it entirely. Such methods are, however, too invasive, and I didn’t want to mess with the project configuration, even if only on my machine. We all know that such “temporary” modifications are very likely to be accidentally committed to the central repo eventually, with painful consequences. With that in mind, I went with another trick.

Silencing the plugin from the inside

I downloaded the source code of the Maven Build Number plugin and erased most of it, leaving empty mojos with no bodies in their execute() methods. Then I built it and replaced the original plugin in my local Maven repository with this “dummy”. Next, I launched the main project build: no more svn errors, everything went smoothly.

The cost

The cost of my tinkering is, of course, an invalid revision number in the Manifest file, but I don’t believe it has any relevance for me during development. Let the Continuous Integration server use it and save it; I am totally happy with my choice.

What next?

There could be a distinction between different build types, with the possibility of switching between them using Maven profiles. However, that adds complexity for every person using the project, which I would like to avoid. I’m satisfied that the problem was resolved quickly and without touching the project itself, so I can get back to coding.