I implemented the Mars Rover Kata through TCR (Test && Commit || Revert) and these are my reflections

Jordi
6 min read · Dec 30, 2018

--

I wanted to try TCR in a “real environment” in a kata, after reading some buzz about it from Kent Beck on Twitter. TCR is a constraint added to TDD that forces you to take baby steps. I wanted to check whether it was feasible to work like that, and whether I could make it work with my current tools and IDE.

The result of the experiment has been quite positive. I implemented the Mars Rover Kata successfully, and TCR helped me achieve a better development flow and a cleaner git history.

Here you can find some info about it. Ready to launch?

TCR basics

TCR forces you to take baby steps while developing through TDD. It does so because, if tests fail, it reverts all the code you have written since the last commit.

Every time you do a save, tests are executed. If tests pass, then a commit is made. If tests fail, then code is reverted.

This forces you to take super tiny baby steps, to lose as little code as possible, and to make sure you are always able to debug “just the next step”.
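Expressed as code, the core of the idea is roughly this. This is a minimal sketch in Node, assuming `npm test` as the test command; it is not the actual tcr-cli implementation:

```javascript
const { execSync } = require('child_process');

// Hypothetical helper: returns true when the test command exits with 0.
function testsPass() {
  try {
    execSync('npm test', { stdio: 'ignore' });
    return true;
  } catch (e) {
    return false;
  }
}

// Naive TCR step: commit everything when tests pass,
// throw everything away when they fail.
function tcrStep() {
  if (testsPass()) {
    execSync('git add --all && git commit -m "TCR: tests pass"');
  } else {
    execSync('git checkout HEAD -- . && git clean -fd');
  }
}
```

Note that this naive revert also wipes out the test you have just written, which is exactly the catch discussed below.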

You can read more about it here: https://medium.com/@kentbeck_7670/test-commit-revert-870bbd756864

The goal

My goal was to implement the Mars Rover Kata through TCR. You can read the specifications here: http://kata-log.rocks/mars-rover-kata

  • You are given the initial starting point (x,y) of a rover and the direction (N,S,E,W) it is facing.
  • The rover receives a character array of commands.
  • Implement commands that move the rover forward/backward (f,b).
  • Implement commands that turn the rover left/right (l,r).
  • Implement wrapping from one edge of the grid to another. (planets are spheres after all)
  • Implement obstacle detection before each move to a new square. If a given sequence of commands encounters an obstacle, the rover moves up to the last possible point, aborts the sequence and reports the obstacle.
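To give an idea of what is being specified, a first test for the kata could look something like this. It is a hypothetical Jest-style example; the Rover API shown here is invented for illustration and is not necessarily the one in my solution:

```javascript
const { Rover } = require('./rover'); // hypothetical module

test('moving forward while facing north increases y', () => {
  const rover = new Rover({ x: 0, y: 0 }, 'N');
  rover.execute(['f']);
  expect(rover.position).toEqual({ x: 0, y: 1 });
});
```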

The TCR catch and the “vanishing tests” problem

As stated before, every time you do a save, tests are executed. If tests pass, then a commit is made. If tests fail, then code is reverted.

The problem with this approach is that, if you are doing TDD (and you are supposed to), there will always be a moment in which the tests fail: right after you write a test, before you write the code that makes it pass.

If the code is reverted at that moment, your new test disappears too. That is why a naive implementation of TCR is not that easy to work with.

Another problem is that, if commits are made automatically, you have no way to control the commit messages.

To work around both issues, I tackled it this way:

  • I worked with separate src and test folders. Tests were not mixed inside the src folder.
  • I watched for changes in the “src” and “test” folders. When a change is detected, the tests are executed (see the sketch after this list).
  • If tests pass, a commit is made. The commit message is calculated as the difference between the current test results and the previous test results. This way I get a version control history based on added tests (how the behaviour has changed). The commit command I used was: git add --all && git commit -m "$COMMIT_MSG"
  • If tests fail, everything under src is reverted. This way, tests are not reverted. The revert command I used was: git checkout HEAD -- src && git reset HEAD src/ -- && git clean -fd
  • I also added a keypress listener for “p”, which let me push regularly (git push) without having to quit the TCR loop.
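Putting those pieces together, the core of the loop looked roughly like this. This is a simplified sketch, not the actual tcr-cli code; in particular, extractTestNames is a hypothetical helper whose real implementation depends on the test runner's output format:

```javascript
const watch = require('node-watch');
const { execSync } = require('child_process');

let previousTests = [];

// Hypothetical helper: extract test names from the runner output.
function extractTestNames(output) {
  return output.split('\n').filter(line => line.includes('✓')).map(line => line.trim());
}

watch(['src', 'test'], { recursive: true }, () => {
  let output;
  try {
    // Run the tests; a non-zero exit code throws.
    output = execSync('npm test', { encoding: 'utf8' });
  } catch (e) {
    // Tests failed: revert only what is under src, keeping the tests.
    execSync('git checkout HEAD -- src && git reset HEAD src/ -- && git clean -fd');
    return;
  }

  // Tests passed: the commit message is the diff between the current
  // list of passing tests and the previous one.
  const currentTests = extractTestNames(output);
  const added = currentTests.filter(name => !previousTests.includes(name));
  previousTests = currentTests;

  const msg = added.join(', ') || 'green, no new tests';
  execSync(`git add --all && git commit -m "${msg}"`);
});

// Press "p" to push without leaving the loop (requires a TTY).
process.stdin.setRawMode(true);
process.stdin.on('data', key => {
  if (key.toString() === 'p') execSync('git push');
});
```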

Other useful outputs

While doing the Mars Rover Kata, I produced the following outputs that may help you or others work with or try TCR:

  • tcr-cli, the small tool I built to run the watch/test/commit/revert loop described above.
  • The solved kata itself, with its full TCR commit history: https://github.com/jmarti-theinit/tcr-mars-rover

Reflections on TCR

Here are some first impressions about using TCR:

(Image: the very first commit.)
  • The overall experience was good and enjoyable. It’s something that you could realistically practice in a real environment, for example at work.
  • If you actually do TDD with baby steps, you probably will not find much difference, as this constraint only enforces something you might already be doing.
  • Adding features was the “easiest” part. It generally forced me to create several incremental tests like: “create the new method and assert it exists”, “call the method and assert it returns something”, “call the method and assert it returns the real expected value”. This is something you might already be doing in TDD if you practice baby steps.
  • The “trickiest” part for me was refactoring. Refactors are usually big: you create new files to abstract a concept, and you change several files to call or use that new abstraction. While there is no problem changing things while the tests stay green (in a refactor, tests are green), one typo or “compile bug” usually reverted everything. To avoid that, I started by creating the abstraction first, as a new method or new piece of code, and only once I saw that it compiled did I swap it in for the real code (see the sketch after this list). This is the same thing you do with architectures, and suddenly, thanks to TCR, I found myself doing it with code too. For me, this has been the biggest mind-changing concept of TCR.
  • The last feature of the kata was the “hardest” one to do in baby steps. You must throw an exception when the rover finds an obstacle. I implemented it with 12 commits, starting from this mini step: https://github.com/jmarti-theinit/tcr-mars-rover/commit/73eb826b279901225a14e62a13821150780c53d6
  • I found myself not reading the error log from the tests when they failed. The steps I was taking were so tiny that I hardly needed to check the log.
  • Although I am not sure whether the benefits are worth all the setup, using it in katas at least helps you reflect on the way you take baby steps.
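To make the refactoring approach more concrete, here is an invented sketch of the idea (the names are mine, not from the kata solution): the new abstraction is added next to the old code in one green commit, and only swapped in afterwards, call site by call site.

```javascript
// Commit 1: add the new abstraction alongside the existing code.
// Nothing uses it yet, so tests stay green and the commit succeeds.
class Coordinates {
  constructor(x, y) {
    this.x = x;
    this.y = y;
  }

  plusY(delta) {
    return new Coordinates(this.x, this.y + delta);
  }
}

// Old code, still the one being called by the rover.
function moveForward(position) {
  return { x: position.x, y: position.y + 1 };
}

// Commit 2 (and later ones): swap call sites over to the abstraction,
// one tiny change at a time, keeping the tests green at every save.
function moveForwardWithCoordinates(position) {
  return new Coordinates(position.x, position.y).plusY(1);
}
```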

Some reflections on tcr-cli

While it was good to create a CLI library that I can reuse in future katas or even at work, there are some key points that need to be polished before using it in a real environment:

  • The commit history you end up with is based on newly added tests. I think that is cool, but it has a flaw when doing refactors: I often found myself wanting to add a message like ‘Refactor XX method’ or similar.
  • I thought that working with several files and saving only some of them was going to be a problem. But in IntelliJ IDEA, the save keypress saves all files, and since the ‘node-watch’ library has a small delay before triggering the event, whenever the tests were launched, all files had already been saved.
  • When files were reverted, it took IntelliJ IDEA some time to notice the change on disk, and I had to synchronize the file manually (CMD + ALT + Y). It was sometimes tempting to keep the in-memory changes and try to fix the error.
  • If you use console.log (I had to use it once to find out what was happening), its output makes it all the way into the commit message. It would be better not to use console.log at all, but it would also be nice if tcr-lib did not print it when computing the diff.

Conclusions

In general, practicing TCR was quite satisfying, and I think it really has a mind-changing effect that is worth trying.

To be able to use it daily at work, for example, I should check how to:

  • Revert code (and only code) when tests are mixed inside the src folder, so I don’t have to use separate src and test folders (a sketch of one possible approach follows this list).
  • Avoid the console.log problem.
  • When reverting, synchronize faster with IntelliJ.
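For the first point, one possible approach (an untested sketch, not something tcr-cli does today) would be to list the changed files, filter out the test files by naming convention, and revert only the rest:

```javascript
const { execSync } = require('child_process');

// Sketch: revert only non-test files when tests live next to the code.
// Assumes test files follow a *.test.js naming convention.
// Note: newly created untracked files would still need separate handling.
function revertCodeOnly() {
  const changed = execSync('git diff --name-only HEAD', { encoding: 'utf8' })
    .split('\n')
    .filter(Boolean);

  const codeFiles = changed.filter(file => !file.endsWith('.test.js'));
  if (codeFiles.length > 0) {
    execSync(`git checkout HEAD -- ${codeFiles.join(' ')}`);
  }
}
```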

Anyway, it has been an enjoyable experience. And watch out for the next Katayuno in the north, as I will try to convince Gonzalo Ayuso to let me facilitate one based on this.

See you at the next Katayuno!


Written by Jordi

Learning and growing in teams that develop software and create impact. I work at @lifullconnect.