20 November 2014

Nesting angular directives

Using angular you quickly find directives really handy. Often you can google a directive that does exactly what you need... almost. For example, there are plenty of ready-to-use multi-choice dropdowns. But let's say we need a dropdown that is automatically disabled when there are no possible choices. One way is to get the source and change it - with angular it's usually easy. But then we are stuck with our version and won't get any updates or bugfixes. A better solution is to wrap the existing directive in our own.

Let's say we have an existing dropdown-choices directive that uses one parameter (dropdown-choices) for providing all possible choices and another one (dropdown-selected) for the currently selected value. Let's use it to get what we need.

Tests first. Make sure your karma.conf.js contains a reference to dropdown-choices, for example:
module.exports = function(config) {
    config.set({
        files: [
            'lib/dropdown-choices.js', // wherever the wrapped directive lives in your project
            // ... the rest of your files
        ]
    });
};
Example tests
'use strict';

describe('autoOffDropdown', function() {

    var autoOffDropdownDiv, scope;

    beforeEach(module('myApp')); // use your own module name here

    beforeEach(inject(function($rootScope, $compile) {
        scope = $rootScope.$new();
        autoOffDropdownDiv = $compile('<div id="someId" auto-off-dropdown  \
                                       choices="choices" selected="selected"></div>')(scope);
    }));

    it('should be disabled when there is nothing to select', function() {
        scope.choices = [];
        scope.$digest();
        expectSingleVisibleChildToBe('input:disabled');
    });

    it('should show div with dropdown when anything can be selected', function() {
        scope.choices = [1];
        scope.$digest();
        expectSingleVisibleChildToBe('div');
    });

    function expectSingleVisibleChildToBe(jquerySelector) {

        // may differ depending on angular version
        var visibleChildren = autoOffDropdownDiv.children().filter(
                                  function() { return $(this).css('display') !== 'none'; });

        expect(visibleChildren.length).toBe(1);
        expect(visibleChildren.is(jquerySelector)).toBe(true);
    }
});
And now our directive
'use strict';

angular.module('myApp').directive('autoOffDropdown', function() { // module name is an example
    return {
        restrict: 'A',
        replace: true,
        scope: {
            choices:  '=',
            selected: '='
        },
        template: '\
            <div> \
                <div data-ng-hide="noChoicesAvailable()" \
                     dropdown-choices="choices" dropdown-selected="selected"></div> \
                <input data-ng-show="noChoicesAvailable()" type="text" disabled="disabled" \
                       placeholder="No choices available"/> \
            </div>',
        link: function(scope) {
            scope.noChoicesAvailable = function() {
                return _.isEmpty(scope.choices); // lodash/underscore
            };
        }
    };
});

Tested with angular 1.0.8

1 October 2014

Mocking $location in angular tests

Let's say we want to mock a single service, e.g. $location. In the case of angular's internal services we can use dedicated mocks, like the one described here, but often something much simpler is enough. Let's say we have a service:
angular.module('myModule').factory('myService', function($location) {
   return {
      urlSize: $location.absUrl().length
   };
});
and now we want to test it. Obviously, all we need is just an object with a single function. In such a case, for this test we can simply provide a new definition of the required service:
var url;

module(function($provide) {
   $provide.factory('$location', function() {
      return {
         absUrl: function() { return url; }
      };
   });
});
and now our new $location is registered in angular's DI container and will be provided to all dependent services. Furthermore, this test is clear documentation that myService uses only this one single function of $location. There are no spies that obfuscate the behaviour of the tested code. How can we use this mock? It's a closure, so in every test we can change the variable url and the mocked $location.absUrl() will return that value. Now we can simply inject the service we want to test:
url = 'http://some.url';
inject(function(myService) {
   expect(myService.urlSize).toBe(url.length);
});
Tested with angular 1.0.8 and 1.2.16

3 September 2014

9 lines that made me learn monads

Some time ago I saw this programming languages comparison based on a small task. The task is to implement an evaluation system for simple expressions: addition and multiplication of numbers and variables. Most solutions (regardless of the language) look like the first one:
(use '[clojure.core.match :only [match]])
(defn evaluate [env [sym x y]]
  (match [sym]
    ['Number]   x
    ['Add]      (+ (evaluate env x) (evaluate env y))
    ['Multiply] (* (evaluate env x) (evaluate env y))
    ['Variable] (env x)))

(def environment {"a" 3, "b" 4, "c" 5})
(def expression-tree '(Add (Variable "a") (Multiply (Number 2) (Variable "b"))))
(def result (evaluate environment expression-tree)) 
It boils down to defining 4 types of expressions, each of which receives its subexpressions and the environment, and later passes the environment down to the subexpressions. Easy, right? There are also a few solutions that avoid passing the environment around. Instead they have an 'evaluate' function that preprocesses the expression tree and later evaluates it without the environment at all. For example, they replace variables with their values upfront.

And then I saw this:
import Data.Map
import Control.Monad

number   = return
add      = liftM2 (+)
multiply = liftM2 (*)
variable = findWithDefault 0

environment = fromList [("a",3), ("b",4), ("c",7)]

expressionTree = add (variable "a") (multiply (number 2) (variable "b"))

result = expressionTree environment
And I was wondering: how does it work? Where the hell is the environment passing or the 'evaluate' function?! Can you write such code? If not, it's time to learn monads.
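A spoiler, if you want one: the trick is that each of those haskell functions builds a function from the environment to a value, and the monad in play is the function (reader) monad, so liftM2 threads the environment implicitly. A rough java sketch (all names here are mine, not from the original solution) makes the hidden plumbing visible:

```java
import java.util.Map;
import java.util.function.Function;

public class ReaderSketch {
    // An "expression" is simply a function from the environment to a value.
    static Function<Map<String, Integer>, Integer> number(int n) {
        return env -> n;                            // return: ignore the environment
    }

    static Function<Map<String, Integer>, Integer> add(
            Function<Map<String, Integer>, Integer> x,
            Function<Map<String, Integer>, Integer> y) {
        return env -> x.apply(env) + y.apply(env);  // liftM2 (+): pass env to both sides
    }

    static Function<Map<String, Integer>, Integer> multiply(
            Function<Map<String, Integer>, Integer> x,
            Function<Map<String, Integer>, Integer> y) {
        return env -> x.apply(env) * y.apply(env);  // liftM2 (*)
    }

    static Function<Map<String, Integer>, Integer> variable(String name) {
        return env -> env.getOrDefault(name, 0);    // findWithDefault 0
    }

    public static void main(String[] args) {
        Map<String, Integer> environment = Map.of("a", 3, "b", 4, "c", 7);
        Function<Map<String, Integer>, Integer> expressionTree =
                add(variable("a"), multiply(number(2), variable("b")));
        System.out.println(expressionTree.apply(environment)); // 3 + 2*4 = 11
    }
}
```

Here add and multiply do by hand what liftM2 does generically for any monad; in the haskell version the monad is the function type itself, so the environment gets passed to every subexpression without ever being mentioned.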

17 May 2014

Spring boot: @DependsOn is not enough anymore

Spring boot can really help speed up starting a new project. Ten lines of java configuration (mostly generated by, let's say, a data-jpa-mvn archetype) and you are ready to write your business logic. All the configuration changes can be postponed until you really need them. But when you start making those changes, very quickly you may see that something is reeeeeally missing.

Let's say we want to use beans A and C created by spring boot, but we also want to create our own bean B that should be initialized in between. A real-world scenario is, for example: the default DataSource, the default EntityManagerFactory and a customized Flyway service which must run before hibernate. We can easily say flyway should depend on the datasource - @Autowired does the trick. But how can we say that hibernate should depend on our flyway? We can't add anything to the hibernate bean because we don't declare it. So what's the solution? First let's check how and where spring declares hibernate. After listing the *AutoConfiguration classes we see HibernateJpaAutoConfiguration and later, in its superclass, we can find:
@Bean
@ConditionalOnMissingBean(name = "entityManagerFactory")
public LocalContainerEntityManagerFactoryBean entityManagerFactory(
                           JpaVendorAdapter jpaVendorAdapter) {...}
So one way of enforcing the order is:
@Configuration
class FlywayConfig extends HibernateJpaAutoConfiguration {

   // autowired fields are injected before the inherited @Bean methods run,
   // so flyway is created (and can migrate) before the entityManagerFactory
   @Autowired Flyway flyway;
}
And if we need more precise control, we can override the bean:
@Configuration
class FlywayConfig extends HibernateJpaAutoConfiguration {

   @Bean
   public LocalContainerEntityManagerFactoryBean entityManagerFactory(
                                   JpaVendorAdapter jpaVendorAdapter,
                                   Flyway flyway) {
      return super.entityManagerFactory(jpaVendorAdapter);
   }
}
Used versions: spring 4.0.3.RELEASE, spring-boot 1.0.2.RELEASE

3 April 2014

Encrypted filesystem on top of an LVM

I always forget how to create an encrypted partition inside an lvm, so that's a good reason to write it down. I'm not an admin, so there might be better ways to do the same.

Objective: To have an encrypted device, unmounted by default, ready for one-click mount that asks for the password.

My tools:
  • lvm2
  • system-config-lvm - gui for LVM management
  • palimpsest - gui that allows mounting and unmounting encrypted LUKS devices inside lvm volumes, and renaming VGs (volume groups), LVs (logical volumes) and the filesystem inside an encrypted LUKS volume
  • ubuntu 12.04 LTS
  • mate (it offers one-click mount but probably any other desktop environment will do)
During the whole process the tab key will be your friend. It will help with luks subcommands, with finding luks devices etc. So let's start:

Creating encrypted FS

  1. Create partitions that don't waste space but can store your extents and lvm metadata - something around (n*extent_size)+1MB. Create a new partition in gparted, format it as LVM and compare the total size with the available size.
  2. Create desired VGs (my_VG) and LVs (my_LV) using system-config-lvm
  3. Create luks volume using whole available space on created LV. You will be asked for a passphrase
    sudo cryptsetup luksFormat /dev/mapper/my_VG-my_LV
  4. Open created luks volume and register it under custom name (my_luks)
    sudo cryptsetup luksOpen /dev/mapper/my_VG-my_LV my_luks
  5. Create filesystem using whole available space on opened luks volume
    mkfs.ext4 /dev/mapper/my_luks
  6. (Un)mount volumes and change all the labels using sudo palimpsest
That's it. Your system (places or drivemount_applet2) will report an existing encrypted LV waiting to be mounted. By default the filesystem label will be used as the mount point for one-click mount.

Shrinking encrypted FS

There is often some confusion about which tools use the kilo- prefix for 2^10 and which for 10^3. It looks like all the commands used below use 2^10.
  1. Prepare for the shrinking. Open luks, check the FS size and the FS itself.
    sudo cryptsetup luksOpen /dev/mapper/my_VG-my_LV my_luks
    sudo mount -r /dev/mapper/my_luks /any_custom_dir/
    df -h
    df -B1k   # show size in kilobytes (2^10 bytes)
    sudo umount /any_custom_dir/
    e2fsck -f /dev/mapper/my_luks
  2. Shrink the FS a bit more than you really need to (to about 90%). Provide the new size or shrink it maximally with -M. Option -p adds a progress bar. In my case (ext4) the system didn't allow me to shrink below the possible minimum size, so it looks like you don't have to calculate the new size very precisely. Units: 2^10
    resize2fs -Mp /dev/mapper/my_luks
    resize2fs -p /dev/mapper/my_luks 50G
    Check if all is good
    e2fsck /dev/mapper/my_luks
  3. Shrink the LV. There is no need to shrink the luks volume as it doesn't have a concept of 'fixed size' - during every mount, luks uses all the space of the underlying device. But shrinking the LV is the tricky part. I couldn't find a way to precisely calculate the needed space. It looks like df shows the FS size including its metadata, but I don't know which size is used by resize2fs. Anyway, remember about the FS metadata (a few percent) and the luks metadata (around 2MB). So add about 10% (depending on your FS type) to the FS size and shrink the LV to that size.

    There are many ways to express the new size (absolute value, difference, percentage etc. - use man). The man page never says whether it uses 2^10 or 10^3 units, but it 'seems' like the former.

    LVM will not warn or prevent you from overwriting your data so bad assumption, calculation or a typo will result in a data loss.
    sudo cryptsetup luksClose my_luks
    sudo lvreduce -L 900G /dev/my_VG/my_LV
    Check how much damage was done
    sudo cryptsetup luksOpen /dev/mapper/my_VG-my_LV my_luks
    e2fsck -f /dev/mapper/my_luks
  4. If e2fsck says that you've just shrunk your LV too much, don't panic. Just close luks, extend the LV a bit with lvextend -L, open luks and check your FS again.
    df said my FS was 811G but shrinking LV to 820G (lvreduce -L 820G) was too much. However extending it to 830G was enough.

  5. Grow the FS to fill the whole underlying LV; otherwise it would be a waste of space
    resize2fs -p /dev/mapper/my_luks
    e2fsck /dev/mapper/my_luks
That's all. Some other useful commands to check the status of devices: http://ubuntuforums.org/showthread.php?t=726724

22 March 2014

Refactor, don't reinvent the wheel

Recently I saw a training regarding clean code and refactoring. One of the examples of bad code shown was something like this:
public void setName(String name) {
    this.name = name;
    if (this.name != null) {
        if (this.name.length() > 30) {
            this.name = name.substring(0, 30);
            this.name = this.name.toUpperCase();
        } else {
            this.name = this.name.toUpperCase();
        }
    }
}
And after a few slides the code was refactored to the final version:
public void setName(String name) {
    if (!isValid(name)) {
        this.name = null;
        return;
    }
    this.name = limit(name, to(30)).toUpperCase();
}

private boolean isValid(String name) {
    return name != null;
}

private String limit(String input, int limit) {
    return input.substring(0, limit);
}

private int to(int x) {
    return x;
}
And I will argue that this is a very wrong approach, or at least a really bad example. Why? Because every language has its idioms and most commonly used tools that have become de-facto standards and are well known and understood. In the java world it's guava, apache commons, lambdaj etc. Using those libraries, you can be much more functional, null-safe and concise. You can use well-known existing functions instead of creating new ones and learning them again in each project. In my opinion, much more readable would be:
public void setName(@Nullable String name) {
    this.name = StringUtils.upperCase(StringUtils.left(name, 30));
}
or in case we'll need it more than once:
public static String upperCasePrefix(@Nullable String input, int limit) {
    return StringUtils.upperCase(StringUtils.left(input, limit));
}

public void setName(@Nullable String name) {
    this.name = upperCasePrefix(name, 30);
}

21 March 2014

Use unicode for better names

Let's say in a java application we have a few tabs and sometimes we hide some of them. Now we want to document a new requirement and, of course, we do it as a test:
public void should_hide_more_tab_when_no_additional_information_is_available() {
but wait... what exactly does it mean? Should our application hide more tabs than it usually does? Or is there a tab named 'more' that should be hidden? How can we clarify this? After a quick look at the unicode char table, we pick the ʻ char (or any other that makes you happy). It's U+02BB 'modifier letter turned comma' and more information can be found, for example, here. There is a table with detailed information about that character, and the interesting part is:
Character.isJavaIdentifierPart()  Yes
Cool! So let's write:
public void should_hide_ʻmoreʻ_tab_when_no_additional_information_is_available() {
Is this test more readable now?
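If you want to verify this property for any character before putting it into a method name, a one-liner will do (a quick sketch):

```java
public class IdentifierCheck {
    public static void main(String[] args) {
        // U+02BB MODIFIER LETTER TURNED COMMA belongs to the 'modifier letter'
        // category (Lm), which java treats as a letter, hence a legal identifier part
        System.out.println(Character.isJavaIdentifierPart('\u02BB')); // true
    }
}
```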

For example in racket (a dialect of lisp) you can define lambdas using λ:
(λ(x) (+ x 1))

20 March 2014

The myth of random data in unit tests

Many times I see people generating random data for every irrelevant variable in tests:
String anyName = RandomStringUtils.random(8);

Customer customer = customerBuilder()
    .withName(anyName)
    .build();

First of all, it would probably be better if this test looked something like this:
Customer customer = newCustomerWithAge(18);

I know, I know: sometimes tests are a bit more complex and badly written and you just need the name as a constant. So why not simply:
private static final String ANY_NAME = "John";

Customer customer = customerBuilder()
    .withName(ANY_NAME)
    .build();

Does the random generator make you feel safer? If the name is irrelevant, why bother generating it? It just makes your code less readable.

But some people go even further. Let's say we want to test StringUtils.contains from apache commons. Some people want to generate the significant parameters too:
String random1 = randomString();
String random2 = randomString();
String random3 = randomString();
assertTrue(StringUtils.contains(random1 + random2 + random3, random2));
Easy, right? But how will we test that it returns false correctly? Now our random data needs to obey some specific constraints, so it's rather hard to generate the data without, in fact, implementing the functionality again in the tests. Another problem is that when you have such tests, you think everything is tested and you stop thinking about corner cases.

But is everything really tested? What about nulls? What about empty strings? What about combinations of them? And even if your generator can produce nulls and empty strings, still: is everything tested?

How often will your random test run before the tested code goes to production? If you do continuous delivery, then the test will run a few times during your local development, once on your CI server and... that's it. If you're not so lucky to do continuous delivery, then let's assume your commit goes to production in 3 weeks. Probably soon there will be a feature freeze and branch stabilization. How many times will this test run? 50 times on the CI server? Random tests are totally useless when they run only a few times. Of course, you may expect those tests to run very many times during the local development of the rest of your team but...

If it fails on someone else's machine, are you sure they will record the test result? Wait! There will be no result! There will be only the information that true was expected but false was returned. So you have to remember to add logging to all your random tests. And even if the logs are dumped, are you sure that the other developer (who has to deliver their own, completely different functionality) will take care of an irrelevant, non-deterministic test failure? Because the other option is to simply re-run the tests, see the green light, commit and go home. No one will ever know.

Let's face it, it can't work this way. If you are not sure whether your test data is good enough, then:
  • Simplify your code. Extract methods/classes, avoid ifs, avoid nulls, be more immutable and functional.
  • Try to analyze the edge cases and include them in your tests.
  • If needed, throw away the part of code and start again doing TDD. If you've never tried it, you will be surprised how different the design can be.
Seriously, those rules will almost always be enough. That's because the sad truth is that the vast majority of all development is typical corpo maintenance. It's not rocket science and all the complexity is usually incidental. But the refactoring can be expensive. And if the above rules are not enough:
  • Generate a lot of random data sets, look at them and check if some of them differ from what you had in mind when designing your code. And, of course, add the new cases to your tests.
  • Use mutation testing.
  • Whenever a bug is discovered during development, UAT or production, add new cases to your tests to avoid regression.
  • Do real random testing. Keep the testing server running 24/7. Every generated data that breaks the tests should be logged and added to your deterministic unit tests.
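For the StringUtils.contains example above, the interesting edge cases are easy to enumerate deterministically. A sketch (using a local null-safe contains as a stand-in, so the snippet doesn't depend on apache commons being on the classpath):

```java
public class ContainsEdgeCases {
    // Null-safe contains with StringUtils.contains semantics:
    // null never matches, every non-null string contains the empty string.
    static boolean contains(String seq, String search) {
        if (seq == null || search == null) {
            return false;
        }
        return seq.contains(search);
    }

    static void check(boolean condition) {
        if (!condition) {
            throw new AssertionError("edge case failed");
        }
    }

    public static void main(String[] args) {
        // the happy paths a random generator would also find
        check(contains("abc", "b"));
        check(!contains("abc", "x"));
        // the corner cases it would almost certainly miss
        check(contains("abc", ""));
        check(contains("", ""));
        check(!contains(null, "a"));
        check(!contains("a", null));
        check(!contains(null, null));
        System.out.println("all edge cases pass");
    }
}
```

Seven deterministic assertions cover what weeks of random runs would be unlikely to hit, and every failure points at a concrete, reproducible input.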