Archive for category Uncategorized

Fibonacci revisited

The world of Java Enterprise development, full of frameworks, configuration files, things like Hibernate, Maven, JPAs, EJBs and other three-letter acronyms, may make you forget how cool it was to dive deep into the most exciting areas of CS.

Luckily there are places on the web that help you remember the good old times (namely the classes on algorithms at your university, I guess). Javalobby with their Thursday Code Puzzler is definitely one of them.

So today, they asked us to find the n-th Fibonacci number. A naive solution is pretty straightforward. A good solution is not that obvious, though. You could probably easily code a solution that runs in O(n) time.

It turns out it can be computed in logarithmic time. There are a couple of tricky ways to do it; I like one of them the most. First, however, you have to know how to raise a number to a power in logarithmic time.
In order to do that the divide-and-conquer paradigm may be used. To make a long story short, the idea is to halve the exponent and recursively compute the power for the halved exponent. In the merge phase of the divide-and-conquer process, the result is multiplied by itself for even exponents (odd exponents need one step more, but that only affects the constant factor of the algorithm’s running time). The merge step is based on the following property of exponentiation:

X^m * X^n = X^(m+n)

In case of even exponent e:

X^(e/2) * X^(e/2) = X^e

In case of an odd exponent an extra step is needed:

X^((e-1)/2) * X^((e-1)/2) * X = X^e
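
For illustration, a minimal sketch of this logarithmic-time exponentiation for plain long values could look as follows (my own example, not part of the original puzzle solution):

static long power(long base, long exponent) {
    if (exponent < 0) {
        throw new IllegalArgumentException("Exponent must be non-negative");
    } else if (exponent == 0) {
        return 1;
    } else if (isEven(exponent)) {
        // even exponent: square the half-power
        long halfPower = power(base, exponent / 2);
        return halfPower * halfPower;
    } else {
        // odd exponent: one extra multiplication by the base
        long halfPower = power(base, (exponent - 1) / 2);
        return halfPower * halfPower * base;
    }
}

static boolean isEven(long number) {
    return number % 2 == 0;
}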

Once you know how to implement exponentiation in logarithmic time (and I leave the proof that this runs in logarithmic time to the reader), you can move on to the tricky part. Let’s assume the following is true (that’s the tricky part and I will not prove it here either):
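(The identity in question is, presumably, the classic matrix form: [[1, 1], [1, 0]]^n = [[F(n+1), F(n)], [F(n), F(n-1)]], where F(k) denotes the k-th Fibonacci number.)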

The only thing you have to do is to implement matrix exponentiation; you can then calculate the n-th Fibonacci number in logarithmic time by raising the matrix [[1, 1], [1, 0]] to the n-th power.

See my example implementation of the recursive matrix exponentiation.

	private TwoByTwoMatrix computePower(TwoByTwoMatrix matrix, long e) {
		if (e < 0) {
			throw new IllegalArgumentException("Exponent must be non-negative, [" + e + "] was given");
		} else if (e == 0) {
			return TWO_BY_TWO_IDENTITY_MATRIX;
		} else {
			if (isEven(e)) {
				TwoByTwoMatrix raisedToHalvedPower = computePower(matrix, e / 2);
				return raisedToHalvedPower.multiplyBy(raisedToHalvedPower);
			} else {
				TwoByTwoMatrix raisedToHalvedPower = computePower(matrix, (e - 1) / 2);
				return raisedToHalvedPower.multiplyBy(raisedToHalvedPower).multiplyBy(matrix);
			}
		}
	}

Your n-th Fibonacci number will always be in the top-right or bottom-left corner of the result matrix.
You can browse the complete project on GitHub.
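
For completeness, here is a minimal self-contained sketch of the whole idea (my own illustration, using plain long[][] arrays instead of the project’s TwoByTwoMatrix class; it assumes n is small enough for the result to fit in a long):

public class FibonacciByMatrixPower {

    private static final long[][] IDENTITY = {{1, 0}, {0, 1}};
    private static final long[][] FIBONACCI_BASE = {{1, 1}, {1, 0}};

    // F(n) for n >= 0; overflows a long for n > 92
    public static long fibonacci(long n) {
        long[][] powered = computePower(FIBONACCI_BASE, n);
        return powered[0][1]; // the top-right (== bottom-left) corner
    }

    private static long[][] computePower(long[][] matrix, long e) {
        if (e < 0) {
            throw new IllegalArgumentException("Exponent must be non-negative, [" + e + "] was given");
        } else if (e == 0) {
            return IDENTITY;
        } else if (e % 2 == 0) {
            long[][] half = computePower(matrix, e / 2);
            return multiply(half, half);
        } else {
            long[][] half = computePower(matrix, (e - 1) / 2);
            return multiply(multiply(half, half), matrix);
        }
    }

    private static long[][] multiply(long[][] a, long[][] b) {
        return new long[][] {
            {a[0][0] * b[0][0] + a[0][1] * b[1][0], a[0][0] * b[0][1] + a[0][1] * b[1][1]},
            {a[1][0] * b[0][0] + a[1][1] * b[1][0], a[1][0] * b[0][1] + a[1][1] * b[1][1]}
        };
    }

    public static void main(String[] args) {
        System.out.println(fibonacci(10)); // prints 55
    }
}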


2 Comments

Unit Testing with Komarro – can implicit be more durable?

Testing a unit of code that doesn’t interact with any other test subjects is pretty straightforward. A set of direct inputs to a method and possibly the inner state of the system under test determine the output that has to be verified.

For example, a test of a method exponentiation(int base, int power) could look like this:

@Test
public void cubing() {
    // given
    int base = 2;
    int power = 3;

    //when
    long result = exponentiation(base, power);

    //then
    assertThat(result).isEqualTo(8L);
}

The fun starts, however, when the method under test depends on elements that should not influence the test result. When developing a car engine, you don’t want to evaluate how well the throttle, the car’s on-board computer systems or any other piece of the car works. You are interested in the engine. That is why you’d set up a testing harness to provide all the conditions necessary to ignite the engine and to see how it performs.
A software engineer, in order to emulate all the elements the unit under test depends on, would use so-called test doubles. Java developers (and I must admit we are very lucky) were given a set of great tools to do the job. Mockito, a prominent example among many great ones, could be used as follows:

private StatisticsService sut;
private ClientRepository clientRepository;

@Before
public void prepareSut() {
    clientRepository = mock(ClientRepository.class);
    sut = new StatisticsService(clientRepository);
}

@Test
public void averageAgeCalculatedCorrectly() {
    // given
    when(clientRepository.getAllClients()).thenReturn(asList(personAtAgeOf(15),
            personAtAgeOf(45), personAtAgeOf(90)));

    //when
    double averageAge = sut.getAverageAgeOfClients();

    //then
    assertThat(averageAge).isEqualTo(50.0);
}

Passing indirect inputs to a method under test is possible thanks to Mockito’s when idiom. When’s basic responsibility is to stub a method’s response. But is that all it does? Doesn’t it implicitly verify the exact interaction with the dependency? In terms of car engines, is it important where the fuel comes from when testing a motor?

Komarro tries to answer these questions and to expose some other subtleties of unit testing in Java. This is what the Komarro version of the previous test looks like:

private StatisticsService sut;

@Before
public void prepareSut() {
    sut = instanceForTesting(StatisticsService.class);
}

@Test
public void averageAgeCalculatedCorrectly() {
    // given
    List<Client> clients = asList(personAtAgeOf(15), personAtAgeOf(45), personAtAgeOf(90));
    given(new TypeLiteral<List<Client>>() {}).isRequested().thenReturn(clients);

    //when
    double averageAge = sut.getAverageAgeOfClients();

    //then
    assertThat(averageAge).isEqualTo(50.0);
}

Komarro, as opposed to other stubbing utilities, replaces the exact method calls in the set-up phase of a test with implicit, by-type indirect input definitions. It also simplifies other fixture set-up activities – mocked dependencies don’t have to be created and injected manually.
It is a fully functional stubbing framework based on Mockito that can be complemented with Mockito’s syntax every time it is needed (e.g. for verification purposes).
Komarro injects and manages the mocks automatically, in a manner that is transparent to the user. It guesses the types of the collaborators based on the application metadata in the form of annotations. So if your application uses annotations to perform dependency injection, you are ready to go (the installation will be especially easy for Maven users).

For usage examples, installation instructions, API and any other further details see http://code.google.com/p/komarro/.


Leave a comment

To unit test or not to?

Once, a team of developers who had created a pretty much complete set of automated functional tests for their application asked me how they could possibly benefit from writing unit tests too. Some of them claimed that there was no need for any other type of tests, as the automated suite they ran could perfectly verify whether the application matched the client’s requirements.

So why should we even bother?

So let’s imagine you’re building a swimming pool in your backyard. You’ve already dug a hole. Now let’s suppose that you use bricks, tiles and some glue to finish the job. You don’t really care about the fact that each brick has a different shape. You are also OK with the fact that some tiles are broken and half of them are just a bit thinner than the rest.
So you start building the walls first. It goes quickly. Even though the bricks are not rectangular, you can always fill the openings that tend to appear every now and then with some clay and the stones you found in your garden last week. Then you put the tiles on. It’s not that easy any more, as apparently the walls are not that flat and smooth. Fortunately, you’ve got a lot of glue to fix this and, what’s more important, you don’t really care. You want your kids to have fun as soon as possible. And finally, you complete your work. Soon the swimming pool is ready and full of water.
You try it out, and it works!

And it really does, the swimming pool will not leak in 10 years. The walls are a bit curved, but nobody can see it when the pool is filled with water. Job done.

So do you need to check if every single brick has the same dimensions as the other ones? Do you care if their faces are rectangular? Do you need to unit test the tiles too?

I’ll not answer this here. I’ll just ask some more questions instead 🙂

  • Would you build your house the same way?
  • Would you build a treehouse for your kids in a similar manner?
  • Would it be possible to reuse the bricks (used for the swimming pool construction) to pave your drive when you decide to do so?
  • And a bonus one: would you build a shed for your tools behind the garage of your summer house?

 


Leave a comment

Unit tests are your safety net

I recently came across this blog post on automated unit testing. This time I was really curious about the opinions on the topic. And not quite surprisingly, it turns out there are two groups of people with two totally distinct attitudes to unit testing. There are the lovers and the haters.

What struck me, though, is that neither of them pointed out the real value that a set of unit tests brings to a piece of software. As the examples given by the author are written in Python (which is not a statically typed language), somebody pointed out that most of the issues would never happen if a statically typed language were used. And they were just a bit mistaken: you would run into the same problems, but you would detect them so early that you would never consider them real issues.

And that’s what unit testing (or a fast set of automated tests) is made for. The main goal of unit testing is to provide a set of rules that are checked at compile-time (I’d call it ‘test-time’, still it should always be as close as possible to compile-time). A set of unit tests should play the role of a safety net that finds all pieces of code that do not comply with the defined rules. And this should be done as soon as possible. Ideally, it should be as quick as the type verification performed by statically typed languages.

And thanks to unit tests you can define any set of rules you like.


Leave a comment

‘Explicit interface per class’ antipattern

I recall that, right after reading the Gang of Four’s masterpiece on design patterns, I suffered from a kind of Russell Crowe–John Nash syndrome: I started seeing patterns all over my code, just as Russell-John did in the code-cracking scene. Then it took me some time to understand that no class of problems has a universal solution. The solutions to problems always have to be based on the context the problem appears in.

I guess it happened to the majority of developers that their initial enthusiasm about design patterns resulted in some highly over-engineered pieces of code. But eventually all of us make our peace with the precooked solutions and we know exactly when to use them and when not to.

This, surprisingly, does not apply to some practices widely used within the Java EE ecosystem. Providing an explicit interface for each Java class is a notable example. By many it is considered a costless solution which brings only benefits and provokes no side effects. Worse, it’s often applied as a no-brainer that cannot be done wrong.

Why do people do that? This practice undoubtedly originates from the early Java Enterprise frameworks. The great inventions like Dependency Injection or Aspect Oriented Programming were possible thanks to the JDK proxies based on the existence of explicit interfaces. However, given the fact that every Java class implicitly defines its interface and given the improvements that were introduced to the Java world over the years, all modern frameworks have overcome this imperfection and nowadays they are capable of providing the same functionality with no explicit interfaces present in the classpath. Also, with the advent of the mocking frameworks (among them my favorite Mockito and PowerMock), class mocking is no longer a good reason for providing an explicit interface per class.

So why do people keep doing that? An explicit interface is an extremely powerful tool. It gives you the possibility to separate the implementation from the interface. In other words, you are able to decouple the essence of your application from the underlying technologies. Why would you like to do this? Well, the technologies do get better, your client’s non-functional requirements may be volatile, or your management might change their mind on the technologies you use. You want to be prepared to adapt to new technologies easily. The problem is that it is not enough to extract an interface from a class using some one-click IDE magic. This may be good enough for Spring to create its javassist proxies. However, if you want to take real advantage of the interface separation, you have to design the interfaces with a big dose of care.

It is necessary to realize that providing just any explicit interface does not guarantee you any flexibility when it comes to replacing the implementations. It is a common error in the Java webapp world that explicit interfaces are built for specific implementations or worked out in a bottom-up manner: from existing implementations (often using some IDE helpers).
If you want to take advantage of the explicit interfaces you define in order to be able to replace the implementations easily, you have to keep in mind that you are designing an API that has to be implementation agnostic.
To be more precise, you define a special kind of API called an SPI that will be used by other clients of your module (or maybe by you at some point in the future) to build any implementation of the service described by the explicit interface. That is why the interfaces cannot leak anything from the originally proposed implementation. You have to keep in mind that an SPI cannot be modified once it is published; it has to be done right in the first place. Getting it right has to cost time and effort. If, however, you never publish your modules and your team has complete ownership of the whole application, you should consider deferring the creation of the explicit interfaces until they are necessary.
Joshua Bloch cites the Rule of Threes by Will Tracz in this video, where he claims it takes three implementations to get an SPI right (the whole video is certainly worth seeing; if you are interested in SPI design, navigate to 16:30). In any case, one has to be very optimistic to trust that an interface extracted from a specific implementation (e.g. using IDE assistance) will be good for any other implementation.

Please see the following list of indicators that should warn you that you may be getting it wrong:

  1. Your interface is tied to some implementation.
    This is probably the most dangerous issue and at the same time the easiest one to run into.
    Make sure the following does not take place:

    • the interface or any of its methods contain the name of some technology used (e.g. JPAPersistentStore),
    • the parameters of any of the methods are coupled to the implementation (e.g. a Hibernate Criteria object as a parameter of a persistence service), this applies to the generic parameters too,
    • the return type of your method is coupled to the underlying implementation,
    • the Exception type reveals details of the implementation (e.g. a method that throws a JMSException).

    The naming issue may seem less important at first, though it is the name where everything starts. If you get it wrong in the first place, it is very likely that at some stage you will forget about the real purpose of the existence of the interface. It is more obvious with the rest of the bullets. What is important is to take all parts of the method’s signature into consideration. Still, this may not be enough. Even if the interface declarations and the method signatures are not directly linked to any implementation, make sure the purpose of the methods is implementation agnostic too. For example, if it happens that some implementation of a service has to be initialized and you want to delegate this responsibility to the client, you have to think twice before publishing the init() method through the interface. Will all implementations need to be initialized? (A short sketch contrasting a leaky and an implementation-agnostic interface follows this list.)

    An excellent trick to get this done correctly is to create javadocs for all elements of your interface. If you manage not to mention anything about any possible implementation there, then you are probably good.

  2. The names of your implementations contain the Impl suffix, begin with the word Default, or contain any other keyword that does not describe the implementation.
    Names are important. In the Java EE world, one of the best reasons for separating the interface from the implementation is to separate the underlying technology from other modules of your application (to be able to replace it easily in the future). If this is your motivation, you should be able to come up with a good, descriptive name for your implementation. If you cannot do that, you may be pretty sure your default implementation will stay the only one forever. Then you gain nothing by providing explicit interfaces for your implementation. You do, on the other hand, increase the complexity of your architecture (and it does not matter how smart your IDE is, you simply duplicate the artifacts for no good reason).
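
Below is a minimal sketch (with hypothetical names, not taken from any real project) of the difference described in the first point of the list above:

import java.util.List;

// Hypothetical domain type, only here to make the sketch self-contained.
class Client {
    int age;
}

// Leaky contract: the name ties the interface to JPA, and the parameter is a stand-in
// for an implementation-specific type (imagine org.hibernate.Criteria here, or a
// method declared to throw JMSException).
interface JpaClientPersistentStore {
    List<Client> findClients(Object hibernateCriteria);
}

// Implementation-agnostic contract: it only speaks the language of the domain,
// so any persistence technology can implement it.
interface ClientRepository {
    List<Client> findClientsOlderThan(int age);
}

The second form says nothing about how the clients are stored, which is exactly what makes a future change of technology realistic.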

Martin Fowler, in his book Patterns of Enterprise Application Architecture (released in 2002), identified a pattern called Separated Interface (in Chapter 18 he defines exactly when to apply it). He also states the following:
“I come across many developers who have separate interfaces for every class they write. I think this is excessive, especially for application development.” and “I recommend using a separate interface only if you want to break a dependency or you want to have multiple independent implementations. If you put the interface and implementation together and need to separate them later, this is a simple refactoring that can be delayed until you need to do it.”
These words are nearly 10 years old; however, it looks like Martin would still come across such developers. What’s probably worse is that now they have far fewer reasons to do so.
The general message of this blog entry is that providing explicit interfaces for services in a standard Java web application is no longer easily justified; it increases the complexity and it costs time if it is supposed to bring any benefits. Spring, EJB, and a set of unit tests can do without them perfectly. If you provide the explicit interfaces in order to decouple your application’s business logic from the underlying technologies, you have to do it extremely carefully. If you fail to make them implementation agnostic, you are wasting your time and the lines of code – you effectively fail to decouple the application from the technology.
Moreover, if you or your team have full control of the code and/or changes to the technologies you use are unlikely, your inner self should raise a little YAGNI flag once you start writing the ‘just-in-case’ explicit interfaces.
And of course, all the decisions have to be taken consciously with a reasonable dose of skepticism. No solution is good if the context of the problem has not been taken into consideration.


See more:

  1. How To Design A Good API and Why it Matters – the aforementioned presentation by Joshua Bloch,
  2. Service s = new ServiceImpl() – Why You Are Doing That? – a blog post by Adam Bien,
  3. Java Interfaces Methodology: Should every class implement an interface? – a question on a similar topic raised on stackoverflow.


Leave a comment

How to get maven archetype to generate empty directories?

If you think of a Maven archetype as a template for a new Java project, you would probably expect it to produce a complete structure of directories and files for you. It is pretty likely you would need some of the directories to be empty. And this is exactly where your expectations would not match the maven-archetype plugin design (2.0-alpha-4). It seems that, in spite of its numerous assets, the archetype plugin has a little flaw: it does not provide an intuitive way to create an archetype capable of generating empty directories.

However, I found a somewhat-not-so-elegant hack to cope with the problem. It turns out that the issue is well known to the developers and was addressed quite a long time ago as jira.codehaus.org/browse/ARCHETYPE-57. Anyhow, the provided solution gives you the possibility to create an empty directory. In order to do that you have to specify a fileSet in the archetype-metadata.xml file (src/main/resources/META-INF/maven/archetype-metadata.xml).
So, for example, if you want to create an empty dir src/java/main/configuration, you should paste the following code into the fileSets section of the file:

<fileSet filtered="true" encoding="UTF-8">
 <directory>src/java/main/configuration</directory>
</fileSet>

This works fine as long as you don’t want packaging to be performed. The archetype plugin facilitates the process of creating directories in such a way that you can specify a desired Java package for your project and the appropriate directories will be created following the Java convention. E.g., if you specify the package to be pl.company.project, in normal circumstances you would expect to get the following structure:

src/java/main/pl/company/project/configuration.

Unfortunately, setting the fileSet directory to src/java/main/configuration would result in having

src/java/main/configuration/pl/company/project

as an outcome. This is, obviously, not what you want to achieve… and this is where the hack can be applied.

To achieve the desired result, one has to resign from the packaging mechanism provided by the plugin (set the packaged parameter of the fileSet to false). Instead, one will specify the insertion point manually.
It appears that paths defined in the archetype-metadata.xml file are processed by the Velocity engine (which is quite surprising). Note that context variables have to be surrounded by __ (double underscores).

Define your paths in the following manner:

<fileSet filtered="true" encoding="UTF-8">
 <directory>src/java/main/__packageAsDirectory__/configuration</directory>
</fileSet>

__packageAsDirectory__ will be replaced with the value of the packageAsDirectory Velocity variable. As you may presume, the packageAsDirectory variable does not exist yet. And this is the not-so-elegant part of the solution.
You want to force the user to input the package in the form of a directory path (i.e. slashes instead of dots). In order to do that, you create a new required property, defining it in the archetype-metadata.xml file.

<requiredProperties>
 <requiredProperty key="packageAsDirectory"/>
</requiredProperties>

Of course, the users of your archetype have to be aware of what they input, as there is no validation check performed on the input path (e.g. pl/company/project is a valid value, but pl.company.project is invalid).

You can also ignore the package parameter and give it some default value (you won’t really need it), e.g.:

<requiredProperties>
  <requiredProperty key="package">
   <defaultValue>PLEASE ENTER packageAsDirectory INSTEAD</defaultValue>
  </requiredProperty>
</requiredProperties>

Now, in order to convert the slashes back to dots, you set the Velocity variable $package as demonstrated below:

#set ( $package = $packageAsDirectory.replaceAll("/", ".") )

Put this line at the top of every template file where needed – your .java files or any other file that uses the ${package} variable.
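
For example, the top of a Java template file in your archetype (say, a hypothetical archetype-resources/src/java/main/__packageAsDirectory__/App.java) could look like this:

#set ( $package = $packageAsDirectory.replaceAll("/", ".") )
package ${package};

public class App {
}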

Have fun…


6 Comments
