marekdec

Homepage: https://marekdec.wordpress.com

Loose coupling, tight coupling, decoupling – what is that all about?

Given that the *coupling family of keywords has stood firmly in every software architect's and designer's vocabulary for over 20 years, it is surprising how few people can tell the difference between tightly coupled and loosely coupled pieces of code.
If you ask a coder who comes from a Java-like universe to assess the degree of coupling in a piece of code, they will start putting the words interface and implementation in every single sentence. No doubt. But is that really the essence of the problem?

I propose a very simple Test of Coupling:

  1. Piece A of code is tightly coupled to Piece B of code if there exists any possible modification to Piece B that would force a change in Piece A in order to preserve correctness.
  2. Piece A of code is not tightly coupled to Piece B of code if there is no possible modification to Piece B that would make a change to Piece A necessary.

Consider the following example:


public class PieceB {
    private Dependency aDependency;
    public int countRelatedPiecesByColor(Color color) {
        System.out.println("Some text");
        return aDependency.call();
    }
}

...

public static void main(String[] args) {
    PieceB myDependency = new PieceB();
    int count = myDependency.countRelatedPiecesByColor(RED);
}
...

The line with myDependency.countRelatedPiecesByColor(RED) is NOT tightly coupled to the body of countRelatedPiecesByColor, because there is no modification that can be made to the body that would force us to change this line of code.
On the other hand, the same line of code is tightly coupled to the signature of the countRelatedPiecesByColor method, as, for example, changing the name of the method would force a change to that line. Also, note that because of the latter, this line is also tightly coupled to the type defined by PieceB; let's accept that without any formalism.

Let's also consider:

public interface UserRepository {

    User find(String email);

}

public class UserHibernateRepository implements UserRepository {

    @Override
    public User find(String email) {
        return find(email, ANYWHERE);
    }

    // code omitted for conciseness

}

public static void main(String[] args) {
    UserRepository repository = MyApplicationContext.getActiveUserRepository();
    User marek = repository.find("marek.dec@example.com");
}

Here the repository.find("marek.dec@example.com") method call is once again tightly coupled to the signature of the invoked method, and indirectly it is also tightly coupled to the type that defines the find method signature (i.e. UserRepository). Tight coupling can also be observed between the UserRepository interface and the UserHibernateRepository class, as any change to the find method signature will make a corresponding modification necessary.
There is no tight coupling between the find method body and the repository.find call: there is no modification you can make to the find method body that would force you to change the call. But what about the relation between the find method signature in UserHibernateRepository and the repository.find call in main? Well, it turns out that if you want to make a modification to that signature, you will have to change the implemented interface first, and that would require a change to the repository.find call too.

So, does the extra interface, an extra level of abstraction, buy you anything? Surely it does. Note that in the first example the line PieceB myDependency = new PieceB(); couples you tightly to the concrete type PieceB. You cannot take advantage of the fact that the methods are virtual, and you cannot choose an implementation at runtime (is there even anything to choose from?). But you are equally coupled to the method bodies in both cases, and equally coupled to their signatures, not less and not more.
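
To make this concrete, here is a small sketch (the InMemoryUserRepository class and the useDatabase flag are hypothetical, not part of the examples above) of what the extra interface buys you: the implementation can be selected at runtime while the call site stays untouched.

public class InMemoryUserRepository implements UserRepository {

    @Override
    public User find(String email) {
        // look the user up in some in-memory structure instead of Hibernate
        return null; // details omitted for conciseness
    }
}

...

// useDatabase is a hypothetical configuration flag.
// Only the place that chooses the implementation changes;
// the repository.find(...) call remains exactly the same.
UserRepository repository = useDatabase
        ? new UserHibernateRepository()
        : new InMemoryUserRepository();
User marek = repository.find("marek.dec@example.com");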

 

3 Comments

Fibonacci revisited

The world of Java Enterprise development, full of frameworks, configuration files, things like Hibernate, Maven, JPA, EJB and other three-letter acronyms, may make you forget how cool it was to dive deep into the most exciting areas of CS.

Luckily, there are places on the web that help you remember the good old times (namely, the algorithms classes at your university, I guess). Javalobby with their Thursday Code Puzzler is definitely one of them.

So today, they asked us to find the n-th Fibonacci number. A naive solution is pretty straightforward. A good solution is not that obvious, though. You could probably easily code a solution that runs in O(n) time.
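
For reference, such a linear-time solution could be sketched like this (an illustration of the naive approach, not the puzzle's official solution):

	// Iterative O(n) computation of the n-th Fibonacci number (F(0) = 0, F(1) = 1).
	private long fibonacciLinear(int n) {
		long previous = 0;
		long current = 1;
		for (int i = 0; i < n; i++) {
			long next = previous + current;
			previous = current;
			current = next;
		}
		return previous;
	}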

It turns out it can be computed in logarithmic time. There are a couple of tricky ways to do it; I like one of them the most. First, however, you have to know how to raise a number to a power in logarithmic time.
In order to do that, the divide-and-conquer paradigm may be used. To make a long story short, the idea is to halve the exponent and recursively compute the power for the halved exponent. In the merge phase of the divide-and-conquer process, the result is multiplied by itself for even exponents (odd exponents need one step more, but that only affects the constant factor of the algorithm's running time). The merge step is based on the following property of exponentiation:

X^m * X^n = X^(m+n)

For an even exponent e:

X^(e/2) * X^(e/2) = X^e

For an odd exponent e, an extra step is needed:

X^((e-1)/2) * X^((e-1)/2) * X = X^e
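
A minimal scalar sketch of the divide-and-conquer exponentiation described above (assuming a long base, a non-negative exponent and ignoring overflow) could look as follows:

	// Computes base^e in O(log e) multiplications by halving the exponent.
	private long power(long base, long e) {
		if (e == 0) {
			return 1;
		}
		long halved = power(base, e / 2); // integer division covers (e - 1) / 2 for odd e
		long result = halved * halved;    // merge step for even exponents
		if (e % 2 != 0) {
			result *= base;               // extra step for odd exponents
		}
		return result;
	}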

Once you know how to implement exponentiation in logarithmic time (and I leave the proof that this runs in logarithmic time to the reader), you can move on to the tricky part. Let's assume the following is true (that's the tricky part and I will not prove it here either):

[[1, 1], [1, 0]]^n = [[F(n+1), F(n)], [F(n), F(n-1)]]

The only thing you have to do is to implement matrix exponentiation, and then you can calculate the n-th Fibonacci number in logarithmic time by raising the matrix above to the n-th power.

See my example implementation of the recursive matrix exponentiation.

	private TwoByTwoMatrix computePower(TwoByTwoMatrix matrix, long e) {
		if (e < 0) {
			throw new IllegalArgumentException(
					"Exponent must be non-negative, [" + e + "] was given");
		} else if (e == 0) {
			return TWO_BY_TWO_IDENTITY_MATRIX;
		} else {
			if (isEven(e)) {
				TwoByTwoMatrix raisedToHalvedPower = computePower(matrix, e / 2);
				return raisedToHalvedPower.multiplyBy(raisedToHalvedPower);
			} else {
				TwoByTwoMatrix raisedToHalvedPower = computePower(matrix, (e - 1) / 2);
				return raisedToHalvedPower.multiplyBy(raisedToHalvedPower)
						.multiplyBy(matrix);
			}
		}
	}

Your n-th Fibonacci number will always be in the top-right or bottom-left corner of the result matrix.
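
For illustration, a fibonacci method built on top of computePower might look like the sketch below (the TwoByTwoMatrix constructor and the getTopRight accessor are assumptions about the API, not copied from the project):

	// Computes the n-th Fibonacci number in O(log n) matrix multiplications.
	public long fibonacci(long n) {
		TwoByTwoMatrix base = new TwoByTwoMatrix(1, 1, 1, 0); // [[1, 1], [1, 0]]
		TwoByTwoMatrix result = computePower(base, n);
		return result.getTopRight(); // F(n) sits in the top-right (and bottom-left) cell
	}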
You can browse the complete project on GitHub.

2 Comments

Unit Testing with Komarro – can implicit be more durable?

Testing a unit of code that doesn't interact with any other test subjects is pretty straightforward. A set of direct inputs to a method, and possibly the inner state of the system under test, determines the output that has to be verified.

For example a test of a method exponentiation(int base, int power) could look like this:

@Test
public void cubing() {
    // given
    int base = 2;
    int power = 3;

    //when
    long result = exponentiation(base, power);

    //then
    assertThat(result).isEqualTo(8L);
}

The fun starts, however, when the method under test depends on elements that should not influence the test result. When developing a car engine, you don't want to evaluate how well the throttle, the car's on-board computer systems or any other piece of the car works. You are interested in the engine. That is why you'd set up some testing harness to provide all the conditions necessary to ignite the engine and to see how it performs.
A computer engineer, in order to emulate all the elements the unit under test depends on, would use so-called test doubles. Java developers (and I must admit we are very lucky) were given a set of great tools to do the job. Mockito, a prominent example among many great ones, could be used as follows:

private StatisticsService sut;
private ClientRepository clientRepository;

@Before
public void prepareSut() {
    clientRepository = mock(ClientRepository.class);
    sut = new StatisticsService(clientRepository);
}

@Test
public void averageAgeCalculatedCorrectly() {
    // given
    when(clientRepository.getAllClients()).thenReturn(asList(personAtAgeOf(15),
            personAtAgeOf(45), personAtAgeOf(90)));

    //when
    double averageAge = sut.getAverageAgeOfClients();

    //then
    assertThat(averageAge).isEqualTo(50.0);
}

Passing indirect inputs to a method under test is possible thanks to Mockito's when idiom. Its basic responsibility is to stub a method's response. But is that all it does? Doesn't it implicitly verify the exact interaction with the dependency? In terms of car engines, is it important where the fuel comes from when testing a motor?

Komarro tries to answer these questions and to expose some other subtleties of unit testing in Java. This is how the Komarro version of the previous test looks:

private StatisticsService sut;

@Before
public void prepareSut() {
    sut = instanceForTesting(StatisticsService.class);
}

@Test
public void averageAgeCalculatedCorrectly() {
    // given
    List<Client> clients = asList(personAtAgeOf(15), personAtAgeOf(45), personAtAgeOf(90));
    given(new TypeLiteral<List<Client>>() {}).isRequested().thenReturn(clients);

    //when
    double averageAge = sut.getAverageAgeOfClients();

    //then
    assertThat(averageAge).isEqualTo(50.0);
}

Komarro, as opposed to other stubbing utilities, replaces the exact method calls in the set-up phase of a test with implicit, by-type indirect input definitions. It also simplifies other fixture set-up activities: mocked dependencies don't have to be created and injected manually.
It is a fully functional stubbing framework based on Mockito that can be complemented with Mockito's syntax whenever needed (e.g. for verification purposes).
Komarro injects and manages the mocks automatically, in a manner that is transparent to the user. It guesses the types of the collaborators based on the application metadata in the form of annotations. So if your application uses annotations to perform dependency injection, you are ready to go (the installation will be especially easy for Maven users).
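
For illustration, the service under test from the examples above could look roughly like this (a sketch with an assumed implementation of getAverageAgeOfClients, not code taken from Komarro's documentation):

public class StatisticsService {

    // Komarro discovers this collaborator through the injection annotation
    // and stubs it by type, so the test never has to reference it explicitly.
    @Inject
    private ClientRepository clientRepository;

    public double getAverageAgeOfClients() {
        List<Client> clients = clientRepository.getAllClients();
        double totalAge = 0;
        for (Client client : clients) {
            totalAge += client.getAge();
        }
        return totalAge / clients.size(); // assumes at least one client
    }
}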

For usage examples, installation instructions, API and any other further details see http://code.google.com/p/komarro/.


Leave a comment

Monolinguistic development

Speaking your mother tongue is undoubtedly much easier than speaking any foreign language. But is speaking 5 foreign languages harder than speaking just one of them?

When developing a Java-based web application, it turns out that apart from Java you have to use some sort of SQL to obtain the data, some XML to define your build system and perhaps even some essential elements of the application (like Spring beans), and of course a mixture of HTML and JavaScript to code a rich user interface. Not surprisingly, there are many developers who claim that this sucks (see http://lofidewanto.blogspot.com/2011/10/why-is-polyglot-programming-or-do-it.html for example). They are probably right.

I got really excited when I first saw GWT. The guys at Google have done a damn good job providing a Java-to-JavaScript compiler. I could finally make my apps speak one language (with a small exception for Maven's pom.xml). The Criteria API offered by JPA, and GWT for the user interface, finally made it possible. I was even thinking of writing an extension to maven polyglot to teach it how to speak Java.

I'm much more sceptical about the idea today, however. The March 2012 issue of the Technology Radar by ThoughtWorks put GWT on hold. They somehow believe it'd be better to replace Java with JavaScript on the server side than to do it the other way round (Node.js, by the way, seems to be ascending in their charts). Though I don't think it's really important whether Java is inferior/superior (choose one according to your preferences) to JavaScript. The real question is whether any multi-purpose language can beat two problem-specific languages at performing two distinct tasks. I'd say it'd be pretty hard.

Pure Java is just not good for coding a user interface; those who used Swing know perfectly well that things get messy at some stage. And no matter how good your imagination is, modelling a view with an XHTML tool is a much more intuitive task than with any imperative-based solution.

‘I love you’ in Klingon just doesn’t sound good.

 

 

Leave a comment

To unit test or not to?

Once a team of developers, who had created a pretty much complete set of automated functional tests for their application, asked me how they could possibly benefit from writing unit tests too. Some of them claimed that there was no need for any other type of tests, as the automated suite they ran could perfectly verify whether the application matched the client's requirements.

So why should we even bother?

So let's imagine you're building a swimming pool in your backyard. You've already dug a hole. Now let's suppose that you use bricks, tiles and some glue to finish the job. You don't really care about the fact that each brick has a different shape. You are also OK with the fact that some tiles are broken and half of them are just a bit thinner than the rest.
So you start building the walls first. It goes quickly. Even though the bricks are not rectangular, you can always fill the openings that tend to appear every now and then with some clay and the stones you found in your garden last week. Then you put on the tiles. It's not that easy any more, as apparently the walls are not that flat and smooth. Fortunately, you have a lot of glue to fix this and, what's more important, you don't really care. You want your kids to have fun as soon as possible. And finally, you complete your work. Soon the swimming pool is ready and full of water.
You try it out, and it works!

And it really does, the swimming pool will not leak in 10 years. The walls are a bit curved, but nobody can see it when the pool is filled with water. Job done.

So do you need to check if every single brick has the same dimensions as the other ones? Do you care if their faces are rectangular? Do you need to unit test the tiles too?

I’ll not answer this here. I’ll just ask some more questions instead 🙂

  • Would you build your house the same way?
  • Would you build a treehouse for your kids in a similar manner?
  • Would it be possible to reuse the bricks (used for the swimming pool construction) to pave your drive when you decide to do so?
  • And a bonus one: would you build a shed for your tools behind the garage of your summer house?

 


Leave a comment

Unit tests are your safety net

I recently came across a blog post on automated unit testing. This time I was really curious about the opinions on the topic. And, not surprisingly, it turns out there are two groups of people with two totally distinct attitudes to unit testing. There are the lovers and the haters.

What struck me, though, is that neither of them pointed out the real value that a set of unit tests brings to a piece of software. As the examples given by the author are written in Python (which is not a statically typed language), somebody pointed out that most of the issues would never happen if a statically typed language were used. And they were just a bit mistaken: you would run into the same problems, but you would detect them so early that you would never consider them real issues.

And that's what unit testing (or a fast set of automated tests) is made for. The main goal of unit testing is to provide a set of rules that are checked at compile-time (I'd call it 'test-time'; still, it should always be as close as possible to compile-time). A set of unit tests should play the role of a safety net that catches all the pieces of code that do not comply with the defined rules. And this should be done as soon as possible. Ideally, it should be as quick as the type verification performed by statically typed languages.

And thanks to unit tests you can define any set of rules you like.


Leave a comment

Komarro released!

The first release of Komarro is out. Version 1.0 has been synced to Maven Central and is now available to all Maven users.

<dependency>
   <groupId>com.googlecode.komarro</groupId>
   <artifactId>komarro</artifactId>
   <version>1.0</version>
   <scope>test</scope>
</dependency>

Leave a comment

Mockarro changes name to Komarro, Komarro & Mockito

As Mockarro was not the prettiest name, it has been changed to Komarro. I also cut the Injection Point class out of the SPI so that the API stays as minimal as possible. According to the old saying, it's much easier to add features to an API when they are demanded than to remove them when it turns out they should not be there. During the last review I also realized that the possibility of defining a custom injection strategy has to be more flexible than it was.

I also created another example of ‘Komarro with Mockito’ usage.

Currently Komarro offers the possibility of defining indirect inputs to a tested method. Sometimes, however, it is necessary to verify indirect output parameters or to verify that an interaction with a collaborator took place.
As Komarro is built upon Mockito they play together quite smoothly.

Let's consider a local pizza delivery service. In the example below you can see elements of both the Komarro and Mockito APIs. The interaction verification will be done on the methods of the OrderRepository; that is why it is annotated with Mockito's @Mock annotation and then initialized using the MockitoAnnotations.initMocks(this) method.
It is also important to notice that the mocks created by Mockito can be passed to Komarro's instanceForTesting initialization method. Here, the MockitoMockDescriptionCreator.annotatedMocks convenience method is used to automatically detect the mocks marked with the @Mock annotation.

	@Mock
	private OrderRepository orderRepository;

	private OrderService orderService;

	@Before
	public void init() {
		initMocks(this);

		orderService = instanceForTesting(OrderService.class,
				annotatedMocks(this));
	}

Then we are set and ready to write the actual test method:

	@Test
	public void testPlaceOrderSavesOrder() {
		// given
		given(Pizza.class).isRequested().thenReturn(new Pizza("Margharita"));
		given(Order.class).isRequested().thenReturn(newOrderWithId(17L));

		// when
		long orderId = orderService.placeOrder("margharita", new Address(
				"Embarcadero Rd 123", "Isla Vista", "93117"));

		// then
		verify(orderRepository).save(any(Order.class));
		assertThat(orderId).isEqualTo(17L);
	}

Note that the test method uses Komarro's idiom to define the behavior of the collaborators and Mockito's verify method to verify that the interaction has taken place.
Komarro can be complemented with Mockito's functionality whenever necessary. All the verifications have to be done by Mockito, as Komarro does not provide any way to do this yet. Also, if it happens that a Komarro-style definition does not describe the indirect inputs well, Mockito should be used.

The method under test could look like this:

	@Inject
	private OrderRepository orderRepository;

	@Inject
	private PizzaService pizzaService;

	public long placeOrder(String pizzaName, Address address) {
		Pizza pizza = pizzaService.getByName(pizzaName);

		if (pizza != null) {
			Order order = new Order();
			order.setAddress(address);
			order.setPizza(pizza);

			Order newOrder = orderRepository.save(order);

			return newOrder.getId();
		} else {
			throw new IllegalArgumentException("The pizza " + pizzaName
					+ " does not exist in the menu card");
		}
	}

The source code of the examples posted here (and some more tests) can be found at github: https://github.com/marekdec/komarro-example-pizza-shop. The important phases of the development have been tagged.


19 Comments

Mockarro TDD example

Since the day I came up with Mockarro, I have found it very hard to clearly evaluate its usefulness. I guess it's always pretty hard to make a clear judgement on an idea when the border between its advantages and disadvantages is fuzzy. It's probably even harder when the idea is yours.

Anyhow, I recently decided to bring Mockarro closer to its first release and to expose it to the outer world.

It is important to remember that Mockarro provides a way of defining the indirect inputs to the tested method. It trades the specification of ‘how’ the indirect inputs are obtained for ‘what’ the indirect inputs are.

Let's start with a simple TDD example. We are going to create part of a simple application that searches a database of planets that are possibly habitable. The whole application is supposed to be created in a top-down manner. We will focus first on finding a planet whose radius is most similar to the Earth's mean radius.

	@Test
	public void retrievePlanetWithRadiusMostSimilarToEarthsTest() {
		// given
		Planet earth = planet("earth", 6371);
		given(Planet.class).isRequested().thenReturn(earth);

		given(new TypeLiteral<List<Planet>>() {}).isRequested().thenReturn(
				asList(earth, planet("Mars", 3396), planet("Tatooine", 55000),
						planet("Arrakis", 10123), planet("Solaris", 12700)));

		// when
		Planet planet = planetService
				.retrievePlanetWithRadiusMostSimilarToEarths();

		// then
		assertThat(planet).isNotNull().isNotSameAs(earth);
		assertThat(planet.getName()).isEqualTo("Mars");
	}

The code above defines the expected behavior of the retrievePlanetWithRadiusMostSimilarToEarths method. The fixture set-up section (the given section) defines the indirect inputs to the method under test. As opposed to standard mocking frameworks like Mockito or EasyMock, Mockarro does not couple the test method to collaborators within the given section. It does not define how the indirect inputs will be provided. It does, on the other hand, clearly specify what the indirect inputs to the method under test are.
The method that is going to be implemented will, briefly, be given a list of all planets in the database and an instance of the planet Earth. It is required to find a planet whose radius is most similar to Earth's radius, but at the same time it is required not to return the Earth itself as the result.

A possible implementation of the method could use a PlanetRepository to get both: the instance of a planet Earth and a list of all planets available for the application.

	@Inject
	private PlanetRepository planetRepository;

	public Planet retrievePlanetWithRadiusMostSimilarToEarths() {
		Planet earth = planetRepository.getByName("earth");

		if (earth != null) {
			double earthRadius = earth.getKilometersOfRadius();

			Planet mostSimilarPlanet = null;
			double smallestDiff = Double.POSITIVE_INFINITY;
			for (Planet planet : planetRepository.getAllPlanets()) {
				if (planet != earth) {

					double diff = Math.abs(planet.getKilometersOfRadius()
							- earthRadius);
					if (diff < smallestDiff) {
						smallestDiff = diff;
						mostSimilarPlanet = planet;
					}
				}
			}
			return mostSimilarPlanet;
		} else {
			throw new IllegalStateException(
					"No earth in the planet repository");
		}
	}

Let's assume that at some stage of the application life cycle a SolarSystemService is created. The Earth will now be obtained directly from the new service. The input parameters will not change whatsoever.


	@Inject
	private PlanetRepository planetRepository;

	@Inject
	private SolarSystemService solarSystemService;

	public Planet retrievePlanetWithRadiusMostSimilarToEarths() {
		Planet earth = solarSystemService.getEarth();

		if (earth != null) {
			double earthRadius = earth.getKilometersOfRadius();

			Planet mostSimilarPlanet = null;
			double smallestDiff = Double.POSITIVE_INFINITY;
			for (Planet planet : planetRepository.getAllPlanets()) {
				if (planet != earth) {

					double diff = Math.abs(planet.getKilometersOfRadius()
							- earthRadius);
					if (diff < smallestDiff) {
						smallestDiff = diff;
						mostSimilarPlanet = planet;
					}
				}
			}
			return mostSimilarPlanet;
		} else {
			throw new IllegalStateException(
					"No earth in the planet repository");
		}
	}

This refactoring does not require the test method to be changed, as the input parameters (both direct and indirect) stayed the same. It has to be pointed out, though, that the test method will have to be changed every time the set of indirect input parameters is modified.

The source code of the examples in this post can be found at https://github.com/marekdec/planetary-system. The interesting stages of the development process have been tagged, navigate to https://github.com/marekdec/planetary-system/tags to find them.


4 Comments

Javascript: the very worst part

It constantly happens to me that the more I learn, the more I realize how little I know. I’m probably not the first one, though.

Anyhow, I knew from the very beginning that I didn't know much about JavaScript. This, however, didn't stop me from using it. It didn't even stop me from getting things working with JavaScript. Then, some time ago, I came across the talk JavaScript: The Good Parts. I realized I perfectly match the description of a JavaScript programmer given by Douglas Crockford at the beginning of the talk. So, taking advantage of some free time I had this Christmas, I decided to fix my JavaScript knowledge…

I definitely recommend learning JavaScript properly to every web application developer who hasn’t done so yet (including those using GWT).

During my recent studies of JavaScript I found my candidate for its worst part: the name. Naming JavaScript JavaScript was like painting concrete green to make it look like grass. Concrete is not grass and it should not pretend to be grass. Concrete has its great parts, and being grass is not one of them. And the same goes for JavaScript… it does have great parts. Resembling Java is not one of them, however.

 

1 Comment