Comparison: DbC and TDD – Part 3

This blog post continues the comparison between DbC and TDD that started with a dedicated look at code specification and covered further aspects in part 1 and part 2. It examines additional points and shows the commonalities and differences of TDD and DbC with respect to them.

Universality

One of the most important differences between DbC and TDD is their degree of universality, because it influences both expressiveness and the verifiability of correctness.

A significant limitation (or rather a defining characteristic) of TDD is that tests are example-driven. With a normal, manually written unit test you provide exemplary input data and check for certain expected output values after the test run. Thus you have concrete pairs of input/output values to express a particular feature or behavior that you want to test. Since these values are only examples, there has to be a process of finding relevant examples, and that can be very difficult. One step towards correctness is high code coverage: you should find examples that exercise all paths of your code components. That can be laborious, since the number of possible code paths grows exponentially with the number of branches (if, switch/case, throw, …). But even if you find examples that give you 100% code coverage, you cannot be sure that the component behaves correctly for all possible input values. On the one hand there could be hardware-dependent behavior (e.g. with arithmetic calculations) that leads to differences or exceptions (division by zero, underflow/overflow etc.). On the other hand, even if you reach 100% code coverage for your components during the TDD process, this doesn't help much with integration with other code components or external systems that can behave in complex ways. That problem goes beyond TDD, though. Thus unit tests in a TDD fashion are attractive because of their flexibility and simplicity, but they cannot give you full confidence. Solutions like Pex can help you here, but that is part of another story (which I hope to come up with in the near future).
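
To make the example-driven character concrete, here is a minimal sketch (the DiscountCalculator and its 10%-over-100 rule are made up for illustration, NUnit-style assertions assumed). The test pins down exactly one input/output pair and says nothing about all the other possible inputs:

using NUnit.Framework;

public class DiscountCalculator
{
    // Hypothetical component under test: 10% discount for orders over 100.
    public decimal GetDiscount(decimal orderTotal)
    {
        return orderTotal > 100m ? orderTotal * 0.1m : 0m;
    }
}

[TestFixture]
public class DiscountCalculatorTests
{
    [Test]
    public void GetDiscount_Returns10Percent_ForOrderTotalsOver100()
    {
        var calculator = new DiscountCalculator();

        // One concrete example input ...
        decimal discount = calculator.GetDiscount(120m);

        // ... and one concrete expected output.
        Assert.AreEqual(12m, discount);
    }
}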

DbC, on the other hand, introduces universal contracts that must hold for every value an object participating in a contract can take. That is an important property for ensuring correct behavior in all possible cases. Thus contracts rank higher than tests in terms of universality (but they fall behind in other respects).
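
For comparison, here is the same hypothetical discount method specified with Code Contracts (a sketch, not a full specification). The contract constrains every possible call, but note that it does not pin down the concrete result for a concrete input the way the example test above does:

using System.Diagnostics.Contracts;

public class DiscountCalculator
{
    public decimal GetDiscount(decimal orderTotal)
    {
        // These conditions must hold for every call and every value,
        // not just for hand-picked examples.
        Contract.Requires(orderTotal >= 0m);
        Contract.Ensures(Contract.Result<decimal>() >= 0m);
        Contract.Ensures(Contract.Result<decimal>() <= orderTotal);

        return orderTotal > 100m ? orderTotal * 0.1m : 0m;
    }
}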

Expressiveness

The expressiveness of a component’s behavior and qualities as part of its specification is an important aspect since you want to be able to express arbitrary properties in a flexible and easy way. Universality is one part of expressiveness and has just been discussed. Now let’s look at expressiveness on a broader scope.

TDD scores high on expressiveness despite the exemplary nature of tests. You are free to define tests that express any desired behavior of a component that can be written in code. With TDD you have full flexibility, but you are also responsible for keeping this flexibility under control (clear processes should be what you need). One aspect that goes beyond the scope of TDD is interaction/collaboration testing and integration testing. The TDD process implies the design of code components in isolation, but it doesn't guide you in testing the interactions between components (or in specifying behavior that relies on those interactions). TDD is about unit testing, but there is a universe beyond that.

Their universal nature makes contracts in terms of DbC a valuable tool. Moreover, by extending the definition of a code element they improve the expressiveness of that element, which is great e.g. for the role of interfaces and for intention revealing. But they have downsides as well. First, they are tightly bound to a particular code element and cannot express behavior that spans several components (e.g. workflows and classes interacting together). Second, they have a lack of what I would call content-related expressiveness. With contracts you are able to define expectations with arbitrary boolean expressions, and that is a great thing. But it is also a limitation. For example, if you have an algorithm or a complex business operation, it is difficult to impossible to define all expected outcomes of this code as a universal boolean postcondition. In fact, this would amount to a full functional specification, which implies a duplication of the algorithm logic itself (in imperative programming), and that would make no sense! With example-driven tests, on the other hand, this is not a problem, since you know which values to expect for a given input. Furthermore, no side effects are allowed inside a contract. Hence, if you use a method to define a more complex expectation, this method must be pure (= free of visible side effects). The background of this constraint is that contracts must not influence the behavior of the core logic itself. It would be a mess if behavior differed depending on the activation state (enabled/disabled) of contracts (e.g. for different build configurations). This is a limitation if you want to express certain qualities with contracts, like x = pop(push(x)) for a stack implementation, but it has advantages as well, since it leads to contracts enforcing command-query separation.
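
As a sketch of the purity constraint (a hypothetical BoundedStack, Code Contracts assumed): only side-effect-free members may appear in contracts, so a round-trip property like x = pop(push(x)) has to be expressed by a test instead of a postcondition:

using System.Collections.Generic;
using System.Diagnostics.Contracts;

public class BoundedStack<T>
{
    private readonly List<T> _items = new List<T>();
    private readonly int _capacity;

    public BoundedStack(int capacity)
    {
        Contract.Requires(capacity > 0);
        _capacity = capacity;
    }

    // Pure query: free of side effects, therefore usable inside contracts.
    [Pure]
    public int Count
    {
        get { return _items.Count; }
    }

    [ContractInvariantMethod]
    private void ObjectInvariant()
    {
        Contract.Invariant(Count >= 0 && Count <= _capacity);
    }

    public void Push(T item)
    {
        Contract.Requires(Count < _capacity);
        _items.Add(item);
    }

    public T Pop()
    {
        Contract.Requires(Count > 0);
        // Pop is a command with a side effect, so a property like
        // "x == Pop(Push(x))" cannot be written as a contract;
        // an example-driven test has to cover it instead.
        T top = _items[_items.Count - 1];
        _items.RemoveAt(_items.Count - 1);
        return top;
    }
}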

Checking correctness

Of course you want to be able to express as many behaviors as possible to improve the specification of your code components. But expressiveness leads nowhere if it is impossible to figure out whether your components actually follow the defined specification. You must be able to stress your components against their specification in a reproducible way to ensure correct behavior!

With TDD, the correctness of the defined behavior (= the tests) is checked by actually running the tests and validating the expected values against the actual values produced by a test run. The system under test is treated as a black box, and behavioral correctness is tested by feeding values into the input channels of the black box and observing the output channels for correct values. This can include techniques like stubbing or mocking for handling an object in isolation and for ensuring reproducible and verifiable state and behavior. This testability and reproducibility, in conjunction with a well-implemented test harness, is important for continuously checking the correctness of your code through regression testing. It is invaluable when performing continuous integration and when code is changed, but that aspect will be covered in the next blog post. However, the exemplary nature of tests is a limitation for checking correctness, as shown above in the discussion of universality.

At first sight, DbC as a principle doesn't help you with checking correctness. It introduces a fail-fast strategy (if a contract is not met, it's a bug – so fail fast and hard, because the developer has to fix the bug), but how can you verify the correctness of your implementation? With DbC, bugs should be found in the debugging step while developing code, i.e. by actually executing the code. Common solutions like Code Contracts for .NET come with a runtime checking component that checks whether the defined contracts are satisfied when the code runs. But this solution has a serious shortcoming: it relies on the current execution context of your code and hence uses the current values for checking the contracts. With this you get the same problems as with tests. Your contracts are stressed by example, and even worse, you have no way to check them reproducibly! Thus dynamic checking without tests makes no sense. However, contracts are a great complement to tests: they specify the conditions that must hold in general, and a test, as client of a component, can satisfy the preconditions and then validate postconditions or custom test behavior. Another interesting way to verify contracts is static checking. Code Contracts come with a static checker as well, which verifies the defined contracts at compile time without executing the code. Instead, it inspects the code (gathers facts about it) and matches these facts, as an abstraction of the implementation, against the defined contracts. This form of white-box code inspection is done by an abstract-interpretation algorithm (there are other solutions, like Spec#, that do real formal program verification). The advantage of static checking is that it can find all possible contract violations, independent of any current values. But static checking is hard. On the one hand it is hard for the machine to gather information about the code and to verify contracts; hence static checking is very time-consuming, which is intolerable especially for bigger code bases. On the other hand it is hard for a developer to satisfy the static checker. To work properly, static checking needs the right (!) contracts on all used components (e.g. external libraries), which is often not the case. And the static checker that ships with Code Contracts seems too limited at its current stage of development: many contracts cannot be verified, or it is too impractical to define contracts that satisfy the checker. Thus static checkers are a great idea for verifying contracts universally, but especially in the .NET world their limitations make them impractical for most projects. Hence a valid strategy for today is to write tests in conjunction with contracts and to validate postconditions and invariants with appropriate tests.
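
A short sketch of that strategy, reusing the hypothetical BoundedStack from the section above: the test acts as client, satisfies the preconditions and validates the outcome, while runtime contract checking verifies the invariants and postconditions along the way (NUnit-style assertions assumed):

using NUnit.Framework;

[TestFixture]
public class BoundedStackTests
{
    [Test]
    public void Pop_ReturnsTheLastPushedItem()
    {
        // The test satisfies the preconditions of Push() and Pop() ...
        var stack = new BoundedStack<int>(10);
        stack.Push(42);

        // ... and validates the expected outcome. With runtime checking
        // enabled, the contracts of BoundedStack are verified during this
        // reproducible test run as well.
        Assert.AreEqual(42, stack.Pop());
        Assert.AreEqual(0, stack.Count);
    }
}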

[To be continued…]


Comparison: DbC and TDD – Part 2

This blog post is a continuation of the comparison and differentiation between DbC and TDD. Please take a look at part 1, which covers the design aspect (and briefly touches on specification again, which has been discussed in more detail here). Today's post looks at documentation and code coupling and shows the commonalities and differences of TDD and DbC with respect to these aspects.

Documentation

One important thing that helps other developers understand your code is documentation. Code comments are a first necessity here, to explain at a basic level what a code component is intended to do. But comments have the unpleasant habit of running out of sync with the real code if you do not carefully and consistently adapt them. Above all, if you write obligations or benefits into comments, nothing checks whether those requirements hold or whether the real implementation matches them.

DbC with its contracts is a much better way to document these specification aspects. Contracts are coupled to the code, and e.g. with Code Contracts you get static and runtime checking for them. This checked documentation makes DbC really powerful (if used properly) and keeps code and documentation from drifting apart. It shows developers how a component should be used, what requirements the client has to fulfill and what he can expect in return. The client can rely on these specified qualities, which improves the reusability of contracted code components. Furthermore, there are possibilities to integrate Code Contracts into Sandcastle documentation, and for Visual Studio 2010 there will be an add-in that immediately shows the defined contracts on a component as you develop against it. With that you get great MSDN-like documentation that contains the defined contracts, as well as support for your development process when you use contracted code.

Tests in terms of TDD add another kind of documentation. Due to their exemplary nature and their specification of an element's behavior, such tests are great for showing a developer the intent of a code component and giving him a guideline for its usage. Since tests can be run and validated in a reproducible way, developers are able to rely on the defined behaviors as well.

Together with documentation comes the aspect of intention revealing, and both TDD and DbC add value here. Both express the developer's intention for a certain component and show, far beyond code comments and naming conventions, what behavior a client can expect. Developers can use the component in these specified ways and don't have to manually investigate the component's implementation.

Code coupling

One drawback of TDD is the locational gap between the code implementation and the tests as specification and documentation source. This has an advantage as well, for sure: the code logic isn't polluted with the specification and thus is kept clean. But for me the disadvantages weigh heavier. If developers want to know how a component behaves, they have to cross the gap and investigate the tests. This is difficult for developers who are not very familiar with TDD. Furthermore, tests don't give any support for the client-side usage of a component. Of course they provide usage examples, but developers can use, and especially misuse, a component in arbitrary ways. This can lead to serious problems (e.g. inconsistent states) if there are no other mechanisms to prohibit misuse.

DbC, on the other hand, puts contracts directly on the implementation of a code component. Thus they have a declarative nature and extend the definition of a code component. Some realizations of DbC, like Code Contracts in .NET, have drawbacks, since they place contracts imperatively inside the code and then rewrite the code after compilation to move the contracts to the "right" places. Thus Code Contracts break the uniformity principle (different semantics should be expressed through different syntax) and pollute the code logic to some degree. Other realizations, like the Eiffel language, have contracts as keywords built into the language, which in my opinion is the better choice. In any case, contracts located in the same place as the implementation avoid the drawbacks of a locational gap. Moreover, DbC prevents misuse of a component: contracts are checked dynamically at runtime or statically at compile time and fail early if requirements are not satisfied. That is a very important concept, because it defines clear behavior when something goes wrong (the existence of a bug) and gives a clear contract for the obligations and benefits that hold and are checked in client/supplier communication.

[To be continued…]


Comparison: DbC and TDD – Part 1

Let's move on to another blog post in preparation for elaborating the synergy of DbC and TDD. My first blog post on this topic covered an initial discussion on the specification of code elements and showed different characteristics of DbC and TDD in terms of code specification. While specification is one really important aspect in a comparison of DbC and TDD, it is of course not the only one. Hence today's blog post is a starting point for a more general comparison of DbC and TDD along some other important aspects. There are three more blog posts that I will come up with during the next weeks, forming a 4-part comparison series.

To start with, Test-Driven Development (TDD) and Design by Contract (DbC) have very similar aims. Both concepts want to improve software quality by providing a set of conceptual building blocks that allow you to prevent bugs before making them. Both have an impact on the design and development process of software elements. But each has its own characteristics and achieves the goal of better software in its own way. It is necessary to understand the commonalities and differences in order to take advantage of both principles in conjunction.

Specification again…

The last blog post has already taken a dedicated look at the specification aspect and at how DbC and TDD add value there. To summarize: both principles extend code-based specification in their own ways. TDD lets you specify the expected behavior of a code element in an example-driven and reproducible way. It is easy to use and allows the expression of any expected behavior. DbC, on the other hand, puts universal contracts in place that extend the definition of a code element and are tightly coupled to it. It is a great concept for narrowing the definition of a code element by defining additional physical constraints as preconditions, postconditions and invariants. By defining contracts on interfaces, DbC strengthens the role of interfaces and enforces identical constraints/behavior across all implementations of an interface. However, not every behavior can be expressed by contracts, and they are bound to a single code element. Thus they don't diminish the role of tests, but can be seen as a great complement.

Design aspect

Both DbC and TDD have an impact on the design of an API. Well, there is a slight but important difference, as the names imply: it's Design by Contract, but Test-Driven Development. Thus with DbC (contract-first) on the one side, you are encouraged to write your contracts as you design your components (at design time), which fits perfectly with the idea of contracts as an extension of a component's definition. With TDD and the test-first principle on the other side, you write a test that maps to a new feature and afterwards directly implement the code to bring the test to green. Hence TDD is tightly coupled to the development phase, in contrast to DbC, which seems to come first. That said, I personally wouldn't fight a religious war over this naming. If you think of DbC as "Contract-First Development" or "Development by Contract", you would have the contract-first principle coupled to the development phase as well. The more important thing is to find a way to use contracts effectively in the development cycle. If you are an advocate of up-front design, you would perhaps want to define your contracts at design time. But if you use TDD intensively, it would be difficult to go down this design-phase road; instead, you would define your contracts during development, in conjunction with the test-first principle. This leads to the question of an effective development model with TDD and DbC, and that's another important story…

But for now let's come back to the impact of DbC and TDD on the design of an API. With TDD you write a new test for each new feature and then bring this test to green by implementing some piece of logic. This is a form of client-driven development: your test is the client of your API, and you call your methods from the client's perspective (as a client would do). If the current API doesn't fit your needs, you extend or modify it. Thus the resulting API is very focused on the client's needs and, furthermore, doesn't contain code for unnecessary features, which is a great thing in terms of YAGNI. Another impact of TDD is that it leads to loosely coupled components. Tests in the form of unit tests are very distinct and should be coupled to the tested component, but not to much beyond that (other components that are called, e.g. data access). Thus there is a certain demand for loose coupling, e.g. by DI.

With DbC and contracts on your components you see other impacts. Contracts clarify the definition and intent of your components. When you come up with contracts, you sharpen your idea of a component by setting contractual obligations and benefits. This leads to much cleaner components. Moreover, your components will have fewer responsibilities (and thus better cohesion), since it would be painful to write contracts for components with many different responsibilities. Thus DbC is great at supporting the SRP and SoC. Another impact comes from the "limitation" of contracts to only allow pure methods as part of a contract: if you want to use class methods in contracts (e.g. invariants) of that class, you have to keep those methods pure. This leads to contracts enforcing command-query separation, which very often is a good thing in terms of comprehensibility and maintainability.

[To be continued…]


Specification: By Code, Tests and Contracts

Currently I'm investing further thought into the synergy of Design by Contract (DbC) and Test-Driven Development (TDD). This process is partially driven by my interest in Code Contracts in .NET 4.0 and other current developments at Microsoft (e.g. Pex). The basic idea I have in mind is to combine DbC and TDD to take the best advantage of both principles, in order to improve software quality on the one hand and to create an efficient development process on the other. My thoughts on this topic are not too firm at the moment, so feel free to start a discussion and tell me what you think.

Before I come to discuss the synergy of DbC and TDD, it is in my opinion absolutely necessary to understand the characteristics, commonalities and differences of both concepts. This first blog post moves in that direction by looking at a very important aspect of both concepts: the formal and functional specification of code. It is a starting point for further discussions on this topic (a 4-part comparison series follows). So let's start…

Code-based specification

In essence, DbC and TDD are about the specification of code elements, i.e. expressing the shape and behavior of components. Thereby they extend the specification possibilities given by a programming language like C#. Of course, such basic code-based specification is very important, since it allows you to express the overall behavior and constraints of your program. You write interfaces and classes with methods and properties, and you provide visibility and access constraints. Furthermore, the programming language gives you possibilities like automatic boxing/unboxing, type safety, co-/contravariance and more, depending on the language you use. Moreover, the compiler acts as a safety net for you and ensures correct syntax.

But while such basic specification is necessary to define the code that should be executed, it lacks expressiveness regarding the intent of the developer, and there is no way to verify the correct semantics of a program with it. As an example, interfaces define the basic contract in terms of method operations and data, but looking at C#, a client of an interface does not see what the intent of a method is, which obligations he has to fulfill when calling the method, or what state he can assume on return. Furthermore, interfaces cannot guarantee uniform behavior across their implementations. TDD and DbC exist to overcome, or at least reduce, this lack of expressiveness at some points and to help guarantee correct semantics.
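
A small, made-up example of this gap: the signature below compiles and defines the operation, but it says nothing about the obligations and guarantees involved.

// Code-based specification only: nothing tells the client whether name may
// be null or empty, whether the result can be null, or what happens if no
// customer with that name exists.
public interface ICustomerRepository
{
    Customer FindByName(string name);
}

public class Customer
{
    public string Name { get; set; }
}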

Test-based specification

Let's start with test-based specification using TDD (as well as BDD as an "evolutionary step" of TDD). It is inevitably integrated into the TDD cycle: every test written by a developer maps to a new feature he wants to implement. This kind of specification is functional and example-driven, since a developer defines via exemplary input/output pairs what output he expects as the result of the test run for a certain input. With well-known techniques (stubs, mocks) he is able to run his tests in isolation, get reproducible test results and perform state and behavior verification.
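
A minimal sketch of such an isolated, reproducible test (the GreetingService and the hand-written stub are made up; the ICustomerRepository comes from the small example above):

using NUnit.Framework;

// Hand-written stub: delivers a fixed, reproducible answer instead of
// hitting a real data source.
public class CustomerRepositoryStub : ICustomerRepository
{
    public Customer FindByName(string name)
    {
        return new Customer { Name = name };
    }
}

public class GreetingService
{
    private readonly ICustomerRepository _repository;

    public GreetingService(ICustomerRepository repository)
    {
        _repository = repository;
    }

    public string CreateGreeting(string name)
    {
        Customer customer = _repository.FindByName(name);
        return string.Format("Hello, {0}!", customer.Name);
    }
}

[TestFixture]
public class GreetingServiceTests
{
    [Test]
    public void CreateGreeting_UsesTheCustomerName()
    {
        var service = new GreetingService(new CustomerRepositoryStub());

        Assert.AreEqual("Hello, Alice!", service.CreateGreeting("Alice"));
    }
}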

Compared to code-based specification, test-based specification in a TDD manner is very valuable when it comes to expressing the behavior of a code element. It produces a set of tests that could, at the extreme, span the whole behavior of a code element. With their reproducibility, the tests are indispensable for continuous integration, to ensure the correct behavior of modified code elements. Furthermore, tests are a great source for other developers to see the intent behind a method's behavior and to demonstrate the usage of a code element. There are other benefits and characteristics of TDD that will be discussed when comparing TDD and DbC as a whole in subsequent blog posts.

An important aspect of test-based specification using TDD is that it’s done at a very granular level. For each new feature a test is written and if the present API doesn’t fit the needs, it will be extended or refactored. Thus TDD drives the API design as you go with your tests.

Contract-based specification

Another possibility for specifying code is contract-based specification in terms of DbC, i.e. defining preconditions and postconditions as method contracts and invariants as class contracts. With contracts you are able to define the basic obligations that a client must fulfill when calling a method, as well as the benefits he gets in return. Furthermore, invariants can be used to define basic constraints that ensure the consistency of a class. Thus with DbC a developer is able to define a formal contract for code that makes a clear statement of obligations and benefits in client/supplier communication (aka caller/callee). If a contract fails, the behavior is clear as well: by failing fast, the developer can be sure that there is a problem (a bug) in his code and that he has to fix it.

On a technical level there are several possibilities for defining contracts. In Eiffel, contracts are part of the programming language itself, in contrast to Code Contracts, which become part of the .NET Framework. In any case, contracts are directly bound to the code that they specify and express additional qualities of the code. Those qualities go beyond code-based specification. In general, contracts allow arbitrary boolean expressions, which makes them a very powerful and flexible specification source. Nonetheless, contracts only allow a partial functional specification of a component. It can be very difficult or even impossible to define the complex behavior of a method (which methods it calls, what business value it has, what the concrete result for a concrete input is, …) or to ensure certain qualities (e.g. infrastructure-related questions like "is an e-mail really sent?") with contracts. Furthermore, it is impossible to use impure functions in contracts, which would be necessary to express certain qualities of code (like inverse functions: x = f⁻¹(f(x)), if f or f⁻¹ are impure). Test-based specification can be used here instead.

But let's come back to the technical aspect. Contracts are a wonderful device for extending code-based specification: they narrow the definition of a code element and become part of the element's signature. They can be used to define general conditions, e.g. physical constraints on parameters and return values. Moreover (for example with Code Contracts), contracts are inherited by sub-classes while respecting the Liskov substitution principle (LSP). Contracts can also be defined on interfaces, and thus they are a valuable tool for expressing constraints and qualities of an interface that must be respected by every implementation. With that, contracts wonderfully strengthen the role and expressiveness of interfaces and complement code-based specification as a whole.
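
Picking up the hypothetical ICustomerRepository from above, an interface contract with Code Contracts could look roughly like this; every implementation of the interface then inherits the same obligations and benefits:

using System.Diagnostics.Contracts;

[ContractClass(typeof(CustomerRepositoryContracts))]
public interface ICustomerRepository
{
    Customer FindByName(string name);
}

// Contract class: holds the preconditions and postconditions that apply
// to every implementation of the interface.
[ContractClassFor(typeof(ICustomerRepository))]
internal abstract class CustomerRepositoryContracts : ICustomerRepository
{
    public Customer FindByName(string name)
    {
        Contract.Requires(!string.IsNullOrEmpty(name));
        Contract.Ensures(Contract.Result<Customer>() != null);
        return default(Customer);
    }
}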

Through their tight coupling to the code, contracts give other developers immediate guidance on how an API should be used. They express the developer's intent (intention revealing) for his code elements, which leads to easier comprehension and usage of those components. They provide a kind of checked documentation, which again greatly complements code-based specification and leads to less misuse of contracted components.

In contrast to test-based specification, contracts operate at the level of whole components, by attaching invariants to classes and pre-/postconditions to class methods. Moreover, contracts lead to components with very few dependencies (promoting the SRP), since it is difficult to write contracts for components with many responsibilities.

First conclusion

This blog post is intended as a first preview of what I want to cover in the near future. It has given an overview of the possibilities for specifying code elements, with a focus on test-based and contract-based specification in terms of TDD and DbC, and it has shown some commonalities and qualities of both concepts. In conclusion, TDD and DbC are both valuable for the specification of code elements and for revealing the developer's intent.

In my opinion it's by no means an "either/or" choice. TDD and DbC act on their own terrain, with their own benefits and drawbacks. There are overlaps, but in general they can naturally complement each other. It's time to think about ways to leverage a combination of both concepts, isn't it?

[To be continued…]


Reactive Framework (Rx) – first look

During the last weeks I took some time to have a first look at the Reactive Framework. Here I want to share some impressions and information with you.

Rx: What is it about?

The Reactive Framework (Rx) has been developed at Microsoft Live Labs and is another creation of Erik Meijer, the father of LINQ. Rx will be part of the upcoming .NET Framework 4.0, but it is already included as a preview DLL (System.Reactive.dll) in the sources of the current Silverlight 3 Toolkit, where it is used to make some unit tests run more smoothly.

The aim of Rx is to simplify and unify complex event-based and asynchronous programming by providing a new programming model for those scenarios. That is very important in a world that becomes more and more asynchronous (web, AJAX, Silverlight, Azure/cloud, concurrency, many-core, …) and where the present approaches (event-based programming with .NET events, Begin/Invoke) hit a wall. Program logic gets hidden in a sea of callback methods, which leads to nasty spaghetti code. Those of you who are creating larger asynchronous programs know what I mean 😉

Basic ideas

The main concept behind Rx is reactive programming, and that is not new at all. It involves a distinction of data-processing scenarios into push and pull:

  • In a pull scenario, your code/program interactively asks the environment for new data. In this case the actor is your program, as it pulls new data from the environment.
  • In a push scenario, your program is reactive. The environment is pushing data into your program, which has to react to every new incoming data item (see the illustration below). Reactive programs are ubiquitous: every event handler (e.g. for UI events) is an example of a reaction in a push scenario.

[Illustration: Reactive Programming]

The folks at Microsoft had a very important insight, which forms the foundation of Rx: there is a dualism between the Iterator and the Observer pattern! This means that both patterns are essentially the same if you set aside their practical intentions for a moment.
With the Iterator pattern you get a sequence of data which is iterated in pull mode: the iterator is the actor and asks the iterable object for data.
The Observer pattern can be interpreted equivalently: the environment pushes data into your program (e.g. through events), which has to react. With repeatedly fired events you thus get a sequence of data, which makes the relationship to the Iterator pattern obvious.

With this, an Observer in the push scenario is equivalent to an Iterator in the pull scenario, and that is the key insight which led to the development of the Reactive Framework. Fourteen years had to pass from the publication of the GoF Design Patterns book until now to figure out this close relationship between the Iterator and the Observer pattern.

LINQ to Events

Rx realizes the Observer pattern with the two interfaces IObserver and IObservable, which are the counterparts of IEnumerator/IEnumerable in the Iterator pattern. IObservable is a sort of "asynchronous collection" that represents the occurrence of a certain event at different points in time. Thus it expresses a sequence of data on which you can run arbitrary actions. In fact, IObservable is for push scenarios essentially what IEnumerable is for pull scenarios.

The interplay of IObservable and IObserver is the following: on an IObservable you can register (Subscribe()) an IObserver (note the dualism to IEnumerable/IEnumerator here!). The IObserver has the three methods OnNext(), OnCompleted() and OnError(). These methods are invoked when the IObservable produces new data, completes the data sequence or runs into an error. Thus your IObserver implementations take care of the data processing and replace your custom callback methods.
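
As a rough sketch of this interplay (the IObserver<T> shape shown here is the one that later shipped with .NET 4.0; the preview DLL mentioned above may differ in details), a hand-written observer could look like this and could be passed to Subscribe() instead of a lambda:

using System;

public class ConsoleObserver<T> : IObserver<T>
{
    public void OnNext(T value)
    {
        // called for every new data item the IObservable produces
        Console.WriteLine("Received: " + value);
    }

    public void OnError(Exception error)
    {
        // called once if the sequence terminates with an error
        Console.WriteLine("Sequence failed: " + error.Message);
    }

    public void OnCompleted()
    {
        // called once when no further items will arrive
        Console.WriteLine("Sequence completed.");
    }
}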

The really impressive thing about Rx is that, with events as data sequences, you can use LINQ syntax to process and transform these sequences (that's why it is sometimes called LINQ to Events). To create instances of IObservable and to provide the LINQ methods, Rx comes with the class Observable. If you want, for example, to take the first 10 click events on a button but skip the first click, you can simply write:

Observable.FromEvent(button1, "Click")   // turn the Click event into an observable sequence
    .Skip(1)                             // ignore the first click
    .Take(9)                             // then react to the next nine clicks
    .Subscribe(ev => MyEventAction(ev)); // run the action for each remaining click event

Subscribe() has several overloads. In this example I have directly specified an action which is executed when an event (= a piece of data) comes in. Instead you could, for example, pass in your own custom implementation of IObserver.
LINQ gives you real power in processing, composing and transforming your events as a data stream. Even for the simple example above, if you implemented it with event-based programming without Rx, you would need some additional effort: you would have to use temporary variables to skip certain elements, take others, and so forth. In general, more complex cases require complete finite state machines, which can be avoided with Rx.
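
For comparison, a sketch of the classic event-based counterpart (assuming a WinForms-style form with the button1 and MyEventAction from the example above): the "skip 1, take 9" logic needs hand-rolled state in the handler.

using System;
using System.Windows.Forms;

public partial class MainForm : Form
{
    private int clickCount;

    public MainForm()
    {
        InitializeComponent();
        button1.Click += button1_Click;
    }

    private void button1_Click(object sender, EventArgs e)
    {
        clickCount++;

        if (clickCount == 1)
            return;                          // skip the first click

        if (clickCount > 10)
        {
            button1.Click -= button1_Click;  // stop after the tenth click
            return;
        }

        MyEventAction(e);                    // handle clicks 2 to 10
    }

    private void MyEventAction(EventArgs e)
    {
        // ... the actual event handling logic ...
    }
}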

Composing events with LINQ is another typical and impressive example of Rx. A dragging gesture on a text block, for instance, can be implemented as:

IObservable<Event<MouseEventArgs>> draggingEvent =
    from mouseLeftDownEvent in Observable.FromEvent(textBlock1, "MouseLeftButtonDown")
    from mouseMoveEvent in Observable.FromEvent(textBlock1, "MouseMove")
        .Until(Observable.FromEvent(textBlock1, "MouseLeftButtonUp"))
    select mouseMoveEvent;
draggingEvent.Subscribe(...);

In this example, the from keyword chains the two events "MouseLeftButtonDown" and "MouseMove" together. Every time the left mouse button is pressed on the text block, the "gate" opens to react to mouse move events. The "gate" closes again when the left mouse button is released. So the composed stream includes all events where the left mouse button is pressed and the mouse is moved – the typical dragging event!

There are other possibilities for composing events, which, for example, take the order of events into account differently. These make up the interesting parts of the Reactive Framework and clarify the use cases where Rx has real advantages over present event-based .NET programming.

Bottom line

Personally I like the new way of thinking about events as data sequences! The dualism of the Iterator and the Observer pattern feels in no way artificially constructed; to me it has a kind of purity and mathematical elegance. But one big question remains: do we need this in our everyday developer life, and what advantages does it bring?

Erik Meijer himself says about Rx:

Of all the work I’ve done in my career so far, this is the most exciting. […] I know it’s a bold statement, but I really believe that the problem of asynchronous programming events has been solved.

That is a really strong message, and it carries some weight, since Erik is the creator of LINQ and has great expertise in language design. Does Rx deliver what he promises?

In my opinion Rx is really beautiful and valuable when you perform complex asynchronous operations where you have to compose and orchestrate events/data from several sources. With Rx you are able to chain and filter events to easily create the concrete event you want to react to. That's the power of LINQ, which also allows you to transform the resulting data. LINQ gives you the whole flexibility of processing the data stream that you already know from LINQ to Objects (on IEnumerable) etc.

Beyond that, at the moment I don't see a really big impact on many current development scenarios. If you have to handle simple events or do simple asynchronous computations, you are still fine with the present event-based programming model. Of course, with Rx you can write your own observers to prevent a sea of callback methods, but encapsulation of handler logic can be done without it as well. So in the end I like the concepts behind Rx (Observer, LINQ, …) and I'm curious to see how the whole thing develops after the release of .NET 4.0, but I'm not over-enthusiastic at the moment.



Is the Specification Pattern obsolete?

I'm a guy who loves many of the new patterns I get to know. But sometimes when I show such a pattern to colleagues, they tilt their heads and say: "Errm, who needs this?". Often they help me get back down to earth and think again about the intent and benefits of one pattern or the other.

Currently I'm rethinking the Specification Pattern, which was introduced by Eric Evans and Martin Fowler some time ago and has received some attention in the DDD community and beyond. The intent of the Specification Pattern is to separate the logic for (e.g.) filtering an entity from the entity itself. This responsibility is encapsulated in separate classes – the specifications. Those specifications are passed, for example, to a single method Filter() on a repository, which takes care of filtering the domain model by the specifications. Thus you avoid the need for several special methods on your repository that each perform a particular filter operation (such as FilterByCustomerName(), FilterByCustomerAddress() etc.).

An example interface snippet for a repository of persons with a specification filter method could then be:

interface IPersonRepository
{
    IEnumerable<Person> Filter(ISpecification<Person> filterTerm);
    // ...
}

Summarized, some key benefits of the Specification Pattern are:

  • Loose coupling of the filter logic from the objects being filtered,
  • Single responsibility: Interface of repository/data provider isn’t polluted with lots of filter methods,
  • Callers can express in a flexible way by which criteria entities can be filtered,
  • Composition of specifications (e.g. by a fluent interface) yields flexible filter queries (a sketch of a specification interface and an And-composite follows right after this list), for example:
    var filter = Filter.By(25).And("Frankfurt")
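
One common way to realize the pattern is sketched below (the names are hypothetical; a fluent API like the Filter.By(…).And(…) call above would be a thin wrapper over such composites):

public class Person
{
    public string Name { get; set; }
    public int Age { get; set; }
    public string City { get; set; }
}

public interface ISpecification<T>
{
    bool IsSatisfiedBy(T candidate);
}

// A concrete, domain-specific specification.
public class MinimumAgeSpecification : ISpecification<Person>
{
    private readonly int _minimumAge;

    public MinimumAgeSpecification(int minimumAge)
    {
        _minimumAge = minimumAge;
    }

    public bool IsSatisfiedBy(Person candidate)
    {
        return candidate.Age >= _minimumAge;
    }
}

// Composite specification: combines two specifications with a logical AND.
public class AndSpecification<T> : ISpecification<T>
{
    private readonly ISpecification<T> _left;
    private readonly ISpecification<T> _right;

    public AndSpecification(ISpecification<T> left, ISpecification<T> right)
    {
        _left = left;
        _right = right;
    }

    public bool IsSatisfiedBy(T candidate)
    {
        return _left.IsSatisfiedBy(candidate) && _right.IsSatisfiedBy(candidate);
    }
}

A repository implementation of Filter() then simply returns all persons for which filterTerm.IsSatisfiedBy(person) is true.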

But now the provocative question one could raise: who needs the Specification Pattern in times of C# 3.0 and Expressions? This might sound startling at first, but let me explain…
C# 3.0 introduced the new concept of Expressions/expression trees. With it, code can be treated as a kind of syntax tree inside the program. Developers can build up such trees with additional code portions and then compile them into executable code that can be invoked.

Therefore, Expressions seem to give us benefits for filtering similar to those of specifications. By using Expressions, we are able to define a single Filter() method on the repository which takes an appropriate Expression as argument. Consumers of the repository can call this method with their custom filter Expressions, and thus we get the same benefits: loose coupling of the filter logic from the objects to be filtered, flexibility for callers to provide custom filter criteria, and the possibility to combine several filter criteria (several Expressions).
One could argue that with specifications we get better encapsulation of the filter logic into separate classes. But this can be done with Expressions as well (e.g. by separate wrapper classes or dedicated Expression providers), so that isn't a good argument…

The interface of a repository of persons with a filter that uses Expressions might look like this:

interface IPersonRepository
{
    IEnumerable<Person> Filter(Expression<Predicate<Person>> filterTerm);
    // ...
}
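
A minimal caller sketch (assuming a Person with Age and City properties as in the specification sketch above): the lambda is turned into an expression tree that a repository implementation can inspect or translate (e.g. into SQL).

using System.Collections.Generic;

public class PersonQueries
{
    private readonly IPersonRepository _repository;

    public PersonQueries(IPersonRepository repository)
    {
        _repository = repository;
    }

    public IEnumerable<Person> FindAdultsInFrankfurt()
    {
        // The filter criteria travel as an expression tree, not as opaque
        // compiled code, so the repository can reason about them.
        return _repository.Filter(p => p.Age >= 25 && p.City == "Frankfurt");
    }
}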

This leads again to the question: do we still need the Specification Pattern, or is it already built into the language (C# 3.0) itself in the form of Expressions?

Pattern focus

First I want to mention that the intent of a pattern is not (only) to provide a technical solution, but also a conceptual guideline for a problem at hand. Thus Expressions as a technical solution don't cover the intent of the Specification Pattern. The Specification Pattern can be implemented with the help of Expressions, but that is a separate point. So directly comparing the Specification Pattern with Expressions is like comparing apples and oranges.

Encapsulation focus

Let's come to use cases where I would prefer dedicated specification classes over using Expressions directly.
First, as a team lead you might want to enforce the use of dedicated specification classes in your development team (and thus enforce encapsulation of the filter logic). By passing Expressions or Predicates to filter methods directly, developers would be allowed to define filter expressions wherever they want. You have no control over this process – and please don't rely on your project guidelines: if developers are allowed to do something wrong, they will do it.

With specifications you can require your team members to write specification classes for their requirements, which enforces encapsulation. Furthermore, you drive reuse of existing specification classes. Other developers are encouraged to use existing specifications, especially because of the additional effort of writing new ones. By combining several specifications, the reusability aspect is pushed even further.

Domain model focus

Giving consumers the full flexibility of defining custom filter criteria with specifications can be very convenient in many situations. In other scenarios, however, instead of giving full flexibility you may want to restrict the set of filters for your domain model. If you focus on your domain, you perhaps want a restricted set of specifications, each with a very specific meaning in your domain! Binding such specifications to your domain model makes the purpose of your domain, and how it can/should be used in terms of filtering, much clearer.

Technically, in order to restrict access to a particular set of specifications, you could create sealed specification classes inside your domain model services (the same layer as the repository). Consumers of the repository would be allowed to call the Filter() method on the repository with those specifications (and compositions of them), but they would not be able to create new specifications as long as you don't make the specification interface public. This way you get two characteristics: restricted filtering that fits your domain and your use cases on the one hand, and encapsulation/separation of the filter logic from the entities to be filtered on the other.
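
One possible realization of this idea, as a rough sketch (the names are hypothetical, and instead of a non-public interface an abstract base class with an internal constructor plays the role of the closed specification contract, so that a public Filter() signature still compiles):

using System.Collections.Generic;

// Domain-model services layer: consumers can use and compose the offered
// specifications, but cannot define new ones, because the constructor and
// the IsSatisfiedBy member are internal to the domain assembly.
public abstract class PersonSpecification
{
    internal PersonSpecification() { }

    internal abstract bool IsSatisfiedBy(Person candidate);

    public PersonSpecification And(PersonSpecification other)
    {
        return new AndPersonSpecification(this, other);
    }

    public static PersonSpecification LivesIn(string city)
    {
        return new CityPersonSpecification(city);
    }
}

internal sealed class CityPersonSpecification : PersonSpecification
{
    private readonly string _city;

    public CityPersonSpecification(string city)
    {
        _city = city;
    }

    internal override bool IsSatisfiedBy(Person candidate)
    {
        return candidate.City == _city;
    }
}

internal sealed class AndPersonSpecification : PersonSpecification
{
    private readonly PersonSpecification _left;
    private readonly PersonSpecification _right;

    public AndPersonSpecification(PersonSpecification left, PersonSpecification right)
    {
        _left = left;
        _right = right;
    }

    internal override bool IsSatisfiedBy(Person candidate)
    {
        return _left.IsSatisfiedBy(candidate) && _right.IsSatisfiedBy(candidate);
    }
}

The repository's Filter() method would then accept a PersonSpecification, e.g. repository.Filter(PersonSpecification.LivesIn("Frankfurt")), and callers outside the domain layer could only combine the specifications that the domain explicitly offers.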

Bottom line

This article started with an interesting and provocative question: is the Specification Pattern obsolete? The question came up when looking at Expressions in C# 3.0. Technically you can achieve similar results by using Expressions instead of implementing specification classes by hand. But as the last sections have shown, in my opinion the Specification Pattern is not obsolete! As a pattern it adds very specific value on top of the plain technical solution once you take encapsulation and your domain into account. Then it clearly goes beyond the technical aspect that many developers see at first sight.

Those are my current thoughts on specifications, but of course my opinions are not carved in stone. What do you think? Which points have I missed? I'm eager to read your comments!


My Top-3 development tools

Currently, MSDN Germany is running a blog parade and asking developers to disclose their top 3 development tools. I will follow this request, so here are my current top 3 development tools that make a developer's life really easier. They should be seen as additions to Visual Studio 2008. VS is a really wonderful tool and I would not want to miss IntelliSense, TFS integration or debugging support anymore. But it's not perfect, and that's where these tools come on stage:

  1. JetBrains ReSharper:
    I just started with this brilliant tool, and from the first moment on I loved this beautiful piece of software. ReSharper is a Visual Studio plugin that helps you manage, refactor and navigate your code more easily. Visual Studio lacks functionality in these areas, and ReSharper bridges the gap. Some examples: if you are heavily using interfaces, navigating through your code with Visual Studio alone is painful, because pressing F12 only takes you to the interface, not to the implementations. With ReSharper you can press Ctrl+Alt+B and go directly to the implementation if there is just one; if there are several, you can choose which one to navigate to. Other navigation features include: go to declaration, implementation, inheritors, find usages, … – it really increases productivity! Another example is the great refactoring support; Visual Studio has to hide in shame by comparison. One remark: CodeRush Xpress is another nice refactoring tool which supports you in code navigation, too. It is somewhat limited in the free Xpress edition, but it is always nice to have if ReSharper is not an option.
  2. NDepend Pro:
    As another great tool, NDepend helps you analyze your own code or existing code bases from other developers. It shows you dependencies between solution projects and runs default queries on the code, supporting many popular code metrics. NDepend can be run on projects without Visual Studio, but it also integrates smoothly into the VS context menus. Furthermore, you are able to define your own code queries, for example to find all classes that have more than a certain number of lines, or all methods with a high cyclomatic complexity. Thanks to CQL (the code query language), such queries are very flexible and quickly created.
  3. GhostDoc:
    Document your code! Many programmers, and even experienced developers, have problems with this demand. GhostDoc supports you in this task. It integrates into Visual Studio, and if you press Shift+Ctrl+D on any piece of code, you get a documentation template that is created automatically from the component's name and its data type (or return type). In some cases this default documentation is sufficient, but mostly you have to adapt it. GhostDoc helps you by giving you a template, but you remain responsible for documenting what your code actually does! Don't rely on automatic documentation alone, since it doesn't do the job by itself. Additionally, GhostDoc can assist you when your code changes: if you change the signature of a previously documented method, GhostDoc can look at the changes and add new parameters to the documentation or delete old ones. I wouldn't want to miss this brilliant tool – and best of all: it's free!

That's it from my side. I would have liked to include more tools, like .NET Reflector, but these are just my top 3. I'm curious: what are your top tools in development, and why?

Learning Architecture

Golo Roden has written a nice blog post in which he describes how to approach the topic of software architecture. In my opinion Golo addresses several things that are quite nice for getting started and that lower the entry barrier. I, too, have asked myself (and keep asking myself) how I can become a better architect (or an architect at all ;-)). In my opinion, people who simply have a broad vocabulary on the topic are very helpful: over time you pick up terms related to architecture from them and then search the internet for further information on the respective topics. But the best teacher is still practice. You simply have to gather experience yourself (over years) to be able to determine which things work, which don't, and why they do or don't. This questioning is essential and indispensable for being able to defend your solution in front of others later on.

Finally, one more tip: when it comes to architecture you should 1.) never insist stubbornly on your own opinion and, above all, 2.) adapt your design to the application to be built. Regarding 1.), you should be able to justify why an architecture is bad, and with reasons that are actually relevant to the current project. Every project is different and has different constraints; the question of what a "good" architecture is, is therefore highly dynamic and varies from case to case! I am in the middle of a learning process here myself, which makes it all the more important to me to pass this experience on as a tip. Regarding 2.), you should take the KISS principle to heart: keep it solid and simple! A fancy architecture only makes sense if the project actually requires it. Being able to live out your creativity is no reason to indulge in wild architectural gimmicks, e.g. by using mocking or dependency injection frameworks. Interchangeability of components such as data access, for example, is one of those gimmicks that is only sensible in a few projects. Only when a functional requirement concretely demands it should an aspect flow into the architectural design. You should keep asking yourself: do I need this at all, and if so, why, and could it perhaps be done differently or more simply? It also helps to imagine that you have to defend your solution and its individual parts in front of your client. If the only remaining argument is "well, this is now nicely interchangeable", although the requirements don't call for it, then at the latest you should ask yourself whether you are doing something wrong…

As I said, I see myself becoming an architect in the long run and find the whole topic very exciting, but two months after finishing my studies I am still pretty much at the beginning myself 🙂

The Evolution of a Developer

Jeremy Miller has written a really interesting post on his blog about the personal evolution of a programmer. I recognized myself in it more than once and saw which stage of the evolution I am currently at. Definitely worth reading!

Mesh me up!

Microsoft's Live Mesh is now finally officially available as a beta version in Germany, too. Reason enough to take a closer look at this web application… Live Mesh (or rather its current front end) is (for now!) primarily an online service for synchronizing data between several computers. Especially for me, owning three computers and using them alternately, such a solution comes just at the right time. Data (e.g. Visual Studio projects) that you always want to have up to date on all of your machines, without accessing it via a LAN server for example, can be synchronized quickly this way.

Live Mesh

With your Windows Live ID you can quickly sign up for the Mesh service and create a/your mesh. The mesh basically comes with a web interface through which new folders can be created and new computers/devices can easily be added to the existing mesh. In addition, the "Live Desktop" provides a kind of desktop in the browser that lets you open files and even view pictures and play video/audio files. Data is stored centrally, i.e. the data to be synchronized resides on a server in a Microsoft data center, with 5 GB of free storage per account. You should perhaps not store sensitive data there, depending on how much you trust Microsoft 🙂 This is certainly also the biggest sticking point of the application, since not every user wants to keep his data on a Microsoft server…

On the client side, a small desktop application is installed on every computer that is supposed to participate in the data synchronization. After adding the computer to the mesh via the web platform, you can quickly select the folders to synchronize through this application. In total, each client gets a local copy of the folders stored in the mesh. You can work on this copy and modify files. The client application then automatically takes care of uploading changes to the central server and distributing them to the other connected devices. This ensures that the files stay up to date without any action by the user. It could hardly be made more user-friendly. Currently Windows and Mac computers can be added to the mesh; soon it should also be possible to integrate mobile phones.

Only for synchronization?

There is plenty of synchronization software, but Live Mesh offers a bit more. Via the Live Desktop you can also access files and add new ones from computers that do not participate in the synchronization. The web application is written entirely in Silverlight and is accordingly intuitive and user-friendly. Furthermore, you can invite others and grant them access to a folder. This makes it easy to share certain folders and keep them in sync at the same time. A news page informs you about recent changes and also lets you leave your own messages. Another interesting option is the possibility to grant remote access via Live Mesh: computers that are online can be remote-controlled by you through Live Mesh, if you choose to allow it.

Strategy for the future

Microsoft already has a synchronization product in its portfolio with Groove, so what is the point of Mesh, and what makes it special? Overall, you can safely regard Live Mesh – or rather what can be seen of it so far – as the tip of a huge iceberg. The real goal is the unification of four elements: applications, devices, data and people. Unifying the access to and the exchange between these elements is the aim of Live Mesh. Under the hood sits the Windows Azure platform, on top of which the so-called "Live Services" are built (see the illustration below). These provide services for Live Mesh, and the Live Framework lets you conveniently program against them. All of this looks very elegant and simple. For example, to retrieve all devices belonging to the mesh, the following lines are enough: "var livefx = new LiveOperatingEnvironment(); livefx.Connect(); foreach(var dev in livefx.Mesh.Devices) …". Besides devices, there are also general mesh objects, data feeds and data entries, contacts, profiles, news, media resources etc. The Live Framework is already available as a CTP, but there is a waiting list for the download, which is probably quite long (I am still waiting…). Microsoft has also already started writing some applications on top of the Live Framework. With Tracker, a "demo app" for task management has already been presented. But just imagine what is possible with this technology! Your own applications can easily use all mesh features and make data available to other devices via the mesh. Applications are also conceivable that run both on the Live Desktop and on the client machines and are thus reachable from anywhere.
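
The snippet quoted above, reformatted as a block (a sketch based solely on the lines given in this post; the exact Live Framework CTP API may differ):

// Retrieve all devices that belong to the mesh (Live Framework CTP).
var livefx = new LiveOperatingEnvironment();
livefx.Connect();

foreach (var dev in livefx.Mesh.Devices)
{
    // ... work with each device that is part of the mesh ...
}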

I am curious what is coming our way next – get meshified 🙂