Explicit Property Memento

Introduction

Some weeks ago I introduced the Property Memento pattern as a solution for temporarily saving the state of an entity’s property when the Property Memento is created and restoring this initial state when the Property Memento is disposed. I showed that the using(){} syntax separates the Property Memento initialization from the core logic which should run inside the Memento with temporarily changed property values.

While I like the pattern and its usage, I can see that the using syntax may be confusing for developers who don’t know the internal implementation of the Property Memento. The implementation mixes the familiar using syntax with different semantics, which violates the uniformity principle of computer science.

A colleague suggested an implementation of the pattern, which makes the three steps in a Property Memento’s lifetime explicit:

  1. Memorize the current property values.
  2. Invoke some actions.
  3. Restore the memorized values.

I’ve adapted his implementation to fit my need of memorizing properties of more than one entity.

Explicit Property Memento implementation

The implementation of the Explicit Property Memento maps the three steps of a Property Memento to the methods Memorize(), Invoke() and RestoreValues(). Thus the intention is revealed, which yields a better understanding of the whole process. Instead of memorizing one property of just a single entity on creation of the Property Memento, the ExplicitPropertyMemento is able to store several property values of more than one entity. This makes the implementation a bit heavier, but the usage very convenient. First, here’s the implementation:

public class ExplicitPropertyMemento
{
    class MemorizedProperty
    {
        public string Name { get; set; }
        public object Value { get; set; }
    }

    private readonly IDictionary<object, List<MemorizedProperty>> _memorizedProperties
        = new Dictionary<object, List<MemorizedProperty>>();

    public static ExplicitPropertyMemento Create
    {
        get { return new ExplicitPropertyMemento(); }
    }

    public ExplicitPropertyMemento Memorize(
        object classInstance, params Expression<Func<object>>[] propertySelectors)
    {
        if(propertySelectors == null)
            return this;

        var properties = new List<MemorizedProperty>();
        foreach(var propertySelector in propertySelectors)
        {
            string propertyName = GetPropertyName(propertySelector);
            properties.Add(new MemorizedProperty
                {
                    Name = propertyName,
                    Value = GetPropertyValue(classInstance, propertyName)
                });
        }

        if(_memorizedProperties.ContainsKey(classInstance))
            _memorizedProperties[classInstance].AddRange(properties);
        else
            _memorizedProperties.Add(classInstance, properties);

        return this;
    }

    public ExplicitPropertyMemento Memorize<TProperty>(object classInstance,
        Expression<Func<object>> propertySelector, TProperty tempValue)
    {
        Memorize(classInstance, propertySelector);

        string propertyName = GetPropertyName(propertySelector);
        SetPropertyValue(classInstance, propertyName, tempValue);

        return this;
    }

    public ExplicitPropertyMemento Invoke(Action action)
    {
        try
        {
            action.Invoke();
        }
        catch
        {
            RestoreValues();
            throw;
        }

        return this;
    }

    public void RestoreValues()
    {
        foreach(var memorizedEntity in _memorizedProperties)
        {
            object classInstance = memorizedEntity.Key;
            foreach(var property in memorizedEntity.Value)
                SetPropertyValue(classInstance, property.Name, property.Value);
        }
    }

    private string GetPropertyName(Expression<Func<object>> propertySelector)
    {
        var body = propertySelector.Body;
        if (body.NodeType == ExpressionType.Convert)
            body = ((UnaryExpression)body).Operand;
        return ((MemberExpression)body).Member.Name;
    }

    private object GetPropertyValue(object classInstance, string propertyName)
    {
        return classInstance
            .GetType()
            .GetProperty(propertyName)
            .GetValue(classInstance, null);
    }

    private void SetPropertyValue(object classInstance, string propertyName, object value)
    {
        classInstance
            .GetType()
            .GetProperty(propertyName)
            .SetValue(classInstance, value, null);
    }
}

Usage example

The usage explicitly reflects the Property Memento process through the methods Memorize(), Invoke() and RestoreValues(), and it’s straightforward. For the introductory example it looks like this:

private void SomeMethod(...)
{
    int oldValSumHeight = _valSum.Height;
    int newValSumHeight = 0;

    var newAnchor = AnchorStyles.Top | AnchorStyles.Left;
    ExplicitPropertyMemento.Create
        .Memorize(_valSum, () => _valSum.AutoSize, true)
        .Memorize(_valSum, () => _valSum.Anchor, newAnchor)
        .Memorize(_detailContent, () => _detailContent.Anchor, newAnchor)
        .Invoke(() =>
            {
                _valSum.SetValidationMessages(validationMessages);

                newValSumHeight = _valSum.Height;
                Height = Height - oldValSumHeight + newValSumHeight;
            })
        .RestoreValues();

    _valSum.Height = newValSumHeight;
}

Alternatively, the ExplicitPropertyMemento implementation allows specifying more than one property as arguments for the Memorize() method, if you don’t want to set the new property value directly through this method. Thus you could write code like this:

private void SomeMethod(...)
{
    int oldValSumHeight = _valSum.Height;
    int newValSumHeight = 0;

    ExplicitPropertyMemento.Create
        .Memorize(_valSum, () => _valSum.AutoSize, () => _valSum.Anchor)
        .Memorize(_detailContent, () => _detailContent.Anchor)
        .Invoke(() =>
            {
                _valSum.AutoSize = true;
                _valSum.Anchor = AnchorStyles.Top | AnchorStyles.Left;
                _detailContent.Anchor = AnchorStyles.Top | AnchorStyles.Left;

                _valSum.SetValidationMessages(validationMessages);

                newValSumHeight = _valSum.Height;
                Height = Height - oldValSumHeight + newValSumHeight;
            })
        .RestoreValues();

    _valSum.Height = newValSumHeight;
}

Conclusion

As an alternative to the using syntax, this article has shown an Explicit Property Memento implementation which makes the whole Property Memento process explicit by defining a method for each step of the process. I still like the using syntax because it’s very dense, but in terms of revealing intention I also like the ExplicitPropertyMemento. What’s your opinion? In any case, feel free to choose the implementation you prefer 🙂

Follow-up: Property Memento pattern

In a previous blog post I introduced the Property Memento pattern as a way to temporarily save the values of an entity’s properties and restore the initial values when the memento is disposed.

While I like the pattern itself, I think the syntax can be improved. Here comes an implementation that uses just one type parameter (for the property type) and does without the reflection helper. The static class PropertyMemento helps to create PropertyMemento<T> instances without specifying the type parameter (it’s inferred by the compiler):

public class PropertyMemento<TProperty> : IDisposable
{
    private readonly TProperty _originalPropertyValue;
    private readonly object _classInstance;
    private readonly string _propertyName;

    public PropertyMemento(object classInstance,
        Expression<Func<TProperty>> propertySelector)
    {
        _classInstance = classInstance;
        _propertyName = ((MemberExpression)propertySelector.Body).Member.Name;
        _originalPropertyValue = GetPropertyValue();
    }

    public PropertyMemento(object classInstance,
        Expression<Func<TProperty>> propertySelector, TProperty tempValue)
        : this(classInstance, propertySelector)
    {
        SetPropertyValue(tempValue);
    }

    public void Dispose()
    {
        SetPropertyValue(_originalPropertyValue);
    }

    private TProperty GetPropertyValue()
    {
        return (TProperty)_classInstance
            .GetType()
            .GetProperty(_propertyName)
            .GetValue(_classInstance, null);
    }

    private void SetPropertyValue(TProperty value)
    {
        _classInstance
            .GetType()
            .GetProperty(_propertyName)
            .SetValue(_classInstance, value, null);
    }
}

public static class PropertyMemento
{
    public static PropertyMemento<TProperty> From<TProperty>(object classInstance,
        Expression<Func<TProperty>> propertySelector)
    {
        return new PropertyMemento<TProperty>(classInstance, propertySelector);
    }

    public static PropertyMemento<TProperty> From<TProperty>(object classInstance,
        Expression<Func<TProperty>> propertySelector, TProperty tempValue)
    {
        return new PropertyMemento<TProperty>(classInstance, propertySelector, tempValue);
    }
}

And now its usage in the same scenario shown in the original blog post:

private void SomeMethod(...)
{
    int oldValSumHeight = _valSum.Height;
    int newValSumHeight = 0;

    AnchorStyles newAnchor = AnchorStyles.Top | AnchorStyles.Left;
    using (PropertyMemento.From(_valSum, () => _valSum.AutoSize, true))
    using (PropertyMemento.From(_valSum, () => _valSum.Anchor, newAnchor))
    using (PropertyMemento.From(_detailContent, () => _detailContent.Anchor, newAnchor))
    {
        _valSum.SetValidationMessages(validationMessages);

        newValSumHeight = _valSum.Height;
        Height = Height - oldValSumHeight + newValSumHeight;
    }

    _valSum.Height = newValSumHeight;
}

As said before: just a small change to the original code which slightly improves the implementation and usage of the pattern…

Simple message-based Event Aggregator

This blog post is more about implementation than in-depth description or background information. It covers an implementation of a simple Event Aggregator that I developed and used in a non-trivial Silverlight project and found quite useful. A word of warning: it’s not fully fledged and not suited for all scenarios… and it’s not intended to be.

(Little) background

(Image: Event broker linkage)

The first time I saw an implementation of the event aggregator pattern was in the Prism framework. I like the idea behind this pattern: it decouples event publishers and subscribers. It acts as a kind of hub. Each time a publisher publishes an event, it goes into the hub and is then redirected to the subscribers. The publishers/subscribers don’t have to know each other; they just have to know the concrete event aggregator, which acts as a mediator between both parties. Thus the event aggregator realizes a useful indirection mechanism.

The event aggregator can be used in many scenarios and situations. I’ve mainly used it in UI-related scenarios, but it’s not limited to those. In UI situations it helped me out, e.g. with view synchronization and indirect communication: between multiple user controls/views, between views and view models (in both directions) or between views and controllers. They don’t have to know each other and thus can be loosely coupled.

While I came across the Prism event aggregator first, after some investigation I didn’t like its implementation very much from a client’s usage perspective. You first have to get an event from the event aggregator, and only then can you subscribe to this event or publish it with some event arguments. This is a kind of duplicated work: if I know the type of the event arguments, why do I still have to know the event (under the assumption that there’s a 1:1 relationship between both)? Others have run into this issue as well. A better approach in my opinion is a purely message-based event aggregator: a system that doesn’t distinguish between events and event arguments, but is based on messages people can subscribe to and publish.
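
For comparison, the criticized two-step usage looks roughly like this in Prism (a sketch from memory, simplified; MyEvent and MyPayload are hypothetical types, and the exact base class name depends on the Prism version):

// Hypothetical Prism-style event and payload types:
// public class MyPayload { }
// public class MyEvent : CompositePresentationEvent<MyPayload> { }

// Step 1: get the event object; step 2: subscribe to or publish it.
// eventAggregator.GetEvent<MyEvent>().Subscribe(payload => { /*...*/ });
// eventAggregator.GetEvent<MyEvent>().Publish(new MyPayload());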

Implementation

Let’s come to the bits’n’bytes of my implementation. My message-based event aggregator should be able to handle messages of type IMessage. This is just an empty marker interface for type-correctness in the other components. Additionally it makes the message type explicit, which I like because a message should be a very specific type of your application domain:

public interface IMessage { }
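
For illustration, a concrete message could look like this (MyMessage is a hypothetical type, also used in the usage examples further below):

public class MyMessage : IMessage
{
    // Domain-specific payload of the message, e.g.:
    public string SomeValue { get; set; }
}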

Another important component is the ISubscription<TMessage> interface and its default implementation Subscription<TMessage>. A subscription is what the event aggregator stores internally when an action is subscribed to a message. The subscription process results in an ISubscription<TMessage> object, which is returned to the caller. With this subscription object the caller is able to unsubscribe from the event aggregator – there’s no need to keep a reference to the subscribed action, which is handy e.g. in situations where you want to use anonymous methods (via delegates or lambdas). Furthermore, when the subscription is disposed, it unsubscribes itself from the event aggregator, which I found quite useful:

public interface ISubscription<TMessage> : IDisposable
    where TMessage : IMessage
{
    Action<TMessage> Action { get; }
    IEventAggregator EventAggregator { get; }
}

public class Subscription<TMessage> : ISubscription<TMessage>
    where TMessage : IMessage
{
    public Action<TMessage> Action { get; private set; }
    public IEventAggregator EventAggregator { get; private set; }

    public Subscription(IEventAggregator eventAggregator, Action<TMessage> action)
    {
        if(eventAggregator == null) throw new ArgumentNullException("eventAggregator");
        if(action == null) throw new ArgumentNullException("action");

        EventAggregator = eventAggregator;
        Action = action;
    }

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);
    }

    protected virtual void Dispose(bool disposing)
    {
        if(disposing)
            EventAggregator.UnSubscribe(this);
    }
}

Note the reference to EventAggregator in the interface and its use in the implementation. This is necessary for the disposing functionality. Of course, if you don’t want this behavior in your scenario, you can adapt the implementation.
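
For instance, the disposing functionality makes scoped subscriptions possible (a minimal sketch, using the hypothetical MyMessage type from above):

// The handler only receives MyMessage publications inside this block;
// disposing the subscription unsubscribes it from the event aggregator.
using (eventAggregator.Subscribe<MyMessage>(m => { /* handle message */ }))
{
    eventAggregator.Publish(new MyMessage());  // handler is invoked
}
eventAggregator.Publish(new MyMessage());      // handler is NOT invoked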

Finally, the IEventAggregator interface and its default implementation EventAggregator handle the whole message publish/subscribe mechanism. Clients can Subscribe() to specific types of messages with custom actions, which are stored in ISubscription<TMessage> objects. Those clients can UnSubscribe() if they own the ISubscription<TMessage> object. Other clients can Publish() concrete messages, whereupon the subscribers of the message get notified:

public interface IEventAggregator
{
    void Publish<TMessage>(TMessage message)
        where TMessage : IMessage;

    ISubscription<TMessage> Subscribe<TMessage>(Action<TMessage> action)
        where TMessage : IMessage;

    void UnSubscribe<TMessage>(ISubscription<TMessage> subscription)
        where TMessage : IMessage;

    void ClearAllSubscriptions();
    void ClearAllSubscriptions(Type[] exceptMessages);
}

public class EventAggregator : IEventAggregator
{
    private readonly IDictionary<Type, IList> _subscriptions = new Dictionary<Type, IList>();

    public void Publish<TMessage>(TMessage message)
        where TMessage : IMessage
    {
        if(message == null) throw new ArgumentNullException("message");

        Type messageType = typeof(TMessage);
        if(_subscriptions.ContainsKey(messageType))
        {
            var subscriptionList = new List<ISubscription<TMessage>>(
                _subscriptions[messageType].Cast<ISubscription<TMessage>>());
            foreach(var subscription in subscriptionList)
                subscription.Action(message);
        }
    }

    public ISubscription<TMessage> Subscribe<TMessage>(Action<TMessage> action)
        where TMessage : IMessage
    {
        Type messageType = typeof(TMessage);
        var subscription = new Subscription<TMessage>(this, action);

        if(_subscriptions.ContainsKey(messageType))
            _subscriptions[messageType].Add(subscription);
        else
            _subscriptions.Add(messageType, new List<ISubscription<TMessage>>{subscription});

        return subscription;
    }

    public void UnSubscribe<TMessage>(ISubscription<TMessage> subscription)
        where TMessage : IMessage
    {
        Type messageType = typeof(TMessage);
        if (_subscriptions.ContainsKey(messageType))
            _subscriptions[messageType].Remove(subscription);
    }

    public void ClearAllSubscriptions()
    {
        ClearAllSubscriptions(null);
    }

    public void ClearAllSubscriptions(Type[] exceptMessages)
    {
        foreach (var messageSubscriptions in new Dictionary<Type, IList>(_subscriptions))
        {
            bool canDelete = true;
            if (exceptMessages != null)
                canDelete = !exceptMessages.Contains(messageSubscriptions.Key);

            if (canDelete)
                _subscriptions.Remove(messageSubscriptions.Key);
        }
    }
}

Usage

The usage of this event aggregator implementation is simple and straightforward. Clients can subscribe to messages they’re interested in:

// Option 1: Explicit action subscription
Action<MyMessage> someAction = message => { /*...*/ };
var subscription1 = eventAggregator.Subscribe(someAction);

// Option 2: Subscription via lambda
var subscription2 = eventAggregator.Subscribe<MyMessage>(message => { /*...*/ });

The clients get an ISubscription<TMessage> in return, from which they’re able to unsubscribe:

// Option 1: Unsubscribe by calling the event aggregator method
eventAggregator.UnSubscribe(subscription);

// Option 2: Unsubscribe by calling Dispose() on the subscription object
subscription.Dispose();

Other clients are now able to publish concrete messages, and subscribers get informed about those messages:

eventAggregator.Publish(new MyMessage{ /*...*/ });

How to get an instance of IEventAggregator, you may ask? Well, that’s your decision! Implement a singleton for accessing an instance, use your favorite DI container, whatever…
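
For example, a trivial singleton accessor could look like this (a minimal sketch of the first option; the EventHub class name is my own invention):

// Minimal sketch: one shared IEventAggregator instance for the application.
public static class EventHub
{
    private static readonly IEventAggregator _instance = new EventAggregator();

    public static IEventAggregator Instance
    {
        get { return _instance; }
    }
}

// Usage: EventHub.Instance.Publish(new MyMessage());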

Conclusion

That’s it. A simple message-based event aggregator implementation that can be used in a variety of situations – and which can be replaced by other implementations as well. Perhaps you want to persist or log messages, enable detached subscribers, allow async event processing or add further functionality like load balancing… It’s up to you to provide your own implementation. And feel free to combine it with the Latch pattern 😉
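
To sketch just one of those options: a hypothetical decorator could add logging around the existing implementation without touching it (LoggingEventAggregator is my own illustration, not part of the presented code):

// Decorator: logs each published message type, then forwards all calls.
public class LoggingEventAggregator : IEventAggregator
{
    private readonly IEventAggregator _inner;

    public LoggingEventAggregator(IEventAggregator inner)
    {
        if(inner == null) throw new ArgumentNullException("inner");
        _inner = inner;
    }

    public void Publish<TMessage>(TMessage message)
        where TMessage : IMessage
    {
        Console.WriteLine("Publishing " + typeof(TMessage).Name);
        _inner.Publish(message);
    }

    public ISubscription<TMessage> Subscribe<TMessage>(Action<TMessage> action)
        where TMessage : IMessage
    {
        return _inner.Subscribe(action);
    }

    public void UnSubscribe<TMessage>(ISubscription<TMessage> subscription)
        where TMessage : IMessage
    {
        _inner.UnSubscribe(subscription);
    }

    public void ClearAllSubscriptions()
    {
        _inner.ClearAllSubscriptions();
    }

    public void ClearAllSubscriptions(Type[] exceptMessages)
    {
        _inner.ClearAllSubscriptions(exceptMessages);
    }
}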

While the presented EventAggregator perfectly fit my needs, it’s not intended to be universally applicable. For example, I know that Prism uses WeakReferences to simplify garbage collection. I say it again: feel free to do that in your own implementation. Besides, there are many more syntactic ways to implement event aggregators/brokers. Post your comments if you have further suggestions – you’re welcome!

The Property Memento pattern

Edit: Find an improved implementation of the pattern here.
Edit: Find an explicit implementation of the pattern here.

This blog post shows the Property Memento as a pattern for storing property values and later restoring them. Constructive feedback is welcome anytime!

Problem description

It’s a common task in several scenarios: you want to store the value of an object’s property, then temporarily change it to perform some actions, and at the end restore the original property value again. Especially with UI-related tasks you run into this situation quite often.

The last time I came across this requirement was only some days/weeks ago, and it was a UI task as well. I created a generic detail view in WinForms which was built up by Reflection from an object’s properties and optional metadata information. This form contained a control for the object’s values on the one side and a validation summary control for showing validation messages on the other. The validation summary had to change its content and layout dynamically while performing validation.

Due to the dynamic nature of the form I unfortunately couldn’t use the AutoSize feature and thus had to lay out the controls manually (calculating and setting control sizes etc.). But I still wanted to use some AutoSize functionality, at least for the validation summary. And here’s the deal: every time the validation messages on the summary change, the control should auto-size to fit its content. With the new size I recalculate the Height of the whole form. This can be done by temporarily setting the AutoSize property manually. Additionally it’s necessary to temporarily set the Anchor property of the details control and the validation summary to complete the layouting process correctly.

Normally in the past I would have used a manual approach like this:

private void SomeMethod(...)
{
    int oldValSumHeight = _valSum.Height;
    int newValSumHeight = 0;

    bool oldValSumAutoSize = _valSum.AutoSize;
    AnchorStyles oldValSumAnchor = _valSum.Anchor;
    AnchorStyles oldDetailAnchor = _detailContent.Anchor;

    _valSum.AutoSize = true;
    _valSum.Anchor = AnchorStyles.Left | AnchorStyles.Top;
    _detailContent.Anchor = AnchorStyles.Left | AnchorStyles.Top;

    _valSum.SetValidationMessages(validationMessages);

    newValSumHeight = _valSum.Height;
    Height = Height - oldValSumHeight + newValSumHeight;

    _valSum.AutoSize = oldValSumAutoSize;
    _valSum.Anchor = oldValSumAnchor;
    _detailContent.Anchor = oldDetailAnchor;

    _valSum.Height = newValSumHeight;
}

What verbose and dirty code for such a simple task! Saving the old property values, setting the new values, performing some action and restoring the original property values… It’s even hard to find the core logic of the method. Oh dear! While this solution works, it has really poor readability. Moreover, it doesn’t deal with exceptions that could occur while the actions in between are performed. Thus the UI could be left in an inconsistent layout state and presented to the user that way. What a mess! But what’s the alternative?

The Property Memento

Better solution? What do you think about that (edit: find a better implementation here):

private void SomeMethod(...)
{
    int oldValSumHeight = _valSum.Height;
    int newValSumHeight = 0;

    using (GetAutoSizeMemento(_valSum, true))
    using (GetAnchorMemento(_valSum, AnchorStyles.Top|AnchorStyles.Left))
    using (GetAnchorMemento(_detailContent, AnchorStyles.Top|AnchorStyles.Left))
    {
        _valSum.SetValidationMessages(validationMessages);

        newValSumHeight = _valSum.Height;
        Height = Height - oldValSumHeight + newValSumHeight;
    }

    _valSum.Height = newValSumHeight;
}

private PropertyMemento<Control, bool> GetAutoSizeMemento(
    Control control, bool tempValue)
{
    return new PropertyMemento<Control, bool>(
        control, () => control.AutoSize, tempValue);
}

private PropertyMemento<Control, AnchorStyles> GetAnchorMemento(
    Control control, AnchorStyles tempValue)
{
    return new PropertyMemento<Control, AnchorStyles>(
        control, () => control.Anchor, tempValue);
}

Notice that the logic of SomeMethod() is exactly the same as in the first code snippet. But now the responsibility of storing and restoring property values is encapsulated in Memento objects which are utilized inside a using(){ } statement. GetAutoSizeMemento() and GetAnchorMemento() are just two simple helper methods to create the Memento objects; they support readability in this blog post, but nothing more…

So how is the Property Memento working conceptually? On creation of the Property Memento it stores the original value of an object’s property. Optionally it’s possible to set a new temporary value for this property as well. During the lifetime of the Property Memento the value of the property can be changed by a developer. Finally when Dispose() is called on the Property Memento, the initial property value is restored. Thus the Property Memento clearly encapsulates the task of storing and restoring property values.

The technical implementation of the Property Memento in C# uses Reflection and is shown below (edit: find a better implementation here):

public class PropertyMemento<TClass, TProperty> : IDisposable
    where TClass : class
{
    private readonly TProperty _originalPropertyValue;
    private readonly TClass _classInstance;
    private readonly Expression<Func<TProperty>> _propertySelector;

    public PropertyMemento(TClass classInstance,
        Expression<Func<TProperty>> propertySelector)
    {
        _classInstance = classInstance;
        _propertySelector = propertySelector;
        _originalPropertyValue =
            ReflectionHelper.GetPropertyValue(classInstance, propertySelector);
    }

    public PropertyMemento(TClass memberObject,
        Expression<Func<TProperty>> memberSelector, TProperty tempValue)
        : this(memberObject, memberSelector)
    {
        SetPropertyValue(tempValue);
    }

    public void Dispose()
    {
        SetPropertyValue(_originalPropertyValue);
    }

    private void SetPropertyValue(TProperty value)
    {
        ReflectionHelper.SetPropertyValue(
            _classInstance, _propertySelector, value);
    }
}

This implementation uses the following ReflectionHelper class:

static class ReflectionHelper
{
    public static PropertyInfo GetProperty<TEntity, TProperty>(
        Expression<Func<TProperty>> propertySelector)
    {
        return GetProperty<TEntity>(GetPropertyName(propertySelector));
    }

    public static PropertyInfo GetProperty<T>(string propertyName)
    {
        var propertyInfos = typeof(T).GetProperties();
        return propertyInfos.First(pi => pi.Name == propertyName);
    }

    public static string GetPropertyName<T>(
        Expression<Func<T>> propertySelector)
    {
        var memberExpression = propertySelector.Body as MemberExpression;
        return memberExpression.Member.Name;
    }

    public static TProperty GetPropertyValue<TEntity, TProperty>(
        TEntity entity, Expression<Func<TProperty>> propertySelector)
    {
        return (TProperty)GetProperty<TEntity, TProperty>(propertySelector)
            .GetValue(entity, null);
    }

    public static void SetPropertyValue<TEntity, TProperty>(TEntity entity,
        Expression<Func<TProperty>> propertySelector, TProperty value)
    {
        GetProperty<TEntity, TProperty>(propertySelector)
            .SetValue(entity, value, null);
    }
}

As you can see, the Property Memento implementation is no rocket science. The constructor gets the class instance and an Expression which selects the property from this instance. Optionally you can provide a property value that should be set temporarily in place of the original value. Getting and setting the property value is done via the ReflectionHelper class. For the sake of brevity the implementation doesn’t have a good error handling mechanism; you could add your own checks if you want. What I like is the use of generics. This eliminates many error sources and guides the developer in using the Property Memento.
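
A minimal form of such checks could be guard clauses in the first constructor (my own addition, not part of the original implementation):

public PropertyMemento(TClass classInstance,
    Expression<Func<TProperty>> propertySelector)
{
    // Guard clauses as a simple error handling mechanism:
    if(classInstance == null) throw new ArgumentNullException("classInstance");
    if(propertySelector == null) throw new ArgumentNullException("propertySelector");

    _classInstance = classInstance;
    _propertySelector = propertySelector;
    _originalPropertyValue =
        ReflectionHelper.GetPropertyValue(classInstance, propertySelector);
}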

Really a pattern and a Memento?

I’ve used the Property Memento now in several different situations with success and thus I think it’s justified to call it a pattern.

But is it a Memento as well? Wikipedia says about the Memento pattern:

„The memento pattern is a software design pattern that provides the ability to restore an object to its previous state […]“

And this is true for the Property Memento, where the „object“ is a property value on a class instance! The „previous state“ is stored on creation of the Memento, and „restore an object“ happens on the call to Dispose().

Finally

This blog post has shown the Property Memento as a pattern for storing original property values and later restoring them.

Compared to the „manual way“, the Property Memento has valuable advantages. It encapsulates the responsibility of storing an original value and restoring it when the using() block is finished. Client code remains clean and is condensed to the core logic, while delegating responsibilities to the Property Memento through an explicit using(){ } block. Thus the code becomes much more concise and readable. Last but not least, the Property Memento employs an automatic exception handling mechanism. If an exception occurs inside a using() block of the Property Memento, the Dispose() method is still called. Thus the Property Memento restores the original property state even in exceptional situations, and the user gets a consistently laid-out UI.

Latch me if you can!

In my first blog post I showed a common UI problem when it comes to event handling: programmatic vs. user-triggered events on the one hand, and infinite loops due to recursive event chains on the other. With event detach/attach and boolean flags I showed two simple solutions which are very common in many development projects. But they have their shortcomings and just don’t feel natural. This blog post shows an alternative which I found simple and beautiful.

Latch to the rescue

Jeremy Miller introduced the Latch design pattern as a possible solution for the problem at hand. It’s a tiny, beautiful, yet little-known solution, and the latter is a pity. A Latch encapsulates the logic of executing an action depending on the current state of the Latch. If the Latch is in the „Latched“ state, no action is executed. And actions can be executed inside of the Latch (changing the state to „Latched“), preventing other actions from executing.

Latch #1: Disposing Latch

Before reading Jeremy’s post, I had made up a sort of Latch pattern for myself. Here the Latch implements IDisposable; the Latched state is set on creation of the Latch and reset by a call to Dispose(). This allows the use of the using() { } syntax, and the Latch state is reset automatically when exceptions occur. The Latch class would look like this:

public class Latch : IDisposable
{
    public bool IsLatched { get; private set; }

    public Latch()
    {
        IsLatched = true;
    }

    public void RunIfNotLatched(Action action)
    {
        if (IsLatched)
            return;

        action();
    }

    public void Dispose()
    {
        IsLatched = false;
    }

    public static Latch Latched
    {
        get { return new Latch(); }
    }
    public static Latch UnLatched
    {
        get { return new Latch { IsLatched = false }; }
    }
}

The RunIfNotLatched() method is just a little helper which executes an action depending on the current state of the Latch. The actual application of the Latch to the example code from the previous post is shown here:

public partial class SomeControl : UserControl
{
    // ...

    private Latch _textSetByCodeLatch = Latch.UnLatched;

    private ViewData _viewData;
    public ViewData ViewData
    {
        get { return _viewData; }
        set
        {
            _viewData = value;
            if (value != null)
            {
                using (_textSetByCodeLatch = new Latch())
                {
                    _someTextBox.Text = value.SomeValue;
                }
                // other operations
            }
        }
    }

    private void OnSomeTextBoxTextChanged(object sender, EventArgs e)
    {
        _textSetByCodeLatch.RunIfNotLatched(() =>
        {
            // perform data/view update operations
        });
    }
}

At first sight I liked the using() { } syntax. It frees the application code from manually resetting the Latch to the UnLatched state. At second sight I think there could be a cleaner solution. In my Latch implementation the using() { } syntax is kind of misused and could lead to irritation, because it requires implicit knowledge about the internal functionality of the Latch. Again, the intent is not explicitly revealed.

Latch #2: Boolean Latch

A cleaner solution, with explicit methods for running actions inside of the Latch and for skipping action execution while other actions have entered the Latch, could be the following Latch implementation:

public class Latch
{
    public bool IsLatched { get; private set; }

    public void RunLatched(Action action)
    {
        try
        {
            IsLatched = true;
            action();
        }
        finally
        {
            IsLatched = false;
        }
    }

    public void RunIfNotLatched(Action action)
    {
        if (IsLatched)
            return;

        action();
    }
}

Here the basic boolean logic matches the Disposing Latch, but the syntax has changed. The Latch now contains two methods: RunLatched() executes an action inside the Latch and prevents actions given to RunIfNotLatched() from being executed. Here’s the usage of this Latch type in our example:

public partial class SomeControl : UserControl
{
    // ...

    private readonly Latch _textSetByCodeLatch = new Latch();

    private ViewData _viewData;
    public ViewData ViewData
    {
        get { return _viewData; }
        set
        {
            _viewData = value;
            if (value != null)
            {
                _textSetByCodeLatch.RunLatched(() =>
                {
                    _someTextBox.Text = value.SomeValue;
                });
                // other operations
            }
        }
    }

    private void OnSomeTextBoxTextChanged(object sender, EventArgs e)
    {
        _textSetByCodeLatch.RunIfNotLatched(() =>
        {
            // perform data/view update operations
        });
    }
}

Now the Latch has a cleaner and more explicit syntax. And like the Disposing Latch it has a clean exception handling mechanism. That’s the good news. With that, our Boolean Latch is applicable in most simple scenarios. But not in all! Imagine parallel execution of UI actions. Moreover, imagine having two actions which should be run in RunLatched() of the same Latch object – again in parallel:

  1. Action 1 enters RunLatched() and the Latch changes its state.
  2. Action 2 enters RunLatched(), the Latch state remains in IsLatched.
  3. Action 1 leaves RunLatched() and the Latch changes its state to not latched.

Step 3 is the problem. Action 2 is still running inside the Latch, but due to the boolean logic the Latch is no longer latched. Thus other actions given to RunIfNotLatched() are executed, which is no help with the initial problem. This is not only true for the Boolean Latch, but for the Disposing Latch as well.

Latch #3: Counting Latch

This problem is solved by the Counting Latch, which is most similar to Jeremy’s Latch implementation. Instead of having just a boolean discriminator, it employs a counter for parallel RunLatched() calls. The IsLatched state is determined based on this counter. If it’s equal to 0, the Latch is not latched (because no method is currently running inside of RunLatched()). Otherwise the Latch is treated as latched. Here’s the implementation of this Latch variant (edit: thanks nwiersma for the thread-safety hint):

public class Latch
{
    private readonly object _counterLock = new object();

    public int LatchCounter { get; private set; }

    public bool IsLatched
    {
        get { return (LatchCounter > 0); }
    }

    public void RunLatched(Action action)
    {
        try
        {
            lock(_counterLock) { LatchCounter++; }
            action();
        }
        finally
        {
            lock(_counterLock) { LatchCounter--; }
        }
    }

    public void RunIfNotLatched(Action action)
    {
        if (IsLatched)
            return;

        action();
    }
}

The usage in this case is equivalent to the Boolean Latch. Note that each of these Latch implementations works; it’s just a matter of your requirements which of them you want to use. The Counting Latch is the most generic Latch implementation shown above and applies to most situations.

Benefits of using the Latch

Using the Latch for the aforesaid problems has clear advantages over the use of event detach/attach and boolean flags. First, the Latch encapsulates the logic of running actions depending on a state, in this case the current execution of another action. Thus the purpose of a Latch is explicit, in contrast to the implicit intent of e.g. boolean flags. This increases code readability.

The second advantage comes with resetting the initial state. The Latch performs this task itself, for example when an action leaves the RunLatched() method. With boolean flags and event detach/attach this is your task, and it most likely gets problematic if exceptions are thrown inside an action. The Latch takes over the responsibility of automatically rolling back its state when an exception occurs.

In conclusion, the Latch is a pretty simple design pattern which increases readability and feels right for the problem of dependent action execution. At least for myself, I’ve found a nice solution for reacting to UI events depending on the source of the event trigger, and for infinite event loops, without relying on ugly boolean flags or event attach/detach.

A common UI problem

This blog post is all about a common UI-related programming problem. Ever since my first UI-based application I’ve come across this problem and never found a satisfying way to resolve it (until now 😉 ). This first blog post describes the problem and my former solution approaches. In the next blog post I will demonstrate a more proper solution.

The code in these two blog posts uses WinForms and C#, but it should be easily adaptable to other UI technologies as well.

Problem description

It’s a common problem: you want to react to an event like the TextChanged event on a TextBox, but ONLY if the user changed the text and it wasn’t changed programmatically. For example, you want to update some data when the user changes the content of a TextBox, but this update process should not be triggered when the program itself sets some data and thereby updates the text. The starting point for that could be something like the following code:

public partial class SomeControl : UserControl
{
    // ...

    private ViewData _viewData;
    public ViewData ViewData
    {
        get { return _viewData; }
        set
        {
            _viewData = value;
            if (value != null)
            {
                _someTextBox.Text = value.SomeValue;
                // other operations
            }
        }
    }

    private void OnSomeTextBoxTextChanged(object sender, EventArgs e)
    {
        // perform data/view update operations
    }
}

Here the OnSomeTextBoxTextChanged() method is bound to the _someTextBox.TextChanged event. ViewData is a class with arbitrary data that should show up in the UI.
Notice the problem: when the ViewData property is set programmatically, the OnSomeTextBoxTextChanged() method is executed, which is not what was intended (the handler should only run when the user changes the value of the TextBox).

A similar problem arises with infinite loops. Imagine a user changes something in the UI and an event X is fired. You hook this event and start a complex UI workflow, which at the end fires event X again. The process is executed again, and very easily you end up in an infinite loop. If you are a UI developer you would very probably agree that UI event chains are often opaque and can get messy.

Let’s look at some obvious, though not very elegant, solutions.

Approach #1: Temporary event detach

One of those simple solutions is the temporary detachment of the problematic event. For example, if you don’t want to react to the TextChanged event when the TextBox.Text property is set programmatically, you could detach your event handler from TextChanged before setting the Text and attach it again afterwards. This approach is shown in the following example code:

public partial class SomeControl : UserControl
{
    // ...

    private ViewData _viewData;
    public ViewData ViewData
    {
        get { return _viewData; }
        set
        {
            _viewData = value;
            if (value != null)
            {
                _someTextBox.TextChanged -= OnSomeTextBoxTextChanged;
                _someTextBox.Text = value.SomeValue;
                _someTextBox.TextChanged += OnSomeTextBoxTextChanged;

                // other operations
            }
        }
    }

    private void OnSomeTextBoxTextChanged(object sender, EventArgs e)
    {
        // perform data/view update operations
    }
}

While this works, it’s not a recommendable solution, because it has several shortcomings. First, the manual event handling is not very intuitive and doesn’t explicitly reveal the programmer’s intent behind this manual process. Thus your code is more difficult to read for others (and, after some months, for you as well). Moreover, you get undefined states when exceptions are thrown and caught in an outer component (and you forgot a finally which attaches the event again). Then the event handler remains detached and the UI doesn’t work properly afterwards. The whole event detach/attach process gets very messy if you have complex views with many such problematic events. Last but not least, this manual event handling binds your code tightly to the view, and you get into trouble if you want to refactor several parts out.

Approach #2: Boolean flags

A similarly simple approach uses boolean flags which indicate that a value is currently being set programmatically and thus that Changed events should not be handled. The following code shows how this could solve our initial problem:

public partial class SomeControl : UserControl
{
    // ...

    private bool _isTextSetByProgram = false;

    private ViewData _viewData;
    public ViewData ViewData
    {
        get { return _viewData; }
        set
        {
            _viewData = value;
            if (value != null)
            {
                _isTextSetByProgram = true;
                _someTextBox.Text = value.SomeValue;
                _isTextSetByProgram = false;

                // other operations
            }
        }
    }

    private void OnSomeTextBoxTextChanged(object sender, EventArgs e)
    {
        if (!_isTextSetByProgram)
        {
            // perform data/view update operations
        }
    }
}

I think this is the most common solution I’ve seen for the problem. I must admit that I’ve mostly used this approach myself. But it has the same disadvantages as the event detach/attach solution (except the tight view coupling). Boolean variables don’t explicitly show the intent behind them and likewise get messy in complex situations, where you could have dozens of those variables scattered around a view.

So while those solutions are very widespread and work, they just don’t feel right and clean. But what’s the alternative? An interesting little one I will show you in the next post, which is coming shortly.

Some thoughts on Event-Based Components

The German software engineer Ralf Westphal is currently spreading some knowledge about an alternative model for programming components and especially the communication between them. Due to their nature they are called Event-Based Components. After some discussion with colleagues at SDX I want to share some of my thoughts on that with you (of course for further discussion as well).
The aim of Event-Based Components (EBC) is to create software components that are really composable, without specific topological dependencies. You can compare EBCs with elements in electronic circuits. But first things first…

Interface-Based Components style

Normally we develop components in .NET as IBCs: Interface-Based Components. That means client classes have topological and functional dependencies on interfaces (or directly on other classes) which provide some sort of functionality. Well designed, such a dependency can be resolved with a Dependency Injection container like StructureMap:

class Program
{
    static void Main(string[] args)
    {
        // Bind (here: StructureMap)
        ObjectFactory.Initialize(x =>
        {
            x.For<IBusinessClient>().Use<BusinessClient>();
            x.For<IDataAccessComponent>().Use<DataAccessComponent>();
        });

        // Resolve and Run
        IBusinessClient client = ObjectFactory.GetInstance<IBusinessClient>();
        client.BusinessOperation(0);
    }
}

interface IBusinessClient
{
    void BusinessOperation(int personId);
}

class BusinessClient : IBusinessClient
{
    private readonly IDataAccessComponent _dataAccessComponent;

    public BusinessClient(IDataAccessComponent dataAccessComponent)
    {
        _dataAccessComponent = dataAccessComponent;
    }

    public void BusinessOperation(int personId)
    {
        Person p = _dataAccessComponent.GetPerson(personId);
        // do something ...
    }
}

interface IDataAccessComponent
{
    Person GetPerson(int id);
}

class DataAccessComponent : IDataAccessComponent
{
    public Person GetPerson(int id)
    {
        return  // ...some person...
    }
}

That’s pretty standard so far, isn’t it? In Ralf’s opinion this programming style lacks real composability of the components. Due to the topological dependency the client is bound to a specific interface, and not any arbitrary component can provide the functionality. Instead, a component has to implement the specific interface. You’re not able to use components which could provide the functionality but don’t implement the interface…

Event-Based Components style

Ralf suggests Event-Based Components to the rescue. Components in this programming style can be compared to components in electronic circuits. Methods act as input pins of a component and can be called by other components. Events/delegates act as output pins; they establish a connection to other components that should be used by the component, or provide calculation results. The output pins can be bound to any arbitrary method that matches the signature. Thus the dependency is still functional, but no longer topological.
The example above in EBC style could look as follows:

class Program
{
    static void Main(string[] args)
    {
        // Build
        var client = new BusinessClient();
        var dataAccess = new DataAccessComponent();

        // Bind
        client.GetPerson = dataAccess.GetPerson;

        // Run
        client.BusinessOperation(0);
    }
}

class BusinessClient
{
    public Func<int, Person> GetPerson { get; set; }

    public void BusinessOperation(int personId)
    {
        Person p = GetPerson(personId);
        // do something ...
    }
}

class DataAccessComponent
{
    public Person GetPerson(int id)
    {
        return  // ... some person ...
    }
}

This example shows the components BusinessClient and DataAccessComponent interacting as EBCs in a very simple form, using the Func<T> delegate type and thus enabling symmetric communication. Ralf encourages the use of standard input/output pins of type Action<object>, which leads to asymmetric communication, because the DataAccessComponent would need to declare an output pin for providing the Person as the result of GetPerson(). For the sake of simplicity I haven’t followed this principle here.

So the example uses a Func<T> delegate and no event. But you can still think of it as an Event-Based Component, simply because events are nothing more than multicast delegates. I could have used events instead of the simple delegate as well, but I’m fine with this, because I don’t need the functionality of multiple subscribers here.
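
To illustrate the asymmetric style with such pins, a data access component could route its result through an output pin instead of a return value (a minimal sketch; the pin names and the trivial Person type are my own assumptions, not Ralf’s):

class Person { public int Id { get; set; } }

class AsymmetricDataAccessComponent
{
    // Output pin: delivers the result to whatever method is bound to it.
    public Action<Person> PersonLoaded { get; set; }

    // Input pin: receives the request; the result leaves via the output pin.
    public void GetPerson(int id)
    {
        // ... load the person, then publish it through the output pin:
        PersonLoaded(new Person { Id = id });
    }
}

// Bind phase: wire the output pin, then run.
// var dataAccess = new AsymmetricDataAccessComponent();
// dataAccess.PersonLoaded = p => Console.WriteLine(p.Id);
// dataAccess.GetPerson(0);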

As you can see from the example, just like IBCs the EBCs have some kind of initial bootstrapper phase. This is when the components are composed: the output pins of a component’s client (BusinessClient in this example) are connected with the input pins of the component itself (here: DataAccessComponent).

Benefits

When I first saw EBCs I thought: „Dude, this is damn cool, isn’t it?“. Indeed, this kind of programming style feels strange and different at first, and thus for me it’s really interesting. But are there some real benefits as well?

I think one big benefit of EBCs is their composability. A client doesn’t have to know the interface of a component whose functionality it wants to use. A component, on the other side, is not forced to implement an interface to provide some functionality, while loose coupling is still retained. Even without interfaces the components are still independent from each other and have great testability.

Other benefits I see are exchangeability and topological independence. Components are not bound to a specific topological context in the form of interfaces and thus are independent from topological changes to the interfaces. You can exchange components easily by replacing the binding section with any other setup phase and binding other methods to them. In particular, your components are not forced to use (or implement) some kind of interface of which they will perhaps use just one single piece of functionality…

Last but not least, I see a very easy way to intercept calls and add functionality without changing the components themselves. If you use events as output pins you can add more event handlers in the binding phase. Thus you can easily integrate logging, tracing etc. into your applications. Of course you can achieve this with IBCs as well; I just say that EBCs suit those requirements very well.
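
As a sketch of this idea (my own illustration, simplified to an Action<int> pin without a return value): if the output pin is an event, a logging handler can simply be added during the bind phase:

class TracedBusinessClient
{
    // Output pin as an event: multiple handlers can be attached.
    public event Action<int> PersonRequested;

    public void BusinessOperation(int personId)
    {
        PersonRequested(personId);
    }
}

// Bind phase: the functional handler plus a logging interceptor.
// client.PersonRequested += id => LoadPerson(id);
// client.PersonRequested += id => Console.WriteLine("Person " + id + " requested");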

Drawbacks

Besides those benefits in my opinion there are some significant drawbacks as well.

First of all there is the additional complexity which comes with EBCs. Composing EBCs can become complex, at least in projects of significant size. Because methods and events are bound together at a fine granularity instead of interfaces at a coarse one, there have to be many more binding statements. In fact you can think of an event’s signature as a one-method interface that has to be fulfilled by components. Furthermore (again, especially in projects of a reasonable size) you will lose intelligibility and the overview of your system and the component interaction. Any arbitrary component can provide a functionality, and there is no way to navigate between layers and components as clients and suppliers of functionality. Explicit interfaces are much more comprehensible than such „implicit“ specifications. Perhaps in the future there will be tools that simplify composition of and navigation through EBCs, but until there’s such a tool I consider this a serious drawback.

Another drawback of EBCs is the loss of interfaces as formal contracts of coherent functionality. Of course you can define interfaces and let your components implement them, but as long as clients are not compelled to use them, they lose much of their value. Interfaces force components to implement a certain set of functionality completely and make this set explicit. Clients have to refer to this contract explicitly. Explicit contracts lead to intention revealing, and this is a good thing!

Conclusion

So in my opinion EBCs have benefits as well as shortcomings. I think they are worth investigating and knowing, but at the moment I don’t see them becoming established as a replacement for IBCs. First there is the higher complexity, which could perhaps be solved by tools and some sort of „DI container“ for EBCs in the future. But second, being explicit and defining formal contracts through explicit interfaces is no bad thing. Of course it’s not cheap either, but I don’t see that this justifies the application of EBCs on the small scale. On the big scale there are other solutions like BizTalk, NServiceBus etc. to achieve the goal of pluggable components, which offer features like scalability as well. So perhaps there are delimited scenarios for using EBCs (like component topologies that change often), but I would not suggest using them in general.

A look at: Contract Driven Development

Today I want to take a look at this paper (PDF) entitled „Contract Driven Development = Test Driven Development – Writing Test Cases“. Like the paper on SDD, I found this essay during my research for a synergetic development approach that takes both DbC and TDD into account.

The paper was written by A. Leitner et al. at ETH Zurich in 2007, when important steps towards automatic testing had already been made. One of the co-authors of the paper is Bertrand Meyer, the inventor of the Eiffel programming language, which can be seen as the cradle of DbC. Hence it’s not surprising that the paper and the described tool are based on Eiffel.

Content of the paper

The paper describes Contract Driven Development (CDD) as a new approach to lower the effort of writing tests through intensive use of contracts. This approach is based on a mechanism to extract test cases completely automatically from failure-producing runs of a component, where contracts act as the test oracle.

In the introduction the paper takes a short look at current developments towards automatic testing and shows the drawbacks that come with many tools. Automatic testing tools don’t know the semantics of a program or the insights a programmer has, and thus those tools cannot distinguish between meaningful and meaningless inputs. CDD wants to overcome this drawback. It relies on the observation that developers often run implicit test cases when manually executing a program to determine the correct behavior of their code. But many developers don’t extract those test cases as automatic unit tests, and thus those tests cannot be run in a reproducible way.

The paper presents CDD as a method that captures those implicit test cases automatically and extracts them into explicit tests. One important limitation is that it only captures test cases which produce a failure, e.g. through a broken contract or other exceptions. With this approach the resulting test suite contains tests for mistakes made by the developers in the past. The developer builds up this test suite by running the application with corresponding input values. The authors state that the test suite would have similar properties to a suite resulting from TDD, if you write contracts before the implementation of a feature. You can find my personal thoughts on this below.

The paper describes a mechanism by which CDD observes program executions and detects the last uninfected state when a failure occurs. In this case CDD takes a snapshot of this state and a new test case for the failing component is created automatically. The snapshot serves as starting context for the test case by which the failure can be reproduced.

With this process the developer has to provide the inputs triggering a failure only once. Then CDD jumps in, saves the values that led to the failure and extracts a test case to reproduce it. The developer can go forward with his implementation and can choose to fix the bug later on, while the failure is saved as a reproducible test case. Contracts can help a lot in this process since they can act as an oracle for expected behavior and can express assertions that go beyond technical exceptions. Thus the benefit of CDD is mainly driven by an extensive utilization of contracts.

The authors finish with a conclusion of their work:

This article explains the fundamentals of the Contract Driven Development approach. A tool autonomously observes the developer while he is working on a program and extracts test cases from failures either provoked by the developer (in the spirit of test driven development) or by mistake (leading to a regression test). The approach is novel in that complete test cases are extracted not only from the information provided by the system under test, but also from non-permanent clues given by the programmer during development.

My thoughts

Personally, I feel a little ambivalent about the CDD approach. On the one hand I like the idea of making implicit test cases which are run by the developer explicit by extracting them automatically. This includes the developer’s knowledge about the context of an application in the generated test cases and thus goes beyond the technical aspect of automatic testing. It takes some of the work of writing test cases off the developer’s shoulders, but it doesn’t have as many advantages as one may think…

Expected functional behavior can be defined by contracts (postconditions, invariants), which act as the test oracle for CDD. But this is a limitation as well. Some important characteristics of code are difficult to express with contracts, while they can easily be pinned down with explicit unit tests (the aspect of expressiveness). Thus CDD is limited in its expressiveness, too.
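A small example of this gap (my own, not from the paper): a concrete input/output pair is trivial to pin down as a unit test, but a contract precise enough to specify the same behavior would have to restate the whole algorithm.

using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class DateFormattingTests
{
    [TestMethod]
    public void FormatsDatesInGermanNotation()
    {
        // Easy to express as an example-based unit test...
        Assert.AreEqual("24.12.2009",
            new DateTime(2009, 12, 24).ToString("dd.MM.yyyy"));
    }
}

// ...but hard to express as a contract: a postcondition precise enough
// to specify the formatted output would have to restate the complete
// formatting algorithm.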

Furthermore CDD extracts tests for failing runs only. But what about the other side of the coin? For successful runs no test case is generated automatically, which leads to the problem that when the code changes there is no way to re-check the formerly successful runs without performing them manually again. Thus CDD is not suitable for complete regression testing.

Last but not least, the authors state that a test suite created with CDD is very similar to one created with TDD (besides other parallels to TDD). Moreover, the title of the paper boils it down to the simple formula „CDD = TDD – Writing Test Cases“. Here I strongly disagree! CDD and TDD have very different properties. TDD takes things like code design and specification into account, is well-suited for documentation purposes and creates a real regression test suite. None of those aspects is in the scope of CDD, so TDD goes far beyond CDD.

At the end I think CDD is an interesting approach, but its benefits are bought dearly and eaten up by the drawbacks. One of the most serious drawbacks for me personally is the design aspect. If you rely on CDD you are not driving your API design to be clear and lucid. There are no unit tests for which you would want loosely coupled components, and furthermore CDD always runs against your concrete infrastructure.
To give a final recommendation: the only scenario where I would use CDD is automatic test generation when running the software in production. If something fails in production, with CDD you as a developer would get immediate feedback about what went wrong and could immediately reproduce the failure by running the extracted test case. This would be a nice support for bugfixing and really valuable. But for testing purposes in the development process I don’t see a chance for CDD.


A look at: Agile Specification-Driven Development

While investigating the synergy of TDD and DbC, I asked myself whether there are already development processes that combine both principles. During my research I stumbled upon this paper (PDF) named Agile Specification-Driven Development, which I want to look at in this blog post.

Content of the paper

The paper (published in 2004) by J. Ostroff, D. Makalsky and R. Paige describes the agile approach of Specification-Driven Development, which combines features of both TDD and DbC.

The paper starts with an introduction stating that TDD and DbC can enhance each other. Tests and contracts are both kinds of specification, each with its own benefits and drawbacks. This matches my points on the types of specification from some blog posts ago.

Further on, the authors describe plan-driven development and classify DbC as a plan-driven approach. Thus they present DbC as a process coupled to the design phase, where the complete set of requirements is mapped to the software. DbC with its contracts as specifications closes the gap between requirements and code. The following section describes DbC in general with some of its benefits and briefly explains Bertrand Meyer’s „quality-first“ DbC design method.

Chapter 3 describes TDD as an agile development method that replaces plan-driven up-front design by „stressing the development of working code over documentation, models and plans“. Striking aspects and benefits of TDD are shown, as well as tests as a form of specification. The authors make a good point on what they call collaborative tests, which address the interaction/collaboration of code components; collaborative tests are thus related to UML sequence/collaboration diagrams. They go beyond the single-component-based TDD process, which is an important insight. The next section shows the drawbacks of tests as example-driven approaches, which I have shown before as well when discussing the universality of tests and contracts. The authors also describe the „problem“ of contract checking: program verification is a difficult task, and runtime checking of the assertions needs unit tests to stress the contracts. This again mirrors my thoughts on checking correctness.
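To illustrate what such a collaborative test looks like, here is a minimal sketch of my own (not from the paper; all types are hypothetical): the test verifies the interaction between two components instead of a single unit in isolation.

using Microsoft.VisualStudio.TestTools.UnitTesting;

public interface IMailSender
{
    void Send(string recipient, string message);
}

public class OrderService
{
    private readonly IMailSender _mailSender;

    public OrderService(IMailSender mailSender)
    {
        _mailSender = mailSender;
    }

    public void PlaceOrder(string customerMail)
    {
        // ... order processing ...
        _mailSender.Send(customerMail, "Your order has been placed.");
    }
}

// Hand-rolled test double that records the collaboration.
class RecordingMailSender : IMailSender
{
    public string LastRecipient;

    public void Send(string recipient, string message)
    {
        LastRecipient = recipient;
    }
}

[TestClass]
public class OrderServiceCollaborationTests
{
    [TestMethod]
    public void PlacingAnOrderNotifiesTheCustomer()
    {
        var mailSender = new RecordingMailSender();
        new OrderService(mailSender).PlaceOrder("customer@example.com");

        // The assertion targets the interaction, not a return value.
        Assert.AreEqual("customer@example.com", mailSender.LastRecipient);
    }
}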

The SDD process

Chapter 4 contains the description of the Specification-Driven Development (SDD) approach as a combination of TDD and DbC. It starts with a motivation for this combination:

„There are surprising commonalities between TDD and DbC, particularly: both contracts and tests are specifications; both TDD and DbC seek to transform requirements to compilable constructs as soon as possible; both TDD and DbC are lightweight verification methods; both methods are incremental; and both emphasise quality first in terms of units of functionality. We claim that it is not necessary to choose between the two approaches a priori, and that there are substantial benefits to using TDD and DbC together in a project.“

[Figure: statechart of the Agile Specification-Driven Development process]

The picture on the left side shows the statechart of the SDD approach. It’s important to see that SDD doesn’t dictate where to start with development. The authors state that „it is the developer’s choice whether to start with TDD or DbC based on project context“ while „the emphasis is always on transforming customer requirements into compilable and executable code“.

Note that SDD takes three sub-processes into account: DbC on the right, TDD at the bottom and collaborative tests/specifications on the left. You can switch between these three processes at any time; it’s up to your interpretation, and you are responsible for finding a valid workflow…

The authors say that „SDD provides more than TDD or DbC individually, as it eliminates some of the limitations with each approach“. And for them „SDD is more than the sum of TDD and DbC, as there are synergies between the approaches“.

Another point is made on contracts as test amplifiers. Contracts can help drive the production of tests since they state the requirements for calling a code component and the conditions that should hold in return. Tests in turn should validate and invalidate pre- and postconditions to show the correctness and appropriateness of the contracts, as sketched below.
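Here is my own small sketch of this amplifier effect in C# with Code Contracts (not from the paper): each condition suggests at least two tests, one that validates it and one that invalidates it.

using System;
using System.Diagnostics.Contracts;
using Microsoft.VisualStudio.TestTools.UnitTesting;

public static class MathUtil
{
    public static double SquareRoot(double value)
    {
        // The contract amplifies tests: it names the accepted inputs
        // (precondition) and the guaranteed result (postcondition).
        Contract.Requires(value >= 0);
        Contract.Ensures(Math.Abs(
            Contract.Result<double>() * Contract.Result<double>() - value) < 1e-9);

        return Math.Sqrt(value);
    }
}

[TestClass]
public class SquareRootContractTests
{
    [TestMethod]
    public void AcceptedInputSatisfiesThePostcondition()
    {
        // Validates the postcondition for a valid input.
        Assert.AreEqual(3.0, MathUtil.SquareRoot(9.0), 1e-9);
    }

    [TestMethod]
    public void RejectedInputViolatesThePrecondition()
    {
        // Invalidates the precondition; with runtime contract checking
        // enabled a contract exception is thrown (its type is internal,
        // so it can only be caught as Exception).
        bool violated = false;
        try { MathUtil.SquareRoot(-1.0); }
        catch (Exception) { violated = true; }
        Assert.IsTrue(violated, "Expected a precondition violation.");
    }
}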

The description of SDD ends with some observations and the recommendation to prefer writing unit tests before contracts. However, it makes no statement on whether to start with TDD or with collaborative tests, nor on how the process should look in concrete terms.

The paper closes with a short conclusion and a table summarizing those conclusions.
Two synergetic advances of SDD are pointed out:

  1. Contracts are test amplifiers,
  2. Contractual and collaborative specifications provide lightweight verification of the design.

Thoughts on SDD

SDD mirrors some of my thoughts on the synergy of DbC and TDD and I think it’s a movement in the right direction.

On the other hand, for me the paper makes statements that are too vague and only scratches the surface. SDD doesn’t describe a clear process for how to manage DbC, TDD and collaborative tests. The given statechart is nothing more than a collection of those three principles and doesn’t state how they can be used in combination. For me it has too „academic“ a touch and isn’t worth much for a practitioner. Just saying „use these parts in combination and decide for yourself“ doesn’t help; especially beginners need clear rules and a clear process to get started using DbC and TDD in conjunction. Thus I can’t agree with the authors that SDD is more than the sum of its parts. Perhaps it is, but the paper doesn’t demonstrate this.

Furthermore, the paper doesn’t make a strong statement about the combination of tests and contracts. One sentence says that tests should exercise the contracts by validating and invalidating each pre- and postcondition, to ensure the appropriateness and correctness of the contracts. But manually writing tests for the contracts doesn’t make sense in my opinion. It would mean duplicating the contract-based specification logic with an equivalent test-based specification logic. The contracts already contain the necessary information for the tests, so they should be used as a test oracle by automatic test generators like Pex. Moreover, the paper doesn’t give any usage advice for tests and contracts. It doesn’t clarify for which specifications you should write contracts and where you should use tests. But this is a central question which has to be addressed by a synergetic development process.

In summary, SDD is a first step in the right direction and fits into my thoughts on the synergy of TDD and DbC. But SDD as described in the original paper is too vague to form a new development process based on TDD and DbC. It lacks clear statements about the combined usage of TDD and DbC, yet precisely those statements are needed for the establishment of a new development process.


Comparison: DbC and TDD – Part 4

Let’s start the last part of my little blog post series comparing DbC and TDD. This post is about code changes and the support of clean code principles by DbC and TDD.

Code changes

Changing code is a big challenge for ensuring software quality and a consistent software design as well. It’s an important but often underestimated topic. When new features have to be implemented or existing requirements change, you should be able to change your code in a way that ensures the correct behavior of both the new and the existing code base, while keeping the design consistent.

With TDD you set up a solid test suite, and the tests can be run in a reproducible way. If requirements change or are extended and you adapt/extend your code, with TDD you first rewrite and/or extend your tests to fit the new set of requirements. Then, by running the adapted test suite, you catch unexpected side effects of your code changes. This regression testing is a great safety net for such changes. Without such a test suite, how could you be sure that the expected behavior still holds after the code has changed? Tests in the TDD fashion are great for continuous integration as well: when a developer in your team changes code and performs a check-in, clear unit tests let you validate the changes during a continuous integration build. With a constraint on the minimum allowed code coverage, this gives you great confidence that a code change has no side effects affecting the expected behavior of your components. Moreover, TDD ensures a more consistent API design across code changes: by writing tests before implementing new requirements, you continuously adapt your design from the client’s point of view.

A drawback of tests is the effort to manually write and adapt them. When behavior changes and old tests fail, you have to investigate why those tests fail: what the old behavior was, whether this behavior has intentionally changed or whether your implementation is simply wrong. With huge test suites, maintenance can become a very time-consuming task. Another important issue with TDD and automated tests in general is that any developer who joins your project must have a good comprehension of the TDD process and of automated tests. For many developers TDD is not very intuitive at the beginning, and it overwhelms them by completely exchanging their development styles and habits. Becoming familiar with TDD can be a real hurdle, and giving all developers in a project the same comprehension of the process can be difficult (especially if they don’t see the benefits of TDD). That’s a question of accountability and project leadership: there have to be project guidelines which include testing standards, and there must be mechanisms to monitor the adherence to those standards.

DbC can’t give you much support for code changes out of the box. If you rely only on runtime checking of the contracts, you cannot tell whether an implementation still satisfies the contractual specification when the code changes. Moreover, you have to manually adapt your contracts when you change an implementation. There is no direct mechanism, besides the limited static checking, that ensures the correctness of your implementations in terms of the defined contracts. The only solution is to set up tests that validate the contracts for representative input/output pairs. With such a test suite you can check in a reproducible way whether your implementation still mirrors your contracts. But note: I’m not saying that tests for contracts should be written manually. In my opinion that doesn’t make much sense, because both tests and contracts are a form of specification, and by manually testing a contract you would duplicate the specification code! Moreover, contracts already contain the relevant information needed to set up a test automatically. Solutions like Pex can jump in here to generate tests from scratch or from parameterized unit tests, using contracts as test oracles for accepted input values (preconditions) and expected outputs (postconditions).
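As a minimal sketch of this idea (my own, with a hypothetical StringUtil component; the attribute usage follows the Pex samples, and Code Contracts runtime checking is assumed to be enabled): Pex explores the parameterized test, treats the preconditions as input filters and uses the postcondition as the oracle for every generated concrete test case.

using System.Diagnostics.Contracts;
using System.Text;
using Microsoft.Pex.Framework;
using Microsoft.VisualStudio.TestTools.UnitTesting;

public static class StringUtil
{
    public static string Repeat(string text, int count)
    {
        Contract.Requires(text != null);
        Contract.Requires(count >= 0);
        Contract.Ensures(Contract.Result<string>().Length == text.Length * count);

        var builder = new StringBuilder();
        for (int i = 0; i < count; i++)
            builder.Append(text);
        return builder.ToString();
    }
}

// Pex generates concrete unit tests from this parameterized test;
// the contracts of Repeat serve as the test oracle.
[TestClass]
[PexClass(typeof(StringUtil))]
public partial class StringUtilTests
{
    [PexMethod]
    public void Repeat(string text, int count)
    {
        StringUtil.Repeat(text, count);
    }
}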

Contracts come with a hurdle for developers as well. Contracting can get messy and confusing if you are not able to give the developers on your project the very same comprehension of DbC: how and where to use contracts, limits for contract definitions and so on. As a further drawback, there is no way to measure the extent of contracting in the fashion of code coverage in TDD. A developer with no feeling for the process could leave out contracts, and nobody would notice or react. Again, it’s the responsibility of the project lead to give guidelines on contracting and to monitor the adherence to those guidelines.

Clean code principles

TDD and DbC both give great support to some important clean code principles and thus improve the software quality far beyond the bug prevention aspect.

  • YAGNI (You Ain’t Gonna Need It):
    DbC cannot support you here, but the TDD process greatly supports YAGNI. For each new feature you write a test, which leads to an implementation that reflects the test suite and contains only those features that are really necessary.
  • SRP/SoC (Single Responsibility Principle/Separation of Concerns):
    Both DbC and TDD support these principles. TDD has less impact, but in my opinion with TDD you discover the dependencies of your components early, and thus you are encouraged to keep components clean. DbC is more assertive in terms of SRP and SoC: it’s difficult to write contracts for components with many responsibilities, so DbC directly enforces a separation of concerns.
  • KISS (Keep It Simple and Stupid):
    Both TDD and DbC add some value here. With TDD, the incremental evolution keeps an API simple and client-friendly. For DbC the same rules apply as for SRP/SoC: it’s easy to define simple, focused methods with clear contracts, but it’s hard to define contracts on complex and messy components.
  • DRY (Don’t Repeat Yourself):
    I don’t see support from TDD, but DbC influences the DRY principle. Because DbC introduces contracts that clarify benefits and obligations in the client/supplier communication, it prevents clients from redundantly checking return values and parameters. For example, a postcondition says „the return value will never be null“. Thus there is no need for a client of the method to check the return value before handing it to another method that expects a non-null argument. Defensive programming would encourage such redundant checks, but with DbC you can really prevent them (see the first sketch after this list).
  • OCP (Open/Closed Principle):
    I don’t see big influence from either DbC or TDD here. OCP states: „Software entities should be open for extension, but closed for modification.“ This means that your components should be easily extensible without the need for code modification. OCP increases the level of abstraction and the code complexity, but it makes components easily extensible. With TDD you drive your API only for the current feature set, and there seems to be no impact on extensibility. DbC with its contracts can’t help you here either, in my opinion.
  • ISP (Interface Segregation Principle):
    Both DbC and TDD support the ISP. DbC influences ISP through its enforcement of SRP/SoC: when you set contracts on an interface with many responsibilities/members, it is painful to write invariants that must be maintained by each interface member, but it’s pretty easy if you have clear and simple interfaces. For TDD the impact is smaller, but it’s there, since you discover dependencies early and drive your API in a „segregated“ direction when implementing tests for new features.
  • DIP (Dependency Inversion Principle):
    Support for this principle mainly comes from TDD, but DbC can add a little value as well. With TDD, to test a component in isolation you have to exchange the dependencies of this component with test doubles like mocks or stubs. Thus you need abstractions/interfaces for those dependencies, which enforces DIP. With Dependency Injection you are then able to inject a test double at the runtime of a test. The influence of DbC is more subtle: DbC encourages the use of interfaces in general by enforcing uniform behavior over all implementations of an interface. Thus the use of interfaces instead of concrete implementations, in terms of DIP, is encouraged as well.
  • LSP (Liskov Substitution Principle):
    TDD can’t help you here, but DbC adds great value. The LSP states that in class hierarchies it must be possible to treat a derived object as if it were an object of the base class; thus the specialized object must behave in the same way as the base class object. This principle of clean sub-classing is a basic part of DbC. Inheritors of a base class or implementors of an interface are not allowed to add new preconditions, but they can extend postconditions and invariants with additional contracts. Thus DbC guarantees uniform behavior in terms of the LSP (see the second sketch after this list).
  • CQS (Command-Query Separation):
    TDD doesn’t encourage you to do CQS. But with DbC you need to separate state-changing commands from queries implemented as pure methods, because only side-effect-free queries may be used in contracts. Thus DbC leads to command-query separation, which is a good thing since it keeps your API clean (SoC) and your components simple (KISS).
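Here is the first promised sketch, for the DRY point (my own example with Code Contracts; the repository types are hypothetical): the non-null postcondition makes a redundant client-side check unnecessary.

using System.Collections.Generic;
using System.Diagnostics.Contracts;

public class CustomerRepository
{
    public IList<string> GetCustomerNames()
    {
        // Postcondition: „the return value will never be null“.
        Contract.Ensures(Contract.Result<IList<string>>() != null);

        return new List<string> { "Alice", "Bob" };
    }
}

public class ReportGenerator
{
    public int CountCustomers(CustomerRepository repository)
    {
        var names = repository.GetCustomerNames();

        // No defensive null check needed here: the contract already
        // guarantees a non-null result; checking again would duplicate
        // the specification and violate DRY.
        return names.Count;
    }
}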
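And the second sketch, for the LSP point (again my own example): the contract is defined once on the base class and holds for every subtype, so an inheritor cannot sneak in stronger preconditions.

using System.Diagnostics.Contracts;

public abstract class Shape
{
    public void Scale(double factor)
    {
        // This contract holds for every subtype; inheritors may not
        // add new preconditions, only extend postconditions/invariants.
        Contract.Requires(factor > 0);
        Contract.Ensures(Area >= 0);

        ScaleCore(factor);
    }

    public abstract double Area { get; }
    protected abstract void ScaleCore(double factor);
}

public class Square : Shape
{
    private double _side = 1;

    public override double Area
    {
        get { return _side * _side; }
    }

    protected override void ScaleCore(double factor)
    {
        // A Square can substitute a Shape everywhere: clients rely on
        // the inherited contract, not on the concrete implementation.
        _side *= factor;
    }
}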

Conclusion

This and the previous blog posts compared DbC and TDD by taking several aspects into account: specification, design, documentation, code coupling, universality, expressiveness, correctness checking, code changes and the influence on clean code principles. While writing this little series of blog posts, things became much clearer to me. When starting with DbC some months ago I just thought: „Well, forget TDD, DbC with static checking is all we need and all we should use“. But that line of thought fell short. TDD goes far beyond testing and writing unit tests. At its heart TDD is more about design, documentation and specification, and it’s really valuable in those (and other) terms.

Both DbC and TDD have advantages and drawbacks, and both act on their own terrain with interesting overlaps. It’s absolutely not a mutually exclusive choice between the two principles. Rather, TDD and DbC should be used in conjunction to utilize the advantages of both. In fact there should be a clear TDD process that makes use of contracts. As the previous blog posts showed, DbC can support TDD, but it cannot replace it in any way.
