Showing posts with label architecture. Show all posts

Thursday, February 4, 2010

Back to basics: Why should I use interfaces?

So I had this interesting discussion with a colleague about having a clean architecture for a small piece of software he is writing. Since it was his first step into SOLID, I wanted to take it easy and see how things were laid out. The program was mostly written already, and I immediately noticed the lack of any pattern and the direct data access in the event handlers of his WinForms application. The conversation went a bit like this:
Me: What is this code with data access in the "OnClick" of your button?
Him: Well, it's the information I need to execute this command.
Me: Do you know the Model-View-Presenter pattern? Because right now, you are mixing "Presentation", "Data Access" and "Business Logic".
Him: I've used it before, but it's been a while. How do you implement it?
So after showing him the pattern and explaining a basic implementation (because there are many different ways to implement this pattern), he asked me the following question:
"Of course, you don't need to use interface everywhere, right?"
I went on to explain testability and such, but there is something different I wanted to bring to this small discussion and share here. When my class has dependencies injected through the constructor, I have two choices: either I depend upon the implementation, or upon the abstraction (an interface or abstract class). What's the difference, and why is it so important?

MyClass depending upon the abstraction of "MyClassDataAccess"

When your class depends upon the abstraction, it can take any class that implements that abstraction (be it an abstract class or an interface). The implementation can easily be replaced by something else, and that is essential for unit testing your logic.

MyClass depending upon the implementation of "MyClassDataAccess"

When your class depends upon the implementation directly, the only thing that can be sent to this class is that specific implementation; anything else must derive from it. This couples the caller and the callee really tightly.

Why is it important?

When you have a class that accesses services or slow resources (database, disk, etc.), or even a class that you haven't coded yet, an interface should be used. Of course, it's not a law. You use an interface or abstract class when you need to decouple the implementation of one system from another system. That allows me to send mocked objects and test my requirements and logic. It also brings another advantage that might not be evident at first: the customer changing his mind. The customer no longer wants to store information in XML but in a database instead. Or the customer says not to implement "this part of the system" because it will be available through a service. And so on. Interfaces and abstract classes are the oil that makes the engine of your software turn smoothly, allowing you to replace parts with better or different parts without hell breaking loose because of a tightly coupled implementation.
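To make this concrete, here is a minimal sketch of constructor injection against an abstraction. All the class names (IMyClassDataAccess, FakeDataAccess, etc.) are made up for illustration:

```csharp
using System;
using System.Collections.Generic;

// The abstraction: MyClass only knows about this contract.
public interface IMyClassDataAccess
{
    IList<string> LoadCustomerNames();
}

// The real implementation would hit the database.
public class MyClassDataAccess : IMyClassDataAccess
{
    public IList<string> LoadCustomerNames()
    {
        // Imagine a real database call here.
        return new List<string> { "Alice", "Bob" };
    }
}

// A fake implementation used in unit tests: no database required.
public class FakeDataAccess : IMyClassDataAccess
{
    public IList<string> LoadCustomerNames()
    {
        return new List<string> { "TestCustomer" };
    }
}

public class MyClass
{
    private readonly IMyClassDataAccess _dataAccess;

    // Depend on the abstraction, injected through the constructor.
    public MyClass(IMyClassDataAccess dataAccess)
    {
        _dataAccess = dataAccess;
    }

    public int CountCustomers()
    {
        return _dataAccess.LoadCustomerNames().Count;
    }
}
```

In a test, `new MyClass(new FakeDataAccess())` exercises the logic without touching a database; in production you pass the real implementation. Had the constructor taken `MyClassDataAccess` directly, the fake would be impossible without inheritance tricks.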

Tuesday, December 8, 2009

Simple explanation of the MVC Pattern

Since it's been more than a few months since my last blog post, I would like to start by saying that I'm still alive and well. There have been changes in my career and my personal life that required some attention, and now I'm back on track.

So for those who know me, I was participating in TechDays 2009 in Montreal, presenting the "Introduction to ASP.NET MVC" session. I will also be presenting the same session in Ottawa (in fact, this blog post is being written on the way to Ottawa, with Eric as my designated driver).

So what exactly is ASP.NET MVC? It's simply Microsoft's implementation of the MVC pattern, which was first described in 1979 by Trygve Reenskaug (see Model-View-Controller for the full history).

In more detail, MVC is the acronym for Model, View and Controller. We will look at each component and the advantages of keeping them properly separated.

Model

The model is exactly what you would expect: your business logic, your data access layer and whatever else is part of your application logic. It's where your business logic sits, and it should therefore be the most tested part of your application.

The model is not aware of the view or of the controller.

View

The view is where the presentation layer of your application sits. In a web framework, this is mostly ASPX pages with logic limited to showing the model. This layer is normally really thin and focused only on displaying the model. The logic is mostly limited to encoding, localization, looping (for grids) and such.

The view is not aware of which controller invokes it. The view is only aware of the model to display.

Controller

The controller is the coordinator. It retrieves data from the model and hands it over to the view to display. The controller can also take on cross-cutting concerns such as logging, authorization and performance monitoring (performance counters, timing each operation, etc.).
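The three roles and their allowed dependencies can be sketched with a toy console example (all class names here are made up for illustration; a real web framework adds routing, rendering, etc. on top):

```csharp
using System;
using System.Collections.Generic;

// Model: business logic and data; knows nothing about views or controllers.
public class ProductModel
{
    public IList<string> GetProductNames()
    {
        return new List<string> { "Keyboard", "Mouse" };
    }
}

// View: only knows how to display the model data it is given.
public class ProductListView
{
    public string Render(IList<string> productNames)
    {
        return "Products: " + string.Join(", ", productNames);
    }
}

// Controller: the coordinator. Pulls data from the model, hands it to the view.
public class ProductController
{
    private readonly ProductModel _model;
    private readonly ProductListView _view;

    public ProductController(ProductModel model, ProductListView view)
    {
        _model = model;
        _view = view;
    }

    public string List()
    {
        var products = _model.GetProductNames();
        return _view.Render(products);
    }
}
```

Note that the model never references the view or controller, and the view never asks which controller invoked it: exactly the dependency rules stated above.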

Advantages

Now, why should you care about all that? First, there is a clear-cut separation between WHAT is displayed to the user and HOW you get the information to display. In the example of a web site, it becomes possible to display different views based on the browser, the device, the capabilities of the device (JavaScript, CSS, etc.) and any other information available to you at the moment.

Among the other advantages is the ability to test your controller separately from your view. If your model is properly done too (coded against abstractions, not implementations), you will be able to test your controller separately from both your model and your view.

Disadvantages

MVC is mostly a web pattern rather than a WinForms pattern. There is currently no serious implementation of the MVC pattern for anything other than web frameworks; the pattern is found in ASP.NET MVC, FubuMVC and other MVC frameworks. This limits your choices to the web.

If you take a specific platform like ASP.NET MVC, other disadvantages (which could be seen as advantages) slip in. Mostly, you lose drag-and-drop support for server controls. Grids now have to be hand-rolled and built manually instead of relying on the abstractions offered by the original framework.

Conclusions

Since we usually need finer-grained control over our views, the abstractions offered by the core .NET Framework are normally not extensible or customizable enough for most web designers. Some abstractions might even become unsupported in the future, pushing us toward more precise control over our views. The pattern also allows greater testability than what is offered by default in WebForms (a Page Controller with templating views).

My recommendation is effectively "it depends". If an application is already built with WebForms and isn't causing any friction, there is no point in completely redoing it in MVC. However, for any new greenfield project, I would recommend at least taking a look at ASP.NET MVC.


Monday, June 22, 2009

Improving code quality – 2 ways to go

I’ve been thinking about this for at least a week or two. In fact, it’s been since I started (and finished) reading the book “Clean Code” by Robert C. Martin. There are probably only two ways to go.

Fix the bad code

This method is called refactoring, or “cleaning” the code. Of course, you can’t truly know which code is bad without a static analysis tool or programmers working in the code. The tool will let you spot pieces of code that could harbour bugs or be hard to work with. The problem is that refactoring or cleaning up code is really expensive from a business perspective. The trick is to fix it as you interact with the code; it is probably impossible to request time from your company to fix code that merely could cause bugs. If you ask your company to fix the code, you will probably receive this answer: “Why did you write it badly in the first place?”. Which brings us to the other way to improve code quality.

Don’t write it

If you don’t write the bad code in the first place, you won’t have to fix it! That sounds simple to an experienced programmer who has improved his craft over the years, but rookies will definitely leave bad code behind, and eventually you will encounter that code. So how do you avoid the big refactoring of mistakes (and not just rookies’ mistakes)? I believe training might be the way to go. When I had only one year of experience in software development, I was writing way too much bad code. I still do. Not that I don’t see it going through; sometimes things must be rushed, I don’t fully understand the problem, and some small abstraction mistakes get in. I write far less bad code than when I started. However, that bad code doesn’t just magically disappear. It stays there.

What about training?

I think that training and/or mentoring might be the way to go. Mentoring might be hard to sell, but training definitely isn’t. Most employees have a training budget attached to their name within a company that can be spent on them. What I particularly recommend are courses on object-oriented design or advanced object-oriented design. Hell, you might even consider an xDD course (and by xDD I mean TDD, BDD, DDD, RDD, etc.). Any of those courses will improve your skills and bring you closer to telling clean code from bad code. Training on a specific framework (like ASP.NET MVC or Entity Framework) will only show you how to get things done with that framework; the latter can be learned on your own or through a good book.

So? What do you all think? Would you rather have a framework course or a “Clean Code” course?


Saturday, June 13, 2009

My baby steps to PostSharp 1.0

So… you downloaded PostSharp 1.0 and you installed it and are wondering… “What’s next?”.

Well my friends, let me walk you through the first steps of PostSharp. What could we do that would be simple enough? Hmmm… what about writing to a debug window? That sounds simple enough! Let’s start. I created a new Console Application project and added references to PostSharp.Laos and PostSharp.Public. As a requirement, the aspect class must be tagged with the “Serializable” attribute and inherit from OnMethodBoundaryAspect (not in all cases, but let’s start small here).

Next, there are a few methods I can override. The two we are interested in right now are “OnEntry” and “OnExit”. Inside them, we’ll log which method we are entering and which one we are exiting. Here are my guinea pig classes:

using System;
using System.Diagnostics;
using PostSharp.Laos;

public class FooBar
{
    [DebugTracer]
    public void DoFoo()
    {
        Debug.WriteLine("Doing Foo");
    }

    [DebugTracer]
    public void DoBar()
    {
        Debug.WriteLine("Doing Bar");
    } 
}

[Serializable]
[AttributeUsage(AttributeTargets.Method | AttributeTargets.Property)]
public class DebugTracer : OnMethodBoundaryAspect
{
    public override void OnEntry(MethodExecutionEventArgs eventArgs)
    {
        Debug.WriteLine(string.Format("Entering {0}", eventArgs.Method.Name));
    }

    public override void OnExit(MethodExecutionEventArgs eventArgs)
    {
        Debug.WriteLine(string.Format("Exiting {0}", eventArgs.Method.Name));
    }
}

See how simple this is? But… does it work? Let’s see the trace of calling each method:

Entering DoFoo
Doing Foo
Exiting DoFoo
Entering DoBar
Doing Bar
Exiting DoBar

Isn’t that wonderful? Compile, execute and enjoy. But… what about the community, you say? Of course, if the tool is not open source there is probably nothing built around it, right? Wrong!

Here are a few resources for PostSharp that include pre-made attributes ready to be used:

That was everything I could find. Do you know any others?


PostSharp – The best way to do AOP in .NET

Who knows about Aspect-Oriented Programming (AOP)? Come on! Don’t be shy! Ok, now lower your hands. My prediction is that a lot of you didn’t raise your hands. So let’s review what AOP is:

Aspect-oriented programming is a programming paradigm that increases modularity by enabling improved separation of concerns. This entails breaking down a program into distinct parts (so called concerns, cohesive areas of functionality). […]

So what does it truly mean? Well, it’s a way to declare that parts of your software (methods, classes, assemblies) have a “concern” applied to them. What is a concern? Logging is one. Exception handling is another. But let’s go wild: caching, impersonation and validation (null checks, bounds checks) are all concerns. Do you mix them with your code? Right now, you are forced to.

The state of current AOP

Alright, for those who raised their hands earlier: what are you using for your AOP concerns? If you are using the patterns & practices Policy Injection module, well, you are probably not happy. First, all your objects need to be constructed by an object builder, and they need to inherit from MarshalByRefObject or implement an interface.

It’s not the best way, but it is done the “proper” way, without hacks.

What is PostSharp bringing?

PostSharp might be a “hack”, if you want to see it that way. It does require being installed on your machine at compile time for it to work. But what does PostSharp do exactly? It does what every AOP tool should do: inject code before and after matching methods at compile time. Not just PostSharp’s own aspects, but any aspect that inherits from the base classes PostSharp offers you. Imagine what you could do if you could tell the compiler to inject ANY code before/after your methods in ANY code you compile. Think of the possibilities. I’ll give you two minutes for all this information to sink in… (waiting)… got it? Starting to see the possibilities? All you need to do is put attributes on your methods/properties like this:

[NotNullOrEmpty]
public string Name { get; set; }

[Minimum(0)]
public int Age { get; set; }

Now look at that code and ask yourself what it does exactly. It shouldn’t be hard. The properties won’t allow any number under 0 to be stored in “Age”, and “Name” won’t accept a null or empty string. Any code that tries to do so will get a ValidationException.
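Conceptually, the weaved code is equivalent to writing these checks by hand in every setter. Here is a plain C# sketch of that equivalent (the actual generated code differs, and the real library throws its own ValidationException type; I use standard argument exceptions here for illustration):

```csharp
using System;

public class Person
{
    private string _name;
    private int _age;

    // What an attribute like [NotNullOrEmpty] spares you from writing by hand.
    public string Name
    {
        get { return _name; }
        set
        {
            if (string.IsNullOrEmpty(value))
                throw new ArgumentException("Name must not be null or empty.");
            _name = value;
        }
    }

    // What an attribute like [Minimum(0)] spares you from writing by hand.
    public int Age
    {
        get { return _age; }
        set
        {
            if (value < 0)
                throw new ArgumentOutOfRangeException("value", "Age must be at least 0.");
            _age = value;
        }
    }
}
```

The point of the aspect approach is that this boilerplate disappears from your classes and lives in one attribute, applied wherever the concern is needed.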

Wanna try it?

Go download PostSharp immediately, along with its little friend ValidationAspects on CodePlex. After you have tried them, try to build your own aspects and start cleaning your code to achieve better readability.

And yes… both are Open-Source and can be used at no fee anywhere in your company.

Suggestion to CLR Team

Now, PostSharp forces us to install it with the MSI for it to work, because it needs to install a post-compile code injector (like some obfuscation tools do). What would be really nice is to be able to do the same thing built into the compiler. The compiler already checks for some attributes… I would love to have this internal working exposed to the public so that we can build better tools and, more importantly, better code.

UPDATE: I want to mention that PostSharp is NOT open-source. However, it is free unless you need to package it with your tool.


Tuesday, May 26, 2009

RefCardz - Little known reference at the tip of your finger

DISCLAIMER: I am not an employee of DZone Inc. and I am not paid for talking about the RefCardz. RefCardz are available for Free (as in free beer) online on their website. RefCardz in printed format can be obtained at different events or by contacting DZone directly.

The Maven Refcardz was just released today. Everybody who uses Maven might be hyped over that, but… did you know about the other Refcardz?

My favourite is the “Design Patterns” Refcardz. It has easy diagrams for understanding the organization of your classes, as well as two interesting sections: “Use When” gives indications on when to use a particular pattern, and “Example” contains a small problem that would require the pattern. As an example, let’s take the Adapter pattern.


The “Use When” section mentions that an adapter should be used when:

  • you need to adapt a class to your interface
  • complex conditions with behaviour and states are present
  • transitions between states need to be explicit

I haven’t used the Adapter pattern a lot, so I only knew about the first two. I’m still thinking about the third one.
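For the first “use when” point, here is a minimal sketch of adapting a class to your interface (all the names, ILogger, LegacyConsoleWriter, etc., are invented for the example):

```csharp
using System;

// The interface our application expects.
public interface ILogger
{
    void Log(string message);
}

// An existing class with an incompatible API that we cannot change.
public class LegacyConsoleWriter
{
    public void WriteEntry(string severity, string text)
    {
        Console.WriteLine("[" + severity + "] " + text);
    }
}

// The adapter: implements the expected interface by delegating to the legacy class.
public class LegacyLoggerAdapter : ILogger
{
    private readonly LegacyConsoleWriter _writer;

    public LegacyLoggerAdapter(LegacyConsoleWriter writer)
    {
        _writer = writer;
    }

    public void Log(string message)
    {
        _writer.WriteEntry("INFO", message);
    }
}
```

Callers only ever see ILogger; the legacy class stays untouched behind the adapter.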

Those RefCardz are offered for free in PDF format on DZone, or you might find them in glossy hard paper at conferences, code camps, etc. Most of the developers I know have DZone in their RSS reader but have never taken the time to look at the RefCardz that are available.

The RefCardz can be found here.

Here are a few of my favourites:


Monday, April 13, 2009

Report a Bug – The feature all products should have

I was writing acceptance tests for a project today and needed to include a file inside my Excel 2007 sheet. After fiddling around a little, I found “Insert Object”. This allowed me to insert a file and have it available to anyone opening the Excel 2007 document. But when I dragged and dropped the file to my desktop, the file was truncated! I couldn’t believe it! I had found a bug in Excel 2007! Being a developer, I wanted to report that bug properly.

So I started with a small Google search for “report Microsoft bug”, which led me to this, this and that.

Let me quote the title of the last article: “Mission: Impossible. Submitting a Bug Report to Microsoft”.

That had me worried. The article is dated 2002 and the other ones 2006; we are in 2009 at the time of writing. So I searched more and more. I literally found nothing.

How can an application ship without an easy way to report a bug? Apple does it. Ubuntu does it. OpenOffice does it. Firefox does it. Google Chrome does it. Why can’t I report a bug for Internet Explorer, Office or Vista (soon Windows 7)?

It goes without saying that if you are building a product, there should be an easy way to report a bug, be it a small contact form, an official bug-reporting tool, etc. But please don’t charge your users $35 to file a bug report.

Microsoft has recently (in the last few years) opened up and launched Connect to help users report bugs in the .NET Framework. They should extend it to their whole product line. It seems that developers working on the .NET Framework are reachable and have blogs everywhere, while the Excel team is just higher up in the sky, untouchable by customer feedback.

So? Does your application have a way to easily report bugs? Because if the process is long and tedious, you will have a “bug-less” product with a marginal number of users who tolerate the bugs or who are hard-core enough to take the time to report them.


Sunday, April 5, 2009

5 reasons why you should use ASP.NET MVC

I’ll be fair with you, readers: I’ve only toyed with the ASP.NET MVC framework. It looks great as of now, and it’s the first full-blown MVC framework we have that is backed by Microsoft. However, there is a lot of opposition nowadays that tends to be formulated like this:

Why should I use ASP.NET MVC? WebForms works well.

Other objections come from the lack of server controls. When a developer looks at that, he wonders why he should have to write HTML and JavaScript when before he could have retrieved all that beautiful information with a simple postback.

So without ranting any further, here are 5 reasons why you should use ASP.NET MVC.

1. Testability

When the MVC model is properly applied, it allows for a better separation of your business logic and your presentation code. Since the view is not entangled with your model, you can easily test without requiring a web server. By default, when starting a new MVC project, Visual Studio offers to create a new unit test project based on Microsoft’s unit test framework. Other unit test frameworks can also be configured to be used by default instead of Microsoft’s solution.

The way the code is structured, the controller is the one handling the calls from the route. Controllers can be instantiated outside of a web request, which makes them easy to test too.

2. Perfect control of the URLs

ASP.NET MVC uses URL routing to take control of requests and forward them to your controllers. Instead of 1-to-1 mapping, routes allow pattern matching, the default pattern being “{controller}/{action}/{id}” with a default of “Home/Index”. This lets you set your URLs exactly how you want: you don’t have to create a folder for every level of depth. URL routing allows you to make clean URLs that are easy to remember.
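To see what that pattern matching amounts to, here is a toy matcher (nothing like the real System.Web.Routing engine, which also handles defaults, constraints, and ordering; this only illustrates how placeholders in a pattern capture URL segments):

```csharp
using System;
using System.Collections.Generic;

public static class ToyRouter
{
    // Matches a URL path against a pattern like "{controller}/{action}/{id}",
    // returning the captured values, or null when the path does not fit.
    public static IDictionary<string, string> Match(string pattern, string path)
    {
        var patternParts = pattern.Trim('/').Split('/');
        var pathParts = path.Trim('/').Split('/');
        if (patternParts.Length != pathParts.Length)
            return null;

        var values = new Dictionary<string, string>();
        for (int i = 0; i < patternParts.Length; i++)
        {
            var part = patternParts[i];
            if (part.StartsWith("{") && part.EndsWith("}"))
                values[part.Trim('{', '}')] = pathParts[i]; // placeholder captures the segment
            else if (!string.Equals(part, pathParts[i], StringComparison.OrdinalIgnoreCase))
                return null; // literal segment must match exactly
        }
        return values;
    }
}
```

With this, a URL like "Product/Detail/23213" against "{controller}/{action}/{id}" yields controller=Product, action=Detail, id=23213, which is the information MVC uses to pick the controller and action to invoke.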

Would you rather try to remember http://localhost/Sales/DisplayProduct.aspx?ProductID=23213 or http://localhost/Product/Detail/23213? Even better, if you run an e-commerce site and want a fast link, you can directly bind http://localhost/23213 to make it even easier to remember. Doing that in WebForms while keeping it all unit-testable would just be too time-consuming, wouldn’t it?

3. Better Mobility Support

In WebForms, you would have to detect on each page that the browser is a mobile one and adapt your rendering on each and every form, or redirect the user to a different page. What is excellent with MVC is that it’s not the view that receives the request; it’s the controller. The controller can then dynamically decide which view to render while keeping the same URL. So to show a product, you don’t even need to hand different URLs to different devices: you just detect which device you are handling and render the proper view. As you support more and more mobile devices, you keep adding views that are more specific to each device. Want to support this new HTC? Create a view, detect the browser and ensure the right view is displayed. Want to support some iPhone goodness with device-specific HTML? Create the necessary view, reuse the browser detection and display the view.

You can keep doing that ad infinitum, as much as you want, depending on your audience. Supporting mobile is now more convenient than it has ever been.

4. View Engines

Now, if you have only built ASP.NET WebForms, this term might sound weird to you. Let’s just say that you have been using the same view engine all this time without wondering whether you could choose. The WebForms view engine is… well… what you have been using all along. It includes server tags (<% %>), binding tags (<%# %>) and control tags (<asp:TextBox … />).

The Spark engine is a good example. MvcContrib also offers 4 different view engines (Brail, NHaml, NVelocity, XSLT). Each of those engines was created to fix specific problems, and different view engines can be used on different views: one page could be handled with the WebForms view engine, one with Spark, one with XSLT, etc. Different views, different problems, different solutions.

You might not need them, but the simple fact that they are available will make your life easier if you do.

5. Built-in and shipped jQuery support

Let’s keep the best for the end. jQuery ships with every new ASP.NET MVC project. Since Microsoft announced support for jQuery, it has been the big buzz in the JavaScript world. Since ASP.NET MVC doesn’t rely on postbacks, a strong JavaScript framework is needed to provide all the UI that the server controls used to offer. jQuery easily gives you AJAX, DOM manipulation and event binding, all working across browsers.

Of course, jQuery is not an advantage of MVC itself, but it is a serious part of the ASP.NET MVC offering. No more downloads or “I’ll write my own” stuff. I don’t know about the rest of you, but when I have to write JavaScript, I normally reach for document.getElementById. That works in most browsers, but as soon as you start going funky, some browsers misbehave. jQuery simply lets you write $(“#myControlId”) and many more shortcuts to do what you need across browsers. Just having jQuery available stops me from writing incompatible code.

Conclusions

Lots of points go toward MVC, and way more could be added. You certainly don’t want to miss Kazi Manzur Rashid’s blog about ASP.NET MVC best practices (part 1, part 2). Scott Hanselman and Phil Haack also have great posts about ASP.NET MVC.

Don’t be fooled. WebForms are not necessarily evil; they just aren’t leading you to a pit of success.


Sunday, March 29, 2009

Anti-Pattern: The Gas Factory or Unnecessary complexity

Just as in any system, when you start coding some structure, you always try to make it as generic as possible so those parts are easy to reuse later. Just like a petroleum complex, things start getting complex. There is normal complexity when you build your code, and complexity adds up as you go. However, the main problem with this anti-pattern is when it’s done consciously.

Let’s give a quick example. You start building collections, and one of your collections needs a special feature. The problem comes when you extract this functionality and try to make it as generic as possible so that anyone could reuse your class. That is the problem of unnecessary complexity. As you make something generic for a single class, you add structure to your code. Some of that structure might be proven for this specific class but fail for other classes. And maybe no other class will ever require this functionality.
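Here is a sketch of that collection example, with invented class names. The need is a single collection that rejects duplicates; the simple version solves it directly, while the “gas factory” version invents a pluggable rule framework nobody asked for, with identical behaviour:

```csharp
using System;
using System.Collections.Generic;

// The simple answer: one collection, one rule, written where it is needed.
public class CustomerNames
{
    private readonly List<string> _names = new List<string>();

    public bool Add(string name)
    {
        if (_names.Contains(name))
            return false; // duplicate rejected
        _names.Add(name);
        return true;
    }

    public int Count { get { return _names.Count; } }
}

// The "gas factory" answer: a generic validation framework for a single use.
public interface IAdditionRule<T>
{
    bool CanAdd(IList<T> existing, T candidate);
}

public class NoDuplicatesRule<T> : IAdditionRule<T>
{
    public bool CanAdd(IList<T> existing, T candidate)
    {
        return !existing.Contains(candidate);
    }
}

public class RuleBasedCollection<T>
{
    private readonly List<T> _items = new List<T>();
    private readonly IList<IAdditionRule<T>> _rules;

    public RuleBasedCollection(IList<IAdditionRule<T>> rules) { _rules = rules; }

    public bool Add(T item)
    {
        foreach (var rule in _rules)
            if (!rule.CanAdd(_items, item))
                return false;
        _items.Add(item);
        return true;
    }

    public int Count { get { return _items.Count; } }
}
```

Three extra types, one extra concept, same observable behaviour. The generic version only earns its keep once a second collection with different rules actually shows up.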

How do I solve this anti-pattern?

By following YAGNI and Lean Software Development, you delay code and unnecessary complexity until you actually require it. If you have two classes that require the same functionality, it is then time to extract that functionality into a different class and make those two classes inherit from it (or apply whatever other pattern is required).

And I’m not just talking about inheritance; I’m also talking about unnecessary design patterns. If you built a pipeline component to calculate discounts but you only have one discount at the moment, it might seem relevant to implement it anyway since you are sure you will require it later. However, the client is the one who is supposed to drive the requirements, and if you don’t require the pipeline immediately… well… don’t build it!

That doesn’t mean leaving your code in a fixed state. It just means keeping your code clean to ease the implementation of the pipeline later.

The best lines of code are those we don’t need to write.


Thursday, March 26, 2009

Software development is not an art. It’s a craft.

When I'm working on a problem, I never think about beauty. I think only how to solve the problem. But when I have finished, if the solution is not beautiful, I know it is wrong.
R. Buckminster Fuller - US architect & engineer

Forget about the “creation” part of the job. As you develop an application, you are not creating something; you are building something. Mind you, it’s not like building a bridge or a plane as in engineering.

The main difference between an art and a craft is the people inside the profession. Art is completely subjective: anyone can be an improvised artist, and some may succeed. The difference between artists and programmers is that artists who don’t sell get another job, while self-improvised programmers who fail simply find another company to hire them.

I’m not complaining about self-taught programmers. What I’m worried about is the quality of the code we find in today’s software. Who has seen some seriously bad code? Raise your hand! You’re not alone. That code was probably written by people who don’t care about software, or by people who never had the chance to get proper training. That’s why it’s important to realize that our profession is far more of a craft than an art.

Art is based on inspiration. Craft is based upon sets of rules and experience that bring you quality every time you follow them. Artistic software development doesn’t care about the rules. Let’s pick a simple metaphor: Japanese sword makers. There is a reason why their work is recognized and of such quality. Their craft has had hundreds of years to accumulate rules on how to properly melt the metal, forge the blade, etc. The master knows a good sword on sight and knows that there are very few ways to reach quality. Young apprentices train for years under their masters to gain the knowledge those masters acquired and to be able to reproduce success.

It will be harsh, but there are only two ways to change the behaviour of those who don’t care about the code:

  1. Train them (mentorship, classes, etc.)
  2. Get rid of them

If you want to train a programmer, he has to attend conferences and user groups and spend some time off work learning about good practices, design patterns and software principles. Companies like Object Mentor offer training to raise the quality of existing programmers.

Our industry might not be as old as Japanese sword making, but we must accumulate rules and principles as much as possible and encourage “artistic sword makers” to follow them. Otherwise, the only thing you will get is a cheap rip-off that breaks on the first hit. We’ve all seen software that breaks on the first hit, and we need to raise the average level of our profession.

UPDATE: Added a quotation that I think represents part of what we are doing: solving problems and then making it nice. But foremost, solving problems.


Sunday, March 8, 2009

Software Developer and Software Engineer are not opposites, they are the same

Software engineering is defined as:

Software engineering is the application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software, and the study of these approaches.

This term was coined in 1968 in the hope of bringing a more “civilized” way of coding. What is interesting, however, is that most of the time people and companies don’t make a difference between the developer and the engineer.

A software developer is defined as:

A software developer is a person or organization concerned with facets of the software development process wider than design and coding, a somewhat broader scope of computer programming or a specialty of project managing including some aspects of software product management. […]

Other names which are often used in the same close context are software analyst and software engineer.

If we stick to the definition, we can say that a software engineer develops, operates and maintains software, and that he studies what he did to make sure it’s the best in the industry. As for “systematic, disciplined, quantifiable”, it’s perfectly understandable that a software engineer should always do his best and follow standards.

What about the software developer? If we stick to the definition, a software developer designs, codes, manages, tests and participates in the release of his software. Honestly, that’s basically what I do every day. I’m not handed a class diagram and told “code this”. I receive a requirement and I have to ensure that it makes it into the application. I have to answer the following questions:

  • Can it be done?
  • When will it be done?
  • Will it impact something else that you have to do?

Nobody asks me “How will you implement it?”. I have the responsibility to design, organize, implement and test my code. Of course, I could go cowboy and head straight for the implementation.

But that’s not because I’m a software developer. “Software developer” is just a title. It’s not because some cowboy coders put all the data access logic inside the view that all software developers are a useless bunch of monkey coders.

I’ve signed the software craftsmanship manifesto because I value the work that I do. I’ve signed it because quality is important; it may not seem so at the moment, but it eventually will be. I’ve signed it because I believe in good software.

What do you think? Is there such a gap between software developers and software engineers?


Thursday, March 5, 2009

Implementing a Chain-of-responsibility or “Pipeline” in C#

Anti-patterns are useful for showing you what you are doing wrong. Patterns are just as useful for showing you how to do things well.

This time, I want to show how to implement a simple Chain-of-responsibility pattern. Our example is going to be based on a simple e-Commerce data model.

The Domain Model

Product, which will have some basic attributes like a price, a name and a collection of applied discounts.

Discount, which will be the discount abstraction. Several concrete discount classes will implement it.

That is all we are going to need for this pattern. However, it would be smart to have a class that assigns discounts to products based on certain rules.

Let’s start by writing our Product class and our Discount interface:

public class Product
{
    private readonly List<IDiscount> _appliedDiscount = new List<IDiscount>();
    
    public string ProductName { get; private set; }
    public decimal OriginalPrice { get; private set; }
    
    public decimal DiscountedPrice
    {
        get
        {
            decimal discountedPrice = OriginalPrice;
            return discountedPrice;
        }
        
    }

    public Product(string productName, decimal productPrice)
    {
        ProductName = productName;
        OriginalPrice = productPrice;
    }

    public List<IDiscount> AppliedDiscount
    {
        get
        {
            return _appliedDiscount;
        } 
    }
}

public interface IDiscount
{
    decimal ApplyDiscount(decimal productPrice);
}

Right now, “DiscountedPrice” simply returns our “OriginalPrice”. Let’s implement the discount chain properly:

public decimal DiscountedPrice
{
    get
    {
        decimal discountedPrice = OriginalPrice;
        
        foreach (IDiscount discount in _appliedDiscount)
            discountedPrice = discount.ApplyDiscount(discountedPrice);

        return discountedPrice;
    }
}

Now that we have an algorithm that will apply all discounts, let’s create a few Discount classes:

public class PercentageDiscount : IDiscount
{
    public decimal PercentDiscount { get; set; }

    public PercentageDiscount(decimal percentDiscount)
    {
        PercentDiscount = percentDiscount;
    }

    public decimal ApplyDiscount(decimal productPrice)
    {
        return productPrice - (productPrice*PercentDiscount);
    }
}

public class FixPriceDiscount : IDiscount
{
    public decimal PriceDiscount { get; set; }

    public FixPriceDiscount(decimal priceDiscount)
    {
        PriceDiscount = priceDiscount;
    }

    public decimal ApplyDiscount(decimal productPrice)
    {
        return productPrice - PriceDiscount;
    }
}

So now we have a class that implements a percentage discount and another one that imposes a fixed-rate discount. Of course, our current implementation should NEVER be used as-is in a real system. We would need to validate that the price stays positive, and maybe verify that we are not underselling the item.

Let’s use this current implementation:

// Creating a product worth 50$
Product currentProduct = new Product("Simple product", 50.0M);

Console.WriteLine(string.Format("Original Price: {0}", currentProduct.OriginalPrice));

// Give a 10% rebate on the product
currentProduct.AppliedDiscount.Add(new PercentageDiscount(0.1M));
Console.WriteLine(string.Format("Discounted Price: {0}", currentProduct.DiscountedPrice));

//Give an extra 10$ off on the product
currentProduct.AppliedDiscount.Add(new FixPriceDiscount(10.0M));
Console.WriteLine(string.Format("Discounted Price: {0}", currentProduct.DiscountedPrice));

This will output 45.00$ and then 35.00$. Note that the discount implementations are not aware that they are being applied to a product; they could be reused in any other model that accepts an IDiscount.

Conclusion

By chaining strategies (the discount algorithms), we increase the flexibility of our model and the reuse of common algorithms. With a simple rule engine, it would also be easy to apply discounts to products that match certain rules.

A chain-of-responsibility is also useful for objects that can have multiple rules applied to them based on different conditions. The conditions are moved from the object itself into a “command” and reused exactly the same way we did here.
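The rule-engine idea mentioned above can be sketched quickly. The `DiscountRule` and `DiscountAssigner` names are invented for this sketch, and `Product`/`IDiscount` are condensed copies of the types above so the snippet stands alone:

```csharp
using System;
using System.Collections.Generic;

// Condensed stand-ins for the article's types so this sketch compiles on its own.
public interface IDiscount { decimal ApplyDiscount(decimal productPrice); }

public class PercentageDiscount : IDiscount
{
    private readonly decimal _percent;
    public PercentageDiscount(decimal percent) { _percent = percent; }
    public decimal ApplyDiscount(decimal productPrice) { return productPrice - (productPrice * _percent); }
}

public class Product
{
    public string ProductName { get; private set; }
    public decimal OriginalPrice { get; private set; }
    public List<IDiscount> AppliedDiscount { get; private set; }

    public Product(string productName, decimal productPrice)
    {
        ProductName = productName;
        OriginalPrice = productPrice;
        AppliedDiscount = new List<IDiscount>();
    }
}

// Hypothetical rule: pairs a condition on a product with the discount to grant.
public class DiscountRule
{
    public Func<Product, bool> Condition { get; private set; }
    public Func<IDiscount> CreateDiscount { get; private set; }

    public DiscountRule(Func<Product, bool> condition, Func<IDiscount> createDiscount)
    {
        Condition = condition;
        CreateDiscount = createDiscount;
    }
}

// Hypothetical rule engine: every rule whose condition matches adds its discount.
public class DiscountAssigner
{
    private readonly List<DiscountRule> _rules = new List<DiscountRule>();

    public void AddRule(DiscountRule rule) { _rules.Add(rule); }

    public void AssignDiscounts(Product product)
    {
        foreach (DiscountRule rule in _rules)
            if (rule.Condition(product))
                product.AppliedDiscount.Add(rule.CreateDiscount());
    }
}
```

A rule could then read like `new DiscountRule(p => p.OriginalPrice > 40M, () => new PercentageDiscount(0.1M))`, keeping the assignment logic out of the product itself.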


Wednesday, March 4, 2009

Waterfall development still works just as well

Waterfall development is still a valid way to develop software. Setting up the requirements, doing proper analysis, coding and then testing works just fine. However… not for ever-changing software like a website.

If I were to build an e-Commerce website, I would never choose to go waterfall. I would rather go SCRUM or XP. Agile development has the advantage of including the client inside the development process. It allows the client to change his mind on things that seemed good at first but turned out to be bad ideas. Some ideas can only be rated as “bad” once you are actually developing them. And if some features take longer, it’s easier to find out quickly that the project is going to be late and that some features will be left out.

Agile development has worked well so far in custom application development, websites, e-Commerce, product development, etc. However, as good as agile might be… there is one strong case where I think the waterfall approach is still relevant.

Some software needs to be bug-free and to have detailed specifications for its features. What software is that? One example, which FastCompany wrote about, is what a group of people inside Lockheed Martin develops for the Space Shuttle. This software needs to be bug-free. It must be thoroughly tested and pass the strictest inspections. Every change to the specifications must be approved by multiple people, and no change to the code base is allowed without a valid reason. They do not do agile. They do waterfall. What about the bugs?

This software never crashes. It never needs to be re-booted. This software is bug-free. It is perfect, as perfect as human beings have achieved. Consider these stats: the last three versions of the program -- each 420,000 lines long -- had just one error each. The last 11 versions of this software had a total of 17 errors. Commercial programs of equivalent complexity would have 5,000 errors.

So how come waterfall works in this case? Because the software targets hardware that rarely needs to change, which reduces the number of compatibility problems. The software doesn’t need to work on 100 types of space shuttle; it has only one physical requirement. It also works because once the specifications are written, they are followed at all cost. Changes are expensive and must be approved every time. And the software is never updated while in use. Of course you’ll never see this in an e-Commerce website!

So let’s summarize:

  • Hardware that rarely changes
  • Precise specifications
  • Expensive changes
  • Once deployed, can’t be changed

Can we think of other examples? The simplest I can think of is any piece of hardware with software inside. Let’s go with the microwave. Your microwave has software inside. Let’s see how many points it meets, shall we? First, the microwave hardware will NEVER change. Nobody is pimping out his microwave, so we can assume the hardware stays the same for all its usable life. The specifications for microwave software rarely change (timer, defrost settings, power levels, etc.). If a change must be made, it’s probably because of a hardware change, which is expensive. And finally, no microwave is Wi-Fi enabled or has a USB connection to update its firmware.

We can safely assume that the waterfall model was among the first software development processes to be used. People first programmed chips, boards, “simple” OSes or OSes with limited distribution. Back then the formula worked great, for exactly the same four reasons I mentioned. The model started to break when building software for computers that varied widely in configuration (RAM, CPU, etc.). People tried to keep using the model, but development time skyrocketed. A new model became necessary.

So please, unless you are reprogramming your microwave for some evil plan, don’t use waterfall. The main weakness of waterfall was the lack of user input. Even the Sashimi variant is not enough. We need rapid feedback and constant testing. We are not developing perfect software that must never fail, but make sure yours does work before it reaches the client.


Saturday, February 28, 2009

TDD: How I applied TDD to a simple problem

A month or two ago I had to build a component that analysed a string and returned some information out of it. I was sure a regular expression was the best fit, so I started writing down which inputs should be valid and which should not be allowed.

When I started writing this code, I had already read many blog posts about TDD and wanted to give it a try. I wanted a simple scenario to apply it to, and parsing this string was exactly that kind of simple problem.

Anyone who has worked with regular expressions knows it’s easy to make something match. It’s much harder to make it match what you want and nothing else. For proof, look at how many regular expressions exist just to parse a phone number. The non-matches are as important as the matches.

So I started a test project and added my first test. The test covered the perfect scenario. Of course, the test didn’t even compile. I then created the missing classes and made the test compile. All of the classes had the following inside them:

throw new NotImplementedException();

This ensured that the test “went red”. I then implemented the minimum necessary to make it pass and make it “go green”. And I kept on rolling until it worked in all my specified cases. Sometimes, previous tests broke. Sometimes, everything stayed green and I kept on going.

For experienced TDD-ers this is common and normal. But for me, it was weird. I made sure to follow EXACTLY what the method said. When it said “minimum necessary to make it pass”, I returned a constant if I could, or created/modified the regular expression as needed. As I kept adding tests, the constants all went away, the regular expression got more precise, and every time I broke a test I came back and fixed it. It’s a weird feeling, but when I handed my class over for use, it worked as perfectly as it could.
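To make the red-green loop concrete, here is an illustrative sketch of the kind of class and tests that come out of the process. This is not my original code, and the phone-number format is a made-up example:

```csharp
using System;
using System.Text.RegularExpressions;

// Illustrative parser grown test-first: the regex got more precise as tests were added.
public static class PhoneParser
{
    private static readonly Regex Pattern =
        new Regex(@"^\((\d{3})\) (\d{3})-(\d{4})$");

    public static bool TryParseAreaCode(string input, out string areaCode)
    {
        Match match = Pattern.Match(input);
        areaCode = match.Success ? match.Groups[1].Value : null;
        return match.Success;
    }
}

// The kind of tests written first (shown as plain checks for brevity).
public static class PhoneParserTests
{
    public static void Run()
    {
        string area;
        // Green: the perfect scenario matches.
        if (!PhoneParser.TryParseAreaCode("(514) 555-1234", out area) || area != "514")
            throw new Exception("expected a match with area code 514");
        // Red first, then fixed: a non-match is as important as a match.
        if (PhoneParser.TryParseAreaCode("514-555-1234", out area))
            throw new Exception("expected no match for the wrong format");
    }
}
```

Each new non-matching case starts as a red test, and the pattern is tightened just enough to make it green.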

So why the ruckus? Because since I wrote this piece of code, I haven’t seen one bug report. The code is perfect for what it is doing. When a bug does come my way, I’ll handle it the TDD way: add a test that reproduces the bug, then fix my code until all tests pass.

Result of everything? One bug-free class, twenty-something tests and one developer who learned the essence of TDD.

I would love to get my teeth on more complex code now.


Anti-Pattern: Anemic Domain Model

Here is an anti-pattern Martin Fowler will agree with. In fact, it’s Martin Fowler who first described this anti-pattern, in November 2003. As Fowler said, it looks like a model and it smells like a model, but there is no behaviour inside.

The basic symptom of an Anemic Domain Model is that at first blush it looks like the real thing. There are objects, many named after the nouns in the domain space, and these objects are connected with the rich relationships and structure that true domain models have. The catch comes when you look at the behavior, and you realize that there is hardly any behavior on these objects, making them little more than bags of getters and setters.

The problem with the anemic domain model is that the logic doesn’t live with its associated object; it’s located in the objects that use it. See the problem? Unless you are going through the objects that hold the behaviours, the anemic domain model won’t bring you any good. In fact, these are just getters and setters with barely enough behaviour to be called objects.

Of course, you gain a good separation of concerns and some “flexibility” of behaviours if ever needed. You also gain the ability to generate those domain models from a modeling tool without breaking a sweat. If there are so many benefits, where’s the catch?

Barely anything! You just need to duplicate the business logic so that every part of the business gets its own copy. Objects can’t validate themselves, since the validation logic lives outside the object. Everyone needs a reference to the model DLLs and any shared entities, which increases coupling between classes. It also increases code duplication, since many parts of the business essentially reuse what other parts need. And don’t forget maintenance: since the business logic is spread across the business, every piece of common logic must be updated everywhere at once and validated against each respective service. And that is assuming that part of the business even wants to update. It’s like dealing with many mini-companies within the same company.

Got enough? So put some business logic inside your domain model and make it easy to understand. If a certain part of the business needs some “special” behaviour, it will have to be incorporated inside the main domain model.
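To make the difference concrete, here is a small sketch contrasting the two styles. The `Account` example is hypothetical, not taken from any particular system:

```csharp
using System;

// Anemic: a bag of getters and setters; validation lives in some outside service.
public class AnemicAccount
{
    public decimal Balance { get; set; }
}

// Richer: the behaviour and its invariants live with the data.
public class Account
{
    public decimal Balance { get; private set; }

    public Account(decimal openingBalance)
    {
        if (openingBalance < 0)
            throw new ArgumentOutOfRangeException("openingBalance");
        Balance = openingBalance;
    }

    // The object protects its own rules; callers cannot put it in an invalid state.
    public void Withdraw(decimal amount)
    {
        if (amount <= 0)
            throw new ArgumentOutOfRangeException("amount");
        if (amount > Balance)
            throw new InvalidOperationException("Insufficient funds");
        Balance -= amount;
    }
}
```

With the anemic version, nothing stops a caller from writing `account.Balance = -500M`; with the rich version, the invariant is enforced in exactly one place.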

But what if you have to maintain an anemic domain model and you want to fix this anti-pattern? You can always rewrite the software, but that’s an expensive solution. The better approach is that every time a new requirement arrives, you put it inside the domain model and DO NOT put it inside the many service classes. This is what Greg Young describes as “making bubbles”. By making bubbles of great code that will be easy to maintain and reuse, while still maintaining the current system, you will end up replacing everything.

Anything worth doing is worth doing well.


Thursday, February 26, 2009

Anti-Pattern: The god object

Because it’s easier to recognize evil if you have a mug shot, here’s a simple one for all of you. The god object is a class that knows too much. It’s a severe violation of the Single Responsibility Principle, and probably of a lot of the other SOLID principles, depending on the implementation.

A basic principle in programming is to divide a problem into subroutines to make it easier to solve. It’s also known as “divide and conquer”. The god object becomes aware of everything, or all the other objects become so dependent on it, that any change or bug fix becomes a real nightmare to implement.

I tend to call those objects “Death Stars”. Why? Because just like the Death Star, if someone gets to mess with the core, it explodes. In fact, any modification to this god object will cause ripples of change everywhere inside the software and will end in lots of bugs.

You can easily recognize one of these objects by the fear developers feel when getting near it.

So how to solve it? By refactoring, of course! The goal is to break the logic into as many subroutines as possible, then move those subroutines into different classes. Trying to follow the SOLID principles will definitely help.

Resolving a god object looks very different depending on how omnipotent your god object is. But one thing is for sure: you have to separate the “powers” (read: responsibilities) and make sure the Single Responsibility Principle is applied.
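A minimal sketch of what that separation of "powers" looks like. `StoreManager`, `Order` and the extracted interfaces are hypothetical names, not from any real code base:

```csharp
public class Order { public decimal Total { get; set; } }

// Before: a god object that mixes persistence, business logic and presentation.
public class StoreManager
{
    public void SaveOrder(Order order) { /* data access code lives here */ }
    public decimal ComputeTax(Order order) { return order.Total * 0.05M; /* business rule */ }
    public string RenderInvoice(Order order) { return "Invoice: " + order.Total; /* presentation */ }
}

// After: each "power" extracted behind its own single-responsibility abstraction.
// Callers depend on the small interface they need, not on the whole Death Star.
public interface IOrderRepository { void Save(Order order); }
public interface ITaxCalculator { decimal ComputeTax(Order order); }
public interface IInvoiceRenderer { string Render(Order order); }
```

Each interface can then be implemented, tested and changed without rippling through the rest of the system.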


Wednesday, February 25, 2009

We all produce code that we really aren’t proud of

Everyone produces code. Some people give birth to beautiful, elegant and maintainable code at some point in their career. But pretty much every developer will, one day or another, produce code that is horrible, inelegant and unmaintainable.

I try as much as possible to separate concerns inside my applications so that maintenance is less of a problem. I recently had to review some code I wrote more than a month ago and… I’m not proud. I’m really not proud of the code I saw. It seems that every once in a while we write code we won’t be proud of.

Of course, we improve our coding skills every day and every time we encounter new technology and new idea. Hell, I like the SOLID principles and the TDD ideas. I love the concept of separation of concerns and modularization of an application. So how come I wrote this code?

I can’t remember exactly why, honestly. Maybe I was rushed, or I wanted to rush through it. Who knows… What is important, however, is to notice that the code you wrote tastes like cheap wine. It’s important to take note of it and to “maintain your code garden”. This is where the title of my blog actually makes sense. Even with the best intentions, code quality decays, and if nothing is done you end up with a half-rotten application that nobody will care about.

So what to do about it? Make sure to improve the quality of existing code every day. If we write bad code 10% of the time (a number I just picked off the top of my head) and good code 90% of the time, imagine if we took only a few hours per week to correct those mistakes.

You probably won’t have the time to fix it. Who does? But there will be a day when you finish an hour early and wonder whether you should start implementing a new feature or go home. If you took note of the classes that need improvement, you could easily use that hour to maintain your code garden and be proud of the code you wrote.

So, the first “time off” I get will be used to fix that piece of code I’m not proud of.

Anyone else who wrote some shameful code?


Monday, February 23, 2009

Model View Presenter Revisited

The MVP pattern originated at Taligent in the early 1990s. It is mostly used with WinForms and WebForms.

The view normally doesn’t do anything by itself. The classic implementation is described as follows.

The view instantiates the presenter with an instance of itself; the presenter’s constructor parameter must be an interface of the view. When view events happen, they call the presenter without any parameters or return value. If the presenter needs data, it gets it through the view interface rather than having the view pass the data directly. Changes to the view must be done through the presenter.

Of course, that is the literal 1990s implementation. Today we have more advanced mechanisms that work quite nicely. What is interesting is that, with proper data binding, we can change values on the view without even calling methods on it.

It is possible to add a data binding on a property of the presenter. Once the binding is in place, simply changing the presenter’s property fires events on the view that automatically update the control.

This removes some implementation details of MVP and makes the pattern easier to implement.

Need a sample? Here is how to implement a property that automatically notifies the view when it changes inside a presenter:

public class MyPresenter : IPresenter, INotifyPropertyChanged
{
   private readonly IView view;
   private int randomNumber;

   public int RandomNumber
   {
       get { return randomNumber; }
       set
       {
           if (randomNumber == value) return;

           randomNumber = value;
           RaiseEvent("RandomNumber");
       }
   }

   #region Implementation of INotifyPropertyChanged

   // custom method to ease the change of event
   public void RaiseEvent(string propertyName)
   {
       PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
   }

   public event PropertyChangedEventHandler PropertyChanged = delegate { };

   #endregion

   public MyPresenter(IView view)
   {
       this.view = view;
       this.view.InitializeBindings(this);
   }

   public void GenerateRandomNumber()
   {
       Random rnd = new Random(DateTime.Now.Millisecond);
       RandomNumber = rnd.Next(0, 100);
   }
}

This raises an event every time a DIFFERENT value is assigned to the RandomNumber property. Now for the view, it looks like this:

public partial class frmMain : Form, IView
{
   private readonly IPresenter presenter;

   public frmMain()
   {
       InitializeComponent();
       presenter = new MyPresenter(this);
   }

   public void InitializeBindings(IPresenter currentPresenter)
   {
       textBox1.DataBindings.Add("Text", currentPresenter, "RandomNumber", false, DataSourceUpdateMode.Never);
   }

   private void button1_Click(object sender, EventArgs e)
   {
       presenter.GenerateRandomNumber();
   }
}

The method InitializeBindings is called from the constructor of the presenter, which ensures the bindings are made only once. This does NOT require additional methods inside the view to put the generated number into the TextBox. This implementation respects the pattern definition while using the latest .NET binding technology.

This reduces the number of boilerplate methods while keeping the framework in charge of the bindings.

Here are the resulting interfaces from this implementation:

public interface IPresenter
{
   int RandomNumber { get; set; }
   void GenerateRandomNumber();
}

public interface IView
{
   void InitializeBindings(IPresenter currentPresenter);
}

Wednesday, February 18, 2009

Why should I use mocking objects in my Unit Test?

If we cut out any "fanboy" favouritism toward a certain framework and try to keep it to a one-liner… I would say: "To simulate behaviours of objects that are impractical or impossible to incorporate inside a unit test".

Wikipedia's article about mock objects mentions several reasons an object should be mocked.

The object...

Supplies Non-Deterministic Results

By "non-deterministic" we mean everything from time, currency rate, shipping rate, etc. Any value that could be changing because of a specific implementation such as algorithm should be mocked. Mocked object allows you to return predetermined value that are independent of the algorithm/time/etc.

This makes it easier to test the state of the System Under Test (SUT) after running its methods.
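For example, a hand-rolled stub (no framework needed) can replace the system clock with a predetermined value. `IClock`, `FixedClock` and `GreetingService` are hypothetical names for this sketch:

```csharp
using System;

public interface IClock { DateTime Now { get; } }

// Stub returning a predetermined, deterministic time instead of DateTime.Now.
public class FixedClock : IClock
{
    private readonly DateTime _value;
    public FixedClock(DateTime value) { _value = value; }
    public DateTime Now { get { return _value; } }
}

// SUT that depends on the abstraction, not on the real clock.
public class GreetingService
{
    private readonly IClock _clock;
    public GreetingService(IClock clock) { _clock = clock; }

    public string Greet()
    {
        return _clock.Now.Hour < 12 ? "Good morning" : "Good afternoon";
    }
}
```

A test can now assert both branches of `Greet` with a fixed morning clock and a fixed afternoon clock, something that would be impossible against the real time.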

Has States that are difficult to create or reproduce

The example given by Wikipedia is a "network error". It's difficult to reproduce this kind of situation on every developer's station. Other situations might involve security, the location of the test on disk, or network availability (not just errors). If any object the SUT uses requires one of those, the tests WILL fail somewhere, somehow. If it's not on a developer's machine, it's going to be on the build machine.

Mocking those objects and giving them the proper behaviour removes any special setup otherwise needed to run a unit test.

Is Slow

Databases, the network, and file access (up to a point) are all slow. If your SUT is an ObjectService using a Repository and you hit the database directly, the tests are bound to be slow. Of course the database can cope with it, but as you add more tests, the suite will soon take hours to run. A small in-memory substitute will save the day and run those tests in a few minutes.

A mocked repository might just keep a collection of saved objects so that when a "Get" method is called, the object is readily available from that collection. This kind of mock is called a "fake" in the mocking world. It implements more complex behaviour, but allows easy initialization and much faster responses.
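Such a fake can be written in a few lines. The `Customer` and `ICustomerRepository` types here are hypothetical examples, not from any particular code base:

```csharp
using System.Collections.Generic;

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public interface ICustomerRepository
{
    void Save(Customer customer);
    Customer Get(int id);
}

// A "fake": real Save/Get behaviour, backed by a dictionary instead of a slow database.
public class FakeCustomerRepository : ICustomerRepository
{
    private readonly Dictionary<int, Customer> _store = new Dictionary<int, Customer>();

    public void Save(Customer customer)
    {
        _store[customer.Id] = customer;
    }

    public Customer Get(int id)
    {
        Customer found;
        return _store.TryGetValue(id, out found) ? found : null;
    }
}
```

Any service taking an `ICustomerRepository` can now be tested in milliseconds, with no database or connection string in sight.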

Does not yet exist or may change behaviour

If the only things currently available within your system boundaries are contracts (interfaces in C#), it's easier to mock the interface the SUT requires and use that temporarily while the component is being developed. This allows testing and coding at the same time.

Conclusion

Mocking is an excellent tool for testing a specific object under controlled conditions. Of course, those conditions are bound to change, and tests will have to be maintained. What I particularly like is that when I use a mocking framework, I don't need to create 1000+ objects (exaggerating here) with specific behaviours, or create "too intelligent" mocks that will have to be maintained. I dynamically declare a mock with my favourite mocking framework, with the expected calls and returns, and go with that.

What normally happens is that I end up with considerably fewer mock objects inside my unit tests, and the only ones left standing are some in-memory database objects with simple implementations that would be too hard to define with a mocking framework.


Tuesday, February 17, 2009

When would you use delegates in C#?

This is a valid question. Before C# 3.0, you could use delegates or declare full methods to bind to events. Now we can declare event handlers directly through lambdas. (See this post for many different examples of how to bind event handlers.)

Jon Skeet answered me the following:

    • Event handlers (for GUI and more)
    • Starting threads
    • Callbacks (e.g. for async APIs)
    • LINQ and similar (List.Find etc)
    • Anywhere else where I want to effectively apply "template" code with some specialized logic inside (where the delegate provides the specialization)

delegate is a keyword used to declare a method signature as a type. Code matching that signature can be stored inside variables and executed when necessary. This is exactly what happens when you bind methods to events: you are storing method references inside a multicast variable that calls them when the event happens.
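A minimal sketch of a delegate being stored and invoked later; the `PriceAdjustment` type and the methods are made up for illustration:

```csharp
using System;

public static class DelegateDemo
{
    // A delegate type describes a method signature that can be stored in a variable.
    public delegate decimal PriceAdjustment(decimal price);

    // A named method matching the signature.
    public static decimal TenPercentOff(decimal price) { return price * 0.9M; }

    // The stored delegate is executed when needed, not when it is assigned.
    public static decimal Apply(decimal price, PriceAdjustment adjustment)
    {
        return adjustment(price);
    }
}
```

Both a named method (`DelegateDemo.TenPercentOff`) and a lambda (`p => p - 10M`) can be passed wherever a `PriceAdjustment` is expected, which is the "template code with specialized logic inside" idea from Jon Skeet's list.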

Of course, it's limiting to think about delegates only as events. If we check the standard definition for the word delegation:

In its original usage, delegation refers to one object relying upon another to provide a specified set of functionalities. [...] Delegation is the simple yet powerful concept of handing a task over to another part of the program.

As I already demonstrated with the StreamProxy class, we can easily give another piece of software the tools it needs to do its job. But sometimes a complete class isn't necessary. Just as you might send a data repository to a service class to save a model, a delegate lets you send any method that matches the accepted signature instead of a complete class.

One of the most recent uses of lambdas in C# is in mocking tools. Moq uses them to describe expectations, returned values, and so on. This allows Moq to be type-safe instead of relying on reflection and string comparisons, giving us compile-time checks rather than runtime checks.

There are many uses for delegates, and they are being used more and more. Lots of languages support some form of delegation (.NET, C++, Java, and many more).

I hope delegates are not as foreign to you as they were a year ago.
