Thursday, December 10, 2009

ASP.NET MVC – How does the Html.ValidationMessage actually work?

When you create a basic ASP.NET MVC application, you normally get “Html.ValidationMessage” calls inserted automatically for you in the Edit and Create views. Of course, if you try to type a string in a number field, it will fail. The same goes for dates and such. The good question now is… how does it do it?

Well, the ValidationMessage method only looks to see whether the model state entry with the name you gave it has received errors. If it has, it displays the specified message. So now that we have covered the “how”, I’ll show you where it does that.

The answer lies within the DefaultModelBinder that comes activated by default with ASP.NET MVC. The ModelBinder makes a best guess to fill your model with the values sent from a post. However, when it can match a property name but can’t set the value (invalid data), it catches the exception and adds it as an error in a ModelStateDictionary. The ValidationMessage helper then picks up data from that dictionary and finds the errors for the right property of your model.
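To make the flow concrete, here is a hedged sketch (the ProductController, Product type and “Price” property are hypothetical examples, not part of the framework) of where those pieces meet:

```csharp
using System.Web.Mvc;

// Hypothetical edit action: if the posted value for "Price" can't be
// converted (e.g. the user typed a string in a number field), the
// DefaultModelBinder catches the exception and records an error in
// ModelState under the key "Price".
public class ProductController : Controller
{
    [AcceptVerbs(HttpVerbs.Post)]
    public ActionResult Edit(Product product)
    {
        if (!ModelState.IsValid)
        {
            // The view is re-rendered; Html.ValidationMessage("Price")
            // finds the error in the ModelStateDictionary and displays it.
            return View(product);
        }

        // ...save the product...
        return RedirectToAction("Index");
    }
}
```

In the view, `<%= Html.ValidationMessage("Price") %>` stays empty as long as `ModelState["Price"]` contains no errors.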

That’s it! Of course, it’s pretty simple validation and I would still recommend using a different validation library. A few are already available on the MVCContrib project on CodePlex.


Tuesday, December 8, 2009

Simple explanation of the MVC Pattern

Since the last time I wrote a blog post was more than a few months ago, I would like to start by saying that I’m still alive and well. I had changes in my career and my personal life that required some attention and now I’m back on track.

So for those who know me, I was participating in the TechDays 2009 in Montreal, presenting the “Introduction to ASP.NET MVC” session. I will also be presenting the same session in Ottawa (in fact, this blog post is written on the way to Ottawa with Eric as my designated driver).

So what exactly is ASP.NET MVC? It’s simply Microsoft’s implementation of the MVC pattern that was first described in 1979 by Trygve Reenskaug (see Model-View-Controller for the full history).

In more detail, MVC is the acronym of Model, View and Controller. We will look at each component and the advantages of keeping them properly separated.

Model

The model is exactly what you would expect. It’s your business logic, your data access layer and whatever else is part of your application logic. This I don’t really have to explain: it’s where your business logic sits, and therefore it should be the most tested part of your application.

The model is not aware of the view or of the controller.

View

The view is where the presentation layer of your application sits. In a web framework, this is mostly ASPX pages with logic limited to showing the model. This layer is normally really thin and only focused on displaying the model. The logic is mostly limited to encoding, localization, looping (for grids) and such.

The view is not aware of which controller invokes it. The view is only aware of the model to display.

Controller

The controller is the coordinator. It retrieves data from the model and hands it over to the view to display. The controller can also be associated with other cross-cutting concerns such as logging, authorization and performance monitoring (performance counters, timing each operation, etc.).
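As a minimal sketch of that coordination (the ProductController, IProductRepository and the Index view name are hypothetical illustrations, not prescribed by the pattern):

```csharp
using System.Collections.Generic;
using System.Web.Mvc;

// Abstraction over the model; the controller never touches
// a concrete data access class directly.
public interface IProductRepository
{
    IEnumerable<string> GetAll();
}

public class ProductController : Controller
{
    private readonly IProductRepository repository;

    public ProductController(IProductRepository repository)
    {
        this.repository = repository;
    }

    public ActionResult Index()
    {
        var products = repository.GetAll(); // ask the model for data
        return View(products);              // hand it over to the view
    }
}
```

Because the controller talks to an abstraction, a test can hand it a fake repository and assert on the result, without any view or database involved.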

Advantages

Now, why should you care about all that? First, there is a clear-cut separation between WHAT is displayed to the user and HOW you get the information to display. In the example of a web site, it becomes possible to display different views based on the browser, the device, the capabilities of the device (JavaScript, CSS, etc.) and any other information available to you at the moment.

Another advantage is the ability to test your controller separately from your view. If your model is properly done too (coded against abstractions, not implementations), you will be able to test your controller separately from your model and your view.

Disadvantages

MVC is mostly a web pattern rather than a WinForms pattern. There is currently no serious implementation of the MVC pattern for anything other than web frameworks. The MVC pattern is hence found in ASP.NET MVC, FubuMVC and other MVC frameworks. Thus it limits your choices to the web.

If you take a specific platform like ASP.NET MVC, other disadvantages (that could be seen as advantages) slip in. Mostly, you lose any drag-and-drop support for server controls. Grids now have to be hand-rolled and built manually instead of relying on the abstractions offered by the original framework.

Conclusions

Since we mostly require more fine-grained control over our views, the abstractions offered by the core .NET Framework are normally not extensible/customizable enough for most web designers. Some abstractions might even become unsupported in the future, bringing us back to more precise control of our views. The pattern also allows us greater testability than what is normally offered by default in WebForms (Page Controller with templated views).

My recommendation will effectively be “it depends”. If an application is already built with WebForms and doesn’t have any friction, there is no point in redoing the application completely in MVC. However, for any new greenfield project, I would recommend at least taking a look at ASP.NET MVC.


Friday, October 16, 2009

Back from vacation and personal changes

Alright! It’s been a while. Here is what happened during all that time. I went on vacation in August and I had some changes in my life on the way back.

Just to inform everyone, posting should now be more frequent. I’m scheduled to be on the Visual Studio Talk Show somewhere around the end of October. Also, don’t miss me at the TechDays 2009 in Montreal and Ottawa! Some pretty nice subjects will be covered!

Talk to you all later!


Monday, June 29, 2009

Is your debugger making you stupid?

What is one of the greatest advances of Visual Studio since the coming of .NET? You might think it is the Garbage Collector, or the IL which allows interoperability between languages. I think one of the great advances of Visual Studio 2003 (all the way through Visual Studio 2010) is the debugger. Previously, debuggers were hardly as powerful as Visual Studio’s. And that is the problem.

What is a debugger?

To quote Wikipedia, “a debugger is a computer program that is used to test and debug other programs”. The debugger is used to find bugs and figure out how to fix them.

The debugger lets you go step by step, step backward, step into, etc. All this, in the hope of reproducing a bug.

Why does it make me stupid?

It might be one of the most powerful tools that you have at hand. But it’s also one of the most dangerous. It encourages you to test your software yourself without having tools do the job for you. Once you are done debugging a module, you will never debug it again unless there is a bug in it. Then you start building around this module and you test the new modules against the first one. And then the fun starts. You start modifying the first module but don’t test the first scenario you built again. Now, the next time another developer builds something based on the same module, there are two things that can happen. The first is that the developer is going to be afraid to change the module and will duplicate the code in another module to make sure he doesn’t break anything. The second is that the developer is going to change the module anyway and rerun the application to make sure he didn’t break anything obvious. Okay, there is a third option that consists of adding tests to address the new behaviour, but we’re not interested in the good stuff. Just the bad.

Ripples of the Debugger

Okay. After I just told you this nice little story, what do you think will happen in the future? As the other developers go into the code, they will build on top of the modules again and again. As the modifications keep on coming, the module will keep on changing. As the changes stay “debugger tested” only, bugs will start to appear in modules that never had bugs before. To “test” the right behaviour, the team will start to add test scripts to execute manually to make sure no bugs are left behind. This will require interns or QA people to run the tests.

The solution?

Infect your code with tests and stop using the debugger. It’s that simple. I know that Ruby doesn’t have a debugger integrated inside the main editors it’s used with, and Ruby developers still manage to deliver quality code without one. In fact, lots of developers manage to make great software without a debugger. Running without a debugger and without tests, however, is NOT the solution. You must ensure that your code is covered with tests as much as possible. When you find a bug, write a test that reproduces the bug, then fix the production code. As your code gets tested, additional modules will not break things unless they break a test. This is the solution. This is the way to make good and clean code.
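A sketch of the “write a test that reproduces the bug” step (NUnit-style syntax; the Invoice class and the bug itself are invented for illustration):

```csharp
using NUnit.Framework;

// A tiny class that, in this invented scenario, once had a bug:
// a 0% discount used to blow up instead of returning the full price.
public class Invoice
{
    private readonly decimal price;
    private readonly decimal discountPercent;

    public Invoice(decimal price, decimal discountPercent)
    {
        this.price = price;
        this.discountPercent = discountPercent;
    }

    public decimal Total()
    {
        // Fixed production code: a plain percentage reduction.
        return price - (price * discountPercent / 100m);
    }
}

[TestFixture]
public class InvoiceTests
{
    // Written when the bug was reported; it failed, the code was fixed,
    // and it now guards that behaviour forever, no debugger required.
    [Test]
    public void Total_WithZeroDiscount_ReturnsFullPrice()
    {
        var invoice = new Invoice(100m, 0m);
        Assert.AreEqual(100m, invoice.Total());
    }
}
```

The next time someone changes Invoice, this test either stays green or fails immediately, which is exactly the feedback the debugger cannot give you automatically.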


Sunday, June 28, 2009

The cost of Bad Code

Every developer writes code. Every developer works or has worked on a Brownfield project. Working on a Brownfield project often makes developers complain about the code being poorly written and hard to maintain. That surely sounds familiar, right?

This is basically a pledge for good code. Bad code makes things worse and costs businesses money.

How much are we talking about?

There is no scientific study about this, primarily because most projects are private and won’t allow studies, and there is still no clear metric that represents clean code. Mostly, metrics can’t represent bad code. So how much money can be saved? Well… bad code hinders maintenance and comprehension, and scares programmers away from changing a class that was working well before. I don’t think we can calculate it now, but I think that cyclomatic complexity, LOC per function and code coverage are big indicators of code that is hard to understand and difficult to change.

Code with high cyclomatic complexity and huge LOC per function scares programmers away from making changes. Why? Because we all know that if we change something inside one of those methods, the ripples of change will make something else break. This fear can be neutralized by high code coverage of those big methods and/or by splitting them up.

Time for totally unscientific numbers. I think that complex code requires more than double the time to modify. Why? Well… let’s say that the developer will have to spend a considerable amount of time in the debugger instead of running tests. Tests for a (big) module should take less than 10-15 seconds to run (including the test runner initialization). Debugging the same module to verify a behaviour will normally take a minute or two. Rinse and repeat at least a dozen times and you find yourself at two or three minutes of running tests versus a dozen or more minutes of debugging the application. This is just the beginning. If there are no tests, a huge and complex method will literally take at least 10 minutes to understand (depending on context). A test-“infected” code base allows for quick failure verification without having to spend hours in the debugger. Calculate as much as you want but… as Robert C. Martin said:

The only way to go fast is to go well.

So are you saving time for your company or are you costing your company money? I think we can all gain something from writing clean code. Companies will save on maintenance costs, and programmers will improve their craft and become better programmers who are proud of what they do.


Monday, June 22, 2009

Improving code quality – 2 ways to go

I’ve been thinking about this for at least a week or two. In fact, it’s been since I started (and finished) reading the book “Clean Code” by Robert C. Martin. There are probably only two ways to go.

Fix the bad code

This method is called refactoring, or “cleaning” the code. Of course, you can’t truly know which code is bad without a static analysis tool or programmers working on the code. The tool will allow you to spot pieces of code that could bring bugs and/or be hard to work with. The problem is that refactoring or cleaning up code is really expensive from a business perspective. The trick is to fix it as you interact with the code. It is probably impossible to request time from your company to fix code that could cause bugs. If you ask your company to fix the code, you will probably receive this answer: “Why did you write it badly in the first place?”. Which brings us to the other way to improve code quality.

Don’t write it

If you don’t write the bad code in the first place, you won’t have to fix it! That sounds simple to an experienced programmer who has improved his craft over the years, but rookies will definitely leave bad code behind. Eventually, you will have to encounter this bad code. So how do you avoid the big refactoring of mistakes (not just rookies’)? I believe that training might be a way to go. When I only had 1 year of experience in software development, I was writing WAY too much bad code. I still do; not that I don’t see it go through. Sometimes things must be rushed, I don’t fully understand the problem, and some small abstraction mistakes get in. I write way less bad code than when I started. However, this bad code is not just magically disappearing. It stays there.

What about training?

I think that training and/or mentoring might be the way to go. Mentoring might be hard to sell, but training is definitely not that hard to sell. Most employees have an amount of money attached to their name within a company that represents training expenses that can be spent on them. What I particularly recommend is a course in object-oriented design or advanced object-oriented design. Hell, you might even consider an xDD course (and by xDD… I mean TDD, BDD, DDD, RDD, etc.). Any of those courses will improve your skills and bring you closer to telling clean code from bad code. Other training that teaches you a specific framework (like ASP.NET MVC or Entity Framework) will only show you how to get things done with that framework. The latter can be learned on your own or through a good book.

So? What do you all think? Would you rather have a framework course or a “Clean Code” course?


Sunday, June 14, 2009

“If you build it, they will come” – Or how to start a community

I’ve always found that the best practices inside my field were not always respected. Doctors always wash their hands; architects follow all the rules to have a building that is safe for the people living/working inside it. However, with software, anyone can declare himself a “Software Architect” or “Software Developer” without having any problem finding a job. Most people in the .NET community will follow what is given to them by Microsoft, be it SharePoint, Entity Framework, Linq To SQL, Visual Studio, or whatever. Sometimes, alternatives are good because they offer you a different view on the state of things.

When I met Greg Young for the first time, it was at a .NET Montreal Community meeting where he was doing a presentation on DDD. We had a beer together and talked about improving the level of developers in Montreal. Improving the level of the average developer in Montreal is a hell of a task. First, there are people like me, Greg and Eric De Carufel who are passionate about their craft and are not satisfied with the status quo. We believe in ALT.NET but are most of the time called “passionate programmers”. People like me and Eric are the easy ones to help. Then there are those who want to improve themselves but don’t have the time (life, family, house, etc.). They are not easy to attract, and the best way to instruct them is to do it internally (official training or coworkers). Then there are those who don’t care about their craft. Those are of no interest to me.

When I had that beer with Greg Young, he talked about taking action on what would be needed to improve the level. That is the reason why I started (or at least… am still trying to start) the ALT.NET Montreal Community. We started a month ago. We were only 7 back then. It was small but friendly. Now, on June 25th, we will hold the second Coding Dojo of the ALT.NET Montreal Community.

What is important to remember when starting a community, I think, is to start! So, if there is anyone from Montreal who wants to help us bootstrap a community… the ALT.NET Montreal Community, you are all welcome at our next Coding Dojo on June 25th.


Saturday, June 13, 2009

My baby steps to PostSharp 1.0

So… you downloaded PostSharp 1.0 and you installed it and are wondering… “What’s next?”.

Well my friends, let me walk you through the first steps of PostSharp. What could we do that would be simple enough? Hummm… what about writing to a debug window? That sounds simple enough! Let’s start. I created a new Console Application project and added references to PostSharp.Laos and PostSharp.Public. As a requirement, the aspect class must be tagged with the “Serializable” attribute and derive from OnMethodBoundaryAspect (not in all cases, but let’s start small here).

Next, there are a few methods I can override. The two that we are interested in right now are “OnEntry” and “OnExit”. Inside them, we’ll write out which method we are entering and which one we are exiting. Here are my guinea pig classes:

public class FooBar
{
    [DebugTracer]
    public void DoFoo()
    {
        Debug.WriteLine("Doing Foo");
    }

    [DebugTracer]
    public void DoBar()
    {
        Debug.WriteLine("Doing Bar");
    } 
}

[Serializable]
[AttributeUsage(AttributeTargets.Method | AttributeTargets.Property)]
public class DebugTracer : OnMethodBoundaryAspect
{
    public override void OnEntry(MethodExecutionEventArgs eventArgs)
    {
        Debug.WriteLine(string.Format("Entering {0}", eventArgs.Method.Name));
    }

    public override void OnExit(MethodExecutionEventArgs eventArgs)
    {
        Debug.WriteLine(string.Format("Exiting {0}", eventArgs.Method.Name));
    }
}

See how simple this is? But… does it work? Let’s see the trace of calling each method:

Entering DoFoo
Doing Foo
Exiting DoFoo
Entering DoBar
Doing Bar
Exiting DoBar

Isn’t that wonderful? Compile, execute and enjoy. But… what about the community, you say? Of course, if the tool is not open source there is probably nothing built around it, right? Wrong!

Here are a few resources for PostSharp that include pre-made attributes ready to be used:

That was everything I could find. Do you know any others?


PostSharp – The best way to do AOP in .NET

Who knows about Aspect-Oriented Programming (AOP)? Come on! Don’t be shy! Ok, now lower your hands. My prediction is that a lot of you didn’t raise your hands. So let’s recap what AOP is:

Aspect-oriented programming is a programming paradigm that increases modularity by enabling improved separation of concerns. This entails breaking down a program into distinct parts (so called concerns, cohesive areas of functionality). […]

So what does it truly mean? Well, it’s a way to declare parts of your software (methods, classes, assemblies) to have a “concern” applied to them. What is a concern? Logging is one. Exception handling is another one. But… let’s go wild… caching, impersonation and validation (null checks, bound checks) are all concerns. Do you mix them with your code? Right now… you are forced to do it.
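To picture that mixing, here is a hedged sketch (the TransferService and Account classes are hypothetical) of a method where the two lines of business logic drown in hand-written logging and exception-handling concerns, which is exactly what AOP factors out:

```csharp
using System;
using System.Diagnostics;

// Minimal account type so the example is self-contained.
public class Account
{
    public decimal Balance { get; private set; }
    public Account(decimal balance) { Balance = balance; }
    public void Withdraw(decimal amount) { Balance -= amount; }
    public void Deposit(decimal amount) { Balance += amount; }
}

public class TransferService
{
    public void Transfer(Account from, Account to, decimal amount)
    {
        Debug.WriteLine("Entering Transfer");          // logging concern
        try
        {
            from.Withdraw(amount);                     // the actual
            to.Deposit(amount);                        // business logic
        }
        catch (Exception ex)
        {
            Debug.WriteLine("Transfer failed: " + ex); // handling concern
            throw;
        }
        finally
        {
            Debug.WriteLine("Exiting Transfer");       // logging concern
        }
    }
}
```

With an AOP tool, Transfer shrinks to the two business lines and the rest moves into reusable aspects.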

The state of current AOP

Alright, for those who raised their hands earlier, what are you using for your AOP concerns? If you are using the patterns & practices Policy Injection block, well, you are probably not happy. First, all your objects need to be constructed by an object builder and need to inherit from MarshalByRefObject or implement an interface.

This is not the best way, but it’s been done the “proper” way, without hacks.

What is PostSharp bringing?

PostSharp might be a “hack” if you think of it that way. Of course, it does require you to have it installed on your machine while compiling for it to work. But… what does PostSharp do exactly? It does what every AOP tool should do: inject code before and after the matching methods at compile time. Not just PostSharp’s own methods, but any method tagged with an attribute derived from the base classes PostSharp is offering you. Imagine what you could do if you could tell the compiler to inject ANY code before/after your methods in ANY code you compile. Think of the possibilities. I’ll give you 2 minutes for all this information to sink in… (waiting)… got it? Starting to see the possibilities? All you need to do is put attributes on your methods/properties like this:

[NotNullOrEmpty]
public string Name { get; set; }

[Minimum(0)]
public int Age { get; set; }

Now look at that code and ask yourself what it does exactly. It shouldn’t be hard. The properties won’t allow any number under 0 to be inserted into “Age”, and “Name” will not allow any null or empty string. If any code tries to do that, it will throw a ValidationException.
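Assuming ValidationAspects behaves as described above, using a class with those two properties would look something like this sketch (the Person class holding them is hypothetical):

```csharp
var person = new Person();

person.Name = "Maxim";  // fine
person.Age = 25;        // fine

person.Age = -1;        // throws ValidationException: violates [Minimum(0)]
person.Name = "";       // would also throw: violates [NotNullOrEmpty]
```

The calling code stays clean; the guard clauses live in the attributes instead of being repeated inside every setter.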

Wanna try it?

Go download PostSharp immediately, along with its little friend ValidationAspects on CodePlex. After you have tried them, try to build your own aspects and start cleaning your code to achieve better readability.

And yes… both are Open-Source and can be used at no fee anywhere in your company.

Suggestion to CLR Team

Now, PostSharp forces us to have it installed with the MSI for it to work because it needs to install a post-compile code injector (like some obfuscation tools). What would be really nice is to be able to do the same thing built into the compiler. The compiler is already checking for some attributes… I would love to have this internal working exposed to the public so that we can build better tools and, more importantly, better code.

UPDATE: I want to mention that PostSharp is NOT open-source. However, it is free unless you need to package it with your tool.


Friday, June 12, 2009

So I just finished reading Clean Code by Uncle Bob

This must have been the most enlightening book I’ve ever read. It’s filled with “evident” knowledge. Of course, some of it you have never thought about… but some of it you just can’t avoid nodding at in approval.

As everyone knows, I’m a .NET developer and Uncle Bob is a Java developer (not exactly, but the book has its code in Java). There are some recommendations in the book that are targeted at Java developers and don’t apply to .NET.

So? If I had to tell what the book is about, what should I say?

I would say:

  • Humans are good at mixing abstraction levels
  • Keep variables/classes/functions clear and concise
  • Commenting must be done with care, otherwise it just clutters the code
  • Refactor, refactor, refactor. A code base is never perfect, but if you follow the Boy Scout rule, the code base will always end up better
  • Code should always have tests and high coverage

Am I hitting the bull’s eye here? What do you think?


Tuesday, May 26, 2009

RefCardz - Little known reference at the tip of your finger

DISCLAIMER: I am not an employee of DZone Inc. and I am not paid for talking about the RefCardz. RefCardz are available for Free (as in free beer) online on their website. RefCardz in printed format can be obtained at different events or by contacting DZone directly.

The Maven Refcardz just got released today. Everybody that uses Maven might be hyped over that but… did you know about the other Refcardz?

My favourite is the “Design Patterns” Refcardz. There are easy diagrams to understand the organization of your classes, as well as 2 interesting sections. “Use When” contains indications on when to use a particular pattern and “Example” contains a small problem that would require this pattern. As an example, let’s take the Adapter pattern.

[Image: the Adapter pattern card from the Design Patterns Refcardz]

The “Use When” section mentions that an adapter should be used when:

  • you need to adapt a class to your interface
  • complex conditions with behaviour and states are present
  • transitions between states need to be explicit

I didn’t use the Adapter pattern a lot, so I only knew about the first two. I’m still thinking about the third one.

Those RefCardz are offered for free in PDF format on DZone, or you might find them in glossy hard paper at conferences/code camps/etc. Most of the developers I know have DZone in their RSS reader but have never taken the time to look at the RefCardz that are available.

The RefCardz can be found here.

Here are a few of my favourites:


Monday, May 25, 2009

Redefining ALT.NET or rather, rediscovering its meaning

I heard about ALT.NET about a year ago. At first, I thought that it was about using alternatives to Microsoft, or avoiding Microsoft software. ALT.NET was supposed to be about going “alternative”, being against “The Man” and for “The People”. Well, I must admit that I wasn’t totally right about that. I mean, Microsoft makes some messes, but it also makes a lot of great tools and particularly a great IDE with lots of extensibility points.

Then, I did what I should have done in the beginning. I looked up the definition. On the ALT.NET website, we have this:

We are a self-organizing, ad-hoc community of developers bound by a desire to improve ourselves, challenge assumptions, and help each other pursue excellence in the practice of software development.

Hum… that’s a totally different story now. The emphasis is mine and highlights the key points. First, I would never have gotten into a field that I hate, and I love to learn. That takes care of the desire to improve ourselves. I always challenge assumptions and try to find the best tool for the job. I know that Microsoft makes some great tools, but sometimes they just don’t cut it. They will someday… but not always. Most of the time, you can’t wait for Microsoft to build a tool that will help you finish a piece of software… so you get what works for you at the moment.

Finally, “help each other pursue excellence”. That is the hardest one. Of course, I participate in the .NET Montreal user group, but… I felt that more could be done. I then started to speak with Greg Young and other passionate programmers in Montreal. Something that Greg kept repeating during our “beer meetings” was: “But what concretely can we do to improve the level of the people in Montreal?”. This stayed in my mind for weeks.

Since I wanted to help improve my fellow programmers and I thought that we learn best while coding… I started searching for a way to improve everyone while coding. It turns out it already exists and it’s named a Coding Dojo.

Last Thursday, I organized the very first Coding Dojo of the ALT.NET Montreal group. We were few but learned a lot. We also had a lot of practice learning TDD. It was hard, since I had never done a Coding Dojo before. I learned a lot, and our fellow programmers learned a lot. I’ll try to hold one Dojo per month and to get more and more people to join the group. As Kevin Costner was told… “if you build it, they will come”.

So my last word goes to Scott Bellware. All hope is not gone, Scott. People around the world are still organizing to teach other people best practices and to try to raise the bar for everyone. Our group is small… but if it had to start somehow, it had to start small. I hope all hope is not gone on your side, Scott. Passionate programmers love to learn, and we are trying to offer them a way to learn and improve themselves and, at the same time… propagate their knowledge to the workers who didn’t care enough to come.


Thursday, May 14, 2009

Participating in the community and improving yourself

This post is sadly not going to be about code so much as about the profession. Some professions have it easy. You can cut hair without having to learn something new every 2-3 months. You can build houses without having to learn new methods every year. Of course, all professions are evolving, and new ways to do the same task more effectively are created.

What is really different when you are coding is that new languages are coming every 1-2 years. Methodologies are coming every 5 years. New frameworks are coming every 5-6 months. I might be off on those numbers, but I feel pretty confident about them. WPF came in less than 2 years. C# 3.0 came out a year and a half ago. C# 4.0 is due in 2010.

All those technologies require a lot of our time to learn. This is the main reason why there are so many conferences, workshops and community events happening in a single city. This year, I’m going to the .NET Montreal user group, I’m trying to organize a Coding Dojo, I’m attending the Montreal Codecamp 2009 and I’m presenting at that same Codecamp. My question here is… is it too much?

I feel that I won’t learn enough in my lifetime to be a great programmer, but that if I work enough, I might just be good enough to be proud of my work and make changes in the way that we do our job (The Daily WTF anyone?).

However, even if the amount of change is tremendous… I don’t see much desire to participate in community events. It doesn’t mean a lack of desire to improve. I just think that a lot of people like to learn on their own, off a book or by making the trip themselves.

How is it on your side? What is it like in the US? India? Slovakia (yeah! I got a lot of those readers!)?

What is your take on participating inside the community and self improvement?


Wednesday, May 6, 2009

LyricWiki.org – How to retrieve lyrics for your mp3 in C#?

I have an iPhone and I’ve got a lot of songs on it. I don’t particularly love to sing, but I’ve got a few bands that have a tendency to sing their lyrics in an… obscure way. Since I’m really curious, I’m always on Google searching for the lyrics. I wanted to save some time and avoid unnecessary browsing. The iPhone has the ability to display lyrics if they are included inside the mp3 metadata. Updating the lyrics is one thing, but where will I retrieve thousands of song lyrics?

I recently found out about a website called LyricWiki.org. What is interesting is that they have a web service available (for free). I decided to share how I did it. The first thing is to start a project. You will then add a service reference like this:

[Image: the Add Service Reference dialog in Visual Studio]

Click on Go, change the namespace and click OK. Once this is done, the easy part is over. Please note that the service URL has been stored in App.config with all the related settings.

Now, to retrieve the actual lyrics, nothing more than a little bit of code:

// create a client to connect to the service
LyricWikiPortTypeClient client = new LyricWikiPortTypeClient();

// retrieve a song. Creed is good. Love this band.
LyricsResult song = client.getSong("Creed", "One");

// display the artist and the lyrics
Console.WriteLine(string.Format("Found lyrics for \"{0} - {1}\":\r\n\r\n" +
                                "{2}", song.artist, song.song, song.lyrics));

// let you read it :)
Console.ReadLine();

That was easy, wasn’t it? Next post: how to update your mp3s with iTunes! Keep reading!


Tuesday, May 5, 2009

EntLib 4.0 – ExceptionPolicy.HandleException is not thread safe

We’ve faced this problem recently where Enterprise Library was crashing. Not everywhere… just on this one little line of code. We were trying to save to a database asynchronously, but if the database was not available, it threw an exception which Enterprise Library was supposed to catch.

However, this never happened. Enterprise Library crashed on us. Mind you, we didn’t figure out this problem until recently because it was only happening in production.

When we managed to reproduce the exception in a development environment (where debugging is possible), we got an error similar to "An item with the same key has already been added". Here's the beginning of the stack trace:

Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.ExceptionHandlingException: The current build operation (build key Build Key[Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.ExceptionPolicyImpl, LogException]) failed: The current build operation (build key Build Key[Microsoft.Practices.EnterpriseLibrary.Logging.LogWriter, null]) failed: An item with the same key has already been added.

That led me to this thread, where it was mentioned that an issue had been logged. I found the issue, which described a concurrency problem. Since our code was multi-threaded, we had two options: replace Enterprise Library 4.0 with Enterprise Library 4.1, or write a thread-safe wrapper around the call that handles the exception.

The first option required reconfiguring the build and every development machine; the second consisted of coding a dirty patch. We went with the first one for obvious reasons (and who knows what else they fixed in 4.1?).
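For reference, the second option would have looked something like this sketch. It is not what we shipped, and the real call would be EntLib's ExceptionPolicy.HandleException; here it is passed in as a delegate so the snippet stands on its own:

```csharp
using System;

public static class SafeExceptionPolicy
{
    private static readonly object Gate = new object();

    // Serialize every call so EntLib 4.0's non-thread-safe first build
    // can never run concurrently. 'handle' stands in for
    // ExceptionPolicy.HandleException(exception, policyName).
    public static bool HandleException(Exception exception, string policyName,
                                       Func<Exception, string, bool> handle)
    {
        lock (Gate)
        {
            return handle(exception, policyName);
        }
    }
}
```

Every thread pays for the lock, which is exactly why upgrading to 4.1 was the better fix.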

So, if you have this kind of problem with Enterprise Library 4.0, you might want to upgrade to Enterprise Library 4.1. Since the problem happens in the error-handling code itself, this kind of bug is hard to understand and erratic (due to concurrency).

Submit this story to DotNetKicks

Monday, May 4, 2009

Find and Reproduce Heisenbugs – CHESS is the tool for you

Microsoft is mainly known for 2 things: Windows and Office. For programmers, however, Microsoft is also known for many more projects/products like .NET, Enterprise Library, ASP.NET MVC, Team Foundation Server, SharePoint, etc.

Among the tools that are not really known or publicized at the moment are the projects inside Microsoft Research. This is the land of betas and “software-never-to-be-released”. Either the project is too crazy (read: innovative?) or not useful to people at the time. But sometimes a tool stands out and needs to be talked about.

My friend Eric De Carufel recently told me about a tool called CHESS from Microsoft Research. He tested it: it lets you hunt for those rare bugs that only show up when there is enough concurrency. Those bugs used to be chased with stress tests. Stress tests are not meant for finding concurrency bugs, but they are traditionally used that way too, because when a system is under stress, concurrency happens.

CHESS is there to solve this problem. It plugs into your unit tests and tries every possible interleaving of the code that runs concurrently. This includes race conditions, deadlocks, and whatever else exists (I’m not an expert in multi-threaded applications).

Of course, I expect Eric to blog more about this amazing software coming from Microsoft’s trenches.

Want more information? Get on the CHESS website. If you want to find more interesting projects that Microsoft’s genius are working on, visit Microsoft Research.

Submit this story to DotNetKicks

CodeCamp Montreal 2009 – Unit testing with Moq

I’m happy to announce that I’ve been selected to present a session at the Montreal CodeCamp.

My session is going to be focused on Moq and unit testing. Since simply looking at a framework doesn’t take long, I’ll go over the framework plus many how-tos. I’ll also try to fit in a few interesting demos of “refactoring for unit tests”. Of course, this will all be centered around best practices, since that’s this year’s theme.

See you all there!

Submit this story to DotNetKicks

Sunday, May 3, 2009

Adding lyrics to your MP3 via iTunes

Tonight I found an interesting site whose author built an application that inserts the lyrics inside your mp3 collection.

It requires iTunes and the .NET framework.

It can be found here. I will rewrite the application in C#, improve the source, and ask the author whether we can put it on Google Code or CodePlex.

Enjoy!

Submit this story to DotNetKicks

Friday, May 1, 2009

Coding Dojo – Thursday, May 21st

Alright, after our small ALT.NET meeting (with beer) at O’Reagan Irish Pub, we decided to organize a coding dojo. If anybody is interested, please leave a comment on the blog or contact me via Twitter (@MaximRouiller).

The Coding Dojo will happen on Thursday, May 21st. It will start at 5:30pm and may include free pizza. No guarantee, but I’ll try. We don’t have a venue yet, but I’ll keep you all updated as soon as we find one.

The Coding Dojo will be oriented around TDD and will take the format of a RandoriKata.

Who will participate?

Submit this story to DotNetKicks

Thursday, April 30, 2009

ALT.NET Montreal Meeting @ O'Regans Irish Pub – 5:30pm

Tonight we will be holding a little ALT.NET meeting with a beer. I’ll be at the pub at around 5:30pm-5:45pm.

See you all tonight!

If you are looking for the pub, it’s right here:

[Embedded Google map]
Submit this story to DotNetKicks

Monday, April 13, 2009

Which bug reporting method should I choose?

Alright, my previous post was a lot of ranting… I have to make up for it. I hate ranting because it never brings any solutions, only problems. When someone builds software, bugs will happen as surely as the sun rises every day. Sometimes they will be simple change requests, but there will also be bugs that need to be logged.

It’s important that the users of the software have an easy way to report bugs. Depending on the kind of software you are building, there will be different ways to actually report one. There are 4 main kinds of software that programmers work on: open-source software, products, internal software and public websites. They all require different kinds of bug reporting due to their reach and needs.

Open-source software

Open-source software normally does best with 2 kinds of bug reporting. The first is an actual bug tracker linked to a source hosting service like Google Code or CodePlex. The other kind is a discussion list/forum. The first is for actually reporting bugs; the second is more about support by the community. It might sound confusing, but both surface bugs, since consumers have a way to contact the project leader or the contributors to the project.

Examples include: Firefox, Moq, BlogEngine.NET

Product

Products are either websites or actual executables that are sold to consumers. Since they don’t expose their source, and especially not their bug lists, they need a way for the consumer to report bugs. Since products are normally backed by a company with staff, bug reporting can range from the simplest form to a complicated bug-submission tool. Sometimes the integration is even more advanced: some software catches crashes and sends a report when the application successfully restarts. Whatever way the bugs are caught, a company should always offer a free way of sending bug reports, even if it’s just a support@example.com kind of email address.

Examples include: FogBugz, Copilot, StackOverflow, Apple Mac OSX

Internal Software

Internal software is a whole different kind of game: software you are hired to build for another company, or that you build for your own. It mostly stays internal and is never released into the wild. This includes CMSes, ERPs, wikis, etc., and it’s what most developers will face: direct contact with the users about the bugs, and Team System (or whatever other bug-tracking software is used internally) for the reporting. The bug list is internal, and new bugs are often reported by a direct email, a phone call or a conversation with the program manager/architect/team lead/developer.

Websites

Websites normally don’t have a specific bug-reporting system. Most websites either work or the user leaves. The main bug-reporting system here is heavy logging of any errors that happen, plus the IIS logs for anything that doesn’t go through the .NET runtime. Some websites that offer a service might provide a way to report bugs, but they are the exception… not the rule.

Examples include: StackOverflow

Conclusion

Bug reporting is not what you want your users to spend time on (unless you are selling bug-reporting software). However, when a user seeks you out to report a bug, you must offer them a painless process. Without one, you will only find out about bugs from other forums where people are complaining (if you have a large user base); mostly, you will never get any notice that the software doesn’t work as planned.

So please… if I want to report a bug… make my life easy and make it a process that doesn’t take more than a few clicks.

Thanks

Submit this story to DotNetKicks

Report a Bug – The feature all products should have

I was writing acceptance tests for a project today and I needed to include a file inside my Excel 2007 sheet. After fiddling around a little, I found “Insert Object”. This allowed me to insert a file and have it available to anyone opening the Excel 2007 document. But when I dragged and dropped the file back to my desktop, the file was truncated! I couldn’t believe it! I had found a bug in Excel 2007! Being a developer, I wanted to properly report that bug.

So I started with a small Google search for “report Microsoft bug” which lead me to this, this and that.

Let me quote the title of the last article: “Mission: Impossible. Submitting a Bug Report to Microsoft”.

This had me worried. The article is dated 2002 and the other ones 2006. We are in 2009 at the time of writing. So I searched more and more. I literally found nothing.

How can an application ship without having an easy way to report a bug? Apple does it. Ubuntu does it. OpenOffice does it. Firefox does it. Google Chrome does it. Why can’t I report a bug for Internet Explorer, Office or Vista (soon Windows 7)?

It goes without saying that if you are building a product, there should be an easy way to report a bug, be it a small contact form, an official bug-reporting tool, etc. But please don’t charge your users $35 to file a bug report.

Microsoft has recently (in the last few years) opened up and launched Connect to help users report bugs in the .NET Framework. They should extend it to their whole product line. It seems that the developers working on the .NET Framework are reachable and have blogs everywhere, while the Excel team is just higher up in the sky, untouchable by customer feedback.

So? Does your application have a way to easily report bugs? Because if the process to report a bug is long and tedious, you will have a “bug-less” product with a marginal number of users who tolerate the bugs or who are hard-core enough to take the time to report them.

Submit this story to DotNetKicks

Sunday, April 5, 2009

5 reasons why you should use ASP.NET MVC

I’ll be fair with you, readers: I’ve only toyed with the ASP.NET MVC framework. It looks great so far, and it’s the first full-blown MVC framework we have that is backed by Microsoft. However, there is a lot of opposition nowadays that tends to be formulated like this:

Why should I use ASP.NET MVC? WebForms works well.

Other objections come from the lack of server controls. A developer looks at that and wonders why he should have to write HTML and JavaScript when before he could retrieve all that beautiful information with a simple postback.

So without ranting any further, here are 5 reasons why you should use ASP.NET MVC.

1. Testability

When the MVC pattern is properly applied, it allows for a better separation of your business logic and your presentation code. Since the view is kept out of your model, you can easily test the model without requiring a web server. By default, when starting a new MVC project, Visual Studio offers to create a unit-test project based on Microsoft’s unit-test framework. Other unit-test frameworks can also be configured to be used by default instead of Microsoft’s solution.

The way the code is structured, the controller is the one handling the calls coming from the routes. Controllers can be instantiated outside of a web request, which makes them easy to test too.

2. Perfect control of the URLs

ASP.NET MVC uses URL routing to take control of incoming requests and forward them to your controllers. Instead of a 1-to-1 mapping, routes allow pattern matching, the default being “{controller}/{action}/{id}” with defaults of “Home/Index”. This technically allows you to set the URLs exactly how you want: you don’t have to create a folder for every level of depth. URL routing lets you make clean URLs that are easy to remember.

Would you rather try to remember http://localhost/Sales/DisplayProduct.aspx?ProductID=23213 or http://localhost/Product/Detail/23213? Even better, if you run an e-Commerce site and want some fast links, you can directly bind those URLs to http://localhost/23213 to make them even easier to remember. Doing that in WebForms while keeping it all unit-testable would just be too time-consuming, now wouldn’t it?
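As a sketch (the route and controller names here are mine, not from a real project), such a shortcut URL could be registered in Global.asax before the default route, with a numeric constraint so regular “{controller}/{action}/{id}” URLs still work:

```csharp
using System.Web.Mvc;
using System.Web.Routing;

public static class RouteConfig
{
    public static void RegisterRoutes(RouteCollection routes)
    {
        // Bind http://site/23213 straight to a hypothetical ProductController.Detail(id)
        routes.MapRoute(
            "ProductShortcut",
            "{id}",
            new { controller = "Product", action = "Detail" },
            new { id = @"\d+" });            // digits only, so other URLs fall through

        // The default route created by the project template
        routes.MapRoute(
            "Default",
            "{controller}/{action}/{id}",
            new { controller = "Home", action = "Index", id = "" });
    }
}
```

Order matters: routes are matched top to bottom, so the most specific route goes first.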

3. Better Mobility Support

In WebForms, you would have to detect on each page that the browser is a mobile one and adapt your rendering on each and every form, or else redirect the user to a different page. What is excellent with MVC is that it’s not the view that receives the request; it’s the controller. The controller can then dynamically decide which view to render while keeping the same URL. So to show a product view, you don’t even need to hand out different URLs for different devices: you just detect which device you are handling and render the proper view. As you support more and more mobile devices, you can keep adding views that are specific to each device. Want to support this new HTC? Create a view, detect the browser, and ensure the right view is displayed. Want some iPhone goodness with device-specific HTML? Create the necessary view, reuse the browser detection and display the view.

You can keep doing that ad infinitum, as much as your audience warrants. Supporting mobile has never been this convenient.
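A minimal sketch of that idea (the view names and user-agent checks are made up; a real application would use the richer browser-capabilities data):

```csharp
using System;

public static class ProductViewSelector
{
    // Map a user agent to a view name; the controller action would then
    // return the selected view for the very same URL.
    public static string SelectViewName(string userAgent)
    {
        if (userAgent == null)
            return "Detail";
        if (userAgent.Contains("iPhone"))
            return "Detail.iPhone";      // device-specific HTML
        if (userAgent.Contains("Windows CE"))
            return "Detail.Mobile";      // generic mobile markup
        return "Detail";                 // full desktop view
    }
}
```

Adding support for a new device is then one more check and one more view, with no new URL.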

4. View Engines

Now, if you have only ever built ASP.NET WebForms, this term might sound weird. Let’s just say you have been using the same view engine all this time without wondering whether you could choose. The WebForms view engine is… well… what you have been using all along. It includes server tags (<% %>), binding tags (<%# %>) as well as control tags (<asp:TextBox … />).

The Spark view engine is a good example. MvcContrib also offers 4 different view engines (Brail, NHaml, NVelocity, XSLT). Each of these engines was created to solve specific problems, and different view engines can be used on different views: one page could be handled with the WebForms view engine, one with Spark, one with XSLT, etc. Different view, different problem, different solution.

You might never need them, but the simple fact that they are available will make your life easier if you do.

5. Built-in and shipped jQuery support

Let’s keep the best for last. jQuery ships with every new ASP.NET MVC project. Since Microsoft announced support for jQuery, it has been the big buzz in the JavaScript world. Since ASP.NET MVC doesn’t rely on postbacks, a strong JavaScript framework is needed to provide all the UI that the old server controls used to offer. jQuery easily gives you AJAX, DOM manipulation and event binding, all of it across browsers.

Of course, jQuery is not an advantage of MVC itself, but it is a serious part of the ASP.NET MVC offering. No more downloads or “I’ll write my own” stuff. I don’t know about you, but when I have to write JavaScript, I normally reach for document.getElementById. That works in most browsers, but as soon as you start going funky, some browsers misbehave. jQuery simply lets you write $(“#myControlId”), and many more shortcuts, to do what you need across browsers. Just having jQuery available stops me from writing incompatible code.

Conclusions

Lots of points go toward MVC, and way more could be added. You certainly don’t want to miss Kazi Manzur Rashid’s blog posts about ASP.NET MVC best practices (part 1, part 2). Scott Hanselman and Phil Haack also have great posts about ASP.NET MVC.

Don’t be fooled: WebForms are not necessarily evil. They just aren’t leading you to the pit of success.

Submit this story to DotNetKicks

Tuesday, March 31, 2009

System.Diagnostics.Process and xcopy… why doesn’t it work?

We spent 30 minutes on a thorny issue. We execute some commands directly from a C# program, and we want to make sure that everything executed. We unit-tested some of the code and everything seemed to work.

Today, we tried it live with a real-case scenario where we execute XCOPY. And then… nothing.

We never received an ExitCode of 0. We didn’t catch any exception. And we didn’t catch anything in the error stream.

Here is the code that we used:

private void ExecuteProcess()
{
    // setup the process info
    var startInfo = new ProcessStartInfo(@"cmd.exe", @" /c xcopy c:\file*.txt c:\file*.bck")
                        {
                            UseShellExecute = false,
                            CreateNoWindow = true,
                            WorkingDirectory = @"C:\",
                            RedirectStandardError = true,
                            RedirectStandardOutput = true
                        };

    
    using (Process proc = Process.Start(startInfo))
    {
        // note: (e.Data ?? …) must be parenthesized; '+' binds tighter than '??'
        proc.OutputDataReceived += (sender, e) => MessageBox.Show("Data: " + (e.Data ?? String.Empty));
        proc.ErrorDataReceived +=
            (sender, e) => MessageBox.Show(string.Format("Error: {0}", e.Data ?? string.Empty));
        proc.BeginErrorReadLine();
        proc.BeginOutputReadLine();

        proc.WaitForExit();
    }

}

Wow, nice piece of code, isn’t it? It will work for almost everything you have to execute. But if you execute XCOPY, it won’t. We found out that it’s missing something that shouldn’t be needed in our case, but without it XCOPY just executes with no output and no results.

You want XCOPY to work? Just add this line before starting the process:

startInfo.RedirectStandardInput = true;

That’s it! This error only happens when “UseShellExecute” is set to false. My guess is that XCOPY tries to read from standard input (for example, to ask whether the target is a file or a directory) and silently dies when no input handle is available.

Hope this helps some clueless programmer who is wondering why his XCOPY won’t execute.

Submit this story to DotNetKicks

Sunday, March 29, 2009

Anti-Pattern: The Gas Factory or Unnecessary complexity

Just as in any system, when you start coding some structure, you always try to make it as generic as possible so that those parts are easy to reuse later. There is a normal complexity when you build your code, and as you go, complexity adds up. However, the main problem with this anti-pattern is when the complexity is added consciously.

Let’s take a quick example. You are building collections, and one of your collections needs a special feature. The problem starts when you extract this functionality and try to make it as generic as possible so that anyone could reuse your class. That is the problem of unnecessary complexity. You see, as you make things generic for a single class, you add structure to your code. Some of that structure might be proven for this specific class but fail on other classes. There is also the possibility that no other class will ever require this functionality.

How do I solve this anti-pattern?

By following YAGNI and Lean Software Development, you delay code and unnecessary complexity until you actually require the complexity. If you have 2 classes that require the same functionality, it is then time to extract that functionality into a separate class and make those 2 classes inherit from it (or apply whatever other pattern is required).
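As a sketch of that rule with made-up classes: an “item added” notification stays inside the one collection that needs it, and only once a second collection needs the same behaviour is it worth extracting into a shared base class:

```csharp
using System;
using System.Collections.Generic;

// Extracted base class: justified only because TWO collections now need
// the same "notify on add" feature (the names here are illustrative).
public abstract class NotifyingCollection<T>
{
    private readonly List<T> _items = new List<T>();

    public event Action<T> ItemAdded;

    public void Add(T item)
    {
        _items.Add(item);
        if (ItemAdded != null)
            ItemAdded(item);    // the shared feature
    }

    public int Count
    {
        get { return _items.Count; }
    }
}

public class OrderCollection : NotifyingCollection<string> { }
public class CustomerCollection : NotifyingCollection<string> { }
```

Had only OrderCollection needed the notification, the event would simply have lived there, with no hierarchy at all.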

And here, I’m not just talking about inheritance; I’m also talking about unnecessary design patterns. If you built a pipeline component to calculate discounts but you only have one discount at the moment, it might seem relevant to implement the pipeline anyway since you are sure you’ll require it later. However, the client is the one who is supposed to drive the requirements, and if you don’t require the pipeline immediately… well… don’t build it!

That doesn’t mean leaving your code in a fixed state. It just means keeping your code clean to ease the implementation of the pipeline later.

The best lines of code are those we don’t need to write.

Submit this story to DotNetKicks

Thursday, March 26, 2009

Software development is not an art. It’s a craft.

When I'm working on a problem, I never think about beauty. I think only how to solve the problem. But when I have finished, if the solution is not beautiful, I know it is wrong.
R. Buckminster Fuller - US architect & engineer

Forget about the “creation” part of the job. As you develop an application, you are not creating something; you are building something. Mind you, it’s not like building a bridge or a plane as in engineering.

The main difference between an art and a craft is the people inside the profession. Art is completely subjective: anyone can fancy himself an artist, and some may succeed. The difference between artists and programmers is that artists who don’t sell get another job, while self-improvised programmers who fail simply find another company to hire them.

I’m not complaining about self-taught programmers. What I’m worried about is the quality of the code we find in today’s software. Who has seen some seriously bad code here? Raise your hand! You’re not alone. That code was probably written by people who don’t care about software, or by people who never had the chance to get proper training. That’s where it’s important to realize that our profession is much more a craft than an art.

Art is based on inspiration. Craft is based on sets of rules and experience that will bring you quality every time you follow them. Artistic software development doesn’t care about the rules. Let’s pick a simple metaphor: Japanese sword makers. There is a reason why their work is recognized and of such quality. Their craft has had hundreds of years to accumulate rules on how to properly melt the metal, forge the blade, etc. The master knows a good sword on sight and knows that there are very few ways to reach that quality. Young apprentices will train for years under their master to gain the knowledge he acquired and be able to reproduce his success.

It will sound harsh, but there are only 2 ways to change the behaviour of those who don’t care about the code:

  1. Train them (mentorship, classes, etc.)
  2. Get rid of them

If you want to train a programmer, he has to attend conferences and user groups, and spend some time off work learning about good practices, design patterns and software principles. Companies like Object Mentor offer training to raise the quality of existing programmers.

Our industry might not be as old as Japanese sword making, but we must accumulate rules and principles as much as possible and encourage the “artistic sword makers” to follow them. Otherwise, the only thing you will get is a cheap rip-off that breaks on the first hit. We’ve all seen software that breaks on the first hit, and we need to improve the average level of our profession.

UPDATE: Added a quotation that I think represents a part of what we are doing: solving problems and then making it nice. But foremost, solving problems.

Submit this story to DotNetKicks

Sunday, March 8, 2009

Software Developer and Software Engineer are not opposites, they are the same

Software engineering is defined as:

Software engineering is the application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software, and the study of these approaches.

This term was coined in 1968 in the hope of bringing a more “civilized” way of coding. What is interesting, however, is that most of the time people and companies won’t make a difference between the developer and the engineer.

A Software Developer is defined as :

A software developer is a person or organization concerned with facets of the software development process wider than design and coding, a somewhat broader scope of computer programming or a specialty of project managing including some aspects of software product management. […]

Other names which are often used in the same close context are software analyst and software engineer.

If we stick to the definition, we can say that a software engineer develops, operates and maintains software, and that he will study what he did to make sure it’s the best in the industry. As for “systematic, disciplined, quantifiable”, it’s perfectly understandable that a software engineer should always do his best and follow standards.

What about the software developer? If we stick to the definition, a software developer designs, codes, manages, tests and participates in the release of his software. Honestly, that’s basically what I do every day. I’m not handed a class diagram and told “code this”. I receive a requirement and I have to ensure that it makes it into the application. I have to answer the following questions:

  • Can it be done?
  • When will it be done?
  • Will it impact something else that you have to do?

Nobody asks me “How will you implement it?”. I have the responsibility to design, organize, implement and test my code. Of course, I could go cowboy and jump straight to the implementation.

But that’s not because I’m a software developer. “Software developer” is just a title. It’s not because some cowboy coders put all the data-access logic inside the view that all software developers are a useless bunch of monkey coders.

I’ve signed the Software Craftsmanship Manifesto because I value the work that I do. I’ve signed it because quality is important, maybe not at this very moment, but it will be eventually. I’ve signed it because I believe in good software.

What do you think? Is there such a gap between software developers and software engineers?

Submit this story to DotNetKicks

Thursday, March 5, 2009

Implementing a Chain-of-responsibility or “Pipeline” in C#

Anti-Patterns are interesting in showing you what you are doing wrong. However, patterns are also interesting in showing you how to do it well.

This time, I want to show how to implement a simple Chain-of-responsibility pattern. Our example is going to be based on a simple e-Commerce data model.

The Domain Model

Product, which has some basic attributes like a price, a name and a collection of applied discounts.

Discount, which is the actual discount abstraction. Concrete discount classes will be derived from it.

That is all we are going to need for this pattern. However, it would be smart to have a class that would assign discounts to product based on certain rules.

Let’s start by writing our Product class and our Discount interface:

public class Product
{
    private readonly List<IDiscount> _appliedDiscount = new List<IDiscount>();
    
    public string ProductName { get; private set; }
    public decimal OriginalPrice { get; private set; }
    
    public decimal DiscountedPrice
    {
        get
        {
            decimal discountedPrice = OriginalPrice;
            return discountedPrice;
        }
        
    }

    public Product(string productName, decimal productPrice)
    {
        ProductName = productName;
        OriginalPrice = productPrice;
    }

    public List<IDiscount> AppliedDiscount
    {
        get
        {
            return _appliedDiscount;
        } 
    }
}

public interface IDiscount
{
    decimal ApplyDiscount(decimal productPrice);
}

Right now, “DiscountedPrice” simply returns our “OriginalPrice”. Let’s implement the proper discount application:

public decimal DiscountedPrice
{
    get
    {
        decimal discountedPrice = OriginalPrice;
        
        foreach (IDiscount discount in _appliedDiscount)
            discountedPrice = discount.ApplyDiscount(discountedPrice);

        return discountedPrice;
    }
}

Now that we have an algorithm that applies all the discounts, let’s create a few Discount classes:

public class PercentageDiscount : IDiscount
{
    public decimal PercentDiscount { get; set; }

    public PercentageDiscount(decimal percentDiscount)
    {
        PercentDiscount = percentDiscount;
    }

    public decimal ApplyDiscount(decimal productPrice)
    {
        return productPrice - (productPrice*PercentDiscount);
    }
}

public class FixPriceDiscount : IDiscount
{
    public decimal PriceDiscount { get; set; }

    public FixPriceDiscount(decimal priceDiscount)
    {
        PriceDiscount = priceDiscount;
    }

    public decimal ApplyDiscount(decimal productPrice)
    {
        return productPrice - PriceDiscount;
    }
}

So now we have a class that implements a percentage discount and another that applies a fixed-amount discount. Of course, our current implementation should NEVER be used as-is in a real system: validations must be done for a positive price, and maybe some extra verification that we are not underselling the item.

Let’s use this current implementation:

// Creating a product worth 50$
Product currentProduct = new Product("Simple product", 50.0M);

Console.WriteLine(string.Format("Original Price: {0}", currentProduct.OriginalPrice));

// Give a 10% rebate on the product
currentProduct.AppliedDiscount.Add(new PercentageDiscount(0.1M));
Console.WriteLine(string.Format("Discounted Price: {0}", currentProduct.DiscountedPrice));

//Give an extra 10$ off on the product
currentProduct.AppliedDiscount.Add(new FixPriceDiscount(10.0M));
Console.WriteLine(string.Format("Discounted Price: {0}", currentProduct.DiscountedPrice));

This will output, in order, $45.00 and $35.00. It’s important to note that the discount implementations are not aware that they are being applied to a product; they could be reused in any other model that accepts an IDiscount.

Conclusion

By chaining Strategy patterns (the discount algorithms), we increase the flexibility of our model and the reuse of common algorithms. With a simple rule engine, it would also be easy to apply discounts to any product that matches certain rules.

Another use of a chain-of-responsibility is when dealing with objects that can have multiple rules applied to them based on different conditions. The conditions would then be moved from the object itself into a “command” and reused exactly the same way we did here.
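A generic version of that idea might look like this sketch (the type names are mine): the same apply-in-order loop as DiscountedPrice, detached from products and discounts:

```csharp
using System;
using System.Collections.Generic;

// A command transforms a value and hands it to the next one in the chain.
public interface ICommand<T>
{
    T Execute(T input);
}

// Small adapter so any lambda can act as a command.
public class DelegateCommand<T> : ICommand<T>
{
    private readonly Func<T, T> _func;
    public DelegateCommand(Func<T, T> func) { _func = func; }
    public T Execute(T input) { return _func(input); }
}

public class Pipeline<T>
{
    private readonly List<ICommand<T>> _commands = new List<ICommand<T>>();

    public Pipeline<T> Register(ICommand<T> command)
    {
        _commands.Add(command);
        return this; // fluent registration
    }

    public T Run(T input)
    {
        foreach (ICommand<T> command in _commands)
            input = command.Execute(input);
        return input;
    }
}
```

Wrapped this way, the discount example becomes a Pipeline&lt;decimal&gt; with the two discounts registered as commands.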

Submit this story to DotNetKicks

Wednesday, March 4, 2009

Waterfall development works just as well

Waterfall development is still a valid way to develop software. Setting up the requirements, doing proper analysis, coding and then testing works just fine. However… not for ever-changing software like a website.

If I were to build an e-Commerce website, I would never choose to go Waterfall; I would much rather go SCRUM or XP. Agile development has the advantage of including the client in the development process. It allows the client to change his mind about things that seemed good at first but finally turned out to be a bad idea. Some ideas can only be rated as “bad” once you are working to develop them. And if some features take longer, it’s easier to find out quickly that the project is going to be late and that some features will have to be left out.

Agile development has worked well so far in custom application development, websites, e-Commerce, product development, etc. However, as good as Agile might be… there is one strong case where I think the Waterfall approach is still relevant.

Some software needs to be bug-free and to have detailed specifications of its features. What software is that? One example is what a group of people inside Lockheed Martin develops for the Space Shuttle, which FastCompany wrote about. This software needs to be bug-free. Of course, it is thoroughly tested and must pass the strictest inspections. Every change to the specifications must be approved by multiple persons, and no change to the code base is allowed without a valid reason. They do not do Agile. They do Waterfall. What about the bugs?

This software never crashes. It never needs to be re-booted. This software is bug-free. It is perfect, as perfect as human beings have achieved. Consider these stats: the last three versions of the program -- each 420,000 lines long -- had just one error each. The last 11 versions of this software had a total of 17 errors. Commercial programs of equivalent complexity would have 5,000 errors.

So how come Waterfall works in this case? Because the software is based on hardware that rarely changes, which reduces the number of compatibility problems. The software doesn’t need to work on 100 types of space shuttles; it has only one physical target. It also works because once the specifications are written, they are followed at all cost. Changes are expensive and must be approved every time. The software is also never updated while in use. You’ll never see that in an e-Commerce website!

So let’s recap:

  • Hardware that rarely changes
  • Precise specifications
  • Expensive changes
  • Once deployed, can’t be changed

Does that bring other examples to mind? The simplest thing I could think of would be any piece of hardware with software inside. Let’s go with the microwave. Your microwave has software inside. Let’s see how many points it meets, shall we? First, the microwave hardware will NEVER change. Nobody is pimping out his microwave, so we can assume the hardware stays the same for all of its usable life. The specifications for microwave software rarely change (timer, defrost settings, power levels, etc.). If a change must be made, it’s probably because of a hardware change, which is expensive. And finally, no microwave is Wi-Fi enabled or has a USB connection to update its firmware.

We can safely assume that the Waterfall model must have been among the first software development processes to be used. People first programmed chips, boards, “simple” OSes or OSes with limited distribution. Back then the formula worked great, for exactly the same four points I mentioned. The model started to break when building software for computers that varied largely in configuration (RAM, CPU, etc.). People kept trying to use it, but development time sky-rocketed. A new model became necessary.

So please, unless you are reprogramming your microwave for some evil plan, don’t use Waterfall. The main weakness of waterfall is the lack of user input. Even the Sashimi model is not enough. We need rapid feedback and constant testing. We are not developing perfect software that must never fail, but let’s make sure it doesn’t before it hits the client.


Saturday, February 28, 2009

TDD: How I applied TDD to a simple problem

A month or two ago I had to build a component that analysed a string and returned some information out of it. A regular expression was clearly the best fit. So I started writing down what kind of input would be valid and what should not be allowed.

When I started writing this code, I had already read many blog posts about TDD and wanted to give it a try. I wanted a simple scenario to apply it to, and parsing this string was exactly that kind of simple problem.

Anyone who has ever worked with regular expressions knows it’s easy to make something match. However, it’s really hard to make it match what you want and nothing else. For proof, look at how many regular expressions exist to parse a phone number. The non-matches are as important as the matches.

So I started a test project and added my first test, covering the perfect scenario. Of course, the test didn’t even compile. I then created the missing classes and made the test compile. Of course, every class had the following inside:

throw new NotImplementedException();

This ensured the test “went red”. I then implemented the minimum necessary to make it pass and “go green”, and kept rolling until it worked in all my specified cases. Sometimes previous tests failed. Sometimes everything stayed green. Either way, I kept going.

For experienced TDD-ers this is common and normal. But for me, it was weird. I made sure to follow EXACTLY what the process said. When it said “the minimum necessary to make it pass”, I returned a constant if I could, or created/modified the regular expression as needed. As my number of tests grew, the constants all went away, the regular expression got more precise, and every time I broke a test I came back to fix it. It’s a weird feeling, but when I handed my class over for usage, it worked as perfectly as it could.
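The actual parser isn’t shown in the post, so here is a hypothetical reconstruction of that cycle with a phone-number matcher; the real tests would be NUnit [Test] methods, replaced here by plain assertions to keep the sketch self-contained:

```csharp
using System;
using System.Text.RegularExpressions;

// Hypothetical class under test -- the post doesn't show its real parser.
public static class PhoneNumberParser
{
    // The first "minimum to make it pass" implementation was literally
    // "return true;". Each new non-matching test case forced the
    // pattern below to become more precise.
    private static readonly Regex Pattern = new Regex(@"^\(\d{3}\) \d{3}-\d{4}$");

    public static bool IsValid(string input)
    {
        return Pattern.IsMatch(input);
    }
}

public static class Program
{
    // Standing in for [Test] methods in a real NUnit fixture.
    public static void Main()
    {
        // Red first: this failed while IsValid still threw NotImplementedException.
        if (!PhoneNumberParser.IsValid("(514) 555-1234"))
            throw new Exception("perfect case should match");

        // The non-matches are as important as the matches.
        if (PhoneNumberParser.IsValid("(ABC) DEF-GHIJ"))
            throw new Exception("letters should not match");

        Console.WriteLine("all green");
    }
}
```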

So why the ruckus? Because since I wrote this piece of code, I haven’t seen one bug report. The code is perfect for what it does. When a bug does come my way, I’ll handle it the TDD-er way: add a test that reproduces the bug, then fix the code until all tests pass.

The result? One bug-free class, twenty-something tests and one developer who learned the essence of TDD.

I would love to sink my teeth into more complex code now.


Anti-Pattern: Anemic Domain Model

Here is an anti-pattern Martin Fowler will agree with. In fact, it was Martin Fowler who first described this anti-pattern, in November 2003. As Fowler said, it looks like a model, it smells like a model, but there is no behaviour inside.

The basic symptom of an Anemic Domain Model is that at first blush it looks like the real thing. There are objects, many named after the nouns in the domain space, and these objects are connected with the rich relationships and structure that true domain models have. The catch comes when you look at the behavior, and you realize that there is hardly any behavior on these objects, making them little more than bags of getters and setters.

The problem with the anemic domain model is that the logic doesn’t live with the associated object; it’s located in the objects that use it. See the problem? Unless you are using the objects that hold the behaviours, an anemic domain model won’t bring you any good. They are just getters and setters with barely enough behaviour to call them objects.

Of course, you gain a good separation of concerns and some “flexibility” of behaviours if ever needed. You also gain the ability to generate those domain models from a modeling tool without breaking a sweat. If there are so many benefits, where’s the catch?

Oh, next to nothing! You just need to duplicate the business logic so that every part of the business gets its own copy. Objects can’t validate themselves, since the validation logic lives outside the object. Everyone needs a reference to the model DLLs and any shared entities, which increases the coupling between classes. It also increases code duplication, since many parts of the business essentially reuse logic that other parts need too. And don’t forget maintenance: since the business logic is spread across the business, all the common logic must be updated everywhere at once and validated against each respective service and validation. And that’s assuming that part of the business wants to update at all. It’s like dealing with many mini-companies within the same company.

Had enough? Then put some business logic inside your domain model and make it easy to understand. If a certain part of the business needs some “special” behaviour, it will have to be incorporated into the main domain model.
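As a hypothetical illustration (the Account class and its rules are made up for this sketch), here is the difference between an anemic entity and one that owns its behaviour:

```csharp
using System;

// In the anemic version, Balance would be a plain get/set property and
// some "AccountService" elsewhere would do all the checks.
public class Account
{
    public decimal Balance { get; private set; }

    // The rules travel with the object: nobody can put the account in
    // an invalid state by forgetting to call the right service first.
    public void Deposit(decimal amount)
    {
        if (amount <= 0)
            throw new ArgumentOutOfRangeException("amount");
        Balance += amount;
    }

    public void Withdraw(decimal amount)
    {
        if (amount <= 0)
            throw new ArgumentOutOfRangeException("amount");
        if (amount > Balance)
            throw new InvalidOperationException("Insufficient funds.");
        Balance -= amount;
    }
}
```

The private setter is the point: the only way to change the balance is through methods that validate themselves.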

But what if you have to maintain an anemic domain model and want to fix this anti-pattern? You can always rewrite the software, but that’s an expensive solution. Instead, every time a new requirement arrives, put it inside the domain model and DO NOT put it inside the many service classes. This is what Greg Young described as “making bubbles”. By making bubbles of great code that will be easy to maintain and reuse, while still maintaining the current system, you will end up replacing everything.

Anything worth doing is worth doing well.


Thursday, February 26, 2009

Anti-Pattern: The god object

Because it’s easier to recognize evil if you have a mug shot, here’s a simple one for all of you. The god object is a class that knows too much. It’s a severe violation of the Single Responsibility Principle, and probably of several other SOLID principles depending on the implementation.

A basic principle in programming is to divide a problem into subroutines to make it easier to solve. It’s also known as “divide and conquer”. With a god object, either the object becomes so aware of everything, or all the other objects become so dependent on it, that when there is a change or a bug to fix, it becomes a real nightmare to implement.

I tend to call those objects “Death Stars”. Why? Because just like the Death Star, if someone gets to mess with the core, it will explode. Any modification to a god object causes ripples of changes everywhere in the software and ends in lots of bugs.

You can easily recognize one of these objects by the fear developers feel when getting near it.

So how do you solve it? By refactoring, of course! The goal is to separate it into as many subroutines as possible, then move those subroutines to different classes. Of course, trying to follow the SOLID principles will definitely help.

Resolving a god object looks really different depending on how omnipotent your god object is. But one thing is sure: you have to separate the “powers” (read: responsibilities) and make sure the Single Responsibility Principle is applied.
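As a hedged sketch with made-up names, here is what separating one “power” out of a god object can look like:

```csharp
// Before (hypothetical): one OrderManager loaded orders, validated
// them, computed taxes, printed invoices and emailed customers.
// After: each responsibility gets its own small abstraction.
public interface ITaxCalculator
{
    decimal TaxFor(decimal subtotal);
}

public class FlatTaxCalculator : ITaxCalculator
{
    public decimal TaxFor(decimal subtotal)
    {
        return subtotal * 0.05m;
    }
}

public class OrderProcessor
{
    private readonly ITaxCalculator taxes;

    public OrderProcessor(ITaxCalculator taxes)
    {
        this.taxes = taxes;
    }

    // The processor only orchestrates; changing the tax rules now
    // touches one small class instead of the whole death star.
    public decimal TotalFor(decimal subtotal)
    {
        return subtotal + taxes.TaxFor(subtotal);
    }
}
```

Each extracted class can now change, be tested and be feared (or not) on its own.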


Wednesday, February 25, 2009

We all produce code that we really aren’t proud of

Everyone produces code. Some people give birth to beautiful, elegant and maintainable code at some point in their career. But pretty much all developers will, one day or another, produce code that is horrible, inelegant and un-maintainable.

I try as much as possible to separate concerns inside my applications so that I don’t have to worry about problematic maintenance. I recently had to review some code I wrote more than a month ago and… I’m not proud. I’m really not proud of the code I saw. It seems that every once in a while we write code we won’t be proud of.

Of course, we improve our coding skills every day, every time we encounter a new technology or a new idea. Hell, I like the SOLID principles and the TDD ideas. I love the concept of separation of concerns and the modularization of an application. So how come I wrote this code?

Honestly, I can’t remember exactly why. Maybe I was rushed, or I wanted to rush through it. Who knows… What is important, however, is to recognize when code you wrote tastes like cheap wine. It’s important to take note of it and make sure to “maintain your code garden”. This is the moment where the title of my blog actually makes sense. Even with the best intentions, code quality decays, and if nothing is done you end up with a half-rotten application that nobody cares about.

So what to do about it? Make sure to improve the quality of existing code every day. If we write bad code 10% of the time (a number I just picked off the top of my head) and good code 90% of the time, imagine if we took only a few hours per week to correct those mistakes.

You probably won’t have the time to fix it. Who does? But there will be a day when you finish an hour early and wonder whether you should start implementing a new feature or go home. If you took note of the classes that need improvement, you could easily use that hour to maintain your code garden and be proud of the code you wrote.

So, the first “time off” I get will be used to fix that piece of code I’m not proud of.

Anyone else who wrote some shameful code?


"Utilisations des mocks avec Moq" - Using mocks with Moq

For those who attended my presentation at the Montreal .NET User Group, here is a list of links that I mentioned during my presentation:


Monday, February 23, 2009

Model View Presenter Revisited

The MVP pattern came into being in the early 1990s at Taligent. It is mostly used with WinForms and WebForms.

The View normally doesn’t do anything by itself. The official implementation is described as follows.

The view instantiates the presenter with an instance of itself. The constructor parameter of the presenter must be an interface of the view. When events happen in the view, they must call the presenter without any parameter or return value. If the presenter needs data, it gets it through the view interface; the view never hands the data over directly. Changes to the view must be done through the presenter.

Of course, this is a literal implementation from the 1990s. Today we have more advanced paradigms that work quite nicely. What is interesting is that, with proper data binding, we can change values on the view without even calling the view’s methods.

It is possible to add a databinding on a property of the presenter. Once the databinding is in place, simply changing the presenter’s property fires events that automatically update the bound control on the view.

This removes some implementation details of MVP and makes the pattern easier to implement.

Need a sample? Here is how to implement an “auto-notify” property inside a presenter:

public class MyPresenter : IPresenter, INotifyPropertyChanged
{
   private readonly IView view;
   private int randomNumber;

   public int RandomNumber
   {
       get { return randomNumber; }
       set
       {
           if (randomNumber == value) return;

           randomNumber = value;
           RaiseEvent("RandomNumber");
       }
   }

   #region Implementation of INotifyPropertyChanged

   // helper method to simplify raising PropertyChanged
   public void RaiseEvent(string propertyName)
   {
       PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
   }

   public event PropertyChangedEventHandler PropertyChanged = delegate { };

   #endregion

   public MyPresenter(IView view)
   {
       this.view = view;
       this.view.InitializeBindings(this);
   }

   public void GenerateRandomNumber()
   {
       Random rnd = new Random(DateTime.Now.Millisecond);
       RandomNumber = rnd.Next(0, 100);
   }
}

This will raise an event every time a DIFFERENT value is assigned to the RandomNumber property. Now for the view, it looks like this:

public partial class frmMain : Form, IView
{
   private readonly IPresenter presenter;

   public frmMain()
   {
       InitializeComponent();
       presenter = new MyPresenter(this);
   }

   public void InitializeBindings(IPresenter currentPresenter)
   {
       textBox1.DataBindings.Add("Text", currentPresenter, "RandomNumber", false, DataSourceUpdateMode.Never);
   }

   private void button1_Click(object sender, EventArgs e)
   {
       presenter.GenerateRandomNumber();
   }
}

The InitializeBindings method is called in the constructor of the presenter, which ensures the bindings are made only once. This does NOT require additional methods on the view to push the generated number into the TextBox. This implementation respects the pattern definition while using the latest .NET binding technology.

This reduces the number of boilerplate methods while keeping the framework in charge of the bindings.

Here are the resulting interfaces from the implementation:

public interface IPresenter
{
   int RandomNumber { get; set; }
   void GenerateRandomNumber();
}

public interface IView
{
   void InitializeBindings(IPresenter currentPresenter);
}

Sunday, February 22, 2009

Part 3 – Advanced mocking functionalities of Moq

See also: Part 1, Part 2

A simple scenario like “when the method GetTax is called, return 5$” is something most mockers will have seen. However, there are rarer scenarios that leave people wondering how to do them.

One of those scenarios involves event handlers. The scenario would be: “when a Product is added to a ShoppingCart, a ProductAdded event should be fired”.

Let’s start with the basic classes below which implement our scenario:

using System;
using System.Collections.Generic;

namespace MoqSamples
{
   public interface IProduct
   {
       bool IsValid { get; }
   }

   public class ProductEventArgs : EventArgs
   {
       public ProductEventArgs(IProduct product)
       {
           Product = product;
       }

       public IProduct Product { get; private set; }
   }

   public class ShoppingCart
   {
       private readonly List<IProduct> Products = new List<IProduct>();
       public event EventHandler<ProductEventArgs> ProductAdded = delegate { };

       public void Add(IProduct product)
       {
           if (product.IsValid)
           {
               Products.Add(product);
               ProductAdded(this, new ProductEventArgs(product));
           }
       }
   }
}

Event Handlers

What we want to test here is that every time we add a valid product, a ProductAdded event is fired.

I played with Moq a bit trying to get it to work with ShoppingCart. As I tried to mock the event, I followed the instructions on the Moq site but wasn’t able to make it happen. If I mocked the class itself, it wouldn’t let me set expectations, even after extracting an interface out of it. If I mocked the interface, I lost the logic inside my class. I also thought about creating a mocked event handler and checking whether it ever got called, but… you need a mock to create a mocked event handler. For this, we’ll have to wait for Moq 3.0 (which is in beta at the time of writing). Here is the test I came up with that didn’t work:

[Test]
public void Adding_A_Valid_Product_Fire_Event()
{
   // Setup our product so that it always returns true on a IsValid verification
   Mock<IProduct> product = new Mock<IProduct>();
   product.Expect(currentProduct => currentProduct.IsValid).Returns(true);

   // setup an event argument for our event
   ProductEventArgs productEventArgs = new ProductEventArgs(product.Object);

   // setup a mocked shopping cart to create our mocked event handler and a true shopping cart to test
   Mock<ShoppingCart> mockedShoppingCart = new Mock<ShoppingCart>();

   //creating the event a mocked event
   MockedEvent<ProductEventArgs> mockedEvent = mockedShoppingCart.CreateEventHandler<ProductEventArgs>();
   mockedShoppingCart.Object.ProductAdded += mockedEvent;
   mockedShoppingCart.Expect(shopping => shopping.Add(product.Object)).Raises(mockedEvent, productEventArgs).Verifiable();

   //making the test
   IShoppingCart myShoppingCart = mockedShoppingCart.Object;
   myShoppingCart.Add(product.Object);

   mockedShoppingCart.Verify();
}

And here is my simple fix to test this:

[Test]
public void Adding_A_Valid_Product_Fire_Event()
{
   // Setup our product so that it always returns true on a IsValid verification
   Mock<IProduct> product = new Mock<IProduct>();
   product.Expect(currentProduct => currentProduct.IsValid).Returns(true);

   // setup an event argument for our event
   ProductEventArgs productEventArgs = new ProductEventArgs(product.Object);

   // creating our objects and events
   ShoppingCart myShoppingCart = new ShoppingCart();
   bool isCalled = false;
   myShoppingCart.ProductAdded += (sender, e) => isCalled = true;

   // Testing the Add method if it fire the event
   myShoppingCart.Add(product.Object);

   // make sure the event was fired
   Assert.IsTrue(isCalled);
}

Way smaller and more efficient than the mocking attempt. Sometimes it’s better not to bend the framework, and just find the shortest solution that works.

Moq Factories

Moq has factories to help centralize the mocking configuration. The only two configurations available are CallBase and DefaultValue. Every mock created with the factory reuses that configuration, which reduces the number of lines needed to set up each mock.

Here’s a sample for the factory initialization:

[Test]
public void Moq_Test_With_Factories()
{
   // Initialize factories with default behaviours
   MockFactory mockFactory = new MockFactory(MockBehavior.Default);

   // Setup parameters for mocking
   mockFactory.CallBase = true;
   mockFactory.DefaultValue = DefaultValue.Mock;

   // create mocks with the factory
   Mock<IProduct> product = mockFactory.Create<IProduct>();
}

This is of course really easy, but… what about the parameters?

CallBase

CallBase is defined as “Invoke base class implementation if no expectation overrides the member.” This is called a “Partial Mock”. It allows mocking certain parts of a class without having to mock everything.

DefaultValue

There are 2 possible values here. One is “Empty”, which returns the default value of the type. The one used in the example is “Mock”, which enables “auto-mocking”: if a property is mockable, a mock is automatically returned for it.

Constructor

The constructor of MockFactory takes a MockBehavior parameter. 3 values are possible: Default, Loose and Strict.

A strict mock throws an exception for every call that doesn’t have a matching expectation. Loose (which is also the Default) always returns default values, empty arrays or null.
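As a small sketch of that difference, reusing an IProduct interface like the one in the sample above (Moq 2-era Expect syntax, matching the rest of the post):

```csharp
using Moq;

public interface IProduct
{
    bool IsValid { get; }
}

public static class MockBehaviourSample
{
    public static bool LooseDefault()
    {
        // Loose: no expectation needed, members simply return
        // default values -- IsValid comes back as false.
        Mock<IProduct> loose = new Mock<IProduct>(MockBehavior.Loose);
        return loose.Object.IsValid;
    }

    public static bool StrictWithExpectation()
    {
        // Strict: the same call without an expectation would throw
        // a MockException, so the call must be declared up front.
        Mock<IProduct> strict = new Mock<IProduct>(MockBehavior.Strict);
        strict.Expect(product => product.IsValid).Returns(true);
        return strict.Object.IsValid;
    }
}
```

Strict mode is useful when you want the test to fail loudly on any interaction you didn’t plan for.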

By using the factory properly, it’s possible to pick one style of mocking and reuse those settings without rewriting 1 or 2 extra lines per mock.
