Category Archives: Tech

TFS Installation: No Longer Rocket Science

I think one of the best things that I’ve observed in playing around with TFS 2010 was how easy it was to install.  This was a pretty big hurdle in previous versions, but 2010 has installation pared down to a “Next, Next, Finish” level of complexity in some simple scenarios.  In particular, the “Basic” installation, which doesn’t include the SharePoint or Reporting Services components, is brain-dead simple.

In addition, TFS can now be installed on client OSes (Vista and above), and use the free SQL Server Express.  It will even go so far as to install SQL Express for you if you don’t already have it installed (you probably already do if you’ve installed Visual Studio).  You can download Beta 2 of TFS 2010 from here.

However, if you don’t want to sully your pristine machine with Beta products, there’s a fully configured Virtual PC image available for download here.

In other words, it’s pretty trivial now to try out TFS yourself if you’re stuck on SourceSafe and are looking to try out all the other mainstream options, or (like me) if your shop’s already using TFS and you’d like to try your hand at some of the administrative features that are behind lock and key.

A Plan for 2010

Everyone on my blogroll is taking the opportunity of the new year to take stock of 2009 and make public plans for 2010, so I thought I would join in.

In 2009, I made an effort to start speaking a little bit more.  I spoke at my first code camp, the Northwest Arkansas Code Camp, on the Harding and LSU campuses for recruiting trips for my employer, as well as the Baton Rouge .NET User Group.  I hope to continue ramping that up in 2010.  I’m presenting at my home meeting, the Shreveport .NET User Group, later this month, and I have a trip scheduled to the northwest Arkansas area in March.  I’d like to make it down to southern Louisiana at some point, too; the guys down in Lafayette and New Orleans in particular need some love.

I also need to step up in promoting the SDNUG this year.  I felt like I was able to kind of coast for a lot of 2009 with our current set of attendees, and didn’t really make as much of an effort as I could have to make more people aware of the group’s existence.  I’ve already begun to remedy that this year by seeking out area software companies and making phone calls, which will hopefully yield some more members.  I need to get on top of speaker scheduling, too, both within the group and without.  There are several group members that I think would make great presenters, I just need to convince them that it’s a good idea!

I want to start blogging more, as well.  One post a month just isn’t where I want to be.  I think having a regular schedule will help with that, so I’m pledging right now in public to post something on my blog at least once a week.  That plan may take a hit when my daughter is born in February, but I’m going to give it my best effort.

I think something else that may help with blogging is having an area of focus.  I’ve really been interested in application lifecycle management lately, especially after all the stuff I saw at PDC, so at least for now, I think I’ll focus on Team System and TFS for a little while.  I’ve been listening to the back catalog of Radio TFS, which has been great, and I plan to seek out other additional resources for ideas. I’ve recently installed TFS on a virtual machine to play around with, so hopefully that will lead to some ideas, as well.

I hope you had a great 2009; here’s to 2010!

PDC09 Debrief

I’ve just returned from my first major conference, Microsoft’s Professional Developers’ Conference in Los Angeles, California.  I have to say, things were a bit different than I expected, but in a good way.  More on why later; first, the play-by-play!

Day 0

The air travel wasn’t as bad as I expected it to be.  A quick hop to Houston, and then 3 hours to LAX.  When I finally got to the hotel (the Westin Bonaventure in downtown LA), I met up with Mike Huguet, a fellow user group leader from Baton Rouge.  We ended up down at the Figueroa Hotel with Chris Koenig, a Developer Evangelist from the South Central district, and enjoyed the open bar they had set up for PDC attendees.  I got to meet Dave Bost, the guy behind the Thirsty Developer podcast, and several other Microsoft employees.  We got a little glimpse into Microsoft culture by talking with these guys, and confirmed that the structure of myriad customer liaisons that Microsoft employs is a bit confusing, even to the insiders!  I had planned to go to the Party with Palermo, and still rather wish that I had, but we were having such a good time at the Fig, we decided to stay.  We also ended up eating at this great little greasy spoon that only took cash, and has been open continuously, 24 hours a day, since 1924!

Day 1

The first day at PDC was pretty much all about Azure.  This was a bit disappointing to me, since the current application that I work on couldn’t really make much use of the cloud.  We have a relatively small user base (a thousand or so users), so scaling out really wouldn’t buy us much.  I went to the Future of C# and VB session with Luca Bolognese (who has an amazing Italian accent), but wasn’t really surprised by much there.  After that, I spent waaaay too much time standing in line to do an Azure hands-on lab.  I missed lunch and a session, and didn’t even get to finish the lab before the expo hall closed at 3:00.  That did, however, secure me a coveted badge stamp that would get me a free Flip video camera the following day (which my wife is loving for taking quick videos of our daughter Molly).  I went to a session on SQL Azure because I figured the database might be one place that we could actually use the scaling, but afterward I concluded that the sharding required to use it for large sets of data would create too large of an impact on our application.  The last session of the day, on Pex and code contracts, was interesting, and perhaps applicable since we’re looking to start using unit tests soon, but both technologies are still in the research stage, and may never actually make it into the framework proper.  All in all, a bit of a disappointing first day at the conference, but better things were in store for the next day.

That evening, I attended an “Influencers” party with Mike and a couple of guys from my old stomping grounds of Northwest Arkansas, Jay Smith and Jon Oswald.  I got to catch up with Colin Neller, a former co-worker at Data-Tronics and fellow Harding alum, as well as meet other community members that I’d only heard on podcasts before: Jon Galloway, Chris “Woody” Woodruff, and even Scott Hanselman.  It was cool to be able to put faces with names, and to see that those people are just human beings like you and me.  Jon in particular comes off as just about the friendliest guy in the world, really cheerful and willing to chat with anybody.  Some pretty nice food at that place, too, sushi and shish kabobs. Nom nom!

Day 2

This is when things really started to get interesting.  Steven Sinofsky did a pretty good job with his part of the keynote, and of course, the announcement that we would all be getting free laptops certainly made him some friends. 😉  The Gu was great, as usual, despite the quadruple iPhone fail.  With the features he described about Silverlight 4, it’s really starting to look like a compelling platform.  The Silverlight team have really been killing it. They’re on a lightning-fast release pace, and not just fluff releases either.  They’re taking customer feedback, even going so far as to add elevation so that applications can do things outside the normal Silverlight sandbox, which at one point they said they’d never do.

The sessions were great that day, too.  Scott Hanselman’s MVC talk was great edu-tainment, and it was great to see some of the new templating features he showed off.  I went to an open source panel, and got to meet Miguel de Icaza, which was pretty cool.  I also had an interesting conversation with some of the people on the Entity Framework team.  We’re starting to think  about integrating an ORM into our product, and we were leaning toward NHibernate.  I asked the team members point blank why I should use EF instead.  They were pretty frank with me, and basically said “NHibernate is a mature product, and we’re still relatively new to the ORM space, but we’re making a lot of big strides in version 4.”  Between POCO support, transparent lazy loading, and the code-only (read “Fluent”) configuration model, most of the things on my wish list have been met.  It might be worth some further scrutiny at this point.  This is when I realized that all the stuff about getting to interact with the product teams was real, and not just conference marketing.

That evening, rather than going to the big “Underground @ PDC” party (for which there was an enormous line), Mike, Chris, Jay, John, and I hung out at the ESPN Zone, kind of a sports bar/restaurant a la Buffalo Wild Wings.  We had some great discussions about managing communities, Microsoft culture, the MVP program, and the role of Developer Evangelists.  I’m starting to get the feeling that this is the kind of thing one needs to go to conferences for.  Community leaders can talk via Twitter or email all the time, but it’s only during conference time that we get to take advantage of the high bandwidth of in-person communication.

Day 3

The third day was all about ALM tools for me.  I went to presentations on MSDeploy, the new Test and Lab Manager, Team System process customization, and a kind of roll-up presentation about starting from a project that just compiles to one that’s under CI with tests (unit and coded UI).  I also spoke with Microsoft employees from several different teams about our particular difficulties with database deployments.  I’ve got several ideas now, and I’m looking forward to seeing if we can reduce some of the pain that we’re experiencing in that area right now.  By this point in the week, I was pretty wiped out, so I headed to the hotel and crashed.

Takeaways

  1. Prefer interactions with product team members and community leaders to attending sessions. You can watch the sessions online later if you miss one you really wanted to see.
  2. Leave your laptop in the hotel room (locked up if you feel it’s necessary).  You really won’t use it that much, and carrying it around can start to get painful after a couple of days, particularly if you’re walking to the convention center from your hotel like I was.
  3. Don’t go out of your way to get swag. You’ll probably end up with a bunch of it without even trying anyway, and if you put a dollar value on your time, you’ll quickly realize that the gizmo you’re in line for probably isn’t worth the wait.

Overall, it was a great experience.  Given how expensive it was, I don’t think it’s something I’m going to do again soon, but hopefully I can attend some smaller conferences next year. Since TechEd is in New Orleans this year, I think the Louisiana community may try to cook something up for just before or after.  Stay tuned for more info!

Single-Project Areas in ASP.NET MVC 2

The ASP.NET MVC framework brought a lot of benefits to the Microsoft web developer, such as easier automated testing and better separation of concerns, but it did come with its share of pain points.  One of those pain points was the difficulty of organizing the files in your web project.  By default, all folders that contained views, which were named after the controllers in your application, had to be directly under the “Views” folder that was provided by the framework.  For large applications, the number of folders underneath “Views” could get quite unwieldy.

To help alleviate this, in the first preview of version 2 of the MVC framework, the concept of “Areas” was introduced to allow an additional hierarchical level to controller and view organization.  It was implemented in Preview 1 using separate web projects for each area, each with its own “Controllers” and “Views” folder.

This was a definite improvement, but there was some pretty quick feedback from the community about the implementation.  Having a separate project for each area means that large solutions would end up with quite a few projects, which can dramatically impact compilation time.  I can speak from experience; the main solution I work on had over 80 projects in it when I first joined my current team.  Build time  was usually about 10 minutes, and that was just compilation, no tests or other things going on in the build.  When we reduced it to three projects, build time went down to about 10 seconds.  Needless to say, as our team starts thinking about doing some MVC work, we don’t want to go back to that place.

Thankfully, in preview 2, the MVC team provided the ability to create all your areas within a single web project.  This provides all the organizational benefits without the impact to compilation time.  To add areas to your MVC web project, follow these steps:

  1. Add a folder named “Areas” to your web project.
  2. Add a folder underneath the Areas folder with the name of the area you want to create, “Accounting” for example.
  3. Add a folder called “Controllers” under the Accounting folder.  Now, when you right-click on this folder, you’ll get the “Add Controller” context menu option.
  4. Add a folder called “Views” under the Accounting folder.  This will work just like the Views folder that gets created as part of the MVC project template.  You’ll have one folder inside the Views folder for each controller in your area.
  5. Add a new class  file to the Accounting folder named “Routes.cs”.  This class will need to inherit from AreaRegistration and override the AreaName property.  It should end up looking something like this:
    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Web;
    using System.Web.Mvc;
    
    namespace MyProject.Areas.Accounting
    {
        public class Routes : AreaRegistration
        {
            public override string AreaName
            {
                get { return "accounting"; }
            }
    
            public override void RegisterArea(AreaRegistrationContext context)
            {
                context.MapRoute(
                    "accounting_default",
                    "accounting/{controller}/{action}/{id}",
                    new { controller = "Invoices", action = "ShowUnpaid", id = "" }
                );
            }
        }
    }
  6. You’ll also need to add a line to your Global.asax.cs file.  Simply call “AreaRegistration.RegisterAllAreas();” just before the first call to routes.MapRoute().
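
For reference, here’s a minimal sketch of what the routing registration in Global.asax.cs might look like after that change; the route name and defaults are just the ones from the standard MVC project template.

    public static void RegisterRoutes(RouteCollection routes)
    {
        routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

        // Register area routes before the default route so they take precedence.
        AreaRegistration.RegisterAllAreas();

        routes.MapRoute(
            "Default",                                              // Route name
            "{controller}/{action}/{id}",                           // URL with parameters
            new { controller = "Home", action = "Index", id = "" }  // Parameter defaults
        );
    }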

That’s it!  Well, almost.  Since you can have more than one area with the same controller name, when you create an ActionLink or something similar, you have to specify which area you intend to link to.  For instance, if you wanted to link to the ShowUnpaid action of the Invoices controller in the Accounting area from some other area, you’d do it like so:

    <%= Html.ActionLink("Unpaid Invoices", "ShowUnpaid", "Invoices", new {area = "accounting"}, null) %>

Note that if you’re linking to a controller from a view within the same area, you don’t have to specify it in the ActionLink call.
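
For example, from a view inside the Accounting area itself, the plain overload works as usual:

    <%= Html.ActionLink("Unpaid Invoices", "ShowUnpaid", "Invoices") %>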

I think this is a great feature, and should allow us to maintain the current level of logical partitioning within our application.  Thanks to the MVC team for putting this in!

Getting Around Some Anonymous Type Headaches with Dynamic LINQ

Here’s something that totally slipped under my radar when it first came out:  Dynamic LINQ.  This lets you specify LINQ predicates (where clauses, order bys, etc.) using strings instead of strongly typed lambda expressions.  For example, you can rewrite the following LINQ query:

var orders = from o in db.Orders
     where o.OrderDate > DateTime.Now.AddDays(-30)
     select new { o.OrderID, o.OrderDate };

as something like this:

var orders = db.Orders.Where("OrderDate > @0",
     DateTime.Now.AddDays(-30)).Select("new(OrderID, OrderDate)");

Now, I know what you’re saying.  “Why would I want to give up all that strongly-typed goodness?  Isn’t that what LINQ was all about?”  Well, at least for me, this came in handy when dealing with anonymous types.  Since the only way you can pass a LINQ to SQL result (which is usually an IQueryable of an anonymous type) to a function is as an IQueryable<T>, there’s no way in that function to specify a strongly typed predicate.  The compiler only knows what properties an anonymously typed object has inside the scope in which the variable is declared.

Since the overload of “Where()” provided by the Dynamic LINQ library (DLL, anyone?) will work with any IQueryable<T>, you can specify predicates on LINQ to SQL results passed into functions, as long as you know what properties the object you passed in has.  Using it reminds me of using the HQL syntax in NHibernate.
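
To make that concrete, here’s a minimal sketch of the kind of helper this enables; it assumes the Dynamic.cs sample file (which declares the System.Linq.Dynamic namespace) is included in the project, and that the projected type carries an OrderDate property:

using System;
using System.Linq;
using System.Linq.Dynamic; // namespace declared by the Dynamic.cs sample file

public static class QueryFilters
{
    // Works against any IQueryable<T>, including projections onto anonymous types,
    // as long as we know the projection has an OrderDate property.
    public static IQueryable<T> PlacedInLastDays<T>(IQueryable<T> source, int days)
    {
        return source.Where("OrderDate > @0", DateTime.Now.AddDays(-days));
    }
}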

You can get the library from the LINQ samples provided by Microsoft here.  It’s just a C# or VB file named Dynamic.(cs/vb) in the LinqSamples\DynamicQuery directory. It includes a bunch of extension methods that provide the extra overloads to the LINQ methods.

Not something I envision using often, since it’s nice to be able to take advantage of the strong typing when you can get it, but nice to know that it’s there if you need it.

Subclass Difficulties Upgrading to Fluent NHibernate RC 1

I stumbled across Jason Dentler’s post on the problems he had upgrading his code to the Release Candidate of Fluent NHibernate today, specifically regarding subclasses.   I ran into some difficulties with subclasses when upgrading to the RC myself, but I was using a table-per-class-hierarchy structure rather than table-per-subclass, so I thought I’d post my problem and solution as well.

In the RC, rather than defining your class hierarchy entirely in the mapping of the parent class, you create a SubclassMap for each of your subclasses.  The wiki wasn’t quite clear about what needed to go into my subclass mappings since, like Jason, my subclasses didn’t have any extra properties, just redefined behavior. Turns out to be pretty simple, actually, and I think it ends up clearer than the old method.  Using the old way, I had the following in my parent class map:

DiscriminateSubClassesOnColumn<int>("intCampaignTypeID", -1)
    .SubClass<InspectionCampaign>(1, x => { })
    .SubClass<OBPCampaign>(2, x => { });

This looks okay, but the empty lambdas are a bit confusing.  Using the new SubclassMap way of doing things, in the parent class map, I just call the following method in the constructor:

DiscriminateSubclassesOnColumn("MyDiscriminatorColumnName");

And in each subclass map, I call the following method in the constructor:

DiscriminatorValue("1");

Much easier to understand!  The signature for this method threw me off a bit, since my discriminator column is actually an integer, but the DiscriminatorValue method only accepts a string.  FNH seems to handle the conversion, though, because it worked just fine.
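
Putting the two pieces together, here’s a minimal sketch of what the maps might look like; the Campaign base class and its Id property are assumptions on my part, but the subclass names and discriminator values match the old mapping above:

public class CampaignMap : ClassMap<Campaign>
{
    public CampaignMap()
    {
        Id(x => x.Id);
        // ...other property mappings...
        DiscriminateSubclassesOnColumn("intCampaignTypeID");
    }
}

// One SubclassMap per subclass; the discriminator value is passed as a string
// even though the underlying column is an integer.
public class InspectionCampaignMap : SubclassMap<InspectionCampaign>
{
    public InspectionCampaignMap()
    {
        DiscriminatorValue("1");
    }
}

public class OBPCampaignMap : SubclassMap<OBPCampaign>
{
    public OBPCampaignMap()
    {
        DiscriminatorValue("2");
    }
}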

Hopefully this will be helpful to someone, at least until the wiki gets updated!

The Case for NHibernate

Recently, my employer has instituted a program called “Innovator of the Quarter”.  The idea is for employees to submit ideas, such as new product features, process improvements, or tools to increase productivity, to a committee.  The committee will then pick the idea it thinks has the most merit, and the person who submits the winning idea will get $500 and the chance to implement their idea, or at least build a proof-of-concept.

When this program was first announced, several ideas occurred to me, but the one I thought would result in the greatest benefit for the company (and, admittedly, the greatest reduction of development pain for me) was to use NHibernate for our data access instead of the hand-rolled DAL we currently use.  I put together a proof-of-concept project based on one of our smaller systems (which I unfortunately can’t share since it’s proprietary), and wrote up a short explanation of the benefits I thought adopting NHibernate would provide.

I wanted to post the contents of that entire document here (slightly sanitized for product and team names) so that readers could point out any glaring mistakes or make suggestions for additions.  So here it is:

The Case for NHibernate

One of the most fundamental parts of a system is its data-access strategy.  IT organizations building on the .NET platform have a multitude of options for addressing this part of their systems.  Many of these options are provided by Microsoft itself, while others have grown up out of the open-source community.  They range from thin wrappers over the ADO.NET API, such as the Enterprise Library Data Application Block, which provide similar functionality to plain ADO with an easier-to-use programming model, to full-fledged Object-Relational mapping (ORM) tools, such as the Entity Framework or LINQ to SQL.

On our team, data access has traditionally been implemented using tools that operate very close to the ADO “metal”, using stored procedures for much of the application logic.  The reasoning for adopting these tools and practices at the time our first system was created included both developer skillsets and performance benefits.  Not long ago, there was a substantial performance benefit to using stored procedures as opposed to “inline” SQL.  However, with the improvements made to SQL Server, these advantages have been eroded to the point that stored procedures no longer carry much performance advantage over parameterized SQL queries.

Since the performance differences between the two are now far less significant, other advantages of the various data-access strategies, and specifically those of ORMs, should be examined.  I will outline here the multiple advantages I believe using NHibernate would bring to our projects.  Examples will be taken from the proof-of-concept project stored in source control.

Development Time

CRUD

Chief among the advantages of using any ORM tool is the reduction in development time that is possible if the tool is used appropriately.  In particular, the standard Create, Read, Update, and Delete (CRUD) operations that commonly need to be performed on any business entity are easier to implement using an ORM tool.  Take, for example, the Contact entity from the proof of concept system, which looks something like this:

[Class diagram: the Contact entity]
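
Since the class itself can’t be shared, here’s a rough sketch of the shape of such an entity; the specific properties are made up for illustration:

public class Contact
{
    public virtual int Id { get; private set; }
    public virtual string FirstName { get; set; }
    public virtual string LastName { get; set; }
    public virtual string Address { get; set; }
    public virtual string Phone { get; set; }
}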

In order to implement persistence for an entity like this using stored procedures, we would need a minimum of four procedures, one each for INSERT, UPDATE, SELECT and DELETE.  In addition, application code must be written to put the data into the appropriate properties of a .NET object when retrieved from the database, and from the .NET object properties to stored procedure parameters when saving back to the database.  This sort of “right-hand, left-hand” code is tedious and time-consuming to write, and is really just specifying the same idea four or five times over.  The CRUD procedures for the Contact object in the project the proof-of-concept was based on are about 350 lines of T-SQL put together, and the .NET code to move the properties and parameters back and forth is about 180 lines.

On the other hand, using NHibernate mappings, we specify the relationships between our object properties and database table columns in only one place.  Through the use of conventions provided by Fluent NHibernate, these mappings can be specified concisely and clearly.  In the case of the Contact entity, the mapping can be specified in about 30 lines of C# (you can see this in the file ContactMap.cs in the proof of concept project).
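
The real ContactMap.cs lives in the proof-of-concept project, but as a minimal sketch (using the made-up properties from the Contact sketch above, with column names supplied by conventions):

public class ContactMap : ClassMap<Contact>
{
    public ContactMap()
    {
        // Column names are resolved by Fluent NHibernate conventions,
        // so only the property-to-column relationships are declared here.
        Id(x => x.Id);
        Map(x => x.FirstName);
        Map(x => x.LastName);
        Map(x => x.Address);
        Map(x => x.Phone);
    }
}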

Once these mappings have been specified, CRUD operations can be executed in the application.  For example, to retrieve a particular Contact from the database, we can simply write:

Contact con = session.Get<Contact>(contactID);

Where “contactID”  is the primary key value for the contact we want to retrieve.  The “session” object here is a key part of the NHibernate infrastructure; all interactions with the database are executed within a session.  The other 3 CRUD operations are similarly simple.  Examples of each can be seen in the ContactsUC.ascx.cs control in the proof of concept project.

[Embedded gists: the Delete, Update, and Create operations]
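
For readers without the embedded gists handy, here’s a rough sketch of what those operations look like against an NHibernate session; wrapping each one in a transaction is my own habit rather than necessarily how the proof of concept does it:

// Create
using (var tx = session.BeginTransaction())
{
    session.Save(newContact);
    tx.Commit();
}

// Update: changes made to a loaded entity are written back on commit
using (var tx = session.BeginTransaction())
{
    Contact con = session.Get<Contact>(contactID);
    con.Address = "123 New Street";
    tx.Commit();
}

// Delete
using (var tx = session.BeginTransaction())
{
    session.Delete(session.Get<Contact>(contactID));
    tx.Commit();
}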

Searches

Besides simple retrieval and modification by primary key, the main way that we interact with persistent data is retrieval by a set of more complex criteria.  This is exemplified by the numerous search pages in our systems.  Currently, these searches are accomplished using dynamic SQL generated within a stored procedure.  As anyone who’s ever worked on a search page can attest, these SPs (along with the pages themselves) can get very large very quickly.  This approach necessitates the use of significant conditional logic within the T-SQL code, which can make the procedure quite confusing due to the limited methods of encapsulation provided by the language and environment.

There are a couple of options for doing these complex searches using NHibernate:  the Criteria API and the new LINQ provider.  Both of these have their strong points, but it’s what they have in common that’s most important.  They provide a way to execute dynamic queries using only application code without resorting to inline SQL.

There are a couple of advantages to using C# to specify searches rather than in T-SQL stored procedures.   The first is in the kinds of tools we have available for achieving logical separation.  Using both native language constructs and refactoring tools, we can break search logic down into much more easily understandable pieces, rather than having to navigate through a sea of IF blocks in T-SQL.  The second advantage is a reduction of duplication.  In our current search implementations, checking search values for nulls, empty strings, etc. occurs in both application and database code.  Using NHibernate, we cut down the number of these checks by half, since we only need to interrogate those values in the application.
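
Here’s a minimal sketch of how a couple of optional search criteria might be composed with the Criteria API; the property names are the made-up ones from the Contact sketch above:

using System.Collections.Generic;
using NHibernate;
using NHibernate.Criterion;

public IList<Contact> SearchContacts(ISession session, string lastName, string address)
{
    ICriteria criteria = session.CreateCriteria<Contact>();

    // Each filter is added only when the user actually supplied a value,
    // so the null/empty checks live in one place instead of in both C# and T-SQL.
    if (!string.IsNullOrEmpty(lastName))
        criteria.Add(Restrictions.Like("LastName", lastName + "%"));

    if (!string.IsNullOrEmpty(address))
        criteria.Add(Restrictions.Like("Address", "%" + address + "%"));

    return criteria.List<Contact>();
}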

An example of how a search page might be implemented can be seen in the proof of concept project in Search.aspx.cs.  Note that this page would certainly benefit from additional refactoring, but it should give the reader a general idea of how better separation of concerns and elimination of duplication can be achieved using this method.

Performance Concerns

As was stated earlier, one of the primary motivations for the data access strategy used by our systems currently was performance, particularly in the area of search.  In order to consider adoption of a new data access strategy, it must be shown that it performs comparably to the current strategy.  Fortunately, SQL Profiler can help us determine the performance impact by showing us the resulting queries produced both by the stored procedures and by NHibernate and the time they take to execute.

Though nowhere near exhaustive, the tests I performed showed that the queries produced by NHibernate, when not nearly textually identical to the ones produced by the stored procedures, executed with time differences that were essentially statistically insignificant.  (E.g., 1885 ms vs. 1886 ms)  In fact, in several instances, the NHibernate queries actually performed better than their stored-procedure-produced counterparts.

It would be prudent to note, however, that this is only one search page.  Different entity relationships may give rise to situations where SQL hints provided by a developer would have a significant impact on query performance, but this would have to be approached on a case-by-case basis to determine if such optimization is worthwhile.  If such a case does present itself, nothing would prevent us from using a stored procedure to perform that particular operation.  While it is best to be consistent with the data-access methods used in a project, using NHibernate is certainly not an all-or-nothing proposition.

When it comes right down to it, NHibernate is an abstraction on top of ADO.NET.  Abstractions are created to increase productivity, not performance.  It may be that in certain cases, the delegation of responsibility to the abstraction layer may result in greater performance due to the elimination of human error or ignorance.  However, a determined developer with intimate knowledge of the underlying technology will often be able to write code that outperforms the abstraction.  In other words, there is no question that an abstraction comes at a price.  The challenge is not to eliminate all abstractions that can be outperformed by hand-tuned code in order to eke out the absolute best performance, but to weigh the benefits and costs of each abstraction to determine if it is, overall, beneficial to the project.

The software development industry as a whole is becoming more and more willing to adopt the ORM abstraction.  This can be seen on a number of different platforms, from Ruby on Rails’ ActiveRecord to the Java Hibernate project (upon which NHibernate was originally based).  Microsoft itself has acknowledged the benefits this kind of technology provides, evidenced by their offering of not one, but two ORM solutions:  LINQ to SQL and the Entity Framework.

Alternatives

NHibernate is hardly the only player in the .NET Object-Relational Mapping space.  Microsoft has two different offerings, LINQ to SQL and the Entity Framework, and there are multiple other commercial and open-source frameworks that offer similar functionality.  So why use NHibernate over these other technologies?

LINQ to SQL

As Microsoft’s first foray into ORM, LINQ to SQL is a fairly lightweight framework for turning your database tables into entities usable by your application.  For example, if you had a table named “Contact”, you could simply drag that table from the Server Explorer in Visual Studio onto the LINQ to SQL design surface to create a persistent “Contact” class.  This works well for simple scenarios, but LINQ to SQL has several significant limitations.  Among these are only supporting one-to-one mapping between classes and tables, limited support for inherited classes, and a less than stellar workflow for modification of mappings.  In short, LINQ to SQL is likely not a good fit for our existing systems.

Entity Framework

The ADO.NET Entity Framework is Microsoft’s full-fledged ORM solution.  While it boasts a larger set of features than LINQ to SQL, it also has a number of shortcomings.  Rather than allowing the user to map tables to an existing set of entity classes, EF generates its own new classes that must be used in order to persist data.  Also, lazy-loading of associated entities (e.g. waiting to load a set of Contacts belonging to a Location until and only if they are needed) is poorly supported.  In addition to these and other problems, the XML mapping files themselves are tremendously complex, and consequently are very difficult to modify when needed.  While there is hope that some of the problems with EF may be addressed in the upcoming version, its release is tied to Visual Studio 2010, which is still a while off.

Other Third-Party ORMs

There are a multitude of ORM options for the .NET platform other than the ones already discussed, including SubSonic, LLBLGen, and Telerik’s OpenAccess among others.  Their feature sets vary widely, and while each has its merits, they all share a disadvantage against NHibernate:  the size of their user-base.  Due to its widespread adoption, there are simply more resources for learning about and troubleshooting NHibernate than any other .NET ORM.  When faced with a technical challenge, the availability of online resources can be the difference between solving the problem in a matter of minutes or a matter of days.

Conclusion

Bottom line, adding the capabilities of NHibernate to our projects will mean increased productivity.  Less time will be spent on repetitive tasks, leaving more time to focus on the problem domain and the needs of our clients.

Any feedback you’d like to provide in the comments is welcome!

UPDATE: Chris Benard suggested that posting the CRUD operations and the mapping would be helpful to people without access to the codebase, and wouldn’t expose any IP.  I agree, so now it’s there, in the form of GitHub gists.  Hopefully that’s an improvement!

Fluent NHibernate Auto Persistence Model

I wrote the other day about how conventions in Fluent NHibernate can make your life easier by decreasing the amount of code you have to write to map your objects to their relational representations.  Well, if you’re willing to go along with even more convention, you can even do away with some of your mapping files entirely.

When setting up your NHibernate configuration via FNH, normally you specify where to find your mappings by using something similar to the following:

.Mappings(m => m.FluentMappings.AddFromAssemblyOf<Entity>())

Notice the “FluentMappings” property being used here.  This will look for any mappings you have explicitly specified in classes that derive from ClassMap, a base class provided by FNH.  However, if we change that to use the “AutoMappings” property instead, like so:

.Mappings(m => m.AutoMappings.Add(AutoPersistenceModel.MapEntitiesFromAssemblyOf<Entity>()))

FNH will determine the mappings all by itself based on conventions, and you don’t have to write a single ClassMap class.

Used in conjunction with NHibernate’s SchemaUpdate feature, this becomes really powerful.  You can simply add a property to an entity class contained in an assembly for which you are using auto mapping.  When you run SchemaUpdate, either at the start of your application, or maybe in a separate console app, your database schema will be updated to include a new column to store that property value.  To illustrate, I’ll show you a simple example.  

I’ve got a small entity named “Course” that represents a college course.

    public class Course
    {
        public virtual int Id { get; private set; }
        public virtual string Name { get; set; }
        public virtual string CatalogNumber { get; set; }
    }

And the database table it maps to looks like this:

[Screenshot: the Course table schema in SQL Server]

Say that I wanted to add a property to my Course entity to tell how many credits the course was worth.  I just add the property to the class:

    public class Course
    {
        public virtual int Id { get; private set; }
        public virtual string Name { get; set; }
        public virtual string CatalogNumber { get; set; }
        public virtual int NumberOfCredits { get; set; }
    }

And run SchemaUpdate through a simple console app that looks like this:

    class Program
    {
        static void Main(string[] args)
        {
            Fluently.Configure()
            .Database(MsSqlConfiguration.MsSql2005
                .ConnectionString(c =>
                    c.Is(ConfigurationManager.ConnectionStrings["SQLServerConnString"].ConnectionString)))
            .Mappings(m =>
                  m.AutoMappings.Add(AutoPersistenceModel.MapEntitiesFromAssemblyOf<Course>()))
            .ExposeConfiguration(BuildSchema)
            .BuildConfiguration();
        }

        private static void BuildSchema(Configuration config)
        {
            new SchemaUpdate(config).Execute(true, true);
        }
    }

I’ll see the script being used to update the schema in the console window:
[Console output: the ALTER TABLE script generated by SchemaUpdate]

And my database table will be updated with a new column for the new property:

[Screenshot: the Course table with the new NumberOfCredits column]

Having not used this feature of Fluent NHibernate in anything close to a production capacity, I can’t speak to how well it scales.  However, I think this has a lot of potential for rapid prototyping, especially when you pair it with ASP.NET MVC and the templated controllers and views that you can generate.  A little alteration of the T4 templates used for the controllers could do some default persistence via NHibernate, and then we’d be quite a bit closer to the ease of prototyping provided by Rails.

Again, hats off to the Fluent NHibernate team!

Miguel Castro on .NET Rocks

I’m still not ready for the blog post on Fluent NHibernate’s Auto Persistence Model that I’ve wanted to write for a couple of days, so I thought I’d share a few of my impressions about Miguel Castro’s interview on DNR on Thursday.

First, no ORM proponent (to my knowledge) has ever said that using an ORM means that you don’t have to know anything about databases or SQL.  It’s still essential, when using an ORM, to understand how relational databases work.  At the end of the day, that’s how your data is being stored.  And you also need to understand SQL, not even so much to try to decide if your  ORM is producing sub-optimal queries so you can step in and write them manually yourself, but to clue you in that you may be using your ORM incorrectly.  I don’t know of anyone in the ORM camp, as I said, who has indicated otherwise.

Second, I may just not run in the right circles, but the whole OO-vs-SOA argument seemed kind of ridiculous.  Is anyone actually fighting about those things in an either-or way?  Maybe it was just a CSLA thing (which would explain why I’d never heard anything about it), because I don’t know of anyone who’s suggesting that service-oriented architecture is even the same category of thing as object orientation.  I mean, unless you’re writing your services in F# or Erlang or something, chances are you’re going to be consuming them from an OO environment.  Sounds like a made-up fight to me.

And last, does anyone really need to drag out the C#-vs-VB thing again?  Is anyone really still arguing about which one is better?  Aside from a few notable exceptions (XML literals come to mind), there’s not a dime’s worth of difference between the two languages.  Again, sounds like more of a manufactured fight than anything.

There were several more things that he brought up that were in more or less the same vein.   He’s a smart guy, I think, but he probably could have made a better contribution to the .NET community by discussing a subject he was knowledgeable in and sharing some of that knowledge with us.  In the end, it just sounded like he was trying to find “controversial” subjects to “make some people mad” over, and it ended up falling kind of flat.

Fluent NHibernate Rocks

At work, we are preparing to institute an “Innovation Idea of the Quarter” program.  I say “Innovation Idea” and not “Innovator” because the idea doesn’t necessarily need to end up being implemented for the person who came up with it to win.  I think this is great, because it really encourages people to think outside of the box from the way we normally develop, and not to just propose things they know for sure would be “safe” enough to actually go into the product.

As you can probably tell from the title of this post, I’ve already decided what my initial submission is going to be.  A few months back, when we were first floating the idea of the innovation program, I started trying to prototype using NHibernate in one of our smaller systems as a proof-of-concept.  I eventually got one of the search pages working, but it took a lot of trial and error with the “hbm.xml” mapping files, since I didn’t know until I ran the program when I had misspelled something.  Since the innovation program was still in incubation when I finished, I set my proof-of-concept aside.

Fast forward a few months, and I had noticed that the Fluent NHibernate project had come a long way.  They’ve now got a significant amount of documentation on their wiki site (http://fluentnhibernate.org/), pre-built binary packages, and a pretty active mailing list (http://groups.google.com/group/fluent-nhibernate).  After perusing the site a bit, and determining that, based on the amount of activity I saw, Fluent NHibernate probably did have some legs and wouldn’t just be a flash in the pan, I decided to convert the proof-of-concept over to using Fluent NHibernate.

I was not disappointed.  One of the key benefits of using FNH is that you get compile-time checking of your object property names, since you use lambda expressions to specify your mappings.  You also don’t have to use the verbose type name syntax (e.g. “My.Really.Long.Chain.Of.Namespaces, ClassName”) when creating relationships between entities.  But the thing I liked the most was the ability to define conventions for just about everything.

The team I work on has an… ahem… interesting pattern for column naming:  basically, Hungarian notation for columns.  If a column is an integer column in the database, it will have an “int” prefix on the column name.  If it’s a char or varchar column, it will have a “str” prefix, and so on.  While I may not agree with this “coding standard”, the team has at least been consistent about it, so I was able to take advantage of conventions.  

In FNH, you define a convention by creating a class that implements one of the convention interfaces provided by FNH, such as IPropertyConvention (you can see the entire list of available interfaces here).  That interface requires you to implement two methods, Accept and Apply.  

The Accept method lets you define what kinds of properties or classes will be affected by the convention.  You get full access to all of the information about the property, such as the name, the data type, and so on, to let you determine if the convention ought to apply to the property.  For example, the Accept method for the string column convention I mentioned earlier looks like this:

public bool Accept(IProperty target)
{
    return target.PropertyType == typeof(string);
}

The Apply method lets you perform mapping actions on the property or class just as you would in a ClassMap<T> class, which is where mappings are defined in FNH (each one of those classes would correspond to a “hbm.xml” file used by NHibernate when not using FNH).  This is easier shown than explained, so here’s the Apply method for that string column convention:

public void Apply(IProperty target)
{
    target.ColumnNames.Add("str" + target.Property.Name);
}

So, with that convention in place, all I have to do to define a mapping between an entity property named Address and a column named strAddress is:

Map(x => x.Address);
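
To see that in context, here’s a minimal sketch of a complete map leaning on the conventions; the Customer entity and its properties are made up for illustration:

public class CustomerMap : ClassMap<Customer>
{
    public CustomerMap()
    {
        Id(x => x.Id);
        Map(x => x.Name);    // the string convention resolves this to strName
        Map(x => x.Address); // ...and this one to strAddress
        Map(x => x.Age);     // an analogous int convention would give intAge
    }
}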

You can, of course, override the conventions you define at any point that your system deviates, so you’re never painting yourself into a corner by defining conventions. You’re just taking care of the majority case, and eliminating a lot of code (and magic strings) in the process.

Since I had conventions and Intellisense working for me, I was able to convert all the hbm.xml mappings I had defined the first time to FNH mappings in a single evening.  If you’re using NHibernate in your application, you should take a serious look at Fluent NHibernate.  Not just for the reasons I’ve covered here, but also for the help it provides with refactoring (property names in mappings are affected by refactoring tools) and the ability to test your mappings.  Kudos to James Gregory,  Paul Batum, Andrew Stewart, Chad Myers, Jeremy Miller, et al. for providing a way for the rest of us to improve our NHibernate experience.