Using PowerShell to Ease the Pain of Branch-per-feature in Web Applications

I’m currently using Mercurial for source control at work, and I absolutely love it.  I love the cheap branching, fast operations, and merging that actually works.  One of the side effects of using a branch-per-feature workflow in Mercurial is that you’re constantly creating new copies of your project structure in the file system.  Unlike Git, where the guidance is to create branches within one working copy of the repository and switch between them, the Mercurial community recommends creating full clones instead.

Even when doing development work, I like to use IIS for serving my web applications rather than the Visual Studio web server (Cassini), so my development environment is as close to production as possible.  I’ve gotten bitten a couple of times when transitioning from Cassini during development to IIS in production, so I decided to just use IIS from the start.

Using these technologies in combination, I started to run into a problem.  Every time I created a clone of my web app’s repository, I had to set up the directory as an IIS application, plus add the permissions required for IIS to read static files (and in my case, write to a temp images directory, since I’m using the Microsoft charting tool).  To make this process easier, I whipped up a couple of PowerShell scripts to take care of all those tasks in one fell swoop.

    # New-App.ps1
    # usage: New-App "VirtualDirectoryName"
    param([string]$appName = "appName")

    $path = $pwd.Path
    $fullAppName = 'IIS:\Sites\Default Web Site\' + $appName

    # Create the IIS application pointing at the current directory (ni = New-Item)
    pushd
    cd iis:
    ni $fullAppName -physicalPath $path -type Application
    cd c:
    popd

    # Grant the IIS accounts rights to the directory
    $acl = Get-Acl $pwd.Path
    $inherit = [System.Security.AccessControl.InheritanceFlags]"ContainerInherit, ObjectInherit"
    $propagation = [System.Security.AccessControl.PropagationFlags]"None"
    $arIUSR = New-Object System.Security.AccessControl.FileSystemAccessRule("IUSR", "FullControl", $inherit, $propagation, "Allow")
    $arIISIUSRS = New-Object System.Security.AccessControl.FileSystemAccessRule("IIS_IUSRS", "FullControl", $inherit, $propagation, "Allow")
    $acl.SetAccessRule($arIUSR)
    $acl.SetAccessRule($arIISIUSRS)
    Set-Acl $pwd.Path $acl

A couple of notes about this one.  A few things are hard-coded, like the site name (this will probably be “Default Web Site” on your machine too, unless you’re running a server OS) and the “FullControl” access level, which can be changed to whatever minimum level of access you need the IIS accounts to have, like “Read” or “ReadAndExecute”.

I wish there were an easier way to set the permissions on the directory, but the System.Security .NET API was the only way I found.  I’ve always felt that calling .NET code from PowerShell was a little bit kludgey, but I’m glad it’s at least possible to fill in the gaps in functionality.

To avoid leaving an orphaned IIS application behind when I’m done with a branch, I use this script, which searches for the app by its physical path.  This one could be fleshed out a little more: it assumes that the application lives at the root of the default web site, and that you’re executing the script from the working directory itself.

    # Remove-App.ps1
    # usage: run from the working copy whose IIS application you want to remove
    $path = $pwd.Path

    pushd
    cd iis:
    cd 'IIS:\Sites\Default Web Site'

    # Find the application whose physical path matches the current directory
    # and remove it (ri = Remove-Item)
    $site = ls | Where-Object {$_.PhysicalPath -eq $path}
    ri $site.Name

    cd c:
    popd

One more note: you’ll need to import the “WebAdministration” PowerShell module for these scripts to work.  If you’re on Windows 7 and you’ve got PowerShell docked on your taskbar, you can just right-click and choose “Import System Modules”, and the web administration module (along with a few others) will be imported into your PowerShell session.  Otherwise, you can execute “Import-Module WebAdministration” at the PowerShell prompt or in your profile script.

Hope this helps somebody!

Customer-Specific Behaviors in a Multi-Tenant Web Application Using Windsor

The main application that I work on is a multi-tenant web application, and we’ve always struggled a bit with the best way to separate the different behaviors and features required by the different customers who use our application.  Right now, it’s accomplished by a mixture of database lookups, subclasses, and the dreaded switch statement.

Lately, I’ve been working on a proof of concept for a new architecture.  We’re introducing several new things, including the Windsor inversion of control container.  After working with it a little bit and starting to get my mind around the benefits of leaving the responsibility of object construction to the container, I started to think that there must be a way to use the container to separate customer-specific behavior into different implementations of the same interface.  That way customers’ rules would be nicely isolated and easy to find.  In order to do that I needed to find a way to inject a particular interface implementation based on a run-time value, in my case the organization to which the logged-on user belongs.

After quite a bit of Googling, I finally came across this post by Ayende Rahien.  The IHandlerSelector was exactly what I was looking for.  It works like this: each time an object is constructed, Windsor calls the HasOpinionAbout method on each of the handler selectors you’ve defined, which lets you decide, based on the interface being requested, whether you want to choose the implementation yourself.  If you decide that you do, Windsor will call the SelectHandler method of your handler selector, giving you a full list of all the registered implementations of the interface being requested.  Based on whatever logic you want, you just return one of those implementations.

It’s a bit clearer with a concrete example.  One of the core concepts in my application is the inspection of certain kinds of machinery.  However, each organization that uses the application has different rules and processes around inspections.  So, I’ll define an interface called IInspectionService and have an implementation per customer.  Let’s say we have two customers that use the app, Acme and ServiceCo (note: totally made-up business names).
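
Here’s a rough sketch of those types (the PerformInspection method is just a stand-in for whatever operations the real interface would expose):

    public interface IInspectionService
    {
        // Illustrative member; the real interface would expose whatever
        // inspection operations the application actually needs.
        void PerformInspection(int machineId);
    }

    public class AcmeInspectionService : IInspectionService
    {
        public void PerformInspection(int machineId)
        {
            // Acme's inspection rules and processes go here.
        }
    }

    public class ServiceCoInspectionService : IInspectionService
    {
        public void PerformInspection(int machineId)
        {
            // ServiceCo's inspection rules and processes go here.
        }
    }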

Now that we’ve got those defined, we need to register the interface and implementations with the container, then define our IHandlerSelector.  As with any IoC registration, you’ll want to do this just once, as your application is starting (the simplest way is to do it inside the Global.asax.cs of your web app).
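
Something like this (a rough sketch; I’m assuming Windsor’s fluent registration API, and CustomerHandlerSelector is just the name I’m giving the selector we’ll define next):

    // A sketch of the registration, e.g. in Application_Start.
    // Assumes Castle.Windsor and Castle.MicroKernel.Registration are referenced.
    var container = new WindsorContainer();

    container.Register(
        Component.For<IInspectionService>().ImplementedBy<AcmeInspectionService>(),
        Component.For<IInspectionService>().ImplementedBy<ServiceCoInspectionService>());

    // Ask the kernel to consult our selector whenever it resolves a component.
    container.Kernel.AddHandlerSelector(new CustomerHandlerSelector());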

The implementation of IHandlerSelector needs a bit of explaining. Whenever the container is about to create an instance of anything, it will call the HasOpinionAbout method on all IHandlerSelectors that you’ve registered. The container’s basically asking, “Do you want to get directly involved in choosing which implementation to use?” In our case, we only want to get our hands in there if the container is trying to select some implementation of our IInspectionService, so we return “true” from HasOpinionAbout if that’s the case.

If HasOpinionAbout returns “true” for an IHandlerSelector, the container will then call that IHandlerSelector’s implementation of SelectHandler.  The key parameter to that method is the third one, the array of IHandlers.  Every registered implementation that could possibly satisfy the interface in question (IInspectionService in this case) will be in that array; you just have to pick the one you want to use, using any criteria you like.  Since we’re talking about this in the context of a multi-tenant system, I based the decision here on the group that the currently logged-on user belongs to.
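
Putting that together, the selector might look something like this (a rough sketch; GetCurrentUserOrganization is a hypothetical helper standing in for however your app figures out the logged-on user’s organization):

    using System;
    using System.Linq;
    using Castle.MicroKernel;

    public class CustomerHandlerSelector : IHandlerSelector
    {
        public bool HasOpinionAbout(string key, Type service)
        {
            // Only get involved when the container is resolving IInspectionService.
            return service == typeof(IInspectionService);
        }

        public IHandler SelectHandler(string key, Type service, IHandler[] handlers)
        {
            // Pick the implementation whose name starts with the current
            // customer's organization, e.g. "Acme" -> AcmeInspectionService.
            string organization = GetCurrentUserOrganization();

            return handlers.First(h =>
                h.ComponentModel.Implementation.Name.StartsWith(organization));
        }

        private string GetCurrentUserOrganization()
        {
            // Hypothetical: look up the logged-on user's organization from
            // session, claims, or wherever your app keeps it.
            return "Acme";
        }
    }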

So what does all this IoC stuff get us?  Well, it particularly shines in an ASP.NET MVC application, where you can have the IoC container take control of creating your controllers, and thus specify all of your controllers’ dependencies in their constructors.  When you do that in combination with an IHandlerSelector, you completely remove all the messy “if…then” code related to different customers from your controller action methods.
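
Here’s a rough sketch of what such a controller might look like (the Perform action and the PerformInspection call are just illustrative):

    using System.Web.Mvc;

    public class InspectionController : Controller
    {
        private readonly IInspectionService _inspectionService;

        // Windsor supplies the customer-specific implementation here,
        // via the handler selector above.
        public InspectionController(IInspectionService inspectionService)
        {
            _inspectionService = inspectionService;
        }

        // No customer-specific branching required; the container already
        // handed us the right implementation for the logged-on user.
        public ActionResult Perform(int id)
        {
            _inspectionService.PerformInspection(id);
            return View();
        }
    }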

In the code above, when the container creates the InspectionController, it will use our IHandlerSelector to pick the appropriate implementation of IInspectionService to pass in to the controller’s constructor.  So, if a customer from Acme is signed in, _inspectionService will be an AcmeInspectionService, and if a customer from ServiceCo is logged in, _inspectionService will be a ServiceCoInspectionService.

I think this is a great way to segregate customer-specific logic.  It’s all in one class per customer, and doesn’t clutter up the rest of your application.  If needed, you could also pull operations that have to happen regardless of the customer into a base class to reduce duplication.

I hope this is useful to somebody!

My First “Speaking Tour”

I took a trip up to northwest Arkansas earlier this week to speak to several different .NET User Groups in the area about NHibernate.  On Monday evening, I spoke at my user group “alma mater”, the Fort Smith .NET User Group.  It was great to see a bunch of my old friends from Data-Tronics, including group president David Mohundro.

The next day, I was quite busy.  I met Robby Gregory and a few other Wal-Mart employees for lunch, and at 1:30 I spoke at the Wal-Mart internal .NET User Group.  I then moved on to the Tyson internal user group (known as “DevLoop”) at 4:00, and directly on from there to the Northwest Arkansas .NET User Group at 6:00.  I enjoyed hanging out with Jay, John, Devlin, Michael, and several others at Jose’s afterwards, but by that time I was pretty beat.

I’m really glad that I’m getting to start speaking more (big thanks to my new employer Falcon Applications for letting me keep the dates only a week after I started the job!).  I’ve submitted a session for the Dallas TechFest, plan to submit one or two for DevLink, and will continue to try to speak at other user groups later this year, as well.  Looks like I’m well on track to meeting at least one of my goals for 2010!

Thanks to all the user group leaders who invited me up; I hope to be able to visit again some time!

I Love Lucy

This past Friday, the Sullivan family got a little bit bigger.

Lucy Sullivan

Mom and baby are both doing fine.  I have to say, at least from the daddy perspective, the prior experience definitely helps.  I feel like things are going much easier than they did with our first daughter, Molly.  We’re still getting quite a bit less sleep than normal, but there aren’t as many unknowns, and we don’t get stressed out about everything the way we did the first time.

And Lucy herself has been making it pretty easy; she’s a champion sleeper, just like her daddy!

Needless to say, blogging has taken kind of a back seat, but I hope to start back up again soon.

Why is the .NET Community Using git?

(Disclaimer: I have minimal experience with DVCSs.  The title of this post is an honest question, and if I’ve made any incorrect assumptions or gotten something just flat-out wrong, I’d love to be corrected.  Please let me know in the comments!)

My friend David Mohundro wrote a post the other day with some great tips on using git on Windows.  That got me thinking about why the .NET open-source community has started to coalesce around git as its source control tool of choice.  I know git is the new hotness, but it’s not the only distributed version control system (DVCS) out there.  In particular, I wonder why Mercurial hasn’t caught on.  It’s a DVCS like git, is more compatible with Windows, and is easier for new users to learn (by virtue of simply having fewer commands and using terminology closer to that of older VCSs).

I know that git can be used on Windows using a project called “msysgit,” but the fact that that project exists at all should tell us something about git.  Git was developed by Linus Torvalds to be used when working on the Linux kernel, using a combination of C and Linux shell scripts.  The maintainers have very little motivation to make git cross-platform, since it already solves the problem it was designed to solve.  In addition, the maintainers of the msysgit project have not always been very interested in solving the problems of their users, as evidenced by comments like these.

Mercurial, on the other hand, is designed to be cross-platform in the first place  (from the about page on the Mercurial site:  “Mercurial was designed with platform independence in mind.”).  It seems to me that it ought to be a more natural fit for people developing on the Windows platform.  And it’s not as if it’s some obscure bit of technology used by only a few people; large organizations (e.g. Mozilla and the Python Software Foundation) as well as smaller ones are using it.  On top of that, Codeplex (Microsoft’s open-source project hosting site) now supports Mercurial, and so does Google Code.  So why git instead of Mercurial?

I have a couple of theories about why lots of .NET devs have made this choice, but please keep in mind the disclaimer at the top of this post.

  1. Features – Git does have a few features that Mercurial lacks.  Most notably, it has an index or “staging area”, which allows you to have more control of what a commit will look like.  Local branching is also a bit different, since git doesn’t actually create a copy of all the files in the repository when you create a branch.  It would seem to me, though, that the main feature that attracts OSS devs to git is its distributed nature, which fits very well with the OSS workflow, and which Mercurial shares.
  2. A Superior Project Hosting Site in GitHub – Almost immediately after its launch, GitHub became the place to be for open source projects, and for good reason.  The site offers a great user experience, and lots of tools to make maintaining a project easier, such as the network graph and pull requests.  Bitbucket aims to do the same for Mercurial, and has some of the same features, but hasn’t caught on the way GitHub has. (Circular logic there, maybe? Oh, well.)
  3. Rails Envy

I hate to say it, but this last one is the one that I suspect may be closest to the truth.  .NET developers, especially ones heavily involved in open source software, have always had a bit of an inferiority complex.  At first we felt inferior to Java devs, who had a big head start on figuring out the limitations of their platform, which led to the development of lots of patterns and a plethora of open source tools at their disposal (their culture was a lot more amenable to open source a lot earlier than the Microsoft culture was).  The similarities between the .NET and Java platforms and languages were to the .NET community’s advantage; it was straightforward to directly port many of the more useful Java tools, and the patterns translated easily.

A few years ago, a shift in who we compared ourselves to began.  We saw how much less friction there was when using a framework like Rails and a malleable language like Ruby.  So, as we did before, the .NET OSS community began adopting the tools and patterns used by those we envied, er, admired.  Some of these things translated pretty well.  The principle of convention over configuration, for example, has nothing to do with platforms; it’s just a mind shift that .NET OSS devs were willing and eager to make.  The tools, however, can’t always make the jump.  Windows has always been a second-class citizen when it comes to Rails development (DHH has made his disdain for the platform quite clear), and that tends to create a self-perpetuating cycle.  The existing Rails devs don’t put much effort into making things like tools and plugins cross-platform, so the experience sucks for devs on Windows, who finally give up and switch to a Mac (or at least do their Rails development on a Linux VM), so nobody’s clamoring for the tools to work on Windows.  Regardless, many .NET OSS devs tend to use a lot of the same tools as Rails devs: things like rake, Cucumber, and, of course, git.  It sometimes seems like we’re bending over backwards to make tools that weren’t designed for our environment work for us anyway (e.g. msysgit).

So are the few extra features in git or the better UX on GitHub really enough to justify the friction of using git on Windows, or is it just a cargo-cult mentality?  As I said, I have very little experience with either git or Mercurial, so I may be missing something big.  I’d love to hear from someone who has experience with both DVCSs to set me straight.

In any case, I hope that at some point, we as .NET developers can get over our inferiority complex and just feel comfortable in our own skin.  That doesn’t mean using the Microsoft tool every time; it means acknowledging that just because a tool is useful in one environment doesn’t mean it’s a better fit (let alone the aesthetically superior choice) for ours.

Developing Against Large Databases

The database for the main application that I work on is fairly large, about 50 GB.  Not the largest SQL Server database I’ve seen, by far, but one that is non-trivial to move across the network.  This has a significant impact on the development process of our team.  The size of the database, combined with the modest size of the hard drives on the laptops we use, means that keeping a local copy of production data is infeasible.

Getting a local database’s schema in sync with production would be easy, but in our case, that’s not enough to create a working system.  Our application has a large amount of what I call “static data”, such as menu structures or sets of permissions.  So getting a database that’s “up to date” means not only getting the schema in sync with production, but also ensuring this static data is up to date as well.

Using some sort of tool for evolutionary database design like Rails ActiveRecord migrations would alleviate some of these problems, because schema and static data changes would be a part of the source code. Developers could just run the migrations on their local databases after they updated from source control.  However, this still wouldn’t solve the whole problem.  In order to effectively develop features within the application, our developers need a reasonable set of test data so that the system isn’t just completely empty.

There are tools out there, such as Red Gate’s SQL Data Generator or the data generators in Visual Studio, that will do a pretty good job creating test data by looking at database column and table names, foreign keys, and such.  This might work out even for such a large system as ours, except that a lot of key tables have “polymorphic” relationships, meaning that the foreign key that they contain could point to the primary key in a number of different tables, depending on the particular piece of data.

For example, say we have an “Invoices” table.  We have a multi-tenant system, and our customers often base their invoicing on different things.  Some might base invoices on each individual service they performed for their clients, while others might base them on the amount of time logged in the time and expense module for a client.  In each case, the invoice database record needs to point back to a record in the table that’s most relevant, given the customer’s business processes.  Another example of this kind of relationship might be audit records, which might point back to just about any other table in the system.
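
To make the shape of that relationship concrete, here’s a rough sketch of what such a record might look like (the property names here are hypothetical, not our actual schema):

    // Hypothetical shape of a "polymorphic" invoice record: SourceTable and
    // SourceId together identify the related row, so there's no single real
    // foreign key for a data-generation tool to discover.
    public class Invoice
    {
        public int InvoiceId { get; set; }
        public string SourceTable { get; set; } // e.g. "ServicesPerformed" or "TimeEntries"
        public int SourceId { get; set; }       // primary key value in whichever table SourceTable names
        public decimal Amount { get; set; }
    }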

Since these “polymorphic” associations are not defined as proper foreign keys in the database, those data generation tools wouldn’t be able to figure out that the columns were foreign keys at all, and as far as I’ve been able to tell, it’s not possible to manually define foreign key relationships with a number of different tables.  And even if it were, I don’t think I could prevent the tool from associating an invoice from a company that bases its invoices on services performed with a time and expense entry.

There are a couple of ways that our developers cope with this, neither of which is ideal.  The first, which most of our team members use, is to develop against one of several shared database instances on our data tier servers.  The problems associated with using shared databases for development are well established; developers simply can’t be as productive when they’re stepping all over each other with data and schema changes.

The second, which I use, is to keep an instance of the database on an external hard drive.  This keeps me isolated from the changes made by other developers, and it’s a significantly better experience than using a shared database, but problems start to crop up when I get latest from source control.  Developers will check in source code changes that require data or schema changes in order to work, and my local database won’t have those changes.

So, at the end of the day, the only reliable way to get an up-to-date database is to restore from the last backup of production.  Since the database is so big, that restore takes multiple hours, which can seriously impede the development process.  This actually impacts developers using shared databases even more than it does me, because when one of those shared databases has to be refreshed, multiple developers are put out of commission.

The only way I’ve thought of to make this a little better is to manually create a script that will cherry-pick a certain number of rows from what’s essentially the “root” table of our database and spider out to include all the data related to those cherry-picked rows, while also including all rows from the tables that contain static data.  The end result would be a much smaller test database containing a meaningful subset of production data that could be moved around and refreshed in minutes or seconds rather than hours.  The problems with this idea are that it would be onerous to create the script in the first place, since our database contains over 500 tables, and that it would be just as onerous to keep the script up to date with any changes to tables or columns.

I wish there were an easier answer to this.  I have a few ideas in the back of my head about writing a tool that might help me create those scripts, but I think it would still end up being a very manual process of defining relationships that only a human being with knowledge of the system would be able to come up with.  If any readers have experience with this kind of thing, I’d love to hear how you dealt with it.

TFS Installation: No Longer Rocket Science

I think one of the best things that I’ve observed in playing around with TFS 2010 was how easy it was to install.  This was a pretty big hurdle in previous versions, but 2010 has installation pared down to a “Next, Next, Finish” level of complexity in some simple scenarios.  In particular, the “Basic” installation, which doesn’t include the SharePoint or Reporting Services components, is brain-dead simple.

In addition, TFS can now be installed on client OSes (Vista and above) and can use the free SQL Server Express.  It will even go so far as to install SQL Express for you if you don’t already have it installed (you probably already do if you’ve installed Visual Studio).  You can download Beta 2 of TFS 2010 from here.

However, if you don’t want to sully your pristine machine with Beta products, there’s a fully configured Virtual PC image available for download here.

In other words, it’s pretty trivial now to try out TFS yourself if you’re stuck on SourceSafe and are looking to try out all the other mainstream options, or (like me) if your shop’s already using TFS and you’d like to try your hand at some of the administrative features that are behind lock and key.

A Plan for 2010

Everyone on my blogroll is taking the opportunity of the new year to take stock of 2009 and make public plans for 2010, so I thought I would join in.

In 2009, I made an effort to start speaking a little bit more.  I spoke at my first code camp (the Northwest Arkansas Code Camp), on the Harding and LSU campuses during recruiting trips for my employer, and at the Baton Rouge .NET User Group.  I hope to continue ramping that up in 2010.  I’m presenting at my home meeting, the Shreveport .NET User Group, later this month, and I have a trip scheduled to the northwest Arkansas area in March.  I’d like to make it down to southern Louisiana at some point, too; the guys down in Lafayette and New Orleans in particular need some love.

I also need to step up in promoting the SDNUG this year.  I felt like I was able to kind of coast for a lot of 2009 with our current set of attendees, and didn’t really make as much of an effort as I could have to make more people aware of the group’s existence.  I’ve already begun to remedy that this year by seeking out area software companies and making phone calls, which will hopefully yield some more members.  I need to get on top of speaker scheduling, too, both within the group and without.  There are several group members that I think would make great presenters; I just need to convince them that it’s a good idea!

I want to start blogging more, as well.  One post a month just isn’t where I want to be.  I think having a regular schedule will help with that, so I’m pledging right now in public to post something on my blog at least once a week.  That plan may take a hit when my daughter is born in February, but I’m going to give it my best effort.

I think something else that may help with blogging is having an area of focus.  I’ve really been interested in application lifecycle management lately, especially after all the stuff I saw at PDC, so at least for now, I think I’ll focus on Team System and TFS.  I’ve been listening to the back catalog of Radio TFS, which has been great, and I plan to seek out additional resources for ideas.  I’ve recently installed TFS on a virtual machine to play around with, so hopefully that will lead to some ideas, as well.

I hope you had a great 2009; here’s to 2010!

PDC09 Debrief

I’ve just returned from my first major conference, Microsoft’s Professional Developers’ Conference in Los Angeles, California.  I have to say, things were a bit different than I expected, but in a good way.  More on why later; first, the play-by-play!

Day 0

The air travel wasn’t as bad as I expected it to be.  A quick hop to Houston, and then 3 hours to LAX.  When I finally got to the hotel (the Westin Bonaventure in downtown LA), I met up with Mike Huguet, a fellow user group leader from Baton Rouge.  We ended up down at the Figueroa Hotel with Chris Koenig, a Developer Evangelist from the South Central district, and enjoyed the open bar they had set up for PDC attendees.  I got to meet Dave Bost, the guy behind the Thirsty Developer podcast, and several other Microsoft employees.  We got a little glimpse into Microsoft culture by talking with these guys, and confirmed that the structure of myriad customer liaisons that Microsoft employs is a bit confusing, even to the insiders!  I had planned to go to the Party with Palermo, and still rather wished that I had, but we were having such a good time at the Fig, we decided to stay.  We also ended up eating at this great little greasy spoon that only took cash, and has been open continuously, 24 hours a day, since 1924!

Day 1

The first day at PDC was pretty much all about Azure.  This was a bit disappointing to me, since the current application that I work on couldn’t really make much use of the cloud.  We have a relatively small user base (a thousand or so users), so scaling out really wouldn’t buy us much.  I went to the Future of C# and VB session with Luca Bolognese (who has an amazing Italian accent), but wasn’t really surprised by much there.  After that, I spent waaaay too much time standing in line to do an Azure hands-on lab.  I missed lunch and a session, and didn’t even get to finish the lab before the expo hall closed at 3:00.  That did, however, secure me a coveted badge stamp that would get me a free Flip video camera the following day (which my wife is loving for taking quick videos of our daughter Molly).  I went to a session on SQL Azure because I figured the database might be one place that we could actually use the scaling, but afterward I concluded that the sharding required to use it for large sets of data would create too large of an impact on our application.  The last session of the day, on Pex and code contracts, was interesting, and perhaps applicable since we’re looking to start using unit tests soon, but both technologies are still in the research stage, and may never actually make it into the framework proper.  All in all, a bit of a disappointing first day at the conference, but better things were in store for the next day.

That evening, I attended an “Influencers” party with Mike and a couple of guys from my old stomping grounds of Northwest Arkansas, Jay Smith and Jon Oswald.  I got to catch up with Colin Neller, a former co-worker at Data-Tronics and fellow Harding alum, as well as meet other community members that I’d only heard on podcasts before: Jon Galloway, Chris “Woody” Woodruff, and even Scott Hanselman.  It was cool to be able to put faces with names, and to see that those people are just human beings like you and me.  Jon in particular comes off as just about the friendliest guy in the world, really cheerful and willing to chat with anybody.  Some pretty nice food at that place, too, sushi and shish kabobs. Nom nom!

Day 2

This is when things really started to get interesting.  Steven Sinofsky did a pretty good job with his part of the keynote, and of course, the announcement that we would all be getting free laptops certainly made him some friends. 😉  The Gu was great, as usual, despite the quadruple iPhone fail.  With the features he described about Silverlight 4, it’s really starting to look like a compelling platform.  The Silverlight team has really been killing it.  They’re on a lightning-fast release pace, and not just fluff releases either.  They’re taking customer feedback, even going so far as to add elevation so that applications can do things outside the normal Silverlight sandbox, which at one point they said they’d never do.

The sessions were great that day, too.  Scott Hanselman’s MVC talk was great edu-tainment, and it was great to see some of the new templating features he showed off.  I went to an open source panel, and got to meet Miguel de Icaza, which was pretty cool.  I also had an interesting conversation with some of the people on the Entity Framework team.  We’re starting to think  about integrating an ORM into our product, and we were leaning toward NHibernate.  I asked the team members point blank why I should use EF instead.  They were pretty frank with me, and basically said “NHibernate is a mature product, and we’re still relatively new to the ORM space, but we’re making a lot of big strides in version 4.”  Between POCO support, transparent lazy loading, and the code-only (read “Fluent”) configuration model, most of the things on my wish list have been met.  It might be worth some further scrutiny at this point.  This is when I realized that all the stuff about getting to interact with the product teams was real, and not just conference marketing.

That evening, rather than going to the big “Underground @ PDC” party (for which there was an enormous line), Mike, Chris, Jay, John, and I hung out at the ESPN Zone, kind of a sports bar/restaurant a la Buffalo Wild Wings.  We had some great discussions about managing communities, Microsoft culture, the MVP program, and the role of Developer Evangelists.  I’m starting to get the feeling that this is the kind of thing one needs to go to conferences for.  Community leaders can talk via Twitter or email all the time, but it’s only during conference time that we get to take advantage of the high bandwidth of in-person communication.

Day 3

The third day was all about ALM tools for me.  I went to presentations on MSDeploy, the new Test and Lab Manager, Team System process customization, and a kind of roll-up presentation about starting from a project that just compiles to one that’s under CI with tests (unit and coded UI).  I also spoke with Microsoft employees from several different teams about our particular difficulties with database deployments.  I’ve got several ideas now, and I’m looking forward to seeing if we can reduce some of the pain that we’re experiencing in that area right now.  By this point in the week, I was pretty wiped out, so I headed to the hotel and crashed.

Takeaways

  1. Prefer interactions with product team members and community leaders to attending sessions. You can watch the sessions online later if you miss one you really wanted to see.
  2. Leave your laptop in the hotel room (locked up if you feel it’s necessary).  You really won’t use it that much, and carrying it around can start to get painful after a couple of days, particularly if you’re walking to the convention center from your hotel like I was.
  3. Don’t go out of your way to get swag. You’ll probably end up with a bunch of it without even trying anyway, and if you put a dollar value on your time, you’ll quickly realize that the gizmo you’re standing in line for isn’t worth the wait.

Overall, it was a great experience.  Given how expensive it was, I don’t think it’s something I’m going to do again soon, but hopefully I can attend some smaller conferences next year. Since TechEd is in New Orleans this year, I think the Louisiana community may try to cook something up for just before or after.  Stay tuned for more info!

Single-Project Areas in ASP.NET MVC 2

The ASP.NET MVC framework brought a lot of benefits to the Microsoft web developer, such as easier automated testing and better separation of concerns, but it came with its share of pain points.  One of those pain points was the difficulty of organizing the files in your web project.  By default, all the folders that contain views, which are named after the controllers in your application, have to sit directly under the “Views” folder provided by the framework.  For large applications, the number of folders underneath “Views” can get quite unwieldy.

To help alleviate this, in the first preview of version 2 of the MVC framework, the concept of “Areas” was introduced to allow an additional hierarchical level to controller and view organization.  It was implemented in Preview 1 using separate web projects for each area, each with its own “Controllers” and “Views” folder.

This was a definite improvement, but there was some pretty quick feedback from the community about the implementation.  Having a separate project for each area means that large solutions would end up with quite a few projects, which can dramatically impact compilation time.  I can speak from experience; the main solution I work on had over 80 projects in it when I first joined my current team.  Build time  was usually about 10 minutes, and that was just compilation, no tests or other things going on in the build.  When we reduced it to three projects, build time went down to about 10 seconds.  Needless to say, as our team starts thinking about doing some MVC work, we don’t want to go back to that place.

Thankfully, in preview 2, the MVC team provided the ability to create all your areas within a single web project.  This provides all the organizational benefits without the impact to compilation time.  To add areas to your MVC web project, follow these steps:

  1. Add a folder named “Areas” to your web project.
  2. Add a folder underneath the Areas folder with the name of the area you want to create, “Accounting” for example.
  3. Add a folder called “Controllers” under the Accounting folder.  Now, when you right-click on this folder, you’ll get the “Add Controller” context menu option.
  4. Add a folder called “Views” under the Accounting folder.  This will work just like the Views folder that gets created as part of the MVC project template.  You’ll have one folder inside the Views folder for each controller in your area.
  5. Add a new class  file to the Accounting folder named “Routes.cs”.  This class will need to inherit from AreaRegistration and override the AreaName property.  It should end up looking something like this:
    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Web;
    using System.Web.Mvc;
    
    namespace MyProject.Areas.Accounting
    {
        public class Routes : AreaRegistration
        {
            public override string AreaName
            {
                get { return "accounting"; }
            }
    
            public override void RegisterArea(AreaRegistrationContext context)
            {
                context.MapRoute(
                    "accounting_default",
                    "accounting/{controller}/{action}/{id}",
                    new { controller = "Invoices", action = "ShowUnpaid", id = "" }
                );
            }
        }
    }
  6. You’ll also need to add a line to your Global.asax.cs file.  Simply call “AreaRegistration.RegisterAllAreas();” just before the first call to routes.MapRoute(), as in the sketch below.
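
Here’s a minimal sketch of what that ends up looking like (the “Default” route and its defaults are just the standard template values, not anything specific to areas):

    // Global.asax.cs -- a minimal sketch; the route name and defaults below
    // are just the standard MVC template values.
    public static void RegisterRoutes(RouteCollection routes)
    {
        routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

        // Register every area's routes before the default route is mapped.
        AreaRegistration.RegisterAllAreas();

        routes.MapRoute(
            "Default",
            "{controller}/{action}/{id}",
            new { controller = "Home", action = "Index", id = "" }
        );
    }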

That’s it!  Well, almost.  Since you can have more than one area with the same controller name, when you create an ActionLink or something similar, you have to specify which area you intend to link to.  For instance, if you wanted to link to the ShowUnpaid action of the Invoices controller in the Accounting area from some other area, you’d do it like so:

    <%= Html.ActionLink("Unpaid Invoices", "ShowUnpaid", "Invoices", new {area = "accounting"}, null) %>

Note that if you’re linking to a controller from a view within the same area, you don’t have to specify it in the ActionLink call.

I think this is a great feature, and should allow us to maintain the current level of logical partitioning within our application.  Thanks to the MVC team for putting this in!