Category Archives: Tech

Displaying Database-stored Images in ASP.NET MVC with Data URLs

I’ve worked on quite a few websites that featured user-uploaded images as part of the content. To implement a feature like this, we obviously have to store the images somewhere. One way to do that is to store all the uploaded images directly in the filesystem, then store each file’s name in the corresponding record in the database. This has always struck me as a bit clunky, since we’re storing part of the data for a Contact (say) in the database and part of it on the filesystem. I’d much rather store the image in the database and have everything in one place.

The problem in a web context is that the normal way of displaying images is to render an <img> tag in the HTML and have the browser make a subsequent request to the server at the URL contained in the tag. If you stored your images in the database, this would mean that you’d need a separate action method that would query the database again and write the image content directly to the response stream. But using a feature called data URLs, we can embed the content of the image itself in the markup of the page, with no need for another server request.

The way to do this is to transform the bytes of your image into a Base64-encoded string, put a special prefix on it, and set that as the value of the src attribute of your <img> element.

So, assuming you had an Entity Framework Contact entity that looked like this:

public class Contact
{
    public int Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public byte[] Photo { get; set; }
}

And a view-model like this:

public class ContactViewModel
{
    public int Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string PhotoString { get; set; }
}

You could grab it from the database and send it down to the view like this:

public ActionResult Index(int id)
{
    var c = _contactEntities.Find(id);
    var vm = new ContactViewModel
                 {
                     Id = c.Id,
                     FirstName = c.FirstName,
                     LastName = c.LastName,
                     PhotoString = "data:image/png;base64," + Convert.ToBase64String(c.Photo)
                 };
    return View(vm);
}

And render the view:

@model ContactViewModel

...

<ul>
    <li>
        <span>First Name:</span> @Model.FirstName
    </li>
    <li>
        <span>Last Name:</span> @Model.LastName
    </li>
    <li>
        <span>Photo:</span> <img src="@Model.PhotoString" />
    </li>
</ul>

Keep in mind that this will increase the download size of your page, so you’ll have to weigh that against the convenience of storing images in the database. Also be aware that this won’t work in IE 7 and below, and that data URLs are limited to 32 KB in IE 8. For more info, see the Wikipedia article on data URIs.

Video recording of “Taking the Pain Out of Web Deployments with MSDeploy”!

Shawn Weisfeld, the omnipresent “video guy” from INETA, was there to record my presentation at the Northwest Arkansas Code Camp, and the video has been posted to the INETA site.  You can check it out here.

I haven’t watched it yet because, like most people, I hate the sound of my recorded voice.  I’ll have to eventually, since it would be a great way to help me pick out my public speaking quirks, but I’m kind of dreading it. 😛

In any case, thanks to Shawn and INETA for recording and hosting the video!

Speaking Update

Just a quick note here about some recent and upcoming speaking engagements for me. 

A couple of weeks ago, I gave my Mercurial presentation at Houston Tech Fest.  This conference gets better and more varied every year.  I saw presentations on everything from XAML styling to the Core Data API for iOS.  I also got to meet Markus Egger, and spoke to him about writing for the project postmortem series in Code Magazine.  I’m working on a pretty cool project at work right now that I think would be interesting to Code readers, so hopefully I’ll be able to get started on that soon.

Last week I presented at both the Tyson Developers’ Conference and the Northwest Arkansas Code Camp.  I really enjoyed hanging out with some friends from the Northwest Arkansas area, as well as some of the other guys and gals I’m beginning to think of as the “Road Crew”.  No matter where I’m speaking, I’m getting to the point where I know at least a few of the other speakers, not from having worked with them or even lived in the same area, but from speaking at other conferences together before.  It’s kind of cool to start to feel a part of the “Brotherhood of the Demo”.  😉

Coming up soon, I’ll be speaking in Hattiesburg, MS at the Hub City .NET User Group, run by my friend Keith Elder.  I’m putting together an intro presentation on Silverlight for that group, which will be an interesting change of pace.  Most of my previous presentations have been on fairly advanced or at least niche subjects, but I think having an intro-level talk in my repertoire will be advantageous.

Over the weekend, I also found out that one of my submissions got accepted for CodeMash!  This will be the farthest I’ve ever travelled to speak at a conference, and the trip will end up being pretty expensive, but I’m really looking forward to this one.  The organizers have built up a stellar reputation for CodeMash, evidenced by the fact that all 800 tickets were sold in less than 4 days!  At this one, I’ll be presenting on the Entity Framework Code-First API that was released as a part of the Community Tech Preview 4 of EF.  Since the API is similar in some respects to Fluent NHibernate, I’ll be able to port some of the samples I used in my talk on that to the one on EF.

Hope to see you at an event soon!

Blank Page When Viewing an ASP.NET MVC Web Application

Since I recently purchased a new laptop for my personal dev machine, I’ve been working with a pretty fresh install of Windows 7.  As I was prepping a presentation for Tyson Dev Con next week, I ran into an odd and frustrating problem.  My presentation involves an ASP.NET MVC application, specifically served from IIS rather than the ASP.NET development server, so I enabled IIS and ASP.NET in the “Add/Remove Windows Features” dialog.

I successfully deployed the app to IIS, but when I hit the site, all I got was a blank page.  What was odd was that when I deployed a Web Forms app to IIS, it worked just fine.  This led me to think it was a problem with my MVC installation, but that wasn’t actually the case.

When you install IIS with the default features enabled, one of the things that is off by default is HTTP Redirection.  The first thing the default MVC template application does when you request the root of the app is redirect to Home/Index.  Enabling this IIS feature fixed my problem immediately.

Hopefully this will help someone who runs into the same problem!

Speaking at Houston TechFest

I’ll be speaking at the Houston Tech Fest this Saturday.  I’ll be doing the same presentation that I did at DevLink, “Introduction to Distributed Version Control with Mercurial.”  Planning to get some extra rehearsing done in the next couple of days so it’ll have a little more polish than last time.

I’m looking forward to the Community Leader Town hall the evening before on the Microsoft campus in Houston, led by my friend, Jay Smith. Always great to get together with other South Central community leaders!

Hope to see you there!

DevLink Retrospective

I had a great experience at DevLink last week.

For one thing, I got to hang out with a bunch of really smart people, some of whom I only knew through Twitter or their blogs.  What I noticed during the dinner conversations and various Open Spaces sessions was that these guys read… a lot.  I used to be a pretty voracious reader, but that seems to have changed in the last few years.  Personal programming projects, research, and experimentation have taken the place of books in my evening routine.  While those activities may help me stay up-to-date technically, I think I’ve been missing out on bigger-picture concepts that will only come from reading.  One of the takeaways from the conference for me was that I need to work on the balance between the two types of learning.  I plan to start going to bed earlier so I can read more.

Another thing that really started to crystallize in my mind at DevLink was that the primary value for me at conferences is networking and discussion with peers.  That’s not to say that the sessions weren’t valuable. On the contrary, there was a ton of great content, and I wish I had been able to see more of it.  But I’m a smart enough guy that I can usually pick up what I need about technical topics online through blog posts, articles, and tutorials.  What I can’t do every day is meet my peers face-to-face and begin to form relationships that are not only personally rewarding, but that can lead to career enrichment as well.  The various Open Spaces sessions I attended played a big part in this mind shift for me.  Kudos to Alan Stevens for helping facilitate such a great experience for the DevLink attendees.

I also got to hang out with some former co-workers from Praeses.  They took me to a restaurant where I watched my first UFC fight.  It wasn’t nearly as barbaric as I had expected, and I honestly enjoyed it a lot.  I may have to start attending the UFC-watching gatherings that those guys have from time to time.

My Mercurial session went pretty well, I think.  Based on the feedback, it definitely needs some polish in places, but I have a little bit of time to do that before I give it again at the Northwest Arkansas Code Camp and Houston Tech Fest.  If anybody who attended my session would like the slide deck, you can find it on my BitBucket account here.

All in all, it was definitely worth the trip up to Nashville, and I hope to make it back next year.

Customer-Specific Behaviors in a Multi-Tenant Web Application Using Windsor

The main application that I work on is a multi-tenant web application, and we’ve always struggled a bit with the best way to separate the different behaviors and features required by the different customers who use our application.  Right now, it’s accomplished by a mixture of database lookups, subclasses, and the dreaded switch statement.

Lately, I’ve been working on a proof of concept for a new architecture.  We’re introducing several new things, including the Windsor inversion of control container.  After working with it a little bit and starting to get my mind around the benefits of leaving the responsibility of object construction to the container, I started to think that there must be a way to use the container to separate customer-specific behavior into different implementations of the same interface.  That way customers’ rules would be nicely isolated and easy to find.  In order to do that I needed to find a way to inject a particular interface implementation based on a run-time value, in my case the organization to which the logged-on user belongs.

After quite a bit of Googling, I finally came across this post by Ayende Rahien.  The IHandlerSelector was exactly what I was looking for.  It works like this:  each time an object is constructed, Windsor calls the HasOpinionAbout method on each of the handler selectors you’ve defined, where you can determine, based on the interface that’s being requested, whether you want to decide yourself which implementation to use.  If you decide that you do, Windsor will call the SelectHandler method of your handler selector, giving you a full list of all the implementations of the interface that’s being requested that are registered with the container.  Based on whatever logic you want, you just return one of those implementations.

It’s a bit more clear with a concrete example.  One of the core concepts in my application is the inspection of certain kinds of machinery.  However, each organization that uses the application has different rules and processes around inspections.  So, I’ll define an interface called IInspectionService, and have an implementation per customer. Let’s say we have two customers that use the app, Acme and ServiceCo (note: totally made-up business names).
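Stripped down to the essentials, those types might look something like this (the DescribeProcess member is just a hypothetical placeholder for real inspection logic):

public interface IInspectionService
{
    // Hypothetical member; the real interface would expose whatever
    // inspection operations the application needs.
    string DescribeProcess();
}

public class AcmeInspectionService : IInspectionService
{
    public string DescribeProcess()
    {
        return "Acme's inspection rules and processes";
    }
}

public class ServiceCoInspectionService : IInspectionService
{
    public string DescribeProcess()
    {
        return "ServiceCo's inspection rules and processes";
    }
}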

Now that we’ve got those defined, we need to register the interface and implementations with the container, then define our IHandlerSelector. As with any IoC registration, you’ll want to do this just once, as your application is starting (the simplest way is to do it inside the Global.asax.cs of your web app).
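Here’s a sketch of what that might look like, assuming a Windsor container kept in a static field (OrganizationHandlerSelector is defined below):

using Castle.MicroKernel.Registration;
using Castle.Windsor;

public class MvcApplication : System.Web.HttpApplication
{
    public static IWindsorContainer Container;

    protected void Application_Start()
    {
        // (Route registration and controller factory wiring omitted.)
        Container = new WindsorContainer();

        // Register every customer-specific implementation under the
        // same service interface.
        Container.Register(
            Component.For<IInspectionService>().ImplementedBy<AcmeInspectionService>(),
            Component.For<IInspectionService>().ImplementedBy<ServiceCoInspectionService>());

        // Hook our handler selector into the container's kernel.
        Container.Kernel.AddHandlerSelector(new OrganizationHandlerSelector());
    }
}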

The implementation of IHandlerSelector needs a bit of explaining. Whenever the container is about to create an instance of anything, it will call the HasOpinionAbout method on all IHandlerSelectors that you’ve registered. The container’s basically asking, “Do you want to get directly involved in choosing which implementation to use?” In our case, we only want to get our hands in there if the container is trying to select some implementation of our IInspectionService, so we return “true” from HasOpinionAbout if that’s the case.

If HasOpinionAbout returns “true” for an IHandlerSelector, the container will then call that IHandlerSelector’s implementation of SelectHandler.  The key parameter to that method is the third one, the array of IHandlers.  All the implementations that could possibly satisfy the interface in question (IInspectionService in this case) that have been registered with the container will be inside that array; you just have to pick the one you want to use, using any criteria you like.  Since we’re talking about this in the context of a multi-tenant system, I based the decision here on the group that the currently logged-on user belongs to.
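A sketch of ours might look like this; GetCurrentUserOrganization is a stand-in for however your application looks up the logged-on user’s group, and matching on the implementation’s type-name prefix is just one arbitrary convention:

using System;
using System.Linq;
using Castle.MicroKernel;

public class OrganizationHandlerSelector : IHandlerSelector
{
    public bool HasOpinionAbout(string key, Type service)
    {
        // Only weigh in when the container is resolving IInspectionService.
        return service == typeof(IInspectionService);
    }

    public IHandler SelectHandler(string key, Type service, IHandler[] handlers)
    {
        // "Acme" or "ServiceCo" -- however your app determines it.
        var organization = GetCurrentUserOrganization();

        // Pick the implementation whose type name starts with the
        // organization's name (e.g. AcmeInspectionService), falling back
        // to the first registered implementation.
        return handlers.FirstOrDefault(h =>
                   h.ComponentModel.Implementation.Name.StartsWith(organization))
               ?? handlers[0];
    }

    private static string GetCurrentUserOrganization()
    {
        // In a real application, this would come from the authenticated
        // user's profile or a database lookup.
        return "Acme";
    }
}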

So what does all this IoC stuff get us?  Well, it particularly shines in an ASP.NET MVC application, where you can have the IoC container take control of creating your controllers, and thus specify all of your controllers’ dependencies in their constructors.  When you do that in combination with an IHandlerSelector, you completely remove all the messy “if…then” code related to different customers from your controller action methods.
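For example, a controller might look like this (the Index action and the DescribeProcess call are illustrative only):

using System.Web.Mvc;

public class InspectionController : Controller
{
    private readonly IInspectionService _inspectionService;

    // Windsor supplies the appropriate IInspectionService when it
    // constructs the controller.
    public InspectionController(IInspectionService inspectionService)
    {
        _inspectionService = inspectionService;
    }

    public ActionResult Index()
    {
        // No per-customer branching needed here.
        ViewData["Process"] = _inspectionService.DescribeProcess();
        return View();
    }
}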

In the code above, when the container creates the InspectionController, it will use our IHandlerSelector to pick the appropriate implementation of IInspectionService to pass in to the controller’s constructor.  So, if a customer from Acme is signed in, _inspectionService will be an AcmeInspectionService, and if a customer from ServiceCo is logged in, _inspectionService will be a ServiceCoInspectionService.

I think this is a great way to segregate customer-specific logic.  It’s all in one class per customer, and doesn’t clutter up the entirety of your application.  If needed, you could also have a base class for operations that need to happen regardless of the customer, to reduce duplication.

I hope this is useful to somebody!

My First “Speaking Tour”

I took a trip up to northwest Arkansas earlier this week to speak to several different .NET User Groups in the area about NHibernate.  On Monday evening, I spoke at my user group “alma mater”, the Fort Smith .NET User Group.  It was great to see a bunch of my old friends from Data-Tronics, including group president David Mohundro.

The next day, I was quite busy.  I met Robby Gregory and a few other Wal-Mart employees for lunch, and at 1:30 I spoke at the Wal-Mart internal .NET User Group.  I then moved on to the Tyson internal user group (known as “DevLoop”) at 4:00, and directly on from there to the Northwest Arkansas .NET User Group at 6:00.  I enjoyed hanging out with Jay, John, Devlin, Michael, and several others at Jose’s afterwards, but by that time I was pretty beat.

I’m really glad that I’m getting to start speaking more (big thanks to my new employer Falcon Applications for letting me keep the dates only a week after I started the job!).  I’ve submitted a session for the Dallas TechFest, plan to submit one or two for DevLink, and will continue to try to speak at other user groups later this year, as well.  Looks like I’m well on track to meeting at least one of my goals for 2010!

Thanks to all the user group leaders who invited me up; I hope to be able to visit again some time!

Why is the .NET Community Using git?

(Disclaimer: I have minimal experience with DVCSs.  The title of this post is an honest question, and if I’ve made any incorrect assumptions or gotten something just flat-out wrong, I’d love to be corrected.  Please let me know in the comments!)

My friend David Mohundro wrote a post the other day with some great tips on using git on Windows.  That got me thinking about why the .NET open-source community has started to coalesce around git as its source control tool of choice.  I know git is the new hotness, but it’s not the only distributed version control system (DVCS) out there.  In particular, I wonder why Mercurial hasn’t caught on.  It’s a DVCS like git, is more compatible with Windows, and is easier for new users to learn (by virtue of simply having fewer commands and using terminology closer to that of older VCSs).

I know that git can be used on Windows using a project called “msysgit,” but the fact that that project exists at all should tell us something about git.  Git was developed by Linus Torvalds to be used when working on the Linux kernel, using a combination of C and Linux shell scripts.  The maintainers have very little motivation to make git cross-platform, since it already solves the problem it was designed to solve.  In addition, the maintainers of the msysgit project have not always been very interested in solving the problems of their users, as evidenced by comments like these.

Mercurial, on the other hand, is designed to be cross-platform in the first place  (from the about page on the Mercurial site:  “Mercurial was designed with platform independence in mind.”).  It seems to me that it ought to be a more natural fit for people developing on the Windows platform.  And it’s not as if it’s some obscure bit of technology used by only a few people; large organizations (e.g. Mozilla and the Python Software Foundation) as well as smaller ones are using it.  On top of that, Codeplex (Microsoft’s open-source project hosting site) now supports Mercurial, and so does Google Code.  So why git instead of Mercurial?

I have a couple of theories about why lots of .NET devs have made this choice, but please keep in mind the disclaimer at the top of this post.

  1. Features – Git does have a few features that Mercurial lacks.  Most notably, it has an index or “staging area”, which allows you to have more control of what a commit will look like.  Local branching is also a bit different, since git doesn’t actually create a copy of all the files in the repository when you create a branch.  It would seem to me, though, that the main feature that attracts OSS devs to git is its distributed nature, which fits very well with the OSS workflow, and which Mercurial shares.
  2. A Superior Project Hosting Site in GitHub – Almost immediately after its launch, GitHub became the place to be for open source projects, and for good reason.  The site offers a great user experience, and lots of tools to make maintaining a project easier, such as the network graph and pull requests.  Bitbucket aims to do the same for Mercurial, and has some of the same features, but hasn’t caught on the way GitHub has. (Circular logic there, maybe? Oh, well.)
  3. Rails Envy

I hate to say it, but this last one is the one that I suspect may be closest to the truth.  .NET developers, especially ones heavily involved in open source software, have always had a bit of an inferiority complex.  At first we felt inferior to Java devs, who had a big head start on figuring out the limitations of their platform, which led to the development of lots of patterns and a plethora of open source tools at their disposal (their culture was a lot more amenable to open source a lot earlier than the Microsoft culture was).  The similarities between the .NET and Java platforms and languages were to the .NET community’s advantage; it was straightforward to directly port many of the more useful Java tools, and the patterns translated easily.

A few years ago, a shift in who we compared ourselves to began.  We saw how much less friction there was when using a framework like Rails and a malleable language like Ruby.  So, as we did before, the .NET OSS community began adopting the tools and patterns used by those we envied… er, admired.  Some of these things translated pretty well.  The principle of convention over configuration, for example, has nothing to do with platforms; it’s just a mind shift that .NET OSS devs were willing and eager to make.  The tools, however, can’t always make the jump.  Windows has always been a second-class citizen when it comes to Rails development (DHH has made his disdain for the platform quite clear), and that tends to create a self-perpetuating cycle.  The existing Rails devs don’t put much effort into making things like tools and plugins cross-platform, so the experience sucks for devs on Windows, who finally give up and switch to a Mac (or at least do their Rails development on a Linux VM), so nobody’s clamoring for the tools to work on Windows.  Regardless, many .NET OSS devs tend to use a lot of the same tools as Rails devs: things like rake, Cucumber, and, of course, git.  It sometimes seems like we’re bending over backwards to make tools that weren’t designed for our environment work for us anyway (e.g. msysgit).

So are the few extra features in git or the better UX on GitHub really enough to justify the friction of using git on Windows, or is it just a cargo-cult mentality?  As I said, I have very little experience with either git or Mercurial, so I may be missing something big.  I’d love to hear from someone who has experience with both DVCSs to set me straight.

In any case, I hope that at some point, we as .NET developers can get over our inferiority complex and just feel comfortable in our own skin.  That doesn’t mean using the Microsoft tool every time, but it does mean acknowledging that just because a tool is useful in one environment doesn’t mean it’s a better fit (let alone the aesthetically superior choice) for ours.

Developing Against Large Databases

The database for the main application that I work on is fairly large, about 50 GB or so.  Not the largest SQL Server database I’ve seen, by far, but one that is non-trivial to move across the network.  This has a significant impact on the development process of our team.  The size of the database, combined with the modest size of the hard drives on the laptops we use, means that keeping a local copy of production data is infeasible.

Getting a local database’s schema in sync with the one in production would be easy, but in our case, that’s not enough to create a working system.  Our application has a large amount of what I call “static data”, such as menu structures or sets of permissions.  So getting a database that’s “up to date” means not only getting the schema in sync with production, but also ensuring this static data is up to date as well.

Using some sort of tool for evolutionary database design like Rails ActiveRecord migrations would alleviate some of these problems, because schema and static data changes would be a part of the source code. Developers could just run the migrations on their local databases after they updated from source control.  However, this still wouldn’t solve the whole problem.  In order to effectively develop features within the application, our developers need a reasonable set of test data so that the system isn’t just completely empty.

There are tools out there, such as Red Gate’s SQL Data Generator or the data generators in Visual Studio, that will do a pretty good job creating test data by looking at database column and table names, foreign keys, and such.  This might work out even for such a large system as ours, except that a lot of key tables have “polymorphic” relationships, meaning that the foreign key that they contain could point to the primary key in a number of different tables, depending on the particular piece of data.

For example, say we have an “Invoices” table.  We have a multi-tenant system, and our customers often base their invoicing on different things.  Some might base invoices on each individual service they performed for their clients, while others might base them on the amount of time logged in the time and expense module for a client.  In each case, the invoice database record needs to point back to a record in the table that’s most relevant, given the customer’s business processes.  Another example of this kind of relationship might be audit records, which might point back to just about any other table in the system.
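To make that concrete, here’s roughly the shape of such a record (the names are illustrative): the “foreign key” column holds a primary key value from some table, and only a companion type column says which one.

public class Invoice
{
    public int Id { get; set; }
    public decimal Amount { get; set; }

    // Points at the PK of *some* table...
    public int RelatedRecordId { get; set; }

    // ...and this is the only thing that says which table, so no real
    // foreign key constraint can be declared.
    public string RelatedRecordType { get; set; } // e.g. "ServicePerformed" or "TimeEntry"
}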

Since these “polymorphic” associations are not defined as proper foreign keys in the database, those data generation tools wouldn’t be able to figure out that the columns were foreign keys at all, and as far as I’ve been able to tell, it’s not possible to manually define a foreign key relationship with a number of different tables.  And even if it were, I don’t think I could prevent the tool from associating an invoice from a company that bases its invoices on services performed with a time and expense entry.

There are a couple of ways that our developers cope with this, neither of which is ideal.  The first, which most of our team members use, is to develop against one of several shared database instances on our data tier servers.  The problems associated with using shared databases for development are well established; developers simply can’t be as productive when stepping all over each other with data and schema changes.

The second, which I use, is to keep an instance of the database on an external hard drive.  This keeps me isolated from the changes made by other developers, and it’s a significantly better experience than using a shared database, but problems start to crop up when I get latest from source control.  Developers will check in source code changes that require data or schema changes in order to work, and my local database won’t have those changes.

So, at the end of the day, the only reliable way to get an up-to-date copy of the schema is to restore a database from the last backup of production.  Since the database is so big, that restore takes multiple hours, which can seriously impede the development process.  This actually impacts developers using shared databases even more than me, because when one of those shared databases has to be refreshed, multiple developers are put out of commission.

The only way I’ve thought of to make this a little better is to manually create a script that will cherry-pick a certain number of rows from what’s essentially the “root” table of our database, and spider out to include all the data related to those cherry-picked rows, while also including all rows from the tables that contain static data.  The end result would be a much smaller test database that contains a meaningful subset of production data, and that could be moved around and refreshed in minutes or seconds rather than hours.  The problems with this idea are that it would be onerous to create the script in the first place, since our database contains over 500 tables, and that keeping the script up to date with any changes to tables or columns would be an ongoing chore.
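Just to sketch the idea (the table names and relationship map here are hypothetical, and a real version would also have to cope with the polymorphic associations described above):

using System.Collections.Generic;

public class Relationship
{
    public string ChildTable { get; set; }
    public string ForeignKeyColumn { get; set; }
    public string ParentTable { get; set; }
}

public class SubsetScriptGenerator
{
    // A hand-maintained map of how child tables point back to their
    // parents -- the part that requires human knowledge of the system.
    private readonly List<Relationship> _relationships = new List<Relationship>
    {
        new Relationship { ChildTable = "Invoices", ForeignKeyColumn = "CompanyId", ParentTable = "Companies" },
        new Relationship { ChildTable = "Inspections", ForeignKeyColumn = "CompanyId", ParentTable = "Companies" }
    };

    public IEnumerable<string> Generate(string rootTable, int rootRowCount)
    {
        // Cherry-pick a handful of rows from the root table...
        yield return string.Format(
            "INSERT INTO Subset.dbo.{0} SELECT TOP {1} * FROM Prod.dbo.{0};",
            rootTable, rootRowCount);

        // ...then spider out, copying only the child rows that point
        // back to a root row we kept.
        foreach (var rel in _relationships)
        {
            yield return string.Format(
                "INSERT INTO Subset.dbo.{0} SELECT c.* FROM Prod.dbo.{0} c " +
                "JOIN Subset.dbo.{1} p ON c.{2} = p.Id;",
                rel.ChildTable, rel.ParentTable, rel.ForeignKeyColumn);
        }
    }
}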

I wish there was an easier answer to this.  I have a few ideas in the back of my head about writing a tool that might help me create those scripts, but I think it would still end up being a very manual process of defining relationships that only a human being with knowledge of the system would be able to come up with.  If any readers have experience with this kind of thing, I’d love to hear how you dealt with it.