DevelopMENTAL Madness

Tuesday, June 30, 2009

TIP: Open SQL Files in a Visual Studio Project Into the Same Instance of SSMS

Considering how well integrated Microsoft tools usually are, it's frustrating when you tell Visual Studio to open SQL files using SQL Server Management Studio (SSMS). I really don't like using Visual Studio to edit T-SQL files, but in the past, before I discovered this tip, each SQL file I opened would open in a new instance of SSMS. Try it:

  1. Open a solution which contains SQL files
  2. Right-click any SQL file and select “Open With…”
  3. Click “Add”
  4. Browse to "C:\Program Files\Microsoft SQL Server\100\Tools\Binn\VSShell\Common7\IDE\Ssms.exe" or if you’re running x64 Windows "C:\Program Files (x86)\Microsoft SQL Server\100\Tools\Binn\VSShell\Common7\IDE\Ssms.exe", then click “OK”
  5. Click “Set as Default” and then “OK”

Now open multiple SQL files. Each one opens in a separate instance of SSMS. What a pain!

NOTE: This entire tip also applies to SQL Server 2005; just replace Ssms.exe with SqlWb.exe.

How do you resolve this? Repeat steps 1-3 above, but at step #4 enter the following values:

  • Program Name: “explorer.exe”
  • Friendly Name: “Windows Explorer”

Repeat step #5 (set as default) above and then click OK. Now, open additional files. They should all open in the same instance of SSMS.

It would seem that Visual Studio launches Ssms.exe with the path of the file selected in Solution Explorer, and it's up to SSMS to check for an existing instance, which it doesn't. But when you hand the file name to Explorer, the file gets opened in the instance of SSMS that's already running.

QUIRK WARNING!

If SSMS is not already open, the first file you attempt to open (not the first time ever, but every time you open a SQL file from Visual Studio while SSMS isn't running yet) will launch SSMS, but your file will not open. Click the file a second time and it will open. Don't ask me to explain it; it just is (and I have no idea why).

Conclusion

The behavior when you tell Visual Studio that SSMS is the default editor makes sense, but I don't get why it's different when you hand the file to Explorer instead. Maybe if I were a Windows developer instead of a web developer I would know the answer. But either way, now you know. Enjoy.


Wednesday, June 24, 2009

ASP.NET MVC: An Application Platform

The MVC / WebForms Debate

Yesterday I wrote about the stored procedure/dynamic SQL debate - I must be feeling a bit argumentative lately, because today I was reading Tony Lombardo's post on WebForms versus MVC. I don't know Tony; I haven't read any of his other posts or met him in any forum, so I don't hold anything against him. I'd be a hypocrite if I disagreed that you need to weigh the pros and cons and make an informed decision - I just made the same argument myself. However, at the end of his post he said this:

“The best advice I’ve seen so far is that WebForms is the platform of choice for building web applications, where MVC is more suited to building web sites.  This is still a bit abstract since there’s no clear definition of web applications, but I think it’s safe to say that if you’re building a web version of a win client application, you’re building a web application.  If you have a ‘grid’ in your page for purposes above that of just layout, you’re building a web application.”

It's true MVC is very attractive for "web sites" when you consider how nicely URL routing helps you build an SEO-friendly site that also adheres to RESTful principles. However, this is only superficial in that these are the features that stand out when you first look at an MVC site. They were certainly goals of the MVC team, but if that's all you see in MVC then you're judging the book by its cover.

MVC as an Application Platform

I agree with Tony's argument to use the best tool for the job. That is always true in software development, and losing sight of it is just going to bite you in the end. However, I believe that as future versions are released WebForms will continue to lose ground, because MVC will cover more and more of the scenarios where WebForms holds an advantage. I disagree with the recommendation that Application == WebForms, WebSite == MVC.

MVC is an application platform from the ground up. There are many reasons why, here are some that matter to me:

Model Binding

When WebForms was first released I was so excited that I didn’t have to use Request.Form[] and Request.QueryString[] (or just plain Request[]) to get access to my form data. I could look at a TextBox or better yet a DropDownList and get not only the value submitted via POST but I could inspect additional properties like SelectedItem.Text. It was a much richer model than classic ASP.

What this new model didn’t do for me was return primitive types (unless I built my own custom controls or purchased a 3rd party control library). I still had to parse the strings to primitive types, but at least validation allowed me to safely do so without using TryParse().
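
To make the contrast concrete, here's a rough sketch of the kind of hand-parsing a typical WebForms code-behind required. The page, control names and button handler are all hypothetical; they're only here to illustrate the pattern:

using System;
using System.Web.UI;
using System.Web.UI.WebControls;

namespace MyApp.Web
{
    // Hypothetical WebForms code-behind: every value arrives as a string
    // and it's up to me to parse it into the type I actually want.
    public class EditPage : Page
    {
        // in a real project these would be declared in the .aspx markup
        protected TextBox txtDate = new TextBox();
        protected TextBox txtInteger = new TextBox();
        protected TextBox txtText = new TextBox();

        protected void btnSave_Click(object sender, EventArgs e)
        {
            // validators have already vetted the raw strings, but the
            // conversion to DateTime/Int64 is still manual work
            DateTime date = DateTime.Parse(txtDate.Text);
            Int64 integer = Int64.Parse(txtInteger.Text);
            String text = txtText.Text;

            // ...hydrate an object by hand and pass it along to be saved
        }
    }
}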

I have been using MVC since Preview 1 (Nov 2007) and I have found that it gets out of my way. I get strongly typed binding both in my view and when responding to any request. I no longer have to spend time parsing strings into primitive types then hydrating an object by hand. Instead I just have to do this:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.Mvc;
using System.Web.Mvc.Ajax;
 
namespace MyApp.Web.Controllers
{
    public class MyObject {
        public DateTime Date { get; set; }
        public Int64 Integer { get; set; }
        public String Text { get; set; }
    }
 
    public class Default1Controller : Controller
    {
        [AcceptVerbs(HttpVerbs.Get)]
        public ActionResult Index() {
            return View();
        }
 
        [AcceptVerbs(HttpVerbs.Post)]
        public ActionResult Index(MyObject model) {
 
            // save my model
            Repository.Save(model);
 
            return View();
        }
    }
}

I think that speaks for itself.

RAD Without Drag-n-Drop

It’s no secret you get no DnD from MVC. But here’s what you do (or will) get:

  • Scaffolding – at the moment it's a bit of a stretch to say MVC gives you scaffolding. It doesn't support Dynamic Data yet (that's in the pipeline), but on a per-View basis you can have an entire page built for you (List, Create, Edit, Details) with a strongly typed model by selecting the type of page you want and the type of your model.


  • Easy HttpHandlers – need to return an image or a file other than an HTML response? The WebForms way is usually to create a Generic Handler (a class implementing IHttpHandler). You can do it with System.Web.UI.Page as well, but the point is that you create the handler, create your content, set your HTTP headers and then write to the response stream. With MVC you simply return from an action a result type that corresponds to your content type, or use File and specify the MIME type:
    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Web;
    using System.Web.Mvc;
    using System.Web.Mvc.Ajax;
     
    namespace MyApp.Web.Controllers
    {
        public class Default1Controller : Controller
        {
            //
            // GET: /Default1/
     
            public FileResult SomeFile()
            {
                return File("/Path/To/File.ext", "content/type");
            }
     
            public JsonResult JavaScriptObject() {
                return Json(new { 
                    Property1 = "something", 
                    Property2 = DateTime.Now, 
                    Property3 = 323.00 
                });
            }
     
            public JavaScriptResult ClientScript() {
                return this.JavaScript("function message(val) { alert(val); }");
            }
     
            public PartialViewResult HtmlSnippet() {
                return PartialView("NameOfAscxControl.ascx");
            }
        }
    }
  • Lightweight Web Services – as a side effect of easy control over the MIME type of your results, you can create your own lightweight web services without formally creating web services via .asmx or .svc files. JsonResult lets you make easy calls from your JavaScript and return a JSON response, or you can use FileResult, specify application/xml as your content type, and you're off and running with a working response to an AJAX request against a RESTful interface.

Testability

If I'm building an application then being able to write unit tests for both my application logic and business logic is very important to the long-term life of my project. Applications are organic; they change and grow in size and complexity. Often an application will outlive the employment of the programmer(s) who wrote it. Maintaining unit tests ensures that the application continues to work as intended. This keeps things clean and maintainable, which in turn extends the life of your application longer than any other method I know of.
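
To make that concrete, here's a minimal sketch of what a controller test can look like, using the Default1Controller from the model binding example above (the NUnit-style attributes are just one option; any test framework works the same way):

using System.Web.Mvc;
using MyApp.Web.Controllers;
using NUnit.Framework;

namespace MyApp.Web.Tests
{
    [TestFixture]
    public class Default1ControllerTests
    {
        [Test]
        public void Index_Get_ReturnsDefaultView()
        {
            // no web server, no HttpContext - just a plain object under test
            var controller = new Default1Controller();

            var result = controller.Index() as ViewResult;

            Assert.IsNotNull(result);
            // an empty ViewName means "use the view named after the action"
            Assert.AreEqual(string.Empty, result.ViewName);
        }
    }
}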

Control

WebForms is very closed when compared to MVC. Sure, you get a lot of control and flexibility through HttpModules, HttpHandlers, Application and Page events, and Server Controls. But many times I have needed to add custom logic and been forced into a hack because I couldn't put the logic where I needed it most to get the cleanest or most effective solution. You can argue that if you know what you're doing it isn't a problem, and I won't let my pride get in the way of that. But in MVC I can hook in wherever I want, at any level, and MVC doesn't try to "protect" me from myself. I could shoot myself in the foot, but I'm also free to do what I decide is best for my application. This is something you can't fully appreciate until you experience it.

Other Concerns

WebForms Designer vs. Raw HTML

Many people cite having to get their hands dirty in HTML and how nicely WebForms abstracts those details away. This is strictly my own opinion and I may be biased: I started out almost 10 years ago writing HTML and classic ASP in Notepad, and I have never used the WebForms designer. I've tried it many times, but I find that it slows me down and very quickly loses any fidelity with the end result, to the point that it becomes worthless to me.

Writing server controls by hand and writing raw HTML by hand are almost the same – it’s all tag soup. In some cases I’d even rather write the raw HTML when you compare the two:

<asp:TextBox ID="myTextBox" runat="server" />

compared to:

<input type="text" id="myTextBox" />

But even there I get helpers:

<%=Html.TextBox("myTextBox")%>

Tag Soup?

On a related note: after years of WebForms we've become conditioned to believe that the classic ASP-style escapes (<%= %>) are bad, bad, bad. It's mixing server code with markup. But how is that different from a server control? A server control isn't HTML either, it just looks like it.

What's bad is placing logic into the page. Rob Conery demonstrated this better than I ever could.

Conclusion

Should you evaluate the needs of your application against the benefits of WebForms vs MVC? Yes. Should you learn MVC so you know what it has to offer? Most definitely. Will you regret using MVC? I haven’t and I don’t believe you will. MVC is an incredible application platform and it will only get better. WebForms was developed in a relatively closed way, whereas from day one MVC was developed with extensive community involvement and feedback. It had 5 preview releases before beta and was being used in production environments all along the way. The MVC team developed this so that it works and it works well. Give it a spin, you won’t regret it.


Tuesday, June 23, 2009

Data Access: Stored Procedures vs. ORM (ad hoc) Queries

I've had many spirited discussions with my colleagues on this issue over the years. I will most likely invite a flame war from the developer community for this post, but after reading another post on the subject I was more than a little annoyed and decided to write a rebuttal.

As a disclaimer, I don't take issue with the decision to use ORM/ad hoc SQL over stored procedures. I use both in my applications, and I have always maintained that this is a decision to be made by you and your team (if applicable) after weighing the pros and cons against the requirements of your application. But I do take issue with those who make blanket statements completely dismissing stored procedures.

Not only have I used and tested a number of ORM components, but I have had the experience of writing my own to support a large, high-traffic application. I am no stranger to ORM.

Manageability

Everybody talks about all the work involved in maintaining stored procedures. Heck, I worked on an application which contained almost 4,000 stored procedures, so I agree it can get unwieldy. When we re-wrote the application we went ad hoc with an ORM component. But the ORM couldn't meet the needs of all our queries, so we hand-crafted many of them, and now our SQL code was littered across the entire application. Plus, I had to constantly battle improper use of parameters, which bloated the SQL procedure cache; I was constantly on my soapbox pleading with my team to be explicit when declaring their SqlClient parameters.

So you can have an unmanageable mess even without the help of stored procedures. However, the argument that an ORM allows you to just regenerate and go can be applied to stored procedures as well. I have a project I’m working on where I do just that. My stored procedures are largely generated, so they are just as easy to update as an ORM component.

Troubleshooting

I have spent many hours analyzing SQL Profiler logs to troubleshoot performance issues, and unless you know your application well, it takes far longer to track down the source of a problem query from nothing but its statement text than it does when Profiler hands you the name of the stored procedure.

Performance

The Procedure Cache

There are 3 types of plans in the procedure cache: SQL, Prepared and Compiled. Compiled plans have the highest priority and in the event SQL Server experiences memory pressure compiled plans are the most likely to stick around when the procedure cache gets flushed. (see: http://technet.microsoft.com/en-us/library/cc293624.aspx)

Many think of the Procedure Cache as a memory issue, but the procedure cache is a huge CPU saver on your database. Compiling a SQL query is a very expensive CPU operation and in the event something triggers memory pressure or recompilation you’re going to also end up with increased CPU usage which may or may not bring down your database.

Additionally, many ORM implementations don't properly size parameters for variable-length data types (e.g. VARCHAR, DECIMAL). LINQ to SQL is a prime example: the length of a string parameter is set to the length of the string value. This results in a separate execution plan every time the parameter value is a different length. This is one reason I'll never use an ORM without running SQL Profiler on the queries it generates. Try doing the same thing with a stored procedure: you can change the length of the value all you want and you'll still get only one execution plan.
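
The same discipline applies to hand-written ADO.NET code, which is why I keep pleading for explicit parameter declarations. A minimal sketch of the difference, with a made-up table and column:

using System.Data;
using System.Data.SqlClient;

public static class CustomerQueries
{
    // Hypothetical table and column names, for illustration only.
    public static void FindByLastName(SqlConnection conn, string lastName)
    {
        using (var cmd = new SqlCommand(
            "SELECT CustomerId, LastName FROM dbo.Customers WHERE LastName = @LastName", conn))
        {
            // BAD: AddWithValue sizes the parameter to the value's length,
            // so "Bob" and "Robertson" each get their own cached plan.
            // cmd.Parameters.AddWithValue("@LastName", lastName);

            // BETTER: declare the type and length to match the column,
            // so every call reuses the same plan.
            cmd.Parameters.Add("@LastName", SqlDbType.NVarChar, 50).Value = lastName;

            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read()) { /* map rows */ }
            }
        }
    }
}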

Many Databases == High Traffic Database

If your web application is hosted by some sort of hosting provider, or if you work for a corporation with multiple LOB applications, your database is hosted on the same SQL Server instance as any number of other databases, all of which are competing for the same resources. The result is the same as running a single high traffic web application – your database server is dealing with high traffic.

If you don’t control all the applications on the database server it is in your best interest to consider the use of stored procedures since your execution plans will be less likely to be flushed from cache than those of the other applications running on the same database instance.

RBAR

“Keep business logic out of my database” is a good practice, but I also say, “keep database logic out of my business logic.” I've yet to meet an ORM framework which properly supports batch updates. I'm not talking about submitting a string of INSERT or UPDATE statements in the same batch – that's still RBAR (row by agonizing row), just without the network latency (not to mention it can't be cached by the optimizer). I'm talking about using any of the features SQL Server supports for passing a whole set in one call (XML, table-valued parameters, string parsing) so the database can work with the data as a set. Anything else is RBAR.
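
As one example, SQL Server 2008's table-valued parameters let you hand an entire set to a single stored procedure call. A rough sketch, assuming a user-defined table type and a procedure that joins against it exist on the database side (the names here are made up):

using System.Data;
using System.Data.SqlClient;

public static class OrderItemWriter
{
    // Sends every row in one round trip instead of one UPDATE per row.
    // Assumes something like this exists on the server:
    //   CREATE TYPE dbo.OrderItemTableType AS TABLE (OrderItemId INT, Quantity INT);
    //   CREATE PROCEDURE dbo.UpdateOrderItems @Items dbo.OrderItemTableType READONLY AS ...
    public static void UpdateQuantities(SqlConnection conn, DataTable items)
    {
        using (var cmd = new SqlCommand("dbo.UpdateOrderItems", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;

            SqlParameter p = cmd.Parameters.Add("@Items", SqlDbType.Structured);
            p.TypeName = "dbo.OrderItemTableType"; // the table type on the server
            p.Value = items;                       // one parameter, the whole set

            cmd.ExecuteNonQuery();
        }
    }
}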

If anything is evil in this debate it’s lazy-loading. If your ORM supports lazy-loading and you’re not using POCO classes then it can sneak in very easily. When it does it’s only a matter of time before you’re gonna end up refactoring it for performance reasons.

Caveat

Back in 2003 I was looking for a nice clean way to build a data access layer. I found an article on CodeProject by Frans Bouma describing LLBLGen v1.2. I really liked it, until I had to get a list of related records. I had to loop through the first result and load the related records one by one. Why? Because it was using stored procedures (NOTE: it is my understanding that LLBLGen no longer uses stored procedures). I suspect this is a major factor in many decisions to choose ORM or ad hoc queries over stored procedures; in fact it was my motivation for writing a custom ORM which used ad hoc SQL. I've got some ideas for solving this issue, but for now you either need to include the related data in the stored procedure itself or you end up with the same problem as lazy loading.

Security

In my opinion, this is an argument that is inaccurately made by both camps, with the ORM camp simply saying “create two roles and give one update permissions”. Yeah, that works fine until an account (or the account) with update permission gets compromised.

The security model for stored procedures is an example of defense in depth. Stored procedures provide an API on your database that can be secured per operation. Yes, you could do the same thing with a service layer, but you're still running around with an account that has full CRUD access to your tables. And last I checked, SQL injection isn't the only way to hack your database. As one example, a recent study claims there are about a half-million database servers publicly exposed on the internet, and among them are web hosting providers. If your application is being hosted then it is publicly exposed (it has to be so you can access it). It isn't a long stretch from that point to your application's "admin" accounts being exposed, and from there your pants are down and your data is fully exposed.

Here’s an example I’ve seen used to show how stored procedures don’t protect you from SQL injection attacks:

strsql = "EXECUTE findtitle '" & textboxtitle.text & "'"
objCmd = New SqlCommand(strSQL, objConn) 

This is true, but only if your application has been granted access to perform operations other than EXECUTE. If you’re using stored procedures as a security measure, then the account won’t have access other than EXECUTE and you’re safe.
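
And of course the real fix is to stop building the EXECUTE string at all. Here's a hedged sketch of the parameterized equivalent, rendered in C# to match the rest of the samples in this post (the @title parameter name is an assumption; use whatever findtitle actually declares):

using System.Data;
using System.Data.SqlClient;

public static class TitleSearch
{
    // The title travels as a parameter, never as part of the command text,
    // so an account limited to EXECUTE permission is all the application needs.
    public static DataTable FindTitle(SqlConnection objConn, string title)
    {
        using (var objCmd = new SqlCommand("findtitle", objConn))
        {
            objCmd.CommandType = CommandType.StoredProcedure;
            objCmd.Parameters.Add("@title", SqlDbType.NVarChar, 100).Value = title;

            var results = new DataTable();
            using (var adapter = new SqlDataAdapter(objCmd))
            {
                adapter.Fill(results);
            }
            return results;
        }
    }
}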

Vendor Lock-in

How many applications have you really worked on that you’ve had to move between database vendors? Really? And if you have, how many other applications have you worked on that didn’t require it?

I don't think stored procedures are the issue here. If you wanted to change database vendors and couldn't, was it really because of stored procedures? Or was it because your database layer wasn't properly abstracted from the rest of your application? I worked on an application once with this issue, and we were using an ORM, but we still couldn't change. The reason was GUIDs: the target database we wanted to move to didn't support the UNIQUEIDENTIFIER data type. We had some UDFs and actually had a stored procedure or two, but those were supported and could have been migrated.

Be Smart

Up to this point I have largely argued in favor of stored procedures. My reasoning is that among developers those who favor stored procedures are in the minority (the reverse is most likely true among DBAs). My preference is to use a hybrid of both. ORM/Ad hoc queries have very strong advantages over stored procedures, but the argument can be made both ways. This is why the debate gets so hot, because no one is 100% right. Here’s what I recommend:

Generate your CUD as stored procedures – Here I'm referring to CRUD, minus the "R". I strongly believe that you should never directly update records in your tables; I cannot be convinced that a two-role model where one role has write access to the table is sufficient, because your application is not the only attack vector on your database. Since maintaining stored procedures by hand after a schema change really is a nightmare, generate them. This is easy enough to do by writing a script that uses SQL Server metadata to build your procedures, as in the sketch below. If possible, provide a means to do batch updates in a single call; you'll thank me.
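
As a rough illustration of the "generate it from metadata" approach, here's the shape of a generator that reads INFORMATION_SCHEMA and emits a simple UPDATE procedure per table. The single-column <Table>Id key convention is an assumption, and a real generator needs to handle computed columns, identity columns, precision/scale and so on:

using System;
using System.Data;
using System.Data.SqlClient;
using System.Text;

public static class CrudProcGenerator
{
    // Emits a "CREATE PROCEDURE dbo.Update<Table>" script from column metadata.
    // Assumes a single-column key named <Table>Id; real schemas need more care.
    public static string BuildUpdateProc(SqlConnection conn, string table)
    {
        var setList = new StringBuilder();
        var paramList = new StringBuilder();

        using (var cmd = new SqlCommand(
            @"SELECT COLUMN_NAME, DATA_TYPE, ISNULL(CHARACTER_MAXIMUM_LENGTH, 0)
              FROM INFORMATION_SCHEMA.COLUMNS
              WHERE TABLE_NAME = @table
              ORDER BY ORDINAL_POSITION", conn))
        {
            cmd.Parameters.Add("@table", SqlDbType.NVarChar, 128).Value = table;

            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    string column = reader.GetString(0);
                    string sqlType = reader.GetString(1);
                    int maxLen = reader.GetInt32(2);

                    string declaredType = maxLen == -1 ? sqlType + "(MAX)"
                        : maxLen > 0 ? string.Format("{0}({1})", sqlType, maxLen)
                        : sqlType;

                    if (paramList.Length > 0) paramList.Append(",\n");
                    paramList.AppendFormat("    @{0} {1}", column, declaredType);

                    // the key column belongs in the WHERE clause, not the SET list
                    if (column.Equals(table + "Id", StringComparison.OrdinalIgnoreCase))
                        continue;

                    if (setList.Length > 0) setList.Append(",\n");
                    setList.AppendFormat("    [{0}] = @{0}", column);
                }
            }
        }

        return string.Format(
            "CREATE PROCEDURE dbo.Update{0}\n{1}\nAS\nUPDATE dbo.[{0}] SET\n{2}\nWHERE [{0}Id] = @{0}Id",
            table, paramList, setList);
    }
}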

Use ORM/Ad hoc for SELECT – if you want to use a stored procedure for "select by id" or "select all" you can; it makes little difference really. But in my experience there are too many different ways you'll need to retrieve your data, and you WILL have a maintenance nightmare on your hands if you try to write stored procedures for every case. Many who use stored procedures try to cover this in one of two ways (both are bad in my opinion):

  • Dynamic SQL: This is just plain wrong and evil IMHO. You open your stored procedures up to SQL injection attack vectors, which means you now have a false sense of security - just don't do it. Also, it's like building HTML strings in your code: difficult to maintain and hard to debug (plus it's like nails on a chalkboard for me). A second argument against this method is that the statement gets recompiled every time a different combination of arguments is used, and the more parameters you have the more often you will see that happen.
  • COALESCE/ISNULL functions: This protects you from SQL injection attacks, but it opens up a performance problem. I usually see these used along with default parameter values to make a stored procedure flexible enough to search on any column, but it can disable the use of indexes. Avoid this practice if you can.

In the long run, using ORM for any trivial execution plan will save you time and headaches. As far as security goes, granting SELECT on the table is not a huge problem unless you have sensitive data. If you have data you don’t want to expose carte blanche then use a view.

Use stored procedures for complex queries – ORM tools are great, but it can be difficult to really optimize complex queries being generated by an ORM.

Some queries which would be otherwise simple are made complex by ORM tools which of necessity must accommodate any scenario. It will be easier to track down these problem children when using Profiler and you can also create solutions which don’t match your object model but will vastly outperform anything your ORM can generate.

And remember that your query will survive procedure cache flushes longer.

NOTE: I don’t mean to imply that you should do your performance tuning before you determine it’s needed. But you can encapsulate the query in your DAL and then as soon as you need it, replace your ad hoc query with a stored procedure – you’ll keep your hair.

Avoid Lazy-Loading – Lazy loading is just that: lazy. Lazy is synonymous with sloppy. This is one feature that is almost guaranteed to come back and bite you. The main reason is that you will almost certainly end up in a loop, and inside that loop you will need related data. You will say, "Hey, this is so cool! Look, ma! No hands!" This is VERY BAD! Enough said. Dump your ORM and find one that supports eager loading and LINQ (LINQ is my own preference, not a requirement; I just really like it).
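
With LINQ to SQL, for instance, DataLoadOptions lets you declare the related data up front instead of letting each loop iteration fire its own query. A sketch, assuming a hypothetical LINQ to SQL model with Order and OrderDetail entities (MyDataContext, Orders, OrderDetails and LineTotal all come from that assumed model):

using System.Data.Linq;
using System.Linq;

public static class OrderReader
{
    public static decimal TotalFor(int customerId)
    {
        using (var db = new MyDataContext())
        {
            // Eager loading: fetch the details alongside the orders,
            // instead of one lazy query per order inside the loop below.
            var loadOptions = new DataLoadOptions();
            loadOptions.LoadWith<Order>(o => o.OrderDetails);
            db.LoadOptions = loadOptions;

            var orders = db.Orders.Where(o => o.CustomerId == customerId).ToList();

            decimal total = 0;
            foreach (var order in orders)
                total += order.OrderDetails.Sum(d => d.LineTotal); // already in memory
            return total;
        }
    }
}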

Use Repository/ActiveRecord – use some sort of pattern to abstract your database from your business logic. This gives you the freedom to mix and match, and to go back and forth between stored procedures and ORM/ad hoc queries, without affecting your application. You'll be able to choose the best solution for each situation, and you're not tied to either choice.
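
The shape of it can be as simple as an interface with swappable implementations; the names here are purely illustrative:

using System.Collections.Generic;

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// The business layer codes against this contract only.
public interface ICustomerRepository
{
    Customer GetById(int id);
    IList<Customer> Search(string lastName);
    void Save(Customer customer);
}

public class CustomerService
{
    private readonly ICustomerRepository _repository;

    // Hand in a LINQ-backed implementation today, a stored-procedure-backed
    // one tomorrow - this class never has to change.
    public CustomerService(ICustomerRepository repository)
    {
        _repository = repository;
    }

    public void Rename(int id, string newName)
    {
        Customer customer = _repository.GetById(id);
        customer.Name = newName;
        _repository.Save(customer);
    }
}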

Conclusion

Don’t be a lemming. Give it some thought, weigh the pros and cons and then pick what’s right for the application you’re working on. Whatever your preference is try to be objective enough to step back and think about what will meet the needs of your application. If you ever hear some zealot/disciple screaming about how they’ll never use one or the other just smile and nod. Then when they’re not looking deck ‘em and run :).


Tuesday, June 09, 2009

ASP.NET MVC: Discover the MasterPageFile Value at Runtime

A couple of weeks ago it was finally time to add a context-sensitive, data-driven menu system to our MVC application. As I thought about it I was stuck; I wasn't sure what the best way to implement it was. As is common in an MVC application, there was no 1-to-1 relationship between actions and views, and even more difficult, our *.master files could be used by views tied to different controllers. So it was looking like I would have to load the data I needed from the ViewMasterPage.

I really didn't like this option and looked around a bit to find out what others had done. Here are a couple of examples of what I found:

While all of these options work, none of them sat well with me because they either require me to remember to include the data or they feel contrived or foreign to MVC.

@Page Directive

When you create a new View file you can specify that you want to use a MasterPage. When you do this your @Page Directive will look like this:

<%@ Page Language="C#" MasterPageFile="~/Views/Shared/Default.Master" Inherits="System.Web.Mvc.ViewPage" %>

This can be changed as needed, but if you are using MasterPages in your application, the value of MasterPageFile is exactly what you need in order to determine which MasterPage is being used by the view being returned. I like this approach because the same action can return different views, or even result in a redirect, so it isn't until you actually arrive at the controller's ActionExecuted event that you know for sure that the result is a View, and which view it will be.

Controller.OnActionExecuted event

The key to the whole thing is being able to read the @Page directive located in the first line of your ViewPage. When you're handling the OnActionExecuted event you get a System.Web.Mvc.ActionExecutedContext object passed in, which contains the result of the action that just finished executing. Here's what you do to get from the start of the event to the value of MasterPageFile:

  1. Check to see if ActionExecutedContext.Result is a ViewResult
  2. Check to see if ViewResult.ViewName has been set (if you’re writing tests for your Actions you’ll be doing this anyway). If it hasn’t then you know that the name of your view will be the same as the Action, so you can get the value from ControllerContext.RouteData.
  3. As long as you are using the WebForms view engine (or inheriting from it) you can use the ViewResult.ViewEngineCollection.FindView method to let the ViewEngine find the view for you.
  4. FindView returns a ViewEngineResult, whose View property gives you a WebFormView, which in turn has a ViewPath property.
  5. At this point you can get the source of your view, parse it and retrieve the value of MasterPageFile. Once you’ve done this I’d recommend caching the value to prevent the need to parse the file every time.

Here’s what the full implementation looks like:

using System.Collections.Generic;
using System.Diagnostics;
using System.IO;
using System.Text.RegularExpressions;
using System.Web;
using System.Web.Mvc;
using System.Web.Routing;
 
namespace GPM.Web.Code {
    public class MasterMenuDataFilterAttribute : ActionFilterAttribute {
 
        // private variables supporting MasterPageFile discovery
        private static Dictionary<string, string> _viewMasterType = new Dictionary<string, string>();
        private static Regex _masterFilePath = new Regex("\\bMasterPageFile=\"(?<path>[^\"]*)\"", RegexOptions.Compiled);
 
        // private members for improved readability
        private HttpContextBase _httpContext;
        private ControllerContext _controllerContext;
 
        /// <summary>
        /// Loads data for dynamic menus in our MasterPage (if applicable)
        /// </summary>
        private void LoadMenuData(string viewName, string masterPath) {
            if (string.IsNullOrEmpty(masterPath) || !System.IO.File.Exists(_httpContext.Server.MapPath(masterPath)))
                return;
 
            switch (Path.GetFileName(masterPath)) {
                case "Site.Master":
                    break;
                case "Default.Master":
                    break;
                case "Custom.Master":
                    break;
                default:
                    break;
            }
        }
 
        /// <summary>
        /// Discovers the master page declared by the view so we can determine
        /// which menu data we need loaded for the view
        /// </summary>
        /// <remarks>
        /// If we find that we have too many controllers which don't need this 
        /// functionality we can implement this as a filter attribute instead
        /// and apply it only where needed.
        /// </remarks>
        public override void OnActionExecuted(ActionExecutedContext filterContext) {
            // this logic only applies to ViewResult 
            ViewResult result = filterContext.Result as ViewResult;
            if (result == null)
                return;
 
            // store contexts as private members to make things easier
            _httpContext = filterContext.HttpContext;
            _controllerContext = filterContext.Controller.ControllerContext;
 
            // get the default value for ViewName
            if (string.IsNullOrEmpty(result.ViewName))
                result.ViewName = _controllerContext.RouteData.GetRequiredString("action");
 
            string cacheKey = _controllerContext.Controller.ToString() + "_" + result.ViewName;
            // check to see if we have cached the MasterPageFile for this view
            if (_viewMasterType.ContainsKey(cacheKey)) {
                // Load the data for the menus in our MasterPage
                LoadMenuData(result.ViewName, _viewMasterType[cacheKey]);
                return;
            }
 
            // get the MasterPageFile (if any)
            string masterPath = DiscoverMasterPath(result);
 
            // make sure this is thread-safe
            lock (_viewMasterType) {
                // cache the value of MasterPageFile
                if (!_viewMasterType.ContainsKey(cacheKey)) {
                    _viewMasterType.Add(cacheKey, masterPath);
                }
            }
 
            // now we can load the data for the menus in our MasterPage
            LoadMenuData(result.ViewName, masterPath);
        }
 
        /// <summary>
        /// Parses the View's source for the MasterPageFile attribute of the Page directive
        /// </summary>
        /// <param name="result">The ViewResult returned from the Controller's action</param>
        /// <returns>The value of the Page directive's MasterPageFile attribute</returns>
        private string DiscoverMasterPath(ViewResult result) {
            string masterPath = string.Empty;
 
            // get the view
            ViewEngineResult engineResult = result.ViewEngineCollection.FindView(
                _controllerContext, result.ViewName, result.MasterName);
 
            // oops! caller is going to throw a "view not found" exception for us, so just exit now
            if (engineResult.View == null)
                return string.Empty;
 
            // we currently only support the WebForms view engine, so we'll exit if it isn't WebFormView
            WebFormView view = engineResult.View as WebFormView;
            if (view == null)
                return string.Empty;
 
            // open file contents and read header for MasterPage directive
            using (StreamReader reader = System.IO.File.OpenText(_httpContext.Server.MapPath(view.ViewPath))) {
                // flag to help short circuit our loop early
                bool readingDirective = false;
                while (!reader.EndOfStream) {
                    string line = reader.ReadLine();
 
                    // don't bother with empty lines
                    if (string.IsNullOrEmpty(line))
                        continue;
 
                    // check to see if the current line contains the Page directive
                    if (line.IndexOf("<%@ Page") != -1)
                        readingDirective = true;
 
                    // if we're reading the Page directive, check this line for the MasterPageFile attribute
                    if (readingDirective) {
                        Match filePath = _masterFilePath.Match(line);
                        if (filePath.Success) {
                            // found it - exit loop
                            masterPath = filePath.Groups["path"].Value;
                            break;
                        }
                    }
 
                    // check to see if we're done reading the page directive (multiline directive)
                    if (readingDirective && line.IndexOf("%>") != -1)
                        break;  // no MasterPageFile attribute found
                }
            }
 
            return masterPath;
        }
    }
}

I've implemented this as an ActionFilterAttribute so you can apply it to any controller or action, whichever gives you the flexibility you need. The only thing left for you to do is fill in the blanks in the LoadMenuData method to retrieve the data you need based on the name of the MasterPageFile.
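
Using it is then just a matter of decorating a controller (or an individual action). The ProductsController here is hypothetical; LoadMenuData is where you'd stuff the menu data into ViewData for the master page to read:

using System.Web.Mvc;
using GPM.Web.Code;

namespace GPM.Web.Controllers
{
    // Every ViewResult returned from this controller now has its master page
    // discovered and the matching menu data loaded before the view renders.
    [MasterMenuDataFilter]
    public class ProductsController : Controller
    {
        public ActionResult Index()
        {
            return View(); // uses whatever MasterPageFile the view declares
        }
    }
}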

Conclusion

We've been running this setup for a couple of weeks now in development, QA and UA, and it's working like a charm so far. Once you have it set up, you're free to forget about it until you need to change how your menus function or what data they need. Plus, you're now keeping all your interactions with your model inside your controller, and your view just needs to pull the data from the ViewDataDictionary.
