Posted on: Thursday, November 28, 2019 by Rajiv Popat

Blazor And The Idea Of Dotnet In The Browser.

Web Assemblies and the Theory Behind Blazor

Ever since the days of Java applets, Flash and Silverlight, companies and developers alike have dreamt of being able to run full-blown applications inside the browser.

But most of these technologies were bulky, not particularly secure and fairly proprietary. Then, as JavaScript evolved (both on the client side and the server side), true Single Page Applications became a reality. But even today, as JavaScript matures into a ubiquitous platform for web development, most developers have a love-hate relationship with it.

[Image: the JavaScript love-hate relationship]

Enter Web Assemblies.

Web Assemblies ship with a lightweight stack machine capable of running code that has been compiled to binary. Think of it as byte code for the web. This is cool because with web assemblies you can run compiled languages like C++ inside your browser.

It's developed in a W3C community group, which tells you it's not proprietary; the community group behind web assemblies has representatives from almost all browser vendors.

Web Assemblies run in any browser, on any platform at almost native speed. If you want to know more about Web Assemblies you can go here.

And why are we talking about Web Assemblies in a post on Blazor? Because Blazor is a .NET runtime built using Web Assemblies. This means I can now take Dotnet code and run it inside a browser.

You build .NET apps or assemblies and ship them as Blazor apps. The Dotnet code you write gets downloaded and runs on the Blazor runtime, which is basically a web assembly implementation. It can interact with the DOM extremely efficiently and even work out what changed in the DOM.

Alternately, you can have the same C# code run on the server and have it update the client-side DOM over a SignalR connection. Any UI events that happen on the client side are sent to the server using SignalR. The server captures these and runs the relevant server-side code. When the server modifies the DOM, Blazor calculates a diff, serializes that diff back to the client, and the browser applies it to the DOM.

Let's Try Out Blazor

Actually, Blazor has existed for some time now; but what's interesting is that Blazor Server now ships with .NET Core 3.0 and is production ready. The ability to build completely client-side apps in Blazor using Web Assembly is still in preview though, and will most likely ship in May 2020.

The tooling is seriously awesome and simple. The implementation is so neat that to pick up the basic concepts all you have to do is just generate a new project with it and the tooling stubs out a fully functional hello world sample you can learn from.

As a quick overview let's stub out two Blazor projects, one using Blazor Server and one using Web Assemblies, and let's try to learn from the basic hello world examples the tooling generates. As always we'll use Visual Studio Code because it's free and lets us look under the hood to understand the tooling.

Blazor Server Example:

To generate a new project I fire:

dotnet new blazorserver -o serverexample
(Where serverexample happens to be the name of the project I want to stub out).

This stubs out a project for me:

[Screenshot: the new Blazor Server project]

I can now simply hit "dotnet run" like any other Dotnet project and the stubbed out code runs like any other web application:

[Screenshot: dotnet run output for the Blazor Server project]

Notice that the application is running on port 5001 using HTTPS. I just hit https://localhost:5001 and then hit "Fetch Data" on the left to see an example of how data is fetched using Blazor:

[Screenshot: the Fetch Data page served by Blazor Server]

Awesome. We now have an example with Blazor Server running. Let's take a quick look at the code to see what's going on. The first thing to look at is the startup file. There are a couple of things happening here:

[Screenshot: the Blazor Server Startup class]
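
Since the screenshot hasn't survived here, this is roughly what the generated Startup.cs contains (a sketch based on the stock .NET Core 3.0 template; WeatherForecastService is the sample service the tooling stubs out, and your generated file may differ slightly):

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddRazorPages();
        // Registers the services Server-Side Blazor needs.
        services.AddServerSideBlazor();
        services.AddSingleton<WeatherForecastService>();
    }

    public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
    {
        app.UseStaticFiles();
        app.UseRouting();

        app.UseEndpoints(endpoints =>
        {
            // Maps the SignalR hub Blazor Server uses to talk to the browser.
            endpoints.MapBlazorHub();
            // When no other route matches, serve the Blazor host page.
            endpoints.MapFallbackToPage("/_Host");
        });
    }
}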

Just like we do a "UseMvc" in a typical dotnet application, here we add the Server-Side Blazor service to the service collection. We use the new endpoint routing that comes with .NET Core 3.0 to map the SignalR hub Blazor uses internally. The fallback route of "/_Host" is hit when no other routes match. This means you can use other controllers and pages without conflicting with Blazor routes; when none of the other routes match, the _Host route acts as the starting point for the application.

[Screenshot: the _Host view]

The above _Host view has two aspects. After it lays out the head and body tags, it has a section that hosts the entire app and another section to display errors. The app section itself is a view (App.razor) that describes what happens when a route is found and when it isn't:

[Screenshot: the App.razor view]
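
The image is gone, but the generated App.razor is small enough to sketch from the stock template (yours may differ slightly):

<Router AppAssembly="@typeof(Program).Assembly">
    <Found Context="routeData">
        @* A route matched: render the page with its default layout. *@
        <RouteView RouteData="@routeData" DefaultLayout="@typeof(MainLayout)" />
    </Found>
    <NotFound>
        @* No route matched: show a message inside the layout. *@
        <LayoutView Layout="@typeof(MainLayout)">
            <p>Sorry, there's nothing at this address.</p>
        </LayoutView>
    </NotFound>
</Router>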

When routes like "/FetchData" are found, the corresponding Razor view files are invoked and rendered:

[Screenshot: FetchData.razor calling the server-side service]
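
For reference, here is an abridged sketch of what the generated FetchData.razor contains (based on the stock template; the markup around the table is trimmed):

@page "/fetchdata"
@using serverexample.Data
@inject WeatherForecastService ForecastService

@if (forecasts == null)
{
    <p><em>Loading...</em></p>
}
else
{
    <table class="table">
        @* One row per forecast returned by the injected service. *@
        @foreach (var forecast in forecasts)
        {
            <tr>
                <td>@forecast.Date.ToShortDateString()</td>
                <td>@forecast.TemperatureC</td>
                <td>@forecast.Summary</td>
            </tr>
        }
    </table>
}

@code {
    private WeatherForecast[] forecasts;

    // Runs on the server; the rendered diff reaches the browser over SignalR.
    protected override async Task OnInitializedAsync()
    {
        forecasts = await ForecastService.GetForecastAsync(DateTime.Now);
    }
}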

Notice the HTML is just like regular HTML, other than the fact that it uses a local C# variable called forecasts which is declared in the @code block. The @code block is where you write your C# code. If you don't like mixing HTML with C# you can extract this code out into a separate file, which makes it very similar to the code-behind model we used with Web Forms in ASP.NET. The forecast service class in the code above is just another C# class that runs on the server, which could invoke REST APIs and do other things. In the stub it just returns hard-coded data.

What's important to note here is that the C# code we write here runs on the server, which means having an offline client is not possible. Also, under the hood the server needs to keep a SignalR connection open with every connected client. Where I see this being used is small and quick prototypes, or places where there is going to be heavy use of SignalR anyway and connections with the server are going to stay open all the time. A classic example is a real-time price ticker! If you need a more disconnected SPA experience you are better off moving to the client-side model of Blazor.

Blazor Web Assemblies Example:

Even though this is in preview till May 2020, the .NET Core tooling for building Blazor Web Assembly pages is also really awesome. I had to get .NET Core 3.1 (preview) for this to work though. Once I have the right version of the framework I create a new project using:

dotnet new blazorwasm -o clientexample
This stubs out a simple Web Assembly based project for me:

[Screenshot: the new Blazor Web Assembly project]

I build and run it just like any other .NET project:

[Screenshot: dotnet run output for the Blazor Web Assembly project]

And we get:

[Screenshot: the Fetch Data page running on Web Assembly]

I get the exact same output as the server example we did before, but the underlying tech and design powering this example is completely different. Let's take a look at the code to see what's different:

[Screenshot: the Blazor Web Assembly Main method]
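
The screenshot is missing, but the stubbed Program.cs at the time looked roughly like this (the preview APIs were changing from release to release, so treat this as a sketch):

using Microsoft.AspNetCore.Blazor.Hosting;

public class Program
{
    public static void Main(string[] args)
    {
        CreateHostBuilder(args).Build().Run();
    }

    // Hosts the app on the Web Assembly runtime inside the browser.
    public static IWebAssemblyHostBuilder CreateHostBuilder(string[] args) =>
        BlazorWebAssemblyHost.CreateDefaultBuilder()
            .UseBlazorStartup<Startup>();
}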

This project kicks off with a regular main method that basically utilizes the Blazor Web Assembly Host Builder to host your application. The App.razor and other aspects of your app might look similar to the server example that we tried out but what's strikingly different is the call to fetch the data:

[Screenshot: FetchData.razor fetching data on the client]
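
The interesting part of the client-side FetchData.razor is the data call; here is a sketch based on the stock template of the time (GetJsonAsync came from the Blazor preview packages):

@page "/fetchdata"
@inject HttpClient Http

@code {
    private WeatherForecast[] forecasts;

    protected override async Task OnInitializedAsync()
    {
        // Runs inside the browser; the JSON file is a static asset of the app.
        forecasts = await Http.GetJsonAsync<WeatherForecast[]>("sample-data/weather.json");
    }
}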


Notice above that we are using the C# HttpClient library directly on the client side and passing it the URL of a JSON file. This could also be the URL of a service that returns JSON. There is no server-side backend in this app as far as fetching data is concerned, and the client is doing most of the heavy lifting.

This design is pretty similar to any Angular or client-side application, where the .NET pieces are just being used to start and host the application. All the C# code that you put in your views runs directly on the client and uses the HttpClient library to hit microservices or Web APIs that run on the server.

Takeaway

The maturity of the tooling, both on the client side and the server side, as far as Blazor is concerned has blown me away. All the complexity behind Web Assemblies and SignalR is encapsulated rather elegantly by the tooling. Having said that, would I use Blazor in a production-level application yet? I'm not sure.

The server implementation of Blazor seems eerily similar to the code-behind model of ASP.NET, where the server has to do the bulk of the processing. Unless it's a prototype or something really simple I'm building, I'm not sure I am ready to go that route.

The client-side model is still in preview, but it's worth keeping your eyes on when it goes live and is ready for production. Till then, back to Angular and good old JavaScript and TypeScript.

If you are a web developer, Web Assemblies are a big paradigm change and Blazor is Microsoft's bet on them, which is what really makes Blazor worth spending some time on to see if it fits your problem statement.

posted on Thursday, November 28, 2019 11:10:22 AM UTC by Rajiv Popat  #    Comments [0]
Posted on: Monday, November 11, 2019 by Rajiv Popat

Why Developers Should Care about GRPC

GRPC has been around for quite some time but it has recently been integrated into .NET Core 3.0 and the tooling support with it is just first class now.

If you write Rest WebAPI / Microservices using .NET Core, you send JSON data over HTTP requests. The service does its work and sends a JSON response back.

Until your request object reaches the service, no processing begins. The service then does its work and sends you a response back. And until your browser or client receives the full response, there is not much the client can do except wait. That's the request-response model we've all grown up with.

We’ve had various takes on improving this basic design in the past. GRPC is Google’s take on solving the problem of making RPC calls and leveraging data streams compared to the standard request response model.

Without going into too much theory, GRPC uses Google's Protocol Buffers to generate code which then sends data using specialized streams. These streams happen to be really fast and, as the name suggests, allow streaming of both request and response objects.

Streams are better because you can use the data as it comes in. A crude example? Instead of downloading the whole video when you stream a video you can watch the video as it downloads. GRPC uses the same approach for data. If this doesn’t make sense, read on and by the time you’ve mucked around a bit with the code, it will all start making sense.

For this example we'll use Visual Studio Code. The tooling is much simpler with Visual Studio 2019, but I prefer Visual Studio Code as my IDE of choice because it shows me what's going on under the hood. With Visual Studio Code, I use the following plugin to get proto file syntax highlighting and support directly inside my IDE:

For syntax highlighting you can also use additional plugins like this one:

[Screenshot: a proto plugin in the VS Code extensions tab]

I have .NET Core 3 installed on my machine. 

The first thing I do is:

  1. Generate a server project: This is like your Web API that is going to be consumed by the client.
  2. Generate the client project: This is your client that is going to consume the server and get the data by invoking an endpoint/method on the server.

I generate the server-side project using:

[Screenshot: generating the GRPC server project]
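
(The screenshot of the exact command is gone; given that the .NET Core 3.0 SDK ships a GRPC service template and the description below, it was presumably:)

dotnet new grpc -o server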

The -o specifies the output path and creates a folder called 'server' where the GRPC service is generated.

I reference the following nugets by hopping into the terminal of VS Code:

dotnet add package Grpc.Net.Client
dotnet add package Google.Protobuf
dotnet add package Grpc.Tools

Here are the repositories of these three nugets if you want to know more about them:

GRPC.NET Client.

Google Protocol Buffers

GRPC Tooling. 

Once I've stubbed the code out and added the necessary packages to the project, I build the server using:

dotnet build

And then I open the code with VS Code.

[Screenshot: the server project structure]

Notice the Protos folder? That has the proto files the .NET tooling generated for us. Think of proto files like WSDL files, if you come from the web services world. Proto files are specifications for your service. You write them by hand. You primarily use them to describe your request objects, response objects and your methods. Here is an example of the proto file that I wrote:

[Screenshot: the proto file]
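
The screenshot didn't survive, but based on the description that follows, the proto file presumably looked something like this (the service name and C# namespace are my reconstructions):

syntax = "proto3";

option csharp_namespace = "Server";

// The contract: a method that takes a company name and streams users back.
service UserManager {
  rpc GetUserDetails (UserRequest) returns (stream UserResponse);
}

message UserRequest {
  string companyName = 1;
}

message UserResponse {
  string userName = 1;
  string firstName = 2;
  string lastName = 3;
  string address = 4;
}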

The above proto file basically says:

  1. I have a request object with the “companyName” attribute that is ordered 1 in the list of attributes. This is the request object because I will be passing the company name whose users I want to fetch.
  2. I have a response object with these attributes: userName, firstName, lastName and address. The numbers next to them are the order in which these attributes will be serialized.
  3. I have a method that takes a company name and streams back the list of users to the client. This is indicated by: “rpc GetUserDetails (UserRequest) returns (stream UserResponse);” line of code that you see in the above screenshot.
    GetUserDetails is the method that accepts a UserRequest and returns a stream of UserResponse. (A stream here is essentially a sequence of objects streamed to the client.)

Every time I add a .proto file I also add it to the server's project (.csproj) file:

[Screenshot: the proto file reference in the server project file]
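
The entry in the .csproj looks something like this (the file name is whatever you called your proto file; GrpcServices="Server" tells Grpc.Tools to generate the server-side stubs):

<ItemGroup>
  <Protobuf Include="Protos\users.proto" GrpcServices="Server" />
</ItemGroup>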

Once I've done that, I fire the build and the Google tooling nugets generate the real C# request and response classes for me in the background. With Visual Studio 2019 this tooling is hidden under the hood. With VS Code the tooling fires when you build your project using the "dotnet build" command.

Once I have the stubs I can write the service. In the service, I fetch some hard-coded values from a function. Typically, I would do this fetching from a database/service but for now, let’s keep this simple and focus on GRPC.

Once I fetch the data I push it back to the client. But instead of sending it in a response object that reaches the client all at once, and waiting for the client to "download" the entire response, I use GRPC to stream the data back one user at a time:

[Screenshot: the service implementation]
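
Here is a sketch of what that service presumably looks like, with the type names following the reconstructed proto file above and hard-coded data standing in for a real database call:

using System.Collections.Generic;
using System.Threading.Tasks;
using Grpc.Core;

public class UserService : UserManager.UserManagerBase
{
    public override async Task GetUserDetails(UserRequest request,
        IServerStreamWriter<UserResponse> responseStream,
        ServerCallContext context)
    {
        foreach (var user in GetUserFromDb(request.CompanyName))
        {
            // Simulate per-user processing time on the server.
            await Task.Delay(1000);

            // Each user is pushed to the client the moment it is written.
            await responseStream.WriteAsync(user);
        }
    }

    private static IEnumerable<UserResponse> GetUserFromDb(string companyName)
    {
        // Hard-coded data standing in for a database / service lookup.
        yield return new UserResponse { UserName = "jdoe", FirstName = "John", LastName = "Doe", Address = "Sample Street 1" };
        yield return new UserResponse { UserName = "jroe", FirstName = "Jane", LastName = "Roe", Address = "Sample Street 2" };
    }
}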

Typically, I would have just returned the users I get from GetUserFromDb back to the client, but that would generate a regular response; I want to stream the users back to the client, so I write them asynchronously to the response stream. Also notice the Task.Delay? I do that to simulate any delays that might actually happen on the server as you process and return each user. It shows that each user is streamed back to the client as it is processed, even as the server continues processing additional users.

Each user that I write to the stream now flows back to the client and the client can start doing whatever it wants to do with it rather than waiting for the whole response to complete.

On the client side, I write a simple .NET console application that makes a call to the server. The only thing the client needs in order to generate the code to call the server is a copy of the proto files, which contain the specs for the entire service. You would send your proto files to your clients or publish them somewhere.

I copy the same proto files on the client side and include them in my client project as “Client” files. Here is how I modify the project (.csproj) file:

[Screenshot: the client project file]
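
The only difference from the server's project file is the GrpcServices value, which makes Grpc.Tools generate client stubs instead:

<ItemGroup>
  <Protobuf Include="Protos\users.proto" GrpcServices="Client" />
</ItemGroup>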

I modify my client project to include a copy of the same .proto files and then I can fire a build. This generates all the stubs I need on the client-side to call the server.

Once this is done I start writing the client.

[Screenshot: the client code]
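
A sketch of the client (again, the type names follow the reconstructed proto file, and the address assumes the server's default https://localhost:5001):

using System;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;
using Grpc.Net.Client;

class Program
{
    static async Task Main()
    {
        // Dev-only: accept any certificate since there's no real SSL cert here.
        var handler = new HttpClientHandler
        {
            ServerCertificateCustomValidationCallback =
                HttpClientHandler.DangerousAcceptAnyServerCertificateValidator
        };

        var channel = GrpcChannel.ForAddress("https://localhost:5001",
            new GrpcChannelOptions { HttpClient = new HttpClient(handler) });
        var client = new UserManager.UserManagerClient(channel);

        using var call = client.GetUserDetails(
            new UserRequest { CompanyName = "SomeCompany" });

        // Pull each user off the stream as the server writes it.
        while (await call.ResponseStream.MoveNext(CancellationToken.None))
        {
            var user = call.ResponseStream.Current;
            Console.WriteLine($"{user.UserName}: {user.FirstName} {user.LastName}");
        }
    }
}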

Notice how I am using DangerousAcceptAnyServerCertificateValidator in the code above? That's just for non-production use, because I am running this without a valid SSL certificate. In production you would get a real certificate.

See how I am using the while loop to iterate through the response stream? This lets me get each user from the stream as the server writes to it. And once I get the current item from the stream? Well, I just show each user on the console as soon as the server processes it and writes the user object to the stream.

Now when I run the client, it calls the server, starts listening to the stream for the response and starts dealing with partial responses as and when the server streams them.

[Screenshot: the final output]

This is cool, because:

  1. The response is streamed over a channel that is much more optimized compared to JSON data being sent over HTTP using REST. There are posts that suggest GRPC is 7x to 10x faster than JSON over REST.
  2. I can do the same streaming I did on the response object while receiving data, even when I send data using the request object. So, if you want to send huge data to the server but don't want to wait till the entire data is sent before the server starts processing it, GRPC works for that too. Put simply, it supports two-way streaming.

The post is long, but the actual implementation is tiny and super simple. If you’ve not tried GRPC before I highly recommend downloading the entire sample project I described in this post from here (it’s listed under the HelloGrpc folder) and running the server first and then the client and mucking around with the code.

Given the level at which Visual Studio Code and Visual Studio tooling for GRPC is right now, I personally think it’s really easy to pick up and most Web API developers will benefit from having this additional arrow in their quiver.

If you are a web developer who writes APIs and who cares about performance and payloads, you should care about newer better ways of communication between servers and clients compared to the traditional rest based WebAPIs that send data over JSON.

We moved from XML to JSON because the payloads were smaller in JSON. GRPC is the natural next step for smaller payloads, better compression and two way streaming of data.

Go on, give it a try. It’s super easy and well worth the few minutes you will invest in learning it. Chances are you can put it to good use right away and see huge gains in performance and end-user experience.

posted on Monday, November 11, 2019 1:54:17 PM UTC by Rajiv Popat  #    Comments [0]
Posted on: Tuesday, January 10, 2017 by Rajiv Popat

ASP.NET Core - Part 1: Debugging ASP.NET Core Applications On Visual Studio Code

I’m obviously late to the party but I’ve been hooked on to Visual Studio Code both as an editor and a complete IDE for developing .NET Core Applications and all I can say about Visual Studio Code and .NET core is that I am loving everything I see.

Getting up and running with .NET console applications is really easy with Visual Studio Code and something I'll probably cover in a different post. This post is focused more on building and debugging ASP.NET Core applications using Visual Studio Code. In the posts to come we will build a simple real-life application using .NET Core and Visual Studio Code.

I recently needed a simple application where I can store excerpts from various books and research papers I read for future reference and so that's the project I'm going to work on for the purposes of these posts. The code that we build during this series will eventually be open sourced and posted on GitHub.

In this post we get started with a simple ASP.NET Core project using Yeoman and the .NET Core CLI and we will then debug it using Visual Studio Code. Why do all this when we can build the same application using Visual Studio 2015? Even though we will build this application in Windows we want the toolset and code to be portable so that we can easily move to Linux or Mac and start developing there whenever we feel the need to do so; which is why we won’t use anything that we cannot use in a Linux or a Mac environment.

In fact, once we get through a couple of posts, we will actually move to a Linux machine and start developing there.

Let's start by creating an ASP.NET Core app and setting up debugging using Visual Studio Code. You can do this in one of two ways:

Approach #1: The .Net Core CLI:

This is probably the simplest approach and provides you with a nice clean ASP.NET Core application. It's pretty similar to doing "File / New / Web Application" in Visual Studio, if you have been a Visual Studio developer in the past. Some folks may love this because it's straightforward. Others may not like it because it bundles a bunch of things (like Entity Framework, Membership and stuff you may not even be interested in using). Plus, as of now, it doesn't seem to integrate things like bower out of the box (more on this later). However, if you are looking to get a simple ASP.NET Core app up and running quickly, you can start a command prompt window, go to the folder you want to create the project in and do:

dotnet new -t Web

This creates a simple Web Application project. To fetch all the dependencies the project needs you would have to do a:

dotnet restore

And to run the project (which also builds it automatically) you would do:

dotnet run

This would start the development web server and host the application which means you can now access it using http://localhost:5000:

And if you open your browser and hit the URL you have the application running:

We'll come to the debugging part in a minute. If you are not happy with the bunch of extra things that were added to your environment, you can get more control over the templates used to stub out your application by using Yeoman, which brings us to the second way of stubbing out your ASP.NET Core applications.

Approach #2: Yeoman Templates:

You will have to install Yeoman before you begin with this, which means installing NPM (and the simplest way of doing that is installing Node JS). Yeoman also fetches your JavaScript files, and files like bootstrap.css, from the right locations using Bower; so you are better off installing bower up front before you proceed.

Once you have Yeoman installed you can do a:

yo aspnet

from your command prompt, once you are in the folder where you would like to create the project. Yeoman gives you greater control over the project you stub out by letting you pick from a host of templates (which in turn decide which dependencies get installed):

In the above sample / screenshot we have the option of picking from different templates. We can pick a basic web application without Membership and Authorization, OR just "Web application", which has everything (including membership and authorization pre-configured). With this I also get to pick the UI framework that I would like to use for my project:

In the above example I'm going with Bootstrap. Once done, you specify the name of the project and then you can go ahead with:

cd research
dotnet restore
dotnet run

In the above commands we switched to the research folder because the yo command creates a folder with your project name. Once you run the code with 'dotnet run' you get an application similar to the one you got with the .NET CLI, only this time around you don't see the Login link in the top right corner of the application:

Now that we have the application up and running (with both .NET CLI / Yeoman, depending on what you pick), let’s get to debugging it using Visual Studio Code.

Debugging Using Visual Studio Code:

The more I use Visual Studio Code the more I seem to like it. It's light. It's elegant. It works on multiple platforms, and what I love about it is the ecosystem of plugins that turn a lightning-fast editor into a full-blown IDE! If you don't have the IDE, grab a copy from here and install it on your machine. Now from the command prompt you can navigate to your project folder and type "code ." (without the quotes) and you should see your project open. The "." of course stands for the current folder; in Visual Studio Code you don't work with projects / solutions, you open specific folders. Which means that if opening the project from the command prompt doesn't make sense to you, you can open Visual Studio Code and open a folder from the File / Open menu. The moment you open the codebase in Visual Studio Code, it looks for required assets and asks you if it should import them. Click on Yes.

Like I said before, it's the plugins that turn this code editor into a powerful IDE. I've jumped to the plugins tab, searched for, and grabbed the following plugins I need to get started:

At this point, if you were using Yeoman and had bower properly installed, your launch.json should have the right values set, and you should be able to debug your application by going to the debug tab, selecting ".NET Core Launch (Web)" from the debug type drop-down and hitting the play button or the familiar F5 key:

If you started with the .NET CLI tools (instead of Yeoman), you may not automatically get all the bower dependencies like Bootstrap and jQuery configured in your bower.json file. So when you run the project with "dotnet run" it runs fine, but when you debug using Visual Studio Code you see that things like Bootstrap and jQuery aren't properly imported; your application looks like this (and the JavaScript functions inside the application don't work either):

This is where it pays to understand how Bower really works and how these templates are generated. The reason your application runs fine when you execute it using "dotnet run" and doesn't when you debug it using Visual Studio Code is that the two run the application in different modes. While "dotnet run" executes the application in production mode, Visual Studio Code debugging runs it in development mode.

If you open your _layout.cshtml file you will notice that the generated layout picks up Bootstrap, jQuery and other dependencies from the "~/lib" folder in the Development environment, and directly from the ASP.NET CDN in the Production and Staging environments. Since we are running in the Development environment when debugging from Visual Studio Code, we need the dependencies to be present in the "~/lib" folder.
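
The generated layout uses the environment tag helper for this; roughly like so (the exact CDN URL varies by template version, so the href below is elided):

<environment names="Development">
    <link rel="stylesheet" href="~/lib/bootstrap/dist/css/bootstrap.css" />
</environment>
<environment names="Staging,Production">
    <link rel="stylesheet" href="https://ajax.aspnetcdn.com/ajax/bootstrap/.../bootstrap.min.css" />
</environment>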

If you check the wwwroot folder however you’ll see that the lib folder is missing:

To get the dependencies into the right folder we'll use Bower to download them. Where bower downloads the dependencies is defined in the ".bowerrc" file:
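
In the generated template, .bowerrc is just a couple of lines pointing bower at wwwroot:

{
  "directory": "wwwroot/lib"
}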

Our .bowerrc file does have the right location, and we also have the bower plugin installed. So let's hit CTRL + P and type "> Bower" in the search bar that pops up:

Now you get a list of bower commands from which you can select Bower Install and hit enter:

The moment you do, bower grabs all the required dependencies for you and you should now see a new lib folder with the right dependencies:

And you are also able to debug the application properly now with Bootstrap, Javascript and other dependencies working fine:

Personally, I like Yeoman, primarily because it provides a larger choice of templates and runs the "bower install" command pretty much automatically (assuming you have bower installed), but both the .NET Core CLI and Yeoman should help you get started quickly with your first ASP.NET Core application. Both work across platforms, and which one (Dotnet CLI / Yeoman) you use is just a matter of which templates you prefer.

As far as Visual Studio Code is concerned, I love it. While the Visual Studio 2015 Professional versions manage some of these tasks out of the box, Visual Studio Code is really nice because, for me, it hits the sweet spot between showing me what's happening under the hood and keeping me sufficiently productive. This post covered two ways of getting up and running with an ASP.NET Core project; you can use either of the two, and it takes barely a minute to get started with the setup and debugging of a new ASP.NET Core project using Visual Studio Code.

In the next post we'll get started on the actual application using ASP.NET Core, where we will create a simple application in which you can store excerpts from various books and research papers that you might be reading, for future reference. As the series of posts proceeds I'll dump the code on GitHub and also try to host it using the cheapest, most scalable cloud options.

posted on Tuesday, January 10, 2017 1:52:02 PM UTC by Rajiv Popat  #    Comments [0]
Posted on: Tuesday, February 6, 2007 by Rajiv Popat

Fixing Package Load Failure Error with Windows Workflow Foundation and Visual Studio

I've often said that the really good part about Windows is that what can be done, or what sometimes happens ("seemingly automatically"), can be undone (and explained). The same is also true for Visual Studio 2005. On rare occasions there are errors in the IDE which seem magical and mysterious. With the right debugging techniques, commands and some time, the mysterious errors can be fixed and explained.

I've been busily coding away at Windows Workflow Foundation for a few months now. Currently I'm using the final released version of Windows Workflow Foundation with the released version of the Workflow extensions for Visual Studio, and my experience with WF has been very good. So when this strange error popped up today without any particular reason, I was a little surprised.

The error wasn't very helpful in describing what was going wrong. All it stated was: Package Load Failure. It basically said - "Package 'Microsoft.Workflow.VSDesigner.DesignerPackage, Microsoft.Workflow.VSDesigner, version=3.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' has failed to load properly (GUID = {some_guid}..." - and then it went on to ask me if I wanted to disable my workflow foundation extensions for now.

Even though my workflow project opened and worked perfectly this was just an irritating error which popped up every time I started Visual Studio 2005. Determined to put an end to this harmless yet irritating error, I started my investigation and Google searches.

I started off with basic commands like - "devenv.exe /setup" - which did not fix the issue at hand. Then there were posts out there which suggested that I delete my native images and regenerate them using the ngen.exe tool. That solution did not seem to work either.

After a few hours of Google searches I came across this post which basically described what the issue was. Turns out, a few days ago I had tried my hand at writing Visual Studio Add-ins and had forgotten to delete an Add-In I had written. Remains of my HelloWorld Addin existed in "C:\Users\admin\Documents\Visual Studio 2005\Addins" on my Vista Partition.

All I had to do was delete the ".Addin" file (which, btw, had nothing to do with Windows Workflow Foundation, so it's a little difficult to suspect it as the cause of the problem) and as soon as I had done that, Visual Studio started without any Windows Workflow Extension Add-in errors.

If this doesn't do the trick for you - here are few other commands / resources which are usually helpful in catching and fixing Visual Studio.NET 2005 IDE issues:

  1. devenv.exe /setup - very useful for a host of other IDE setting related problems or if you just want to restore VS settings to their original state. Did not help in my case because the failed add-in was something I had written myself and my problem had nothing to do with visual studio settings. In my case, the settings were just fine, even though I had initially suspected that they had gone corrupt. 
  2. ngen.exe - Aaron Stebner's Post describes this pretty elaborately. The basic idea here is to delete your Visual Studio Native Image files and regenerate them using this tool.
  3. devenv.exe /log - this generates an XML log which can be quite helpful in troubleshooting various startup issues with the IDE.

And if you're still struggling Aaron Stebner also provides a Definitive list of workarounds for Package Load Failure errors in Visual Studio 2005, here.

We spend so much of our time with the IDE every day writing code. Mysterious errors like this give us a chance to look under the hood and see what's going on. It's just like the joy of fixing that little problem in your car yourself. After all, knowing how to fix minor problems with your car, or knowing how to replace a flat tire, is also important besides knowing how to drive. Isn't it? :)

posted on Tuesday, February 6, 2007 6:02:34 PM UTC by Rajiv Popat  #    Comments [0]
Posted on: Saturday, December 2, 2006 by Rajiv Popat

Hello Atlas Article at Code Project

I posted this Hello World article on Atlas (of course, now called "Microsoft ASP.NET Ajax") on Code Project a few months ago. Recently, I've been receiving multiple emails / comments telling me that the article was helpful, but that I should think about updating it since Atlas is undergoing a lot of changes (including the name :)).

I've spent some time this weekend updating the article based on Beta 2 of Atlas (Microsoft ASP.NET Ajax 1.0) and resubmitted it to CodeProject. The article has been updated and is available at CodeProject.

However, if you are looking for one single zip which lets you download the latest copy of the article and source code, you can get it here. This post will be updated when further changes are made to the article.

Thanks to everyone who read the article and commented on it.

posted on Saturday, December 2, 2006 12:33:08 PM UTC by Rajiv Popat  #    Comments [0]
Posted on: Wednesday, November 29, 2006 by Rajiv Popat

Enabling ASP.NET 2.0 Debugging on Visual Studio 2005, IIS 7.0 and Vista

For the past couple of years my desktop ran Windows 2003 Server, tweaked to make Windows look like a Mac. I loved my desktop, but I also loved everything that had been going on on the Windows Vista side, and I had been trying things out on VMWare instances and other non-work machines since the Beta 1 days. I had always said that I would move to Vista as soon as Microsoft came out with a released version; I just wasn't adventurous enough to run a beta version of an operating system on my primary work machine. Yesterday, I finally made the move to Vista.

As a developer / geek, there were a few minor hiccups in getting up and working, and I'll probably post about those and their workarounds in other posts, but this post is focused on getting Visual Studio 2005 to run with F5 / Play based debugging on ASP.NET 2.0 web sites. I thought I would post about and document this, since this was the trickiest part and 'almost' kept me awake all night trying to figure out what the heck was going on.

The Vista install itself was smooth, with minor hiccups here and there. Surprisingly, there were no driver issues (things have come a long way since the Beta 1 days, when I had a bad time with drivers). I didn't have to install a single driver manually. It detected everything on my Dell 700m and pretty much installed it automatically. Sweet! Most hiccups were compatibility issues with 3rd party tools, which is expected anyway because of the enhanced security. Workarounds and alternate tools that work in Vista were pretty easy to find. Yes, there were some very minor issues with the default display drivers - which work fine with extended monitors but somehow will not let me run on an external projector. I think I can live with that till Dell comes out with their own drivers. [This was resolved. If you googled to this page in search of Intel-Adaptor-on-Dell display driver issues for Vista, see the update at the end of the post.]

The real "Oops!" moment, however, came when I was done installing IIS 7.0 and started installing Visual Studio 2005. This is when I was warned that Visual Studio 2005 has known compatibility issues with Vista. I ignored the warning and the installation continued smoothly. Finally I decided to test my installation by making a simple ASP.NET 2.0 hello-world website. Apparently, Studio would not let me open a website from the local IIS.

The warning said: "You must be a member of the Administrator group on the local computer to access the IIS...". This, however, is a known issue and has to do with Vista's on-demand administrator (i.e. UAC) feature. Installing additional IIS components, followed by a right click on Studio's shortcut and "Run As Administrator", fixes this issue. There are detailed posts on this both on Scott Guthrie's blog and MSDN. Both are pretty helpful and elaborate, so I won't repeat that information here.

After I did everything mentioned in the MSDN / Scott's post, I could now open web sites from the local IIS. But wait, that was not the complete fix. Once those steps were applied, Visual Studio wanted the virtual directory to use Integrated Windows Authentication. Now this is something that is not installed by default with IIS 7.0, which means I had to explicitly go to "Turn Windows features on or off" and install Integrated Authentication for IIS. (I went ahead and installed all three, since I am used to and use all three - Integrated, Digest and Basic - in different projects. With this done I configured the virtual directory to use Integrated Windows Authentication using IIS Manager.)

So, was that it? Not really. I still wasn't able to debug my website. This time Visual Studio showed a security dialog box telling me that it wasn't able to start debugging on the web server:

The workaround for this is to configure the site / virtual directory (depending on what you're trying to debug from Visual Studio) to run under the "Classic .NET Pool". To do this, just right-click the site in the new IIS Management Console and click "Advanced Settings". In the property pane that opens up, change the Application Pool setting to "Classic .NET AppPool" instead of "DefaultAppPool".

With this done I opened my sample site in Visual Studio.NET 2005 and clicked the Play button (F5 key), and debugging worked just like it should!

Yet another tricky gotcha here is that changing the application pool for the website to the Classic .NET pool doesn't seem to change the pool for the virtual directories under it. In other words, if you moved your entire site to the Classic .NET AppPool but are trying to debug a specific virtual directory, you still need to go ahead and manually change the Advanced Settings of that virtual directory to run the Classic .NET AppPool, to enable debugging on it.

There were some 'really convincing' posts out there that I came across in a couple of forums (I can't seem to find the links to those) where people suggested that F5 / Play and Debug is something you cannot do in Vista. I guess that information is either old (from early beta builds) or just inaccurate. I would have liked to post a reply there and clear this up, but I can't seem to find the links again after reaching the right answer. So, I'm going to post this article here and hopefully Google will index it and help others who're trying to get this to work.

Update [12/29/2006]: If you are looking for more details and other approaches, Mike has recently provided detailed step-by-step instructions on various approaches you can take and has described several tradeoffs associated with each approach. His detailed post is available here.

Update [02/07/2007]: Even though this post is not directly related to Dell 700m Intel display cards and Vista drivers for those cards, a lot of people seem to be googling their way to this page in search of similar answers, so this update might help. If you are running the Intel(R) 82852/82855 Graphics Adaptor on your Dell 700m and are having problems switching to clone mode with dual monitors, or running on a projector by pressing the Fn-F8 key, start the Add New Hardware wizard in Vista, select the manual driver installation process, choose Display Adaptor as the type of device and, from the list of drivers Vista offers, choose Intel as the manufacturer and "Intel(R) 82852 / 82855 GM/GME Graphics Controller (Microsoft Corporation - XDDM)" as your driver, even if Vista had detected your driver successfully during installation. Complete the wizard and reboot. Once the reboot is complete, you should see two display adaptors with the same name installed in Device Manager. Now disable Extended Desktop and you should be able to switch to a projector or clone your desktop on your second monitor using the Fn-F8 key.

posted on Wednesday, November 29, 2006 10:52:23 AM UTC by Rajiv Popat  #    Comments [0]