Posted on: Tuesday, May 5, 2020 by Rajiv Popat

Debugging Mysterious 500 Internal Server Errors.

When 500 Internal Server Error Shows up With Missing Logs in Event Viewer and IIS Logs.

When it comes to publishing Dotnet core APIs on IIS, 500 Internal server errors are a pain in all the wrong places.

500InternalServerError

What's most annoying about them is that most of the time the IIS logs say nothing about what's wrong, and in most cases the errors are not even logged in the Event Viewer. This is what makes internal server errors so difficult to debug.

Turns out most of these can be debugged just fine locally: you can see the exact issue in the browser if you can RDP into the server and hit the URL using localhost.

ErrorUsingLocalHost

But there are times when running the application using localhost is not an option in production, maybe because the application uses static routing. My blog, for instance, always routes to a fixed URL even if you access it using localhost.

In cases like these, you can 'temporarily' configure IIS to spit out the exact issue and even the stack trace rather than the generic "500 - Internal server error" page even when testing things remotely.

You do this by modifying the Error Pages for your website:

errorpages

Once here you can pick the error code 500 and modify its feature settings:

editfeaturesettings

The default setting here shows detailed errors only when you test locally; when you test remotely from a client browser you get the generic 500 error page. But you can change this to always show detailed error messages:

detailederrors

This makes the relevant change to the underlying web.config, and once you do this you should see the exact issue on the page, instead of the typical 500 error page, even when you hit the site remotely over a web URL.
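For reference, the change IIS makes is a small tweak to the httpErrors section of web.config; here is a minimal sketch (the rest of the file, including the aspNetCore handler entries, stays as it is):

<configuration>
  <system.webServer>
    <!-- "Detailed" sends full error details and stack traces to remote clients too; -->
    <!-- the default "DetailedLocalOnly" only shows them when browsing from the server itself. -->
    <httpErrors errorMode="Detailed" />
  </system.webServer>
</configuration>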

Once you debug and fix the issue, you can revert the configuration to "custom error pages" for a more secure experience, but this should allow you to debug those pesky 500 internal server errors easily and quickly. Worth noting that this changes the web.config of your application, so if you publish the codebase again or overwrite that file you may have to make the same change all over again using IIS.

This configuration has always been around, but most of us spend so much time looking at logs and fiddling around in Event Viewer whenever these issues occur in production that we sometimes forget the obvious.

Hope this helps someone struggling with debugging their very own 500 Internal server error.

posted on Tuesday, May 5, 2020 7:46:16 PM UTC by Rajiv Popat  #    Comments [0]
Posted on: Monday, May 4, 2020 by Rajiv Popat

Customizing Prompts With A Customized Terminal.

In an older post we talked about how you can customize the new Windows Terminal and make it look beautiful. If you went through that post, your terminal pretty much looks like this:

BasicTerminalLookAndFeel

Generally I'm happy with this, but I'm not happy with the "rajivpopat@localhost" part of the prompt. It annoys me and I want to change or customize it.

With DOS the prompt was easy to change. You used PROMPT $P$G to show the path followed by the greater-than sign, which means you could use PROMPT followed by pretty much anything and your prompt would change to that.

With PowerShell things are a little more complicated if you want to change the prompt. In PowerShell this is done using the prompt function. You can read more about it here.

In our case we are just using oh-my-posh, so our PowerShell profile looks like this:

powershellprofile

In the above profile we are using the Agnoster theme, so to change the prompt we hop into the Agnoster theme script file which, if you have followed my previous post, can be found at: C:\Users\[your_user_name]\Documents\WindowsPowerShell\Modules\oh-my-posh\2.0.332\Themes\Agnoster.psm1.

These are the lines controlling the prompt I see on screen:

ProfilePrompt

I want to change my prompt to show static text instead of the user name and the computer name. So I change $prompt += Write-Prompt -Object "$user@$computer " to $prompt += Write-Prompt -Object "thousandtyone " and lo and behold, my console looks like:

BasicTerminalWithChangedPrompt

I can obviously configure it to whatever I want. For example, I could print the time or use any combination of PowerShell functions to show anything I wanted on my screen.
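As a small, hedged illustration (the Get-Date format string is just an example, and the surrounding theme code stays untouched), the edit inside Agnoster.psm1 could look like this:

# Original line in the theme:
# $prompt += Write-Prompt -Object "$user@$computer "
# Static text plus the current time instead:
$prompt += Write-Prompt -Object "thousandtyone $(Get-Date -Format 'HH:mm') "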

For someone like me who blogs from different machines with different user names and takes screenshots, being able to change the prompt to something constant that matches my blog name helps.

The same file also lets you control color options and how the git symbols and colors show up in the command line. If you've followed my previous post and were wondering how you can control and fine-tune the prompt, this post should help.

posted on Monday, May 4, 2020 12:57:45 PM UTC by Rajiv Popat  #    Comments [0]
Posted on: Thursday, November 28, 2019 by Rajiv Popat

Blazor And The Idea Of Dotnet In The Browser.

Web Assemblies and the Theory Behind Blazor

Ever since the days of Java applets, Flash and Silverlight, companies and developers alike have dreamt of being able to run full-blown applications inside the browser.

But most of these technologies were bulky, not particularly secure and fairly proprietary. Then, as JavaScript evolved (both on the client side and the server side), true Single Page Applications became a reality. But even today, as JavaScript matures into a ubiquitous platform for web development, most developers have a love-hate relationship with JavaScript.

JavascriptLoveHateRealtionship

Enter Web Assemblies.

Web Assemblies ship with a lightweight stack machine capable of running code that has been compiled to a binary format. Think of this as byte code for the web. This is cool because with web assemblies you can run compiled languages like C++ inside your browser.

It's developed in a W3C community group, which tells you it's not proprietary; the community group behind web assemblies has representatives from almost all browsers.

Web Assemblies run in any browser, on any platform at almost native speed. If you want to know more about Web Assemblies you can go here.

And why are we talking about Web Assemblies in a post on Blazor? Because Blazor is a .NET runtime built on web assemblies. This means I can now take Dotnet code and run it inside a browser.

You build .NET apps or assemblies and ship them as Blazor apps. The Dotnet code you write gets downloaded and runs on the Blazor runtime, which is basically a web assembly implementation. It has the ability to interact with the DOM extremely efficiently and even find out what changed in the DOM.

Alternatively, you can have the same C# code run on the server and have it update the client-side DOM using a SignalR connection. Any UI events that happen on the client side are sent to the server using SignalR. The server captures these and runs the relevant server-side code. When the server-side code modifies the DOM, Blazor calculates a diff, serializes that diff back to the client, and the browser applies it to the DOM.

Let's Try Out Blazor

Actually, Blazor has existed for some time now; but what's interesting is that Blazor Server now ships with .NET Core 3.0 and is production ready. The ability to build completely client-side apps in Blazor using Web Assembly is still in preview though, and will most likely ship in May 2020.

The tooling is seriously awesome and simple. The implementation is so neat that to pick up the basic concepts all you have to do is just generate a new project with it and the tooling stubs out a fully functional hello world sample you can learn from.

As a quick overview, let's stub out two Blazor projects, one using Blazor Server and one using Web Assemblies, and try to learn from the basic hello world examples the tooling generates. As always we'll use Visual Studio Code because it's free and lets us look under the hood to understand the tooling.

Blazor Server Example:

To generate a new project I fire:

dotnet new blazorserver -o serverexample
(Where serverexample happens to be the name of the project I want to stub out).

This stubs out a project for me:

BlazorServerNewProject

I can now simply hit "Dotnet Run" like any other Dotnet Project and the stubbed out code runs like any other web application:

BlazorServerDotnetRunning

Notice that the application is running on port 5001 using HTTPs. I just hit https://localhost:5001 and then hit "Fetch Data" on the left to see an example of how data is fetched using Blazor:

FetchDataUsingServer

Awesome. We now have an example with Blazor Server running. Let's take a quick look at the code to see what's going on. The first thing to look at is the startup file. There are a couple of things happening here:

BlazorServerStartup

Just like we do a "UseMvc" in a typical dotnet application, here we are adding the Server-Side Blazor service to the services pipeline. We use the new endpoint routing that comes with .NET Core 3.0 to map the SignalR hub Blazor uses internally. The fallback route of "/_Host" is hit when no other routes match. This means you can use other controllers and pages without conflicting with Blazor routes, but when none of the other routes match, the _Host route acts as the starting point for the application.
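Roughly, the relevant bits of the generated Startup.cs look like this (a trimmed-down sketch; the stubbed file has a few more middleware registrations around these lines):

public void ConfigureServices(IServiceCollection services)
{
    services.AddRazorPages();
    // Registers the services Blazor Server needs (component circuits that talk over SignalR).
    services.AddServerSideBlazor();
}

public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    app.UseStaticFiles();
    app.UseRouting();

    app.UseEndpoints(endpoints =>
    {
        // Maps the SignalR hub Blazor uses internally.
        endpoints.MapBlazorHub();
        // When nothing else matches, fall back to the _Host page that bootstraps the app.
        endpoints.MapFallbackToPage("/_Host");
    });
}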

HostView

The above _Host view has two aspects. After it lays out the head and body tags, it has a section that hosts the entire app and another section to display errors. The app section itself maps to a view (App.razor) that decides what happens when a route is found and when it is not:

AppView

When a route like "/FetchData" is found, the corresponding Razor view file is invoked and rendered:

CallingServiceFromServer

Notice the markup is similar to regular HTML, other than the fact that it uses a local C# variable called forecasts which is declared in the @code block. The @code block is where you write your C# code. If you prefer not to mix HTML with C#, you can extract this code out into a separate file, which makes it very similar to the code-behind model we used with Web Forms in ASP.NET. The forecast service class in the code above is just another C# class that runs on the server, which can invoke REST APIs and do other things. In the stub it just returns hard-coded data.
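For reference, a hedged, trimmed-down sketch of that FetchData.razor (the generated template uses a WeatherForecastService and renders a full table; I've shortened the markup and left out the @using lines here):

@page "/fetchdata"
@inject WeatherForecastService ForecastService

@if (forecasts == null)
{
    <p><em>Loading...</em></p>
}
else
{
    <ul>
        @foreach (var forecast in forecasts)
        {
            <li>@forecast.Date.ToShortDateString(): @forecast.Summary</li>
        }
    </ul>
}

@code {
    private WeatherForecast[] forecasts;

    // Runs on the server; the rendered diff is pushed to the browser over the SignalR connection.
    protected override async Task OnInitializedAsync()
    {
        forecasts = await ForecastService.GetForecastAsync(DateTime.Now);
    }
}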

What's important to note here is that the C# code you write here runs on the server, which means having an offline client is not possible. Also, under the hood the server needs to keep a SignalR connection open with every connected client. Where I see this being used is small, quick prototypes or places where there is going to be heavy use of SignalR anyway and connections are going to be open with the server all the time. A classic example is a real-time price ticker! If you need a more disconnected SPA experience you are better off moving to the client-side model of Blazor.

Blazor Web Assemblies Example:

Even though this is in preview till May 2020, the tooling for building Blazor Web Assembly pages is also really awesome with .NET Core. I had to get .NET Core 3.1 (preview) for this to work though. Once I have the right version of the framework I create a new project using:

dotnet new blazorwasm -o clientexample
This stubs out a simple Web Assembly based project for me:

BlazorWebAssemblyNewProject

I build it and run it just like any other .NET project:

BlazorWebAssemblyDotnetRunning

And we get:

FetchDataUsingWebAssembly

I get the exact same output as the server example we did before, but the underlying tech and design powering this example is completely different. Let's take a look at the code to see what's different:

BlazorWebAssemblyMain

This project kicks off with a regular main method that basically utilizes the Blazor Web Assembly Host Builder to host your application. The App.razor and other aspects of your app might look similar to the server example that we tried out but what's strikingly different is the call to fetch the data:

CallingServiceFromClient


Notice above that we are using C#'s HttpClient library directly on the client side and passing it the URL of a JSON file. This could also be the URL of a service that returns JSON. There is no backend server-side code in this app as far as fetching data is concerned, and the client is doing most of the heavy lifting.
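For contrast, a hedged sketch of the client-side FetchData.razor (the JSON extension method has moved around across Blazor versions: the preview templates of this era used GetJsonAsync, while current Blazor uses GetFromJsonAsync from System.Net.Http.Json; the file path mirrors the template):

@page "/fetchdata"
@inject HttpClient Http

@code {
    private WeatherForecast[] forecasts;

    protected override async Task OnInitializedAsync()
    {
        // The HttpClient runs inside the browser (on top of the fetch API) and pulls a static JSON file;
        // the URL could just as easily point to a Web API or microservice endpoint.
        forecasts = await Http.GetJsonAsync<WeatherForecast[]>("sample-data/weather.json");
    }

    public class WeatherForecast
    {
        public DateTime Date { get; set; }
        public int TemperatureC { get; set; }
        public string Summary { get; set; }
    }
}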

This design is pretty similar to any Angular or client-side application where the .NET pieces are just being used to start and host the application. All the C# code that you put in your views runs directly on the client and uses the HttpClient libraries to hit microservices or web APIs that run on the server.

Take Away

The maturity of the tooling, both on the client side and the server side, as far as Blazor is concerned has blown me away. All the complexity behind Web Assemblies and SignalR is encapsulated rather elegantly by the tooling. Having said that, will I use Blazor in a production-level application yet? I'm not sure.

The server implementation of Blazor seems creepily similar to the code-behind model of ASP.NET where the server has to do the bulk of the processing. Unless it's a prototype or something really simple I'm building, I'm not sure I am ready to go that route.

The client-side model is still in preview, but that's something worth keeping your eyes on when it goes live and is ready for production. Till then, back to Angular and the good old JavaScript and TypeScript.

If you are a web developer, Web Assembly is a big paradigm change and Blazor is Microsoft's bet on it, which is what really makes it worth spending some time on it and seeing if it fits your problem statement.

posted on Thursday, November 28, 2019 11:10:22 AM UTC by Rajiv Popat  #    Comments [0]
Posted on: Monday, November 18, 2019 by Rajiv Popat

Making Your Terminal Look Gorgeous.

Beautiful IDEs and Developer Productivity tools are my weakness. Which is why when PowerShell was released back in 2006, the first thing I wrote about was how you can skin it and make it look beautiful. But that was 2006. Things have changed now and Microsoft is taking the cosmetics of your terminal pretty seriously. Add to that a little bit of magic from the open source community and you can have really slick looking terminals now.

This is my diary of making my terminal beautiful on my work laptop. This is what we're trying to get to:

terminalfinaloutput

Let's start with first things first. We begin by getting the new Windows Terminal and then sprinkle a bit of open source magic on it.

Getting Windows Terminal

Windows Terminal is available in the Windows Store. You can search for "Windows Terminal" and you should see it there.

terminalwindowsstoreicon

The repository is available here. I had a slightly older version of Windows 10, so I had to upgrade it before the Windows Store would allow me to install the terminal. Oddly enough, if you don't meet the system requirements the Microsoft Store doesn't give you any visible, attention-grabbing error. The download link simply doesn't work. I am just about to give up when I see the tiny "See System Requirements" link (shown in the screenshot above). I click it and the Microsoft Store tells me what the issue is:

OsUpgradeError

There is an upgrade button in the store which takes me here, and that link lets me upgrade Windows to the version needed. After upgrading Windows to the required version I'm able to download and install the terminal from the Microsoft Store. I open the terminal after installing it and I'm presented with:

terminalscreenshot

Installing Git and Oh-My-Posh + Posh-Git Modules

Most of the time when I am in the terminal I'm working on codebases and git repositories, so let's make the console pretty and also make it Git aware. We begin by installing Git for Windows and then move on to installing the Oh-My-Posh and Posh-Git modules. The following commands install both modules:

Install-Module posh-git -Scope CurrentUser
Install-Module oh-my-posh -Scope CurrentUser

Modify Your Profile Script

Once you have fired the commands above you need to modify your profile. Type "Notepad $Profile" in the terminal and that should open a blank file for you.  Add the following code into your profile:

Import-Module posh-git
Import-Module oh-my-posh
Set-Theme Agnoster

The "Agnoster" used above in one of the many other themes oh-my-posh provides you and you can pick the one that works best for you.

Installing the Right Fonts:

You need fonts that support glyphs, without which the beautiful symbols that you see in the screenshot are nothing but ugly characters. To put things simply, when people collectively agree that a bunch of characters translate to a graphic, we have a glyph. Glyphs are useful because they allow you to represent a combination of characters with pretty looking symbols and icons.

You can choose from any of the fonts here which already have glyph patches inside them (or you can patch any font you like with glyphs), but I'm just keeping it simple and using this one. From the link, download "Delugia.Nerd.Font.Complete.ttf" and install it on your machine just like you would install any other font using your control panel fonts app.

Modify Your Profile settings, (JSON):

You can get to your profile settings by clicking on the down-arrow menu in the terminal and clicking on Settings:

profilesettings

This should open up your profile file. There are a few aspects of the profile file worth understanding:

profileslist

The profile file holds a collection of profiles. The 'defaultProfile' contains the guid of the profile the terminal uses by default when you launch it. Here you will notice that the guid matches the guid of the profile called "Windows PowerShell". That starts PowerShell by default every time I start the terminal. If I wanted the terminal to start the Command Prompt, I could replace the defaultProfile guid with the guid of the profile called "cmd".

Now look at the profile named "Windows PowerShell" in the screenshot above. The "commandline" tells the terminal which executable it should use. The fontFace tells it which font to use. Delugia Nerd Font is the font we just installed in the "Installing the Right Fonts" section of this post, and it already has the glyphs oh-my-posh and posh-git need patched into it.

In the screenshot above I'm setting Delugia as the default font by setting fontFace to 'Delugia Nerd Font' in all my profiles. The colorScheme tells the terminal which colors to use. In the screenshot above, my color scheme is called "ThousandtyOne" and this is what it looks like:

profilesettingsschemes
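Pieced together, a single profile entry in that file looks roughly like this (a trimmed-down illustration; the guid is a placeholder and the full, real file is the profile.json linked below):

{
    "guid": "{00000000-0000-0000-0000-000000000000}",
    "name": "Windows PowerShell",
    "commandline": "powershell.exe",
    "fontFace": "Delugia Nerd Font",
    "colorScheme": "ThousandtyOne"
}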

If you want my entire profile you can grab my profile.json from GitHub here. If you've done everything correctly start your terminal and your terminal should now look like:

terminalfinaloutput

It's good looking and the fun part is, it's git aware. Notice the Git integration above. When I switch into a folder that contains a git repository, the git strip in the prompt is green because there are no changes. Once I make changes, the strip turns orange and shows me the number of changes right there in the command line:

terminalfinaloutputgitchange

Note that all these colors are controlled by your color schemes in the profile file, so if you wanted different colors you could totally change the profile file to fit your needs and customize each theme.

I do realize that as far as Windows Terminal is concerned I'm a little late to the party. Here is an awesome post from Scott Hanselman on this topic. Think of this post as just my diary of the issues I faced and a customized version of the profile I am using for myself. If you're like me and spend the bulk of your time in the command line, it might be a good idea to get Windows Terminal and work with a CLI that is a little better looking and slightly more git aware than what you get out of the box.

Go on, make your own gorgeous terminal now and share your profile with others. Time to have some fun with your terminals people!

Note: I did face an issue where I was not able to save my profile cleanly with VS Code; VS Code kept complaining about conflicts with an older version of the same file. When that happened, the changes I made in profile.json had no impact on the terminal. It's a dirty-write issue, where you get warnings about conflicts when you save your profile.json. This link contains a solution. The idea is that when you have a conflict, simply saving the profile isn't enough; you have to explicitly accept the changes. The link shows you a screenshot of how to do that.

posted on Monday, November 18, 2019 5:48:29 PM UTC by Rajiv Popat  #    Comments [0]
Posted on: Monday, November 11, 2019 by Rajiv Popat

Why Developers Should Care about GRPC

GRPC has been around for quite some time, but it has recently been integrated into .NET Core 3.0 and the tooling support for it is just first class now.

If you write Rest WebAPI / Microservices using .NET Core, you send JSON data over HTTP requests. The service does its work and sends a JSON response back.

Until your request object reaches the service, the service waits and doesn't begin processing. Then it does its work and sends you a response back. Until your browser or client gets the full response back, there is not much the client can do but wait. That's the request-response model we've all grown up with.

We’ve had various takes on improving this basic design in the past. GRPC is Google’s take on solving the problem of making RPC calls and leveraging data streams compared to the standard request response model.

Without going into too much theory, GRPC uses Google's Protocol Buffers to generate code which then sends data using specialized streams, which happen to be really fast and, as the name suggests, allow streaming of both request and response objects.

Streams are better because you can use the data as it comes in. A crude example? When you stream a video, you can watch it as it downloads instead of waiting for the whole thing to download first. GRPC uses the same approach for data. If this doesn't make sense, read on; by the time you've mucked around a bit with the code, it will all start making sense.

For this example we'll use Visual Studio Code. The tooling is much simpler with Visual Studio 2019, but I prefer Visual Studio Code as the IDE of choice because it shows me what's going on under the hood. With Visual Studio Code, I use the following plugin for getting proto file syntax highlighting and support directly inside my IDE:

For syntax highlighting you can also use additional plugins like this one:

protoplugin2

I have .NET Core 3 installed on my machine. 

The first thing I do is:

  1. Generate a server project: This is like your Web API that is going to be consumed by the client.
  2. Generate the client project: This is your client that is going to consume the server and get the data by invoking an endpoint/method on the server.

I generate the server-side project using:

grpcserver

The -o specifies the output path and creates a folder called 'server' where the GRPC service is generated.

I reference the following nugets by hopping into the terminal of VS Code:

dotnet add package Grpc.Net.Client
dotnet add package Google.Protobuf
dotnet add package Grpc.Tools

Here are the repositories of these three nugets if you want to know more about them:

GRPC.NET Client.

Google Protocol Buffers

GRPC Tooling. 

Once I've stubbed the code out and added the necessary packages to the project, I build the server using:

dotnet build

And then I open the code with VS Code.

grpcprojectserverstructure

Notice the Protos folder? That has the proto files the .NET tooling generated for us. Think of proto files like your WSDL files if you come from a web services world. Proto files are specifications for your service. You write them by hand. You primarily use them to describe your request objects, response objects and your methods. Here is an example of the proto file that I wrote:

protofile

The above proto file basically says:

  1. I have a request object with the “companyName” attribute that is ordered 1 in the list of attributes. This is the request object because I will be passing the company name whose users I want to fetch.
  2. I have a response object with these attributes: userName, firstName, lastName and address. The numbers next to them are the order in which these attributes will be serialized.
  3. I have a method that takes a company name and streams back the list of users to the client. This is indicated by: “rpc GetUserDetails (UserRequest) returns (stream UserResponse);” line of code that you see in the above screenshot.
    GetUserDetails is the method that accepts a UserRequest and returns a stream of UserResponse. (By default, a stream would be an array of objects that is streamed to the client.) A rough reconstruction of the full proto file follows this list.
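Putting those three points together, here is a hedged reconstruction of roughly what that proto file contains (the service name, namespace option and the field numbers on the response are my guesses; the exact file is in the screenshot above and in the sample repository):

syntax = "proto3";

option csharp_namespace = "Server";

// The request: just the company whose users we want.
message UserRequest {
  string companyName = 1;
}

// The response: one user; the numbers are the order in which the fields are serialized.
message UserResponse {
  string userName = 1;
  string firstName = 2;
  string lastName = 3;
  string address = 4;
}

service UserService {
  // Server streaming: one request in, a stream of UserResponse objects out.
  rpc GetUserDetails (UserRequest) returns (stream UserResponse);
}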

Every time I add a .proto file, I add it to the server's project (.csproj) file:

serverprotofile
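In text form, the entry added to the server's .csproj looks roughly like this (the proto file name is a placeholder for whatever you named yours):

<ItemGroup>
  <!-- Tells Grpc.Tools to generate the server-side C# stubs for this proto at build time. -->
  <Protobuf Include="Protos\users.proto" GrpcServices="Server" />
</ItemGroup>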

Once I've done that, I fire the build and the Grpc.Tools nuget generates the C# files in the background to create the real request and response classes. With Visual Studio 2019 this tooling is hidden under the hood. With VS Code the tooling fires when you build your project using the "dotnet build" command.

Once I have the stubs I can write the service. In the service, I fetch some hard-coded values from a function. Typically, I would do this fetching from a database/service but for now, let’s keep this simple and focus on GRPC.

Once I fetch the data, instead of sending it back in a response object that is pushed to the client all at once and waiting for the client to "download" the whole response, I use GRPC to stream the data back one user at a time:

grpcserveractualservice

Typically, I would have just returned the users I get from GetUserFromDb back to the client, but that would generate a regular response and I want to stream the users back to the client, so I write them asynchronously to the response stream. Also notice the Task.Delay? I do that to simulate any delays that might actually be happening on the server as you process and return each user. This shows that each user that is processed is streamed back to the client even as the server continues processing additional users.
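A hedged sketch of that streaming method on the server (the class name is mine and GetUserFromDb stands in for the hard-coded data function from the post; the real implementation is in the screenshot above and in the sample repository):

using System.Threading.Tasks;
using Grpc.Core;

public class UserDetailsService : UserService.UserServiceBase
{
    public override async Task GetUserDetails(
        UserRequest request,
        IServerStreamWriter<UserResponse> responseStream,
        ServerCallContext context)
    {
        // GetUserFromDb stands in for a real database or API call; in the sample it returns hard-coded users.
        foreach (var user in GetUserFromDb(request.CompanyName))
        {
            // Simulate per-user processing time on the server.
            await Task.Delay(1000);

            // Push each user to the client as soon as it is ready instead of waiting for the whole list.
            await responseStream.WriteAsync(user);
        }
    }

    private static UserResponse[] GetUserFromDb(string companyName) => new[]
    {
        new UserResponse { UserName = "jdoe", FirstName = "John", LastName = "Doe", Address = "Somewhere" },
        new UserResponse { UserName = "asmith", FirstName = "Anna", LastName = "Smith", Address = "Elsewhere" }
    };
}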

Each user that I write to the stream now flows back to the client and the client can start doing whatever it wants to do with it rather than waiting for the whole response to complete.

On the client side, I write a simple .NET console application that makes a call to the server. The only thing the client needs in order to generate the code to call the server is a copy of the proto files, which contain the specs for the entire service. You would send your proto files to your clients or publish them somewhere.

I copy the same proto files on the client side and include them in my client project as “Client” files. Here is how I modify the project (.csproj) file:

clientsideprojectfile

I modify my client project to include a copy of the same .proto files and then I can fire a build. This generates all the stubs I need on the client-side to call the server.
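The client-side entry is almost identical, except the tooling is asked to generate the Client flavor of the stubs (the file name is again a placeholder):

<ItemGroup>
  <Protobuf Include="Protos\users.proto" GrpcServices="Client" />
</ItemGroup>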

Once this is done I start writing the client.

clientsidecode

Notice how I am using the DangerousAcceptAnyServerCertificateValidator in the code above? That's just for non-production use because I am running this without a valid SSL certificate. In production you would use a real certificate.

See how I am using the while loop to iterate through the response stream? This allows me to get each user from the stream as the server writes to the stream. And once I get the current item from the stream? Well, I am just showing each user on the console as soon as the server processes the user and writes the user object to the stream.
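A hedged sketch of the client (the address, company name and the generated UserService/UserRequest names follow the earlier proto reconstruction, so treat them as placeholders; the real code is in the screenshot above and in the sample repository):

using System;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;
using Grpc.Core;
using Grpc.Net.Client;

class Program
{
    static async Task Main()
    {
        // Only for local testing without a valid SSL certificate; never do this in production.
        var httpHandler = new HttpClientHandler
        {
            ServerCertificateCustomValidationCallback =
                HttpClientHandler.DangerousAcceptAnyServerCertificateValidator
        };

        using var channel = GrpcChannel.ForAddress("https://localhost:5001",
            new GrpcChannelOptions { HttpClient = new HttpClient(httpHandler) });

        var client = new UserService.UserServiceClient(channel);
        using var call = client.GetUserDetails(new UserRequest { CompanyName = "SomeCompany" });

        // Read each user as soon as the server writes it to the stream.
        while (await call.ResponseStream.MoveNext(CancellationToken.None))
        {
            var user = call.ResponseStream.Current;
            Console.WriteLine($"{user.UserName} {user.FirstName} {user.LastName}");
        }
    }
}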

Now when I run the client, it calls the server, starts listening to the stream for the response, and starts dealing with partial responses as and when they are streamed by the server.

finaloutput

This is cool, because:

  1. The response is streamed over a channel that is much more optimized compared to JSON data being sent over HTTP using Rest. There are posts that seem to suggest that GRPC is 7x to 10x faster than JSON over rest.
  2. I can do the same streaming I did on the response object while receiving data, even when I send data using the request object. So, if you want to send huge data to the server but don't want to wait till the entire data is sent before the server starts processing it, GRPC works for that too. Put simply, it supports two-way streaming.

The post is long, but the actual implementation is tiny and super simple. If you’ve not tried GRPC before I highly recommend downloading the entire sample project I described in this post from here (it’s listed under the HelloGrpc folder) and running the server first and then the client and mucking around with the code.

Given the level at which Visual Studio Code and Visual Studio tooling for GRPC is right now, I personally think it’s really easy to pick up and most Web API developers will benefit from having this additional arrow in their quiver.

If you are a web developer who writes APIs and who cares about performance and payloads, you should care about newer better ways of communication between servers and clients compared to the traditional rest based WebAPIs that send data over JSON.

We moved from XML to JSON because the payloads were smaller in JSON. GRPC is the natural next step for smaller payloads, better compression and two way streaming of data.

Go on, give it a try. It’s super easy and well worth the few minutes you will invest in learning it. Chances are you can put it to good use right away and see huge gains in performance and end-user experience.

posted on Monday, November 11, 2019 1:54:17 PM UTC by Rajiv Popat  #    Comments [0]
Posted on: Wednesday, August 21, 2019 by Rajiv Popat

Fixing Gripes And Electric Shocks from Your New Television.

My wife and I are not into television and have not had one for a very long time. A couple of months ago we bought our first new television in five years.

Both my wife and I were not sure how much time we would actually spend in front of it, hence the decision to go frugal. That, and I wanted to be able to control my devices, and something like a Raspberry Pi is much more customizable than a locked-down Android television. We settled for a regular, non-smart Vu TV and then built our own media center using a Raspberry Pi.

The TV is a 49-inch Grade A+ panel. The display hardware is not Ultra-HD, but on the Full HD side it is as good as it gets. Pretty nice, and it does what we need it to at a price that's almost half that of a Sony. Very soon we run into a couple of bumps:

My TV is Shocking Me! Strange Electric Shock on the Television Frame:

I initially start with the assumption that this is an issue with my earthing and call an electrician who tells me that my earthing is just fine and the TV is flawed. He asks me to file a request with Vu and walks out.

I look up the forums and discover there are even more expensive TVs out there that have the same issue. I call up support and they replace the TV, but the new one has the same issue. This time around they offer me a refund but aren't willing to help with fixing the issue.

Frustrated, I end up deciding to put some basic electronics they teach you in class eight to use. The idea is that the TV is missing the grounding that would route the stray current on the frame to earth, but we can do that externally without even opening the TV.

So let's buy a 50 cent copper wire from a local electric shop. Next, we find a spot on the TV that has electric current running through it. In other words, the area that shocks you. You can test this with a regular tester. We tie one end of the copper wire to the frame that has the shock and screw it in securely behind a screw that holds the TV mount.

The other end goes to the earth pin. The basic idea here is that since the TV does not provide any grounding or earthing, we earth the current running on the TV frame directly to the earth pin of the 3-pin socket. This works in countries that use 3-pin plugs where the third pin is for grounding.

With the earth pin wound with the copper wire, any current running through the frame of the television flows through the copper wire to the grounding pin and is eventually earthed, effectively ensuring you don't get shocked when you touch the TV frame.

That actually works. I touch the TV and there are no more shocks. I guess Vu is saving money by not grounding their circuits, but a 50 cent copper wire fixes that. We now have a really simple home-made earthing on the TV frame. No more shocks. And the end product looks pretty elegant with the copper wire concealed behind the TV:

The overall result looks pretty neat and you can barely see the copper wire running from the back of the TV frame to the earth socket. Yes, the Pi and the Firestick and all those wires still need to be organized and concealed but the copper wire itself is that tiny green bit you see in the picture. Nothing objectionable.

I've seen a bunch of articles out there about TVs and monitors shocking people but no real solution, and I hope this helps someone who has a similar problem in the future. It's simple class eight electronics from your physics class put to some basic use.

Strange Skin Tone

The other day my wife and I were watching a stand-up comedy show and the skin tone of the characters seemed a little… artificial. Turns out this is controlled by something called 'Tint' on most TVs, and the default Vu settings don't allow end users to modify tint settings. The tint option is disabled by default, which means you can't change the setting:

When I first see the setting, I realize it's bumped up all the way to 100 with no way to lower it. I put my nerd glasses on and hop into a special hidden service menu most Vu TVs provide, which can be reached by going to the sound settings menu, clicking on sound balance and then typing 1969 on the number pad of your remote. For some reason the folks at Vu like the year of the moon landing and have picked that to open up the secret service menu on the TV:

Notice that this mode is pretty powerful and pretty much allows you to control most tiny aspects of the TV a regular user may not even care about.

Once in, you can tell the TV to not use any special intelligence for skin tones by turning off the tint setting (which you are completely allowed to do in this secretly hidden service menu):

I turn the tint down to zero using the special service menu and the problem is gone.

The Backlight is still way too strong:

The backlight of the TV is still too strong and hurts my eyes. Vu doesn't give you any option to change that. Even the service menu doesn't have any backlight settings. I panic and think of returning the TV. But then I notice the service menu allows me to change the RGB gain on the LEDs, which is bumped up to 100 by default:

I realize that if I bring those down proportionately I can control the backlight of my panel. I do just that. The backlight becomes much softer and the strain on my eyes is gone.

Sorted.

The AI Is A Little Too Smart.

AI is the new thing and most TVs want to run the race of adding AI to their picture rendering. Companies like Vu, however, mostly do a mediocre job at it. The good part is they let you turn this off by setting noise reduction to off.

Much better.

Love-Hate Relationship With My TV

At this moment I have a love-hate relationship with my TV. I love gadgets which I can customize and root into. Most phones I've owned thus far, are rooted. The service menu of Vu essentially gives me root access to the TV and is very powerful. I dig that about the TV.

The fact that I can hack into my TV and have complete control over my TV makes me feel powerful.

The fact that Vu doesn't handle these little gripes out of the box and expects end users to put their nerd glasses on to fix these issues makes me a little annoyed. Meh!

Either way, all my problems with my TV are sorted and I have a two year extended warranty during which I can return the TV if I face any additional issues. So for now, this will have to do.

If you are thinking of buying a Vu TV, here is my honest advice: buy it only if you are willing to put your nerd glasses on and do a little bit of tinkering with the TV, both on the hardware and software front. If you are expecting it to work out of the box like a flawless appliance, Vu isn't for you.

Having said that, the TV is a frugal choice, and once you've made the modifications you feel really happy about spending half the money you would have spent on other TVs while getting similar picture quality and overall experience.

Couple the dumb TV with a Fire TV Stick (which I bought at a discount on Amazon) and a Raspberry Pi 3B+ (bought locally) and you'll have a full-blown smart TV with a pretty decent media center, but that's a whole new post in itself.

Being a Nerd Helps.

This post was just about my gripes and issues with Vu TV and how to fix those. If you own a Vu (or any other TV) and are facing similar issues (particularly electric current running through the TV frame), feel free to use some of these fixes and let me know how it goes.

If your TV just works fine, you can still take solace in the fact that even though most of what you buy or download online today is broken it can be fixed with a little bit of tinkering and geeking.

There is value in being a nerd today. You can stop feeling bad about being a geek. Being a geek is no longer a curse. In today's world it is actually a blessing.

Now go fix something that's broken.

posted on Wednesday, August 21, 2019 9:16:26 AM UTC by Rajiv Popat  #    Comments [0]
Posted on: Saturday, February 18, 2017 by Rajiv Popat

Automatically Placing Semicolons in Visual Studio Code.

There was a time when making IDE plugins for Visual Studio was for folks who specialized in the art of writing plugins; i.e. folks like DevExpress and JetBrains. With Visual Studio Code, writing extensions is no longer a mysterious black art. Even regular programmers like you and me can write extensions which solve our little specific problems.

My specific little problem? I hate having to type a semicolon and then hit enter on every line of code that I write, especially when the IDE is auto-completing my brackets and quotes. For example, when I write:

Console.WriteLine("Hello

If I have the C# plugin installed in VS Code, VS code understands my intent and completes the sentence by writing:

Console.WriteLine("Hello[my cursor is here]")

Notice my cursor position in the snippet above? At this point if I need to end the line I hit the right arrow key twice, then type semicolon and then hit enter to continue to the next line.

Technically, in the above example, if my IDE was really smart, I should just be able to type a semicolon where my cursor is, have the IDE understand my intent, move the semicolon to the end of the line and automatically move me to the next line so that I can continue coding.

It's just 4 keystrokes per line (two right arrows, a semicolon and an enter), but when you write hundreds of lines of code, condensing 4 keystrokes into 1 adds up and goes a long way in making you productive. Actually, it's not so much about reducing the keystrokes as it is about being in the flow and rhythm.

At one point DevExpress CodeRush had this feature; and if I wrote:

Console.WriteLine("Hello;

CodeRush would intelligently complete this as:

Console.WriteLine("Hello");

It was a very fluid experience. I used to love that feature. When I moved to Linux and Visual Studio Code, I lost plugins like ReSharper and CodeRush; but other free Visual Studio Code plugins made up for most of what I loved in ReSharper and CodeRush. However, I continued to miss the above feature, where the IDE would automatically understand my intent and move my semicolons where they belong.

So, I decided to see how difficult it would be to write an extension which would:

  1. Automatically move the semicolon to the end of the line even if you type it in the middle of the line (except for special cases like a for-loop or a for-each loop).
  2. Automatically move you to the next line without having to explicitly hit enter. (A rough sketch of how an extension can hook into this follows this list.)
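A heavily simplified, hypothetical sketch of how an extension can hook into that (the command id is made up and you'd bind it to the ";" key through a keybinding contribution in package.json; the real Autoend code on GitHub does quite a bit more, like handling for-loops and cursor placement):

import * as vscode from 'vscode';

export function activate(context: vscode.ExtensionContext) {
    // Runs whenever the command fires; a keybinding maps the ";" key to it while the editor has focus.
    const disposable = vscode.commands.registerTextEditorCommand('autoend.semicolon', (editor, edit) => {
        const line = editor.document.lineAt(editor.selection.active.line);
        // Append a semicolon and a newline at the end of the current line (cursor handling omitted in this sketch).
        edit.insert(line.range.end, ';\n');
    });
    context.subscriptions.push(disposable);
}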

It took me one day to write the extension. It took me one more day to brand it with a logo and documentation and publish it to the Visual Studio Code Marketplace after releasing it on GitHub. Before I started this extension, I knew nothing about writing Visual Studio Code extensions. Not to mention that the entire development was done on a Linux laptop. The code was written in TypeScript, and I am not a JavaScript or TypeScript guru either.

I think a regular programmer like me being able to write a plugin of this sort, publish it live to a marketplace and have folks download it over just a couple of days says more about Visual Studio Code's highly extensible design than it says about my talent. By far one of the more amazing editors / IDEs I've seen in my life.

Because I used the source code of an open source extension on the marketplace to learn how to get started with writing extensions, and because I could see an ever-growing community of open source extensions on the Visual Studio Code marketplace, I'm also publishing my code on GitHub.

Go ahead and try it out. It has already had a couple of dozen downloads; it makes me hugely productive as a programmer when I am inside Visual Studio Code and, above all, keeps me in flow when I write code. I've been fully supporting this plugin and closing bugs as and when I find them or when they are reported.

It is called 'Autoend' and is available for free on the visual studio code marketplace.

If you do try it out please drop me your feedback / comments in the comment section of this post and if you find an issue you can always post it on github or you can always drop a line to contact@thousandtyone.com. Happy coding!

posted on Saturday, February 18, 2017 11:46:23 AM UTC by Rajiv Popat  #    Comments [1]
Posted on: Tuesday, January 24, 2017 by Rajiv Popat

A .NET Programmer Tries Out GNU / Linux As The Primary OS.

I've always been both a Windows and an Ubuntu user. I'm not an OS zealot and I love both operating systems. My work machine runs on Windows and one of my two laptops at home has been on Ubuntu for a very long time. I love Windows because it's convenient. I love Linux because it's powerful, seriously geeky and free. Which is why, when the .NET Core team announced the ability to run on multiple platforms (including Windows and Linux), the announcement was like music to my ears. It meant that I could be on the OS of my choice (or fancy) and still do development in a language I love (C#).

I had already played around with using Visual Studio Code as a full-blown IDE and had realized that with the right plugins it's possible to be fully productive on it. The only missing piece now was SQL Server, for which I would always need Windows. And then the SQL Server team announced that they've added support for multiple platforms as well and you can now run SQL Server on Linux.

This meant that I could now use Linux as my primary OS if I wanted, and that was an itch I really wanted to scratch. It was time to test where Linux stood as a primary operating system for me as a .NET developer. I've had one laptop running only Ubuntu for ages, but I just use that machine for surfing, browsing, watching YouTube and sometimes writing books or posts. Using Linux as a daily driver was going to be completely different. This time around, my goal was to find out if I could use Linux as my primary operating system.

With this goal in mind, I decided to look at various Linux Distributions and pick one for my work life. This post is more of a running diary of my experience.

Since my organization wasn't fully ready to move me to Linux (we're an all-Windows shop), I decided to get Linux on a VM and spend most of my work hours there for a few days before jumping in fully. Given that I have 8 GB of RAM and over 200 gigs of disk space with an i5 processor, I figured I could give the VM substantial horsepower and spend my work life inside it. And because this was going to be a work machine I wanted to try out distributions other than Ubuntu, which is what I've been using for years. Why? Because I wanted variety and spice in my life.

After looking at various Linux distros, these are the ones I shortlisted:

Mint Linux:

Apparently, this is the simplest version of Linux to hop on to when you move over from the Windows world. Under the hood it's Ubuntu, but it looks and feels more like Windows, which is why a lot of Windows users who move to Linux and are confused by Unity in Ubuntu like Mint better. For me, if I wanted an OS that looked and felt like Windows, I was already on Windows and could just stick to it; so Mint was not something that appealed to me.

Elementary OS:

I went and grabbed a copy of elementary OS and installed it on a VirtualBox VM, only to realize that with an 8 GB host and 4 GB given to VirtualBox, the OS was still slow and choppy. At the time, however, I wasn't aware how large an impact small settings like enabling 3D acceleration and GPU allocation can have on the overall speed of Linux in a virtual machine, so in all likelihood it wasn't elementary OS that was the issue but probably bad configuration on my part.

I read a few posts mentioning that elementary OS works much better with VMware Player (which is a free product for trial and personal use) than it does with VirtualBox, so I tried it on VMware Player and it was better; but since this was meant to be a work VM, using VMware Player for work-related VMs wasn't allowed by the VMware license anyway. So I dropped the idea and deleted the VM.

At the end of the day, if Mint looks like Windows, elementary is inspired by the Mac, and if I loved Macs I would get a Mac. So the choppy performance of elementary on VirtualBox, and the fact that it's inspired by the Mac, ruled it out as a distribution I would pick for myself at this point in time. There is a high chance I might have used it if the performance on VirtualBox had been better, and there is a good chance I'll revisit elementary sometime in the future because I genuinely liked and appreciated the user interface, but for this evaluation I moved on to other distributions.

Fedora:

I grabbed a copy of Fedora and got it installed, up and running in no time. The GNOME-based desktop is… for lack of a better word… extremely classy. The OS was fast and slick and worked extremely well. I was about to settle down with Fedora when I realized that the Chrome installation I had done on the OS just doesn't work. No errors. No warnings. Chrome just doesn't start. Actually, Chrome starts and then disappears. No windows. No screens. (I later encountered a similar issue on Ubuntu and fixed it by starting Chrome without the GPU and then disabling hardware acceleration in Chrome settings. For more details, see the 'Chrome Blackouts' section of this post, or read on.)

I later moved on to the .NET installation and realized that .NET Core gives an initialization error every time I try to do a "dotnet new". This is because Fedora 25 is not supported by .NET Core. Turns out there is a bug in .NET Core which makes it require version 52 of the ICU library, and Fedora 25 ships a higher version. Here is an unofficial fix, but I wasn't able to make it work; and after wasting hours on this I moved back to the familiarity of Ubuntu.

Ubuntu:

After having tried out three different distributions, I ran out of patience (and almost an entire day) and decided to eventually settle down in the known territory of Ubuntu. Unity is a controversial topic. Some folks love the UI, others can't stand it. I personally have no issues with it since I've used Unity for months on my home laptop and am happy with it. But having tried Fedora, I had also fallen in love with GNOME 3, and because this is Linux, there was nothing stopping me from running GNOME 3 on Ubuntu. So I did just that and grabbed GNOME 3 on top of Ubuntu after installing base Ubuntu. Of course, I could have fetched Ubuntu GNOME directly, but I like the manual way better because it lets me switch between GNOME 3 and Unity whenever I want to (or at each login!). I also love the Arc theme, so I decided to grab that and install it using the GNOME Tweak Tool. Eventually, however, with GNOME 3 I settled for the default Adwaita theme.

Note: Version 16.10 of Ubuntu somehow doesn’t seem to play nice with VMWare Player on my machine, and causes Kernel panics and the famous ‘CPU has been disabled by the Guest OS’ error. However, it worked fine with Virtual Box which is nice because Virtual Box was my preference for virtualization to begin with.

Long story short, at this point, I had the familiarity of Ubuntu, and the newness of the Gnome 3 User Interface that I experienced with Fedora. The best of both worlds:

So I was on Ubuntu with GNOME 3, but I was still far away from making this machine my daily driver. There were multiple other hoops I had to jump through to make it usable day to day.

Sound Card Issues:

With Ubuntu installed on my virtual machine, I realize that sound doesn't work with Ubuntu on VirtualBox. Turns out, after a certain version, VirtualBox doesn't seem to pick the right sound card drivers for the host and guest operating systems, and you need to pick them manually. For me, what worked was Windows DirectSound on the host and Intel HD Audio on the guest operating system.

I then go to the sound settings of Ubuntu and crank up the volume to maximum value allowed. Actually, I crank it up to 140% of what’s allowed:

Sometimes when I want the sound to go louder I have to go to the terminal and crank it up even more with the alsamixer command:

And then the sound works fine. The next thing I was going to need if I was going to use this machine on a daily basis was a stable browser like Chrome.

Chrome Blackouts:

I go ahead and grab Chrome and am just about ready to work when I see a blank black screen each time I start Chrome. To fix this I start Chrome without the GPU using the command:

google-chrome --disable-gpu

from my terminal window. Once Chrome starts, I disable "Use hardware acceleration when available" by going to Chrome Settings and then into the Advanced Settings of Chrome.

Note: This same fix works on Fedora, where the Chrome window disappears after you click on the Chrome icon.

Sluggish Speeds:

My VirtualBox VM is now up and running; I have a browser and sound, but the performance is still sluggish. I crank up the GPU memory to 128 MB and select 'Enable 3D Acceleration' in the VirtualBox settings, which considerably speeds up the virtual machine. I also grab the CompizConfig Settings Manager so that I can tweak animations, and I disable them to make the system move faster. This speeds up my VirtualBox VM considerably and makes it actually extremely usable.

But What About Email?

With the basic setup of the OS complete, my next concern is email. Because we use Office 365 at my organization and Exchange at my client's organization, I needed something that works seamlessly with Exchange Web Services. While Evolution comes pre-installed in Fedora, Ubuntu comes preloaded with Thunderbird, which, based on what I've read, doesn't work with Exchange services as of this writing. So I grab a copy of Evolution on my Ubuntu and configure my Office 365 emails with it.

Configuring Office 365 email was relatively easy, though Evolution does tend to lose your preconfigured accounts the first time you configure them. If that happens, open your process monitor, kill all Evolution threads and start fresh, and there is a high chance you might find your accounts back. I ended up creating the accounts thrice and then found them all when I killed the Evolution threads and started Evolution fresh. Then I deleted all of them and re-created a single fresh account. This was of course a one-time issue and things have been fine once the accounts were configured.

Configuring Office 365 accounts was easy. With on-premise Exchange accounts, however, things get a little more complex to troubleshoot. Because my client uses NTLM-based authentication and Evolution detected that as Kerberos, I kept getting the following error message:

The reported error was "No response: SPNEGO cannot find mechanisms to negotiate".

Finding out what the issue was here was mostly a hit-and-trial exercise: I tried basic authentication and that didn't work, so I moved to NTLM and that worked.

Side note: the lack of support for Exchange in mature email clients like Thunderbird, and the fact that you have to shell out $10 a year to get an Exchange plugin for Thunderbird, is a little disheartening. I have no issues with paying developers for the hard work they put in, but paying to accomplish something as simple as checking email when your entire OS is open source (and free) and every other app on your machine is open source is, for lack of a better word… a little… ironic. So I decided to grab Evolution, which supports Exchange for free out of the box, and battle out the issues. And it paid off. Evolution has been working well both with the Office 365 email account and with the Exchange email account, and I am actually starting to like it a whole lot.

For those of you who haven't used Evolution, the only thing I missed compared to Outlook was free-text search. Turns out Evolution has a very powerful advanced search and you can also turn on expression-based searches:

Visual Studio Code:

With everything else configured, I set out to load Visual Studio Code (the primary reason I started spending a day on making myself a Linux work VM). Getting Visual Studio Code itself is super easy. You just download the package and install it using the application manager. However, when I start Visual Studio Code I get a blank black screen. This reminds me of the black window in Chrome, so I go ahead and apply a similar fix. You just start Code without the GPU:

code --disable-gpu

But because we can't be doing this every time, we add it as an alias in our ~/.bashrc file (or in my case I just add it to my ~/.bash_aliases file, which .bashrc references; that just helps keep things clean):

alias code='code --disable-gpu'

Once you've added the line, you need to close your terminal and start it afresh for the alias to kick in. Caveats? First, you can't open Code from the icon in GNOME. Second, you can't do a "code ." and expect "." to represent the current folder you are in when working on the terminal. You need to open Visual Studio Code and then do a File / Open… which is not that bad.

Next, I follow these instructions to install .NET Core on Ubuntu 16.10. Then I install the usual plugins and I am in business:

And so, with the development environment in place we are now going to need a DB to work with.

SQL Server:

SQL Server installation was by far the smoothest. You just follow the instructions here and then you follow these instructions for installing the client tools. SQL Server claims to require 4 GB RAM but I barely notice any slowdowns post install and the DB has been running blazing fast. I’m actually really impressed with the DB performance thus far.

There are no UI tools like SSMS for SQL Server on Ubuntu, so I grab DBeaver and use that as a visual editor for DB design.

To be honest, the performance of DBeaver in a VirtualBox VM with 4 GB of RAM is extremely sluggish and it tends to slow down the entire VM. At the risk of offending and triggering Eclipse fans, it's a trend I've seen with a lot of other applications built on Eclipse. I then move to SQuirreL SQL, which is lightweight but only provides query capabilities and no drag-and-drop DDL capabilities.

I’m still looking for a visual database development tool but for now, between the command line, SquirreL and DBeaver I should be good.

And A Shared Folder with the Host OS:

If you’re going to run in a VM Mode you will probably want a shared folder with the host OS which you can mount automatically so that anything you save there is also available when you are not using Ubuntu. I do that by sharing a specific folder on my host OS with Ubuntu using Virtual Box settings:

And then I run into permission issues where I cannot access this folder from Ubuntu, which I solve by adding my current user to the vboxsf group.
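
A minimal sketch of that fix, assuming the shared folder was named "shared" in the VirtualBox settings (VirtualBox auto-mounts shares under /media/sf_<name> on Linux guests):

# add the current user to the vboxsf group so the auto-mounted share is accessible
sudo usermod -aG vboxsf $USER

# log out and back in (or reboot the VM) for the group change to take effect, then:
ls /media/sf_shared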

And I'm set for now, all ready to take my newly created VM for a spin. Because my VM is just 12 GB, I decide to take a full backup of my VDI file instead of taking a snapshot. My entire disk file, after installing everything I need, is about 12 GB, so it's still a file I can carry on a 16 GB drive.

My Overall Experience:

I've been a happy Linux user on and off on at least one personal laptop for over 15 years, and Linux has come a long way. But even today, every time I decide to spend a day with various Linux distributions to see where they are, play around with them or try to expand the scope of Linux in my life, I encounter some hurdles I have to jump and I eventually end up learning new things. That is what makes me angry at Linux sometimes. It's also what makes me love Linux most of the time. Let's just say it's a healthy relationship: the kind you have with your friends, wife or kids. :)

If you're an average office user installing Linux on a bare-metal modern-day laptop, Linux has indeed come a long way, is very usable and your learning curve might be minimal. You can probably get started almost as easily as you do with Windows. But if you plan on using Linux as a primary work machine (especially in a virtualized environment because your office runs on Windows), there is a high chance you'll hit a few bumps. Even so, between the dozen-odd distributions of Linux, a couple of virtual machine solutions and a couple of dozen workarounds, it should not take you more than a couple of hours to be completely up and running, and that (genuinely; without the slightest tone of sarcasm in my voice) is not such a bad thing at all.

My overall experience after spending a day playing with Linux with the idea of using it as my primary work environment is that it has come a long way and I encourage each one of you to try it for a month as a primary work OS; even if it happens to be on a VM! With Visual Studio Code, .NET and SQL Server all running on it, there should not be any reason why you aren’t taking Linux for a test drive.

On a different note, I am loving the new Microsoft for making things like this even possible. It takes a lot of courage for a company of Microsoft's size to embrace a truly open world where everything they build, from development platforms to development tools and even databases, runs on multiple platforms.

Here is a big thumbs-up to both the DotNetCore team and the SQL Server team for embracing openness. When we have open choices like these for developers, everyone wins. I’m genuinely impressed with what I have experienced and I’ve been on this VM as my primary machine for a week and nothing has broken. Pure Awesomeness.

Update: After using the virtual machine for a few days I finally took the plunge and decided to move to Linux on my work machine. All the GPU issues I had to work around in this post are non-existent on a bare-metal install, and the same Ubuntu + Gnome combination has been working really well for me over the past few days.

posted on Tuesday, January 24, 2017 12:06:23 PM UTC by Rajiv Popat  #    Comments [0]
Posted on: Tuesday, November 1, 2016 by Rajiv Popat

Getting Started With Aurelia And Type-Script on Visual Studio 2015

If you're interested in and working with JavaScript frameworks, you've probably heard of Aurelia by now. It's a compelling competitor to frameworks like Angular and React. While getting started with Aurelia itself seems pretty straightforward, getting Aurelia working with TypeScript and making it all work in Visual Studio 2015 has its own share of hiccups.

The Aurelia team provides starter projects they call skeletons that you can download to get up and running really quickly. However, when I tried using them, the skeletons seemed to have issues which were both time-consuming and frustrating to resolve. Even the skeleton that was supposed to run with .NET MVC (and had a ".sln" solution file) would not compile without errors. And these skeletons come with a lot more than what you would like to have when you are just trying to get an initial hold of Aurelia and TypeScript. This left me with no other option but to start fresh and create my own basic skeleton where I can try out different Aurelia features.

If you’ve tried to get started with Aurelia + Typescript and you are a .NET programmer who lives inside Visual Studio, the goal of this post is to get you up and running with Aurelia and Typescript inside Visual Studio 2015.

To begin with, you're going to need TypeScript working inside your Visual Studio 2015. The simplest way I've found to do this is to just uninstall older versions of Visual Studio 2015 and install Visual Studio 2015 Update 3 from scratch. You could use that link, but if you have an MSDN subscription you are better off downloading an offline ISO from there and using that, which is what I did. Initially, I tried an in-place update of Visual Studio 2015, and the installer kept crashing for some reason (this, of course, could be because I was on a weak Wi-Fi connection). The MSDN ISO (a 7 GB download) worked smoothly after an uninstall of my existing Visual Studio followed by a fresh install.

With Visual Studio 2015 Update 3 (with Core 1) loaded, you're also going to need TypeScript support inside Visual Studio so your TypeScript files are compiled and converted to JS files each time you save them. To do that, grab the Visual Studio TypeScript plugin from here and install it. You will also need Node Package Manager (NPM) working on your machine, and the simplest way to do that is to download and install Node.js.

With that done we’re ready to start our first hello world project with Typescript + Aurelia.

As I said before, the easiest way to do this would have been to download and use the skeletons, but given that the skeletons provided by the Aurelia team didn't work for me, I was left with no option but to build my Aurelia app by hand. Honestly, building your first app by hand actually works out better, because it gives you a fresh insight into many underlying concepts that you would typically not have to pick up if you used a ready-made skeleton instead.

Since we're going to be working with Visual Studio 2015 as our IDE of choice, let's create a blank ASP.NET Web Development project inside Visual Studio in a folder of your choice. Open the solution file and keep the solution open in Visual Studio as you proceed with the steps below.

Once the project is created, start a command prompt and go to the specific location where you created the project. Note: go inside the project folder (not the folder which contains the .sln file, but the one that has your web.config file):

Once there start by typing in the following commands:

npm install jspm
jspm init

JSPM is the JavaScript Package Manager, which lets you fetch and use the various JavaScript modules you will need to get started with Aurelia. In the steps above we switch to the project folder (shown in the screenshot + code snippet) and do an NPM install of JSPM, which fetches JSPM onto your machine. Once there, we initialize JSPM in our project folder (jspm init), where it will create a new project, asking you a few basic questions:

We select the default answers by just hitting Enter, except when picking the transpiler, where we will use TypeScript instead of the default transpiler JSPM uses (Babel).

Once that is done we continue with the rest of the defaults and finish our "jspm init". We then install the required underlying frameworks in our project by doing:

jspm install aurelia-framework aurelia-bootstrapper bootstrap

This should pull all the files pertaining to the Aurelia framework, the Aurelia bootstrapper and the Bootstrap framework (which are the very basic things we need to start a simple web application with Aurelia and TypeScript). If all goes well, your folder structure should look like this inside Visual Studio with "Show All Files" selected in Solution Explorer:

We now need to start writing code for our project. The first thing we do is right-click the config.js file in Solution Explorer and select "Include in Project".

Once done, we open the file and add the highlighted line:
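
For reference, with a jspm-generated config.js the usual way to do this is to add a wildcard entry to the SystemJS paths map so that bare module names resolve from the src folder. Your generated file will differ in the details; the "*": "src/*" entry is the line being added here:

System.config({
  defaultJSExtensions: true,
  transpiler: "typescript",
  paths: {
    "*": "src/*",                         // the line we add: resolve our modules from src
    "github:*": "jspm_packages/github/*",
    "npm:*": "jspm_packages/npm/*"
  }
});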

This tells the transpiler to look for code in the "src" folder. Of course, we don't have that folder in our solution yet, so we create it by right-clicking in Solution Explorer and adding a new folder called src. Now we have a place to write our Aurelia code. But any web server we use to host the code will need a startup file to begin running the application, which is usually index.html. So let's code the following index.html by hand:

<!DOCTYPE html>
<html>
  <head>
    <title>Aurelia</title>
  </head>
  <body aurelia-app>
    <script src="jspm_packages/system.js"></script>
    <script src="config.js"></script>
    <script>
      System.import('aurelia-bootstrapper');
    </script>
  </body>
</html>

This is the simple, standard index.html file most Aurelia applications will typically need. We are adding two JS files that Aurelia needs. The first is system.js and the second is where our configuration is stored (config.js). We then use System.import (from system.js) to import the aurelia-bootstrapper.

Also note the "aurelia-app" attribute on the body tag. A few important pieces are getting connected in the above code, and Aurelia is using convention to connect them. The index.html tells Aurelia to use the config.js file. And as we've seen before, the line we added in config.js tells Aurelia that the custom Aurelia code lives in the "src" folder. The "aurelia-app" attribute tells Aurelia to look, by default, for "app.js" as an entry point. Note: we haven't specified app.js anywhere; the aurelia-app attribute itself (by convention) tells Aurelia to use app.js by default. You can of course override the convention, but that's for another post. Right now, let's just create an app.js in the src folder.

We could drop an app.js file inside src, but remember we are planning on using TypeScript throughout the project, so instead of an app.js we will use "app.ts". We will work on the TypeScript file (app.ts) and let Visual Studio generate the .js file each time we save the ".ts" file. So let's right-click the "src" folder, add a TypeScript file and call it "app.ts". Because TypeScript provides added intelligence and compile-time validations, it needs what we call typing files, which allow Visual Studio to validate your TS code. Which is why Visual Studio will ask you this:

Going ahead, we will grab our typing files manually, so say no to the above for now and proceed. We're going to talk more about typings later in this post.

In your blank app.ts add the following lines:

export class App
{
    Message: string;
    constructor()
    {
        this.Message = 'Hello World';
    }
}

Note that in the above code we have a simple TypeScript class (this will translate to a JS function) and a string variable called Message. In the constructor we assign a default value to the field. This app.ts will get translated to app.js when we save it and will act as a view-model. Now that we have a view-model, let's go ahead and make a view. Aurelia views are simple HTML pages surrounded by a template tag. So inside the "src" folder let's add a new app.html with the following lines:

<template>
    Message received from View-Model: ${Message}
</template>

The Message that we set in the view-model should now flow to the view, and when you compile and run the project at its root, you should now see:
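
With the code above, the rendered page is just the interpolated template, so the output should read something like:

Message received from View-Model: Hello World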

Congratulations! You have your first Aurelia project with TypeScript running. Now let's do something more meaningful and try to create a screen that adds customers to a list of customers. To do that, let's start by making a Customer class by adding a "Customer.ts". For now, let's modify the blank Customer class so that it has a CustomerName attribute and looks like this:

export class Customer
{
    CustomerName: string;
    public constructor()
    {
    }
}

We now need to use the Customer inside app.ts and then create a function inside app.ts that allows us to add a customer to the list of customers. To do that we modify our app.ts:

import { Customer } from './Customer';
export class App
{
    CurrentCustomer: Customer;
    Customers = new Array<Customer>();
  
    constructor()
    {
    }

    addCustomer()
    {
        if (this.CurrentCustomer)
        {
            this.Customers.push(this.CurrentCustomer);
            this.CurrentCustomer = new Customer();
        }
    }
}

In the above code we use import to bring the Customer class into our App class so that we can use it in our code. Then we create an array of Customers (which will hold the list of customers) and a specific Customer (which the user will add using the UI). The "addCustomer" method adds the current customer to the list. To make sense of all of this, let's create a UI front end which has a textbox and a button called "Add Customer" that adds the customer whose name you type in the textbox to a list of customers represented by a "UL". The final view (app.html) looks like this:

<template>
    <form submit.trigger="addCustomer()">
        <input type="text" value.bind="CurrentCustomer.CustomerName" />
        <button type="submit">
            Add Customer
        </button>
        <ul>
            <li repeat.for="Customer of Customers">
                <span>
                    ${Customer.CustomerName}
                </span>
            </li>
        </ul>
    </form>
</template>

Notice that in the code above I have a form whose submit triggers the addCustomer method, which we wrote in our view-model. There is a textbox which we "bind" to the CustomerName of the current customer, which again is defined in our view-model. We have a simple submit button and a UL where the LIs repeat for every Customer in the "Customers" array, which is defined in our view-model. The UI looks like this:

As we type the name of the customer and click the add button the customers get added to the list:

And we can do this with multiple customers:

The binding of the textbox to CurrentCustomer.CustomerName ensures that the value passes from the view to the view-model. Each time addCustomer is called, we create a new Customer object, and hence the textbox blanks out after the existing customer is added to the "Customers" array, which is bound to the UL using a "repeat.for" loop.

So far so good. Everything we’ve done thus far, compiles, builds and runs.

However, as you start going deeper into Aurelia, you will realize that you need to use more complex concepts like dependency injection (where you would like to inject services into your view-models). The starter project we have created works, but isn't fully ready to handle imports because the typing files are missing. Remember we said we'd discuss typings later in this post? This is the part where we need to address typings to move ahead.

To use virtually any advanced feature in Aurelia you will have to import the Aurelia framework in your code. For example, if you want to use Aurelia's dependency injection, the code to do so would look like this:

import { inject } from 'aurelia-framework';

Put that code in your app.ts and you'll immediately notice that Visual Studio starts complaining:
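
The complaint is the TypeScript compiler's missing-module error, which reads roughly like this:

Cannot find module 'aurelia-framework'.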

And this is because Visual Studio knows nothing about the Aurelia framework, even though we did a "jspm install aurelia-framework" right when we started. We did the install at the command prompt using jspm, but Visual Studio (and TypeScript) still require typing files for the Aurelia framework before they let you import specific components of the framework inside your TS files. The simplest way to grab the typing files is to add a "typings.json" file in your project root with the following lines:

{
  "name": "AureliaHelloWorld",
  "dependencies": {
    "aurelia-binding": "github:aurelia/binding",
    "aurelia-bootstrapper": "github:aurelia/bootstrapper",
    "aurelia-dependency-injection": "github:aurelia/dependency-injection",
    "aurelia-event-aggregator": "github:aurelia/event-aggregator",
    "aurelia-fetch-client": "github:aurelia/fetch-client",
    "aurelia-framework": "github:aurelia/framework",
    "aurelia-history": "github:aurelia/history",
    "aurelia-history-browser": "github:aurelia/history-browser",
    "aurelia-loader": "github:aurelia/loader",
    "aurelia-logging": "github:aurelia/logging",
    "aurelia-logging-console": "github:aurelia/logging-console",
    "aurelia-metadata": "github:aurelia/metadata",
    "aurelia-pal": "github:aurelia/pal",
    "aurelia-pal-browser": "github:aurelia/pal-browser",
    "aurelia-path": "github:aurelia/path",
    "aurelia-polyfills": "github:aurelia/polyfills",
    "aurelia-route-recognizer": "github:aurelia/route-recognizer",
    "aurelia-router": "github:aurelia/router",
    "aurelia-task-queue": "github:aurelia/task-queue",
    "aurelia-templating": "github:aurelia/templating",
    "aurelia-templating-binding": "github:aurelia/templating-binding",
    "aurelia-templating-resources": "github:aurelia/templating-resources",
    "aurelia-templating-router": "github:aurelia/templating-router"
  },
  "globalDevDependencies": {
    "angular-protractor": "registry:dt/angular-protractor#1.5.0+20160425143459",
    "aurelia-protractor": "github:aurelia/typings/dist/aurelia-protractor.d.ts",
    "jasmine": "registry:dt/jasmine#2.2.0+20160505161446",
    "selenium-webdriver": "registry:dt/selenium-webdriver#2.44.0+20160317120654"
  },
  "globalDependencies": {
    "url":
"github:aurelia/fetch-client/doc/url.d.ts#bbe0777ef710d889a05759a65fa2c9c3865fc618",
    "whatwg-fetch": "registry:dt/whatwg-fetch#0.0.0+20160524142046"
  }
}

This will provide details of practically all the Aurelia typing files we are going to need, now and in the future. Once you have created and saved this file, go to the command prompt, navigate to the folder that has the typings.json file (i.e. the same folder that holds your web.config) and type:

npm install typings -g

This will install the typings module globally. Now, to grab the relevant typing files based on your typings.json, type:

typings install

Now we've fetched the typing files, but Visual Studio is still blissfully unaware of the fact that we've pulled the typings. You should also see the "typings" folder in your Solution Explorer. However, to make Visual Studio aware of the typings, we need to add a typing definition file inside our src folder (the one our transpiler is watching). We can call this file anything as long as it has a ".d.ts" extension, but for now we'll call it "main.d.ts" and place it inside the src folder. If you look inside the typings folder, you'll notice it already has a typing definition file called "index.d.ts" which references all the necessary Aurelia files; so if our "main.d.ts" just references that file we should be done. Let's go to our newly created blank "main.d.ts" (inside the src folder) and add this line:

/// <reference path="../typings/index.d.ts" />

With this done we now have the typings referenced properly, and Visual Studio should stop throwing the "cannot find module 'aurelia-framework'" error; that specific error should be fixed. However, when you now fire a build you should see dozens of these two errors:

Build:Cannot find name 'Promise'.
Build:Cannot find name 'Map'.

This is because the Aurelia typings internally use Promise and collections. To fix these errors we can use the NuGet package manager inside Visual Studio and install the TypeScript definitions for ES6 promises and collections. The commands to do that (inside the Visual Studio NuGet Package Manager Console) are:

Install-Package es6-promise.TypeScript.DefinitelyTyped

Install-Package es6-collections.TypeScript.DefinitelyTyped

Once you do that, and once the typings for promises and collections are installed, your build should compile successfully. However, if you start using advanced features like dependency injection you will encounter some more build errors. For example, let's modify your "app.ts" to use dependency injection:

import { Customer } from './Customer';
import { inject } from 'aurelia-framework';

@inject(Customer)
export class App
{
    CurrentCustomer: Customer;
    Customers = new Array<Customer>();
  
    constructor(injectedcustomer)
    {
           
    }

    addCustomer()
    {
        if (this.CurrentCustomer)
        {
            this.Customers.push(this.CurrentCustomer);
            this.CurrentCustomer = new Customer();
        }
    }
}

Notice the newly added lines (the inject import, the @inject decorator and the constructor parameter), which use Aurelia's out-of-the-box dependency injection. In other words, Aurelia automatically creates an object of the Customer class and passes it into the constructor. However, the moment you actually do this and hit a build, you should see a compilation error:

Build:Experimental support for decorators is a feature that is subject
to change in a future release.

Set the 'experimentalDecorators' option to remove this warning.

To overcome this error you will need to add a new tsconfig.json in your project root with the following lines:

{
  "compilerOptions": {
    "noImplicitAny": false,
    "noEmitOnError": true,
    "removeComments": false,
    "sourceMap": true,
    "target": "es5",
    "experimentalDecorators": true
  },
  "exclude": [
    "node_modules","jspm_packages"

  ]
}

The experimentalDecorators value of true ensures that decorators like inject are allowed. The exclude entries for node_modules and jspm_packages ensure that the TypeScript compiler skips those folders when it fires a build. Fire a build now and it should succeed. Run your code and it should work just as before, because we aren't doing anything in particular with dependency injection here. In fact, it's actually a bad example of dependency injection, but I included it in this post because the post covers the setup of a starter project that lets you try out and learn everything Aurelia has to offer while using it with TypeScript inside Visual Studio 2015, so adding the right "tsconfig.json" and getting the typings upfront is a good idea (even if you are not using dependency injection or other advanced Aurelia concepts).

I honestly believe that while the Aurelia team is doing an amazing job with the documentation and videos for Aurelia itself, mixing Aurelia with TypeScript and getting it all to run on Visual Studio 2015 can turn out to be a bit daunting for someone starting out on their Aurelia + TypeScript journey, because there is no single place to get started. It would be really nice not to have to go through so many steps just to set up a basic development project where you can try out and learn the features Aurelia (with TypeScript) has to offer while working inside Visual Studio.

I know you can create projects using the Aurelia CLI tools, but even those had similar typings-related issues to the ones I highlighted in this post, and getting those to work was an equally daunting task. Now that I have been working in Aurelia for a few days, I can take a skeleton and make that work too, but as far as I am concerned, the learning curve to get into Aurelia itself has been much lower than the learning curve required to get into Aurelia with TypeScript and make it all work inside Visual Studio. All I can do is hope that the Aurelia team builds some more documentation around getting started with Aurelia + TypeScript. In the meantime, this post should get you on your way.

posted on Tuesday, November 1, 2016 4:03:18 PM UTC by Rajiv Popat  #    Comments [3]
Posted on: Saturday, October 8, 2016 by Rajiv Popat

Practical IoT Projects for Regular Nerds - Part 1.

There have been a lot of conversations around IoT lately. As someone who majored in accounting and builds business and financial applications for a living, I had a lot of excitement and some reservations about starting my IoT journey. I mean, I am just a regular nerd without any electronics background. Should I be playing with microcontrollers and live current? After a few weeks, I'm happy to announce that the journey has been fun and I think it has been a journey worth sharing with you.

The basic underlying idea is to control things over the internet, hence the need for a programmable chip or microcontroller: a tiny independent board with a programmable chip that can run code in an infinite loop. The code controls the chip and the chip controls the devices or things it is connected to.

The "control" in the microcontroller can be as simple as turning an LED on and off, or as complex as building a smart home, with lights, fans, entertainment systems and water pumps controlled using your code.

Most articles and YouTube IoT examples out there are either way too complex, involving complicated circuits and code, or way too simplistic and impractical, where someone shows you how to make an LED blink with your code; which, fortunately, is a good start, but unfortunately does not allow you to do anything practical with your microcontroller, which then puts your microcontroller in the same category as your gym membership: something you own but don't actually use.

My goal with this series of posts is to get you started on IoT and provide enough insight on the topic to enable you to build something real and practical with it. The goal is also to take you to a point where IoT goes from yet another buzzword that electronics guys should be concerned about, to a practical, real, affordable and simple tool you can use to build useful projects.

In the first post we’ll cover some very basic concepts around the Arduino chipset (which is the micro-controller we are going to use for our IoT experiments) and write some basic code that runs on the chipset.

In the posts that follow we will attach some interesting modules to the Arduino and write code to control them. Going ahead, we'll get these modules and the microcontroller connected to your home Wi-Fi and show you how you can control the microcontroller (and the modules connected to it) over the internet; and we'll finally move on to working with real, live electrical devices like light bulbs and fans and controlling those with code.

All circuits we build in the process will be open sourced. All code we write in the process will also be open sourced and posted here on this blog for you to use.

When I started my IoT journey a few weeks ago, I knew nothing about physics (or electronics) other than some basics I had picked up in high school, most of which I had never paid any attention to or had forgotten over time. Long story short, I'm just a regular nerd who writes business applications / CRUD screens for a living, and hence this series of posts doesn't require you to know physics or electronics to start. You will need to go out and buy a couple of cheap gadgets if you want to try some of these examples yourself, but your overall investment will be less than 25 dollars.

And as a final disclaimer, I have never officially studied electronics, so everything I write about here is just a regular nerd trying to connect wires and have some fun. If something explodes (or if you cook a chipset or two) the responsibility is all yours!

Sounds good? Let's begin.

To start playing around with IoT we're going to write code that controls devices, so you'll have to shell out some money and buy some basic hardware. Even though there are platforms like the Raspberry Pi which make IoT much simpler, let's start by buying a much cheaper microcontroller, which gets you a deeper understanding of the circuits you will be making, makes a smaller hole in your pocket and helps you stay away from having to run an entire Linux clone on your board the way a Raspberry Pi does.

Here are a few things you may want to go out and buy if you want to follow along:

  1. An Arduino UNO chipset - You can buy the original one or a cheap clone. I got one from Amazon for about $7.
  2. A breadboard - so that you can connect wires and devices together without needing soldering equipment. I got mine from Amazon for about $2.
  3. Some jumper wires - so that you can connect devices to your Arduino and your breadboard. I picked up a neat set of male-to-male, male-to-female and female-to-female wires at Amazon. Cost of the whole kit? About $2.
  4. A few LEDs (and a few resistors) - which are the very first things we will control with our code. You can grab about 150 resistors for just $1 and a few LEDs for $3.
  5. An ultrasonic transmitter - we will not be using it in the first demo, but we will need it in the third post, so it may be a good idea to get everything you need in one shot. I picked up mine for $2.
  6. A Wi-Fi module - that will eventually let you connect to the internet over your home Wi-Fi connection and control the Arduino over an internet connection. Price? About $3. Again, not something we will use in the first post, but something we will use pretty soon.
  7. A relay - that will let us control real, live current / devices using our Arduino. We're not going to use this for the next couple of posts, but at $2 a piece it's something you may also want to order along with everything else mentioned above.

For under $25 you have all you need to work on some meaningful IoT experiments and try out a few things.

We'll get started with the Arduino itself in this post and make it control a basic on-board LED before we do some more interesting things with it. I do realize this example is highly impractical, but it is really simple, gets you used to the Arduino IDE and gets you familiar with the environment. So let's get started by making an LED blink, but before that, let's begin by getting to know our microcontroller.

You can read article after article to understand the Arduino, but to start messing around with it, all you need to know is that it has two power output pins (from which connected devices can draw power): one is a 3.3V power output pin, the other a 5V power output pin. Apart from the power pins it also has ground pins labelled "GND", and all circuits that you make will usually end (complete) at a ground pin. Put simply, your wiring will start at a power pin and end at a ground pin, and you will control everything else that's in between (i.e. everything that's connected to your IO pins).

Depending on the device you are connecting to your Arduino, you can decide which power output you want to use. Most devices will have specs which tell you how much power they expect. Exercise some common sense when you pick the power pin: for example, don't connect your jumper wire to the 5V pin when the device you are connecting expects 3.3V; if you do, there is a high chance you will cook your device.

Apart from the two power pins there are also some digital pins where you write a high or a low. Think of writing a "high" as turning a switch on and writing a "low" as turning the switch off. Out of all these pins, pin 13 is special because it has an in-built LED attached to it that you can control with your code.

To write code we're going to get the Arduino IDE, which you can download from here and install using a simple installer. Once done, you connect your Arduino to the USB port of your laptop (which is one of the sources from which the Arduino gets its power). Of course, the Arduino itself can also work without a machine and you can hook it up to a battery or direct power, but for now, since we will be uploading our code from our machine to the chip, it makes sense to have it connected to our laptop using the USB port.

Once connected, go to the Arduino IDE and pick the port your Arduino is connected to. Usually the IDE detects this automatically, but if it doesn't you can try different ports and try to upload your code on each one till it succeeds. Code that runs on the microcontroller is referred to as a sketch, and each sketch has a loop which keeps running once the sketch is uploaded and the microcontroller is powered on.

An empty sketch looks like this.
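
In code terms, a brand-new sketch is just two empty functions, which the Arduino IDE generates for you when you create a new sketch:

void setup() {
  // put your setup code here, to run once:
}

void loop() {
  // put your main code here, to run repeatedly:
}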

The setup function is where you write code that runs only once. Anything you write in "loop" runs in an infinite loop while the microcontroller is powered on.

Now it's time to connect the dots and assemble everything we've read thus far. Remember when we were talking about writing highs (which is the same as turning the pin on) and lows (which is the same as turning the pin off)? We know pin 13 has an in-built LED, and if we write a high to pin 13 it should turn the switch on and the LED should glow. If we then write a low to the same pin, the LED should turn off. If we do that in the loop function and wait five seconds between the on and the off, we should have an LED that keeps blinking every five seconds from the moment you power on the microcontroller. Simple enough? That's exactly what we are doing in the code below:

int InBuiltLedPin = 13;

void setup() {
  // Let's set Pin 13 as Output Pin
  // Which means we will write high's and low's to it.
  pinMode(InBuiltLedPin, OUTPUT);
}

void loop() {
  // Turn on the Pin (and LED on board)
  digitalWrite(InBuiltLedPin, HIGH);
  // Wait for 5 Seconds
  delay (5000);
  // Turn off the Pin (and LED on board)
  digitalWrite(InBuiltLedPin, LOW);
  // Wait for another 5 seconds.
  delay (5000);
}

Verify your sketch (which is the same as compiling your code):

And once verified, upload your sketch to your microcontroller:

And if all has gone fine, you should see your microcontroller's LED blink on for five seconds (notice the red LED next to pin 13 light up in the picture below):

And off in another five seconds:

Of course, this will continue in a loop till you switch off the microcontroller by pulling the USB cable off, or till you upload a new sketch.

And with that, you have just controlled an on-board LED with your code. In the next post, we will attach an external LED to the Arduino using a breadboard and change the same code slightly to make that LED blink. If you've never worked with electronics before, the next post will introduce you to a lot of basic concepts like breadboards and resistors. And from there we'll be ready to build some real-life projects. So stay tuned for the next post.

posted on Saturday, October 8, 2016 6:33:14 PM UTC by Rajiv Popat  #    Comments [0]