Posted on: Friday, November 4, 2022 by Rajiv Popat

Problems Needing Binary Search Are All Around Us.

The other day someone asked me why I was going through a master's curriculum at this age. Their argument:

Most of what you are learning is theoretical and useless. What’s the point of revising data-structures at this point in your life?

Then last week, someone at work was working on a specific problem. The last release had 21 check-ins, and one of them had introduced a bug. The team was trying to find which check-in had introduced it.

To find the erroneous check-in, they were applying each check-in, one at a time, to the old codebase and then running the application to see if the bug could be reproduced. When I heard they were doing that, I spontaneously responded with:

You're trying to use linear search to solve a problem that could be solved far more efficiently, and with much lower time complexity, using a simple binary search.

What I meant was that instead of applying each check-in one at a time and then running the code, the team could have taken the first 11 check-ins together, applied them on the server in a single shot and then tested. Whether or not the bug showed up in those first 11 check-ins, we would know which half of the range to look at next, and with that one attempt we would have eliminated 10 tests right away. That's 10 rounds of applying check-ins, firing a build, running the code and trying to reproduce the bug.

[Figure: finding the bad check-in by halving the range]


Put simply, you merge 11 check-ins in a single shot and run. If you find the bug, it lies in one of check-ins 1 through 11; if you don't, it lies in check-ins 12 through 21. You then repeat the process on whichever half (1-11 or 12-21) the bug lies in. That is exactly how the binary search algorithm works.
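In code, the same idea looks something like this. A minimal C# sketch, where ReproducesBug is a hypothetical helper that applies check-ins 1 through n to the old codebase, builds, runs, and reports whether the bug shows up:

    // Bisect 21 check-ins to find the first one that introduced the bug.
    // ReproducesBug(n) is a hypothetical helper: apply check-ins 1..n,
    // build, run, and report whether the bug appears.
    int low = 1, high = 21;
    while (low < high)
    {
        int mid = (low + high) / 2;   // 11 on the first pass
        if (ReproducesBug(mid))
            high = mid;               // the bad check-in is in low..mid
        else
            low = mid + 1;            // the bad check-in is in mid+1..high
    }
    Console.WriteLine($"Check-in {low} introduced the bug.");

Each pass halves the range, so 21 check-ins need at most 5 build-and-test rounds instead of 21.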

During the times of Covid, a colleague of mine pointed me to an article that showed how you can apply binary search to do effective Covid testing and save a bunch of testing kits. It's the exact same logic. If you have 100 blood samples, mix the first fifty samples and test. If you don't find Covid in the mixed sample, you've just saved yourself 50 test kits. If you do find Covid, mix the first 25 samples and do another test. Again, binary search.

Of course, we don't hand-write code for binary search in the real world these days. Most modern languages provide out-of-the-box search functions that use the most efficient algorithms under the hood.

But that doesn't mean we can't use binary search as a thought construct. Real-world binary search use cases exist in our day-to-day lives, and identifying them helps you solve those problems much faster, even when they have nothing to do with writing code. Problems that benefit from binary search are all around you. The real question is: can you spot them?

Oh, and yes, there is a point to revising some of your data structures at any stage of your life, because that knowledge is not as useless or impractical as you might think.

posted on Friday, November 4, 2022 9:47:07 AM UTC by Rajiv Popat  #    Comments [0]
Posted on: Monday, May 11, 2020 by Rajiv Popat

Automated Testing Of Microservices Without Test Cases.

I love my test cases. But real test cases are so much more than just input-output testing. Most folks I see writing unit tests are doing it wrong: all they write are basic input-output, end-to-end tests.

If all you're looking to do is write some simple input-output test cases to verify your Web API, checking that it returns the same responses to the same requests across the versions you deploy so that you can do a quick regression, you shouldn't even be writing test cases.

Diffy, from the Twitter team, actually does a better job than manually written test cases if this is all you're trying to achieve. Diffy takes a slightly different approach to input-output testing. If you have a fully tested, stable version of a microservice or Web API in production and are planning to roll out another version of the same service (we'll call the new version the release candidate), Diffy can automate a lot of basic regression testing without you having to write any test cases.

When you want to launch a new release candidate, Diffy lets you make the same call to both production and the release candidate and compare the differences, so you know if something is broken and what's different between the two versions.

Here is a simple example of how you can put Diffy to use for testing Web APIs without explicitly writing test cases:

To keep things simple, let's start with a Web API with one method, which we are going to assume is hosted in production. All the service does is multiply a number by five. We've written the code in a strangely convoluted way (by adding the same number five times) so that we can demonstrate refactoring and then testing again using Diffy. To begin with, our controller looks like this:

[Screenshot: MultiplyByFiveController]
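A minimal sketch of what that controller might look like (the class name is from the post; the route and method signature are my assumptions):

    using Microsoft.AspNetCore.Mvc;

    [ApiController]
    [Route("[controller]")]
    public class MultiplyByFiveController : ControllerBase
    {
        // Deliberately convoluted: add the number five times instead of
        // multiplying, so we have something to refactor later.
        [HttpGet("{number}")]
        public int Get(int number)
        {
            int result = 0;
            for (int i = 0; i < 5; i++)
            {
                result += number;
            }
            return result;
        }
    }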

Let's call this service 1 and assume this is the version of the service now running in production:

[Screenshot: service 1 running]

Now let's go ahead and deploy an identical release candidate of the service, this time on a different port. Let's call it service 2:

[Screenshot: service 2 running]

OK, so now we have two identical services running: one in production and one release candidate. Let's get Diffy up and running to see if their request and response models are identical. That will let us get started with some basic testing.

You can get the latest diffy-server.jar from here. Once I have the Diffy jar on my machine, I just need Java installed, and I can fire this command to point Diffy at both my services so it can start comparing them:

    java -jar .\diffy-server.jar -candidate="localhost:5008" -master.primary="localhost:5000" -master.secondary="localhost:5000" -service.protocol="http" -serviceName="MyService" -proxy.port=:31900 -admin.port=:31159 -http.port=:31149 -rootUrl='localhost:31149' -summary.email=''

Here, candidate is the release candidate: the newer version we are going to deploy and want to test.

Primary master is the service in production. Ideally, you would deploy two copies of the API in the production environment, because Diffy analyzes how request and response objects vary between two instances of the production API, and that lets it filter out noise. Say a particular attribute of the production service returned a random value on each call: Diffy would see different values for that attribute coming back from the two production instances and learn that the attribute is 'noise' whose differences should be ignored. This second copy of the production service is called the secondary master. To keep things really simple, though, we'll use the same instance for both, which is why the primary and secondary URLs in the command above point to the same endpoint.

Now, if all goes well, Diffy will have started a proxy where you can send production traffic; Diffy forwards that traffic to both production and the release candidate to see if they produce identical requests and responses. Since in the command above we asked Diffy to spawn the proxy on port 31900, let's send some traffic to that port using Postman:

[Screenshot: GET call on the Diffy proxy via Postman]

So I make a call to the Diffy proxy exactly as if I were calling the service itself. I pass a value of 8 for the service to multiply by 5. I also add a Canonical-Resource header, because that helps Diffy identify and group calls using that name (see the word "Multiplication" in the screenshot below? That's because of the Canonical-Resource header), but it's not mandatory. If you don't add it, Diffy just groups your calls under "Unidentified Endpoint". Once I make that call, I can hit the Diffy console to see if everything went all right.
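If you prefer the command line to Postman, the same call would look something like this (the URL path follows the controller sketch above, so treat it as an assumption):

    curl -H "Canonical-Resource: Multiplication" http://localhost:31900/multiplybyfive/8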

[Screenshot: Diffy dashboard]

The console above shows that Diffy received a request, routed it to both services, and found zero difference between their inputs and outputs (i.e. 0 diffs), so my release candidate is safe to deploy and will keep working with all production calls.

Now let's go to the release candidate and change the internal logic from adding the same number five times to multiplying by five directly (remember, we added the same number five times initially?):

[Screenshot: refactored release candidate code]
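The refactored action might look like this (same signature and response shape; only the internals change):

    [HttpGet("{number}")]
    public int Get(int number)
    {
        // Refactored: multiply directly instead of adding five times.
        // Request and response stay identical, so Diffy should see no diffs.
        return number * 5;
    }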

Note that I've only refactored the code internally; the outer signature and the request and response associated with the release candidate are still the same. Now I want to quickly test the release candidate for regressions to see if it's safe to send to production. So I make the same Postman call I made earlier and check the Diffy dashboard again:

[Screenshot: Diffy dashboard after refactoring]

Notice how Diffy now shows two calls; the second is the request we made after refactoring. Since the code changes were internal to the service, the request and response elements are still identical, so Diffy tells me nothing is failing and the refactoring is safe to take to production.

Now let's make a mistake on purpose. Say that in our release candidate we accidentally end up multiplying by 8 instead of 5:

[Screenshot: MultiplyByEightController]
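Sketched out, the deliberate bug is a one-character change:

    [HttpGet("{number}")]
    public int Get(int number)
    {
        // Oops: multiplying by 8 instead of 5. Same signature, wrong output.
        return number * 8;
    }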

Now that we've made a mistake that affects the output, one that clients consuming the release candidate will notice and be affected by, let's see if Diffy can catch it. I make the same Postman call on the proxy a couple more times to see how Diffy captures that:

[Screenshot: Diffy showing failing calls]

Because I made two more calls to the proxy, Diffy captured them, passed them to both production and the release candidate, noticed the two were no longer behaving identically, and started showing those calls as diffs. Two out of my four calls are considered failures, giving me a failure rate of 50%.

And just like unit test results, Diffy tells me what's wrong in the two calls that failed:

[Screenshot: Diffy differences]

I can drill down further and compare my request and response objects too. With huge response objects, this makes it easy to see exactly where the production and release candidate responses differ:

[Screenshot: Diffy differences drill-down]

Like I said, this does not replace writing good unit tests or test-driven development. But if all you're after is basic input-output regression coverage and the peace of mind that comes with it, you don't need to write test cases just for that. Why not use Diffy instead?

Of course, this is just the tip of the iceberg. Things get even more interesting when you automatically route production traffic to Diffy instead of testing it explicitly. A lot of the testing we did here can also be automated, but that's for another day. For now, this should tell you what Diffy is all about and get you playing with it!

Go ahead and give Diffy a try. See if it makes your automated input-output testing or basic integration testing that much simpler, so you don't have to write and maintain test cases for basic microservice regression.

posted on Monday, May 11, 2020 8:47:27 PM UTC by Rajiv Popat  #    Comments [1]
Posted on: Monday, November 11, 2019 by Rajiv Popat

Why Developers Should Care About gRPC

gRPC has been around for quite some time, but it has recently been integrated into .NET Core 3.0, and the tooling support for it is now first class.

If you write REST Web APIs / microservices using .NET Core, you send JSON data over HTTP requests. The service does its work and sends a JSON response back.

Until your request object reaches the service, the service waits and doesn't begin processing. Then it does its work and sends you a response. And until your browser or client receives the full response, there is not much the client can do but wait. That's the request-response model we've all grown up with.

We've had various takes on improving this basic design in the past. gRPC is Google's take on making RPC calls and leveraging data streams instead of the standard request-response model.

Without going into too much theory: gRPC uses Google's Protocol Buffers to generate code, which then sends data over specialized streams. These happen to be really fast and, as the name suggests, allow streaming of both request and response objects.

Streams are better because you can use the data as it comes in. A crude example? Instead of downloading a whole video before playing it, you stream it and watch as it downloads. gRPC takes the same approach with data. If this doesn't make sense yet, read on; by the time you've mucked around a bit with the code, it will.

For this example we'll use Visual Studio Code. The tooling is much simpler with Visual Studio 2019, but I prefer Visual Studio Code as my IDE of choice because it shows me what's going on under the hood. With Visual Studio Code, I use the following plugin for proto file syntax highlighting and support directly inside my IDE:

For syntax highlighting you can also use additional plugins like this one:

[Screenshot: proto syntax highlighting plugin]

I have .NET Core 3 installed on my machine. 

The first thing I do is:

  1. Generate a server project: This is like your Web API that is going to be consumed by the client.
  2. Generate the client project: This is your client that is going to consume the server and get the data by invoking an endpoint/method on the server.

I generate the server-side project using:

    dotnet new grpc -o server

The -o switch specifies the output path, creating a folder called 'server' where the gRPC service is generated.

I reference the following NuGet packages by hopping into the VS Code terminal:

dotnet add package Grpc.Net.Client
dotnet add package Google.Protobuf
dotnet add package Grpc.Tools

Here are the repositories of these three packages if you want to know more about them:

Grpc.Net.Client

Google.Protobuf

Grpc.Tools

Once I've stubbed the code out and added the necessary packages to the project, I build the server using:

dotnet build

And then I open the code with VS Code.

[Screenshot: gRPC server project structure]

Notice the Protos folder? That holds the proto files the .NET tooling generated for us. Think of proto files as WSDL files, if you come from the web services world. Proto files are specifications for your service; you write them by hand, primarily to describe your request objects, response objects and your methods. Here is the proto file that I wrote:

[Screenshot: the proto file]
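Reconstructed from the description below, it would look something like this (the service name and C# namespace are my assumptions; the messages and the rpc line follow the post):

    syntax = "proto3";

    option csharp_namespace = "Server";

    service UserService {
      // Takes a company name and streams the matching users back.
      rpc GetUserDetails (UserRequest) returns (stream UserResponse);
    }

    message UserRequest {
      string companyName = 1;
    }

    message UserResponse {
      string userName = 1;
      string firstName = 2;
      string lastName = 3;
      string address = 4;
    }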

The above proto file basically says:

  1. I have a request object with a "companyName" attribute that is first in the order of attributes. This is the request object because I will be passing in the company name whose users I want to fetch.
  2. I have a response object with these attributes: userName, firstName, lastName and address. The number next to each is the order in which these attributes will be serialized.
  3. I have a method that takes a company name and streams the list of users back to the client. This is indicated by the "rpc GetUserDetails (UserRequest) returns (stream UserResponse);" line you see above.
    GetUserDetails is the method that accepts a UserRequest and returns a stream of UserResponse objects. (By default, the stream behaves like an array of objects streamed to the client.)

Every time I add a .proto file, I add it to the server's project (.csproj) file:

[Screenshot: proto file entry in the server's .csproj]
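With Grpc.Tools referenced, this is a Protobuf item in the .csproj. Assuming the file is called user.proto, the entry would look like:

    <ItemGroup>
      <Protobuf Include="Protos\user.proto" GrpcServices="Server" />
    </ItemGroup>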

Once I've done that, I fire the build, and the Grpc.Tools package generates the C# files, the actual request and response classes, for me in the background. With Visual Studio 2019 this tooling is hidden under the hood; with VS Code it fires when you build your project using the "dotnet build" command.

Once I have the stubs, I can write the service. In the service, I fetch some hard-coded values from a function. Typically I would fetch these from a database or another service, but for now let's keep things simple and focus on gRPC.

Once I fetch the data, I push it back to the client. But instead of sending it in a response object that is pushed to the client all at once, making the client wait to "download" the entire response, I use gRPC to stream the data back one user at a time:

[Screenshot: the gRPC service implementation]
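A minimal sketch of such a service, assuming the proto above (GetUserFromDb stands in for the hard-coded fetch; in the real project it could read from a database):

    using System.Collections.Generic;
    using System.Threading.Tasks;
    using Grpc.Core;
    using Server; // namespace from the proto's csharp_namespace option

    public class UserDetailsService : UserService.UserServiceBase
    {
        public override async Task GetUserDetails(UserRequest request,
            IServerStreamWriter<UserResponse> responseStream,
            ServerCallContext context)
        {
            foreach (var user in GetUserFromDb(request.CompanyName))
            {
                // Stream each user back as soon as it is ready...
                await responseStream.WriteAsync(user);
                // ...and simulate per-user processing time on the server.
                await Task.Delay(1000);
            }
        }

        // Hard-coded stand-in for a database fetch.
        private static IEnumerable<UserResponse> GetUserFromDb(string companyName) =>
            new[]
            {
                new UserResponse { UserName = "jdoe", FirstName = "John", LastName = "Doe", Address = "221B Baker Street" },
                new UserResponse { UserName = "asmith", FirstName = "Anna", LastName = "Smith", Address = "4 Privet Drive" }
            };
    }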

Typically I would have just returned the users I get from GetUserFromDb back to the client, but that would generate a regular response, and I want to stream the users back, so I write them asynchronously to the response stream. Also notice the Task.Delay? That simulates any delay that might actually happen on the server as you process and return each user. It shows that each processed user is streamed back to the client even as the server continues processing additional users.

Each user that I write to the stream now flows back to the client and the client can start doing whatever it wants to do with it rather than waiting for the whole response to complete.

On the client side, I write a simple .NET console application that makes a call to the server. The only thing the client needs in order to generate the code that calls the server is a copy of the proto files, which contain the specs for the entire service. You would send your proto files to your clients or publish them somewhere.

I copy the same proto files to the client side and include them in my client project as "Client" files. Here is how I modify the project (.csproj) file:

[Screenshot: the client's .csproj file]
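The client's entry mirrors the server's, with GrpcServices set to "Client" so the tooling generates client stubs instead (again assuming the file is user.proto):

    <ItemGroup>
      <Protobuf Include="Protos\user.proto" GrpcServices="Client" />
    </ItemGroup>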

With the client project modified to include a copy of the same .proto files, I can fire a build. This generates all the stubs I need on the client side to call the server.

Once this is done I start writing the client.

[Screenshot: client-side code]
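A minimal sketch of such a client, assuming the proto above and a server listening on https://localhost:5001 (the ASP.NET Core default):

    using System;
    using System.Net.Http;
    using System.Threading;
    using System.Threading.Tasks;
    using Grpc.Net.Client;
    using Server; // namespace from the proto's csharp_namespace option

    class Program
    {
        static async Task Main()
        {
            // Accept any server certificate: for local development only.
            var httpHandler = new HttpClientHandler();
            httpHandler.ServerCertificateCustomValidationCallback =
                HttpClientHandler.DangerousAcceptAnyServerCertificateValidator;

            using var channel = GrpcChannel.ForAddress("https://localhost:5001",
                new GrpcChannelOptions { HttpClient = new HttpClient(httpHandler) });

            var client = new UserService.UserServiceClient(channel);
            using var call = client.GetUserDetails(new UserRequest { CompanyName = "MyCompany" });

            // Pull each user off the stream as the server writes it.
            while (await call.ResponseStream.MoveNext(CancellationToken.None))
            {
                var user = call.ResponseStream.Current;
                Console.WriteLine($"{user.UserName}: {user.FirstName} {user.LastName}, {user.Address}");
            }
        }
    }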

Notice how I am using DangerousAcceptAnyServerCertificateValidator in the code above? That's only for non-production use, because I am running this without a valid SSL certificate. In production you would get a real certificate.

See how I am using the while loop to iterate through the response stream? This lets me get each user from the stream as the server writes to it. And once I have the current item from the stream, I simply show that user on the console, as soon as the server has processed it and written the user object to the stream.

Now when I run the client, it calls the server, starts listening to the stream for the response, and starts dealing with partial responses as and when the server streams them.

[Screenshot: final output]

This is cool, because:

  1. The response is streamed over a channel that is much more optimized than JSON data sent over HTTP using REST. There are posts that suggest gRPC is 7x to 10x faster than JSON over REST.
  2. I can do the same streaming I did on the response object when sending data in the request object. So if you want to send huge data to the server but don't want to wait until the entire payload is uploaded before the server starts processing it, gRPC works for that too. Put simply, it supports two-way streaming.

The post is long, but the actual implementation is tiny and super simple. If you haven't tried gRPC before, I highly recommend downloading the entire sample project described in this post from here (it's listed under the HelloGrpc folder), running the server first and then the client, and mucking around with the code.

Given where the Visual Studio Code and Visual Studio tooling for gRPC is right now, I personally think it's really easy to pick up, and most Web API developers will benefit from having this additional arrow in their quiver.

If you are a web developer who writes APIs and cares about performance and payloads, you should care about newer, better ways for servers and clients to communicate than traditional REST-based Web APIs sending data over JSON.

We moved from XML to JSON because JSON payloads were smaller. gRPC is the natural next step: smaller payloads, better compression and two-way streaming of data.

Go on, give it a try. It’s super easy and well worth the few minutes you will invest in learning it. Chances are you can put it to good use right away and see huge gains in performance and end-user experience.

posted on Monday, November 11, 2019 1:54:17 PM UTC by Rajiv Popat  #    Comments [0]
Posted on: Saturday, April 4, 2015 by Rajiv Popat

Easier Than Fizz Buzz - Why Can't Programmers Print 100 to 1?

I recently wrote a book on how we as professionals need to stop whining and start focusing on developing our skills.

One data point I gathered for the book (but didn't include because it was too programmer-centric) was based on 22 job interviews for programming positions that I conducted for one of my clients over a period of two months.

Though this is hardly a large sample size, it did reveal some interesting facts about programmers. There were two seemingly disconnected questions that we asked at completely different points during the interview:

  1. Talk about a few things in your current organization or manager that you don't like / aren't happy with.
  2. Solve a simple programming problem (one that was much easier than the famous Fizz Buzz problem).

The goal was to study the correlation between whining and coding ability. Here's a subset of the data we collected (of course I wasn't carrying a stopwatch in the interviews, so the minutes have been rounded to 1-minute intervals):

[Scatterplot: whining vs. time taken to solve the problem]

Even though there are some exceptions in the data set, if you look at the graph, what's evident is a strong correlation between whining and the ability to solve ridiculously simple programming problems.

That was interesting. But what was even more interesting was the actual problem the candidates were asked to solve. Jeff Atwood wonders why programmers can't program when they can't solve Fizz Buzz; here's a problem that is much easier than Fizz Buzz, and yet:

  1. About 14% just couldn't solve the problem in less than 10 minutes, at which point we moved on to the next question.
  2. About 40% took more than 5 minutes to solve the problem and / or had to be corrected more than once.
  3. Only about 14% could solve this problem in 2 minutes or less.
  4. About 82% had to be corrected at least once before they solved the problem (which means they actually got it wrong the first time around!).

And the problem they were solving?

Print 100 to 1.

That's it.

That was the question.

The Catch?

You need to start with "for(int i=0;" and continue from there - you cannot write anything before "for(int i=0;" and you can't use two loops.

[Update: This is supposed to be a code snippet which already exists inside a function, so you can safely assume that inclusion of headers and declaration of the functions etc. is already done for you and you don't need to worry about that.]

Go ahead. Try it out. The answer really won't take you more than 2 minutes and should not take more than 4 lines of code, including the curly braces, but you can write as many lines as you want.

If you get the right output without mistakes in a reasonable amount of time we consider the answer correct.

Go on. Try it. And once you've solved it, go ahead and make it a part of your interview process, and watch countless programmers fumble, take really long pauses, struggle and even give up on the question.

Personally, I came across two programmers who, after more than 10 minutes of struggling with the problem, said they could not do it because the question was too complicated.

While this little experiment establishes a correlation between whining and skills, it doesn't establish any causation. In other words, the data doesn't tell us whether programmers whine because they don't have the skill set to do their job, or lack the skill set to do their job because they whine.

Maybe our programmers are not skilled because they whine a lot, or maybe they whine a lot because we've lowered the bar on what we expect from our programmers and don't demand or challenge them enough to practice even the most basic programming skills.

Either way, the sad reality of where the IT industry stands today is that you don't even need Fizz Buzz to differentiate a bad programmer from a good one. Just asking them to print 100 to 1 is usually good enough.

[Update: A lot of folks seemed to get the idea that this is a black-and-white question and that you can make a hiring decision based on it alone. It's not, and you can't. But it does give you an important data point for evaluating someone. For example, if someone clears this question and then fumbles at other basics, that might be a reason not to hire him / her. At the same time, if someone doesn't answer this but goes on to answer other, more complex algorithmic questions really well, you may decide to hire him / her. Putting the candidate at ease is also important here. Candidates should not be asked or pushed to solve the question in less than 2 minutes; the goal isn't to stress them out. The goal is to watch them think about and solve a simple problem. Merely present the problem and watch their approach and the time taken. Couple that with their tendency to whine, and the question provides some very useful insight into a person's approach to solving problems and their ability to ship.]

[Update: Thanks to the folks who were rightly annoyed by the confusing visualization / chart in the original post and pointed out how hard it was to read. Special thanks to Jacob for creating a scatterplot from the same dataset. The post has been updated with the scatterplot.]

posted on Saturday, April 4, 2015 8:11:10 PM UTC by Rajiv Popat  #    Comments [65]