Moving Projects off of CodePlex

On March 31, CodePlex announced its plans to shut down. Although this might be bad for competition in the hosted-repository space, I understand why Microsoft felt the need to shut down its service.

Personally, I had three projects hosted on CodePlex: HTML-IDEx, UpdateVB, and NotifierForVB. I have decided to archive the latter two in a versioned zip file. Since HTML-IDEx has had some fairly recent membership requests, I moved it to GitHub.

Making the choice to archive UpdateVB and NotifierForVB was a tough one. After all, they were both artifacts of my early years of programming and were used extensively in the software I created during that time. However, neither had received any updates in nearly seven years. Since I was quite an inexperienced programmer at the time, I also expect these libraries to be riddled with small issues and bugs.

UpdateVB is a unique case, since I actually started working on its successor in 2014. Thus, the decision to archive it was purely for historical purposes. If you are looking to use UpdateVB, please use its successor instead.

NotifierForVB, on the other hand, is a library that’s about 30 lines of code and strongly tied to the Krypton Toolkit (which is apparently now open source). Due to its simplicity and lack of usefulness, I’ve decided to archive it for historical purposes without a successor.

HTML-IDEx is my only project hosted on Codeplex that had real community collaboration. Further, it is the only open-source project owned by BrandonSoft. Due to a few membership requests in 2017, its popularity, and its tie-in with BrandonSoft, I’ve decided to package the source code and move it to GitHub. If git is not your thing, a versioned zip archive will also be hosted here.

It was a tough decision to officially deprecate and decommission two of my older projects, but it was in the best interests of those who may have wanted to use the libraries (definitely don’t use them!) and of me and my dwindling free time.

Thank you to everyone who supported, used, and helped create UpdateVB, NotifierForVB, and HTML-IDEx. You can view the projects in the Archive section of the code page.

Simple RPC With Thrift

A key aspect of building a server-based (cloud-based, in today’s lingo) service is communication. The (often remote) client needs to communicate with the server. Further, sometimes other processes on the server also need to communicate with each other. There are several ways to accomplish this, one of which is RPC.

Many budding programmers and hackathoners today will jump straight to “Why not just create a simple REST API using JSON serialization?” On the surface, there are many good things about this. I’m sure you’ve heard them all:

  1. REST is simple and stateless. The semantics of a REST API are widely understood and therefore easy to use once you’ve learned them. With a REST API on your server, it’s super easy to manipulate data and debug operations using tools like Postman.
  2. REST encourages readability. I’m all about readability everywhere in computing. Code should be readable, text should be readable, interfaces should be readable. It only makes sense, then, that an API should be readable as well. REST encourages this. It’s very easy to tell what GET api/users will do (see the sketch after this list).
  3. REST encourages readable serializations. Since the API endpoints are readable, it’s only natural to make the response readable as well. Today, most APIs accomplish this by serializing data in the easy-to-read JSON format. This way, data is easy to read and easy to manipulate.
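
As a minimal sketch of what such a readable endpoint might look like (using Flask purely for illustration; the route and data are invented for this example):

# Minimal illustrative REST endpoint (hypothetical data).
from flask import Flask, jsonify

app = Flask(__name__)

# A stand-in for a real user store.
USERS = [
    {'id': 1, 'name': 'Alice'},
    {'id': 2, 'name': 'Bob'},
]

@app.route('/api/users', methods=['GET'])
def get_users():
    # The endpoint name makes the intent obvious, and the JSON
    # response is easy to inspect in a tool like Postman.
    return jsonify(USERS)

if __name__ == '__main__':
    app.run(port=5000)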

This is all well and good. Hackathoners and newcomers should not feel discouraged from using REST/JSON to create an interface to their cloud application. There’s one problem that I’m sure you’ve noticed, however.

JSON is heavy. Even when you’ve implemented a distributed, load-balanced, fully cached, and 100% optimized service, the largest bottleneck is transmission time from the server to the client, especially if the response object is large. In fact, on all but one of the teams I’ve worked on at various companies, complaints about transmission time for huge serialized objects were extremely common.
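
To make the overhead concrete, here is a rough comparison using an invented record; the exact byte counts depend on the data, but the gap is representative:

# The same record serialized as JSON versus a fixed binary layout.
import json
import struct

record = {'user_id': 123456, 'score': 98.6, 'active': True}

as_json = json.dumps(record).encode('utf-8')

# Pack the same fields as a 4-byte int, an 8-byte double, and a 1-byte bool.
as_binary = struct.pack('<id?', record['user_id'], record['score'], record['active'])

print(len(as_json), 'bytes as JSON')   # 50 bytes
print(len(as_binary), 'bytes packed')  # 13 bytes

The field names are repeated in every JSON payload, which is exactly the readability-for-size trade-off at play.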

Also, REST is cumbersome. I need more than two hands to count how many times I’ve coded up, from scratch, an interface layer that handles REST-style requests and serves the response as JSON. At one point I even thought it a good idea to create a C# tool that generates a PHP REST interface when given the data schema. Why did I have to do this? Surely someone else has already done it!

Enter Thrift, which was created at Facebook and is now open source under the Apache license. For me, Thrift’s biggest features are how it solves the problems I’ve mentioned above. In order to use Thrift, you design a schema to represent both your objects and your service. The Thrift compiler then uses your schema to generate a client and a server for you, meaning that you no longer have to handle communication or serialization yourself.

As a contrived example, say that I wanted to make a simple service to get my server uptime. I first design a Thrift schema:

service UptimeService {
    i32 getUptimeInDays();
}

Then, I compile the schema using the Thrift compiler:

thrift --gen py uptime.thrift

I am going to use Python for both my client and server, so I use the --gen py flag. Thrift supports many other languages, however.
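
For uptime.thrift with no namespace declared, the compiler places its output in a gen-py directory with roughly the following layout:

gen-py/
└── uptime/
    ├── __init__.py
    ├── UptimeService.py   # client stub and server processor
    ├── constants.py
    └── ttypes.py          # generated types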

I can then use the generated Python libraries to write my server implementation:

import subprocess
import sys
sys.path.append('gen-py')  # make the generated code importable

from uptime import UptimeService

from thrift.transport import TSocket
from thrift.transport import TTransport
from thrift.protocol import TBinaryProtocol
from thrift.server import TServer

class UptimeHandler:
    def getUptimeInDays(self):
        # Naive parse of `uptime` output; assumes an "up N days" format.
        return int(subprocess.check_output('uptime').decode().split(' ')[3])

if __name__ == '__main__':
    handler = UptimeHandler()
    processor = UptimeService.Processor(handler)
    transport = TSocket.TServerSocket(port=9090)
    tfactory = TTransport.TBufferedTransportFactory()
    pfactory = TBinaryProtocol.TBinaryProtocolFactory()

    server = TServer.TThreadedServer(processor, transport, tfactory, pfactory)
    server.serve()

And I also use the generated Python libraries to write my client implementation:

import sys
sys.path.append('gen-py')  # make the generated code importable

from uptime import UptimeService

from thrift.transport import TSocket
from thrift.transport import TTransport
from thrift.protocol import TBinaryProtocol

def main():
    transport = TSocket.TSocket('localhost', 9090)
    transport = TTransport.TBufferedTransport(transport)
    protocol = TBinaryProtocol.TBinaryProtocol(transport)
    client = UptimeService.Client(protocol)

    transport.open()  # the transport must be opened before making calls
    print("Server uptime: {} days".format(client.getUptimeInDays()))
    transport.close()

if __name__ == '__main__':
    main()

Of course, this client implementation only works when run from the server itself. If I wanted to get the uptime remotely, I would just change ‘localhost’ to my domain name.

And that’s it! All of the networking details and serialization are handled by Thrift. Of course, a big portion of this post went on about how JSON is too big and too heavy. If one compares the performance of various Thrift-like libraries, one will notice that Thrift is definitely not the fastest. Libraries like Avro, Protobuf, and Cap’n Proto are faster and produce more compact output.

However, a major pain point for me across the years has been implementing the actual interface layer. Writing client code to issue an HTTP request and read the response from the server gets old after a while. This is something that Thrift handles for you. As you can see from the example above, Thrift handles all serialization, deserialization, and message-passing. All you have to worry about is defining a schema; Thrift gives you the rest for free!

Why I am Ditching

Over the past few years, I have written several papers that I am proud of. I wanted to put all of these papers in a place where people would be able to find them and learn about the arguments I was making. I found, which seemed to be exactly what I was looking for.

Since posting my papers to Academia, their views have skyrocketed. I have also been contacted by several people thanking me for my work. The interactions between papers, data, and readers that Academia provides are extremely interesting.

However, it seems that Academia has recently changed its mindset. When I first started using them, their landing page was a place to search for papers and see the most popular pieces of research. Recently, their landing page changed to something else: a simple prompt to register for an account.

[Screenshot:’s new landing page, emphasizing account creation over knowledge finding]

The new landing page sends a clear message:’s goals have changed. Instead of wanting their users to find free research in the quickest way possible, they want their users to make an account. They do not want to spread free knowledge; they want to increase the number of people who are registered on their website.

Further, now requires an account to download papers.

These are not values that I would like my papers to be associated with. Although Academia’s high search engine rankings are beneficial to the popularity of my papers, sacrifices must be made to ensure that my papers are free to access (both in the sense of cost and in the sense of accessibility). Blocking curious minds behind an account-creation process is not free.

In the coming days, I will be moving all of my papers to a specific “Papers” page of this website. Unless Academia changes its ways, I will no longer be using them to host my research.

The Importance of a Quality Office Chair

I write this as I sit in an office chair in a Bellevue hotel room. I try and try again to muster the mental strength to focus on the latest addition to the project I’m working on. However, for every bit of mental energy expended on the process of opening a new vim window, I spend about five seconds opening up a new tab and browsing to {unproductive site} [1]. This is an entirely unbalanced distribution of time, and after about an hour, I still haven’t added anything useful to the project.

Why is this? Out of all possible causes and correlations, I blame it on the chair I’m sitting in. The chair has a long seat base (the part that makes direct contact with the buttocks), meaning that in order for the sitter’s back to be pressed against the backrest, he must slouch or extend his legs in an uncomfortable fashion. Thus, I am slouching.

This slouching seems to make me inattentive and easily distracted. I am almost too comfortable to be focusing. If I wanted to focus, I would be sitting up relatively straight.

My theory about the chair reminds me of several different scenarios, which I will describe in a very Freudian fashion.

  1. The chair in my home office forces me to sit up straight. However, it lacks solid lower-back support. Thus, I am able to focus for a good hour or so, but as soon as I feel back pain, I begin to lose focus and slouch. This slouching causes me to remain inattentive to the task at hand.
  2. The seats of an airplane also force me to sit up straight. As much as people complain about the discomfort of airplane seats, I find them to be very comfortable; I think people complain about the lack of respect for a “personal bubble” more than anything. While I’m in an airplane, I am able to focus very well. However, this may be due to my lack of an accessible Internet connection.
  3. The chairs in my University library seem to encourage focus. However, there are chairs in which I do not focus – those in which I cannot reach the table comfortably.
  4. The chairs at my previous employment were very nice Steelcase chairs. While sitting in them, I was able to stay focused for several hours at a time without distraction. This could probably also be attributed to the work environment and the other focused workers who surrounded me.

Although causation is not clearly established, it is clear that the chair I am sitting in has some sort of correlation with my level of focus. It is obvious that a good chair is necessary for back health, but a good chair may also be just as important for focusing on the task at hand.

[1] Usually Reddit, Hacker News, or Wikipedia

What Over-Mocking Revealed to Me

Recently, I took the idea of “units” in code to the extreme. After researching several programming methodologies and learning about their advantages and disadvantages, I found myself very keen on the “unit” methodology: all functions and methods are their own functional “unit”, with their own unit tests and functionality. All of these functional units should be “unit-testable”, meaning that they should not rely on a long list of other functions being called first. From my understanding, writing code like this also contributes to the Clean Code methodology, although I have not read the hailed guide.

In making code that follows the “unit” methodology, one simply assumes that all other functions and methods besides the one in question work properly. That is, there should be no unexpected bugs or kinks within them. They are assumed to have complete test coverage and no special failure cases. Thus, when testing units of code, one can safely mock out all external functions (even those from within the same class, module, or project) and ensure that the code still follows the intended flow of logic.

When one attempts to write these tests, however, one quickly notices, as I did, that the tests are no longer testing specific input and output; only the code-flow is being tested. That is, the only thing being ensured is that the expected lines are executed. Although it is good to ensure that code flows as intended, such a test doesn’t actually cover any specific corner cases or check that output is as expected (a big problem).
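
As a sketch of what such a flow-only test looks like in Python (the functions here are hypothetical; unittest.mock stubs out every external call):

# A hypothetical unit and its flow-only (white-box) test.
from unittest import TestCase, main
from unittest.mock import patch

def fetch_user(user_id):
    raise NotImplementedError  # stand-in for a real data-access call

def save_user(user):
    raise NotImplementedError  # stand-in for a real persistence call

def sync_user(user_id):
    # The unit under test: fetch a user, save it, return it.
    user = fetch_user(user_id)
    save_user(user)
    return user

class SyncUserFlowTest(TestCase):
    @patch(__name__ + '.save_user')
    @patch(__name__ + '.fetch_user')
    def test_flow(self, mock_fetch, mock_save):
        mock_fetch.return_value = {'id': 42}
        result = sync_user(42)
        # Only the code-flow is verified: the expected calls happened
        # with the expected arguments. No real I/O is exercised.
        mock_fetch.assert_called_once_with(42)
        mock_save.assert_called_once_with({'id': 42})
        self.assertEqual(result, {'id': 42})

if __name__ == '__main__':
    main()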

If there exists a test where all references are mocked and only code-flow is tested, it must then follow that there is another test that tests input and output, ensuring that output is as expected and that certain input generates the proper output. After all, these are the important tests that ensure that the user will not be surprised when they provide a string as a parameter to a multiply method.
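
A sketch of that input/output companion test, with a made-up multiply function as the unit under test:

# Black-box companion: only inputs and outputs matter, including wacky ones.
from unittest import TestCase, main

def multiply(a, b):
    if not isinstance(a, (int, float)) or not isinstance(b, (int, float)):
        raise TypeError('multiply expects numbers')
    return a * b

class MultiplyBlackBoxTest(TestCase):
    def test_typical_input(self):
        self.assertEqual(multiply(3, 4), 12)

    def test_corner_cases(self):
        self.assertEqual(multiply(0, 99), 0)
        self.assertEqual(multiply(-2, 5), -10)

    def test_string_input_is_rejected(self):
        # Without the guard, 'ab' * 3 would quietly return 'ababab'.
        with self.assertRaises(TypeError):
            multiply('ab', 3)

if __name__ == '__main__':
    main()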

However, it seems extremely tedious to think of and write two types of tests for every unit. Not only does the programmer need to think of proper ways to mock references, but he also has to think about possible corner cases that may break his code. A StackOverflow post on the subject [1] and long sessions of thinking allowed me to come to a conclusion.

Since the tests where references are mocked require knowledge of the actual code, they are called white-box tests: it is easy to see what goes on in the “box” – the unit of code. The tests where only input and output are tested are known as black-box tests, because the test-writer shouldn’t care what goes on in the box, only that certain input results in certain output.

Given the requirements of both white-box and black-box tests, it is easy to see who should write each. White-box tests should be written by the developer himself at the time of writing the code. These tests ensure that no variable is used where a different one was intended, that all necessary code executes, and that nothing is left out. The creation of these white-box tests also gets the developer to think about possible problematic inputs.

When the white-box test-assisted code is complete, the code is then given to a quality engineer, who writes the black-box tests to ensure that all inputs, no matter how wacky, generate expected results. This ensures that the end-user (whether it be other developers, clients, or simply other functions within the same module) doesn’t get stuck on any unexpected behavior. The quality engineer is the perfect person to write these tests, as he doesn’t know how the code works on a technical level, only what it is supposed to do and how it should react to certain inputs.

This makes the idea of functional “units” a bit more understandable. Someone who knows the code should write tests to ensure that the flow is as intended, and someone unaffiliated should make sure input is as expected. Of course, on a single-developer team, both jobs are for that single developer.

With that being said, white-box tests are not always necessary. If a method is simple enough, as in get_first_elem_of_array(int* arr) -> int, it doesn’t need to have a white-box test associated with it. It is easy to see that the code functions as required. However, if a function is more complicated, a white-box test should be written.

White-box tests are something special, however. Since they are written based on the specific flow of code possessed by a functional unit, the test’s passing is entirely reliant on the code that was in place at the time the test was written. If the code in the function is changed, the test will fail. This may strike some as a bad thing; however, it forces the developer to design easily testable code, even when making just a small change. After all, small changes can indeed break things, so small changes should be tested. As long as the same functionality is maintained, the black-box tests should not fail.

I am executing this newfound understanding of functional units while working on PyCFramework, and so far, it has produced very high-quality, modular, extensible code. Although writing tests takes a large chunk of time, the process of writing tests has forced me to think about the design of my code, how it could be improved, and what mistakes I may have made while coding.