Category Archives: Development

Moving Projects off of CodePlex

On March 31, CodePlex announced its plans to shut down. Although this might be bad for competition in the hosted-repository space, I understand why Microsoft felt the need to shut the service down.

I had three projects hosted on CodePlex: HTML-IDEx, UpdateVB, and NotifierForVB. I have decided to archive the latter two as versioned zip files. Since HTML-IDEx has had some fairly recent membership requests, I moved it to GitHub.

Making the choice to archive UpdateVB and NotifierForVB was tough. After all, they were both artifacts of my early years of programming and were used extensively in the software I created during that time. However, neither had received any updates in nearly seven years, and since I was quite an inexperienced programmer back then, I expect both libraries to be riddled with small issues and bugs.

UpdateVB is a unique case, since I actually started working on its successor in 2014; the decision to archive it was made purely for the historical record. If you are looking to use UpdateVB, please use update.net instead.

NotifierForVB, on the other hand, is a library of about 30 lines of code that is strongly tied to the Krypton Toolkit (which is apparently now open source). Due to its simplicity and limited usefulness, I've decided to archive it for historical purposes without a successor.

HTML-IDEx is my only CodePlex project that saw real community collaboration. It is also the only open-source project owned by BrandonSoft. Due to a few membership requests in 2017, its popularity, and its tie-in with BrandonSoft, I've decided to package the source code and move it to GitHub. If git is not your thing, a versioned zip archive will also be hosted here.

It was a tough decision to officially deprecate and decommission two of my older projects, but it was in the best interest of those who may have wanted to use the libraries (definitely don't use them!) and of my own dwindling free time.

Thank you to everyone who supported, used, and helped create UpdateVB, NotifierForVB, and HTML-IDEx. You can view the projects in the Archive section of the code page.

Simple RPC With Thrift

A key aspect of building a server-based (cloud-based, in today's lingo) service is communication. The (often remote) client needs to communicate with the server, and sometimes other processes on the server need to communicate with each other as well. There are several ways to accomplish this, one of which is RPC.

Many budding programmers and hackathoners today will jump straight to "Why not just create a simple REST API using JSON serialization?" On the surface, there is a lot to like about this approach. I'm sure you've heard it all:

  1. REST is simple and stateless. The semantics of a REST API are widely used and therefore easy to use once you’ve learned them. With a REST API on your server, it’s super easy to manipulate data and debug operations using tools like Postman.
  2. REST encourages readability. I’m all about readability everywhere in computing. Code should be readable, text should be readable, interfaces should be readable. It only makes sense, then, that an API should be readable as well. REST encourages this. It’s very easy to tell what GET api/users will do.
  3. REST encourages readable serializations. Since the API endpoints are readable, it's only natural to make the responses readable as well. Today, most APIs accomplish this by serializing data in the easy-to-read JSON format. This way, data is easy to read and easy to manipulate.

This is all well and good. Hackathoners and newcomers should not feel discouraged from using REST/JSON to create an interface to their cloud application. There’s one problem that I’m sure you’ve noticed, however.

JSON is heavy. Once you've implemented a distributed, load-balanced, fully cached, and 100% optimized service, the largest remaining bottleneck is transmission time from the server to the client, especially when the response object is large. In fact, on nearly every team I've worked on at various companies, complaints about transmission time for huge serialized objects were common.
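One reason is easy to demonstrate: JSON repeats every field name in every record it serializes. As an illustrative sketch (the record shape here is made up, and exact byte counts will vary):

import json

# 1,000 records, each carrying the same three field names over the wire
records = [{"user_id": i, "user_name": "user{}".format(i), "active": True}
           for i in range(1000)]

payload = json.dumps(records)
print(len(payload))  # tens of kilobytes, a large share of it repeated keys

A schema-based binary protocol does not need to ship the field names with every record, which is part of why the compact formats discussed below do so much better.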

Also, REST is cumbersome. I need more than two hands to count the number of times I've coded up, from scratch, an interface layer that handles REST-style requests and serves responses as JSON. At one point I even thought it a good idea to create a C# tool that generates a PHP REST interface from a data schema. Why did I have to do this? Surely someone else had already solved the problem!

Enter Thrift, originally developed at Facebook and now open source under the Apache license. For me, Thrift's biggest feature is how it solves both of the problems above. To use Thrift, you design a schema that describes your objects and your service. The Thrift compiler then uses that schema to generate a client and a server for you, so you no longer have to solve the communication or serialization problems yourself.

As a contrived example, say I want to make a simple service that reports my server's uptime. I first design a Thrift schema:

service UptimeService {
    i32 getUptimeInDays();
}
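This contrived schema defines only a service. Since a Thrift schema can also describe the objects your service passes around, a hypothetical extension (the UptimeInfo struct is my invention, not part of the example) might look like:

struct UptimeInfo {
    1: i32 days,
    2: string hostname
}

service UptimeService {
    i32 getUptimeInDays(),
    UptimeInfo getUptimeInfo()
}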

Then I compile the schema using the Thrift compiler:

thrift --gen py uptime.thrift

I am going to use Python for both my client and my server, so I use the --gen py flag. Thrift supports many other target languages, however.
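If memory serves (the exact layout can differ between Thrift versions), the compiler drops its output into a gen-py directory named after the schema file:

gen-py/
    uptime/
        __init__.py
        constants.py
        ttypes.py
        UptimeService.py

The code below assumes gen-py/uptime is on the Python path, so that import UptimeService resolves to the generated module.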

I can then use the generated Python libraries to write my server implementation:

import UptimeService
import subprocess

from thrift.transport import TSocket
from thrift.transport import TTransport
from thrift.protocol import TBinaryProtocol
from thrift.server import TServer

class UptimeHandler:
    def getUptimeInDays(self):
        # `uptime` returns bytes and its format varies; this quick-and-dirty
        # parse grabs the day count and assumes at least one day of uptime
        output = subprocess.check_output('uptime').decode()
        return int(output.split(' ')[3])

if __name__ == '__main__':
    handler = UptimeHandler()
    processor = UptimeService.Processor(handler)
    transport = TSocket.TServerSocket(port=9090)
    tfactory = TTransport.TBufferedTransportFactory()
    pfactory = TBinaryProtocol.TBinaryProtocolFactory()

    server = TServer.TThreadedServer(processor, transport, tfactory, pfactory)
    server.serve()

And I also use the generated Python libraries to write my client implementation:

import UptimeService

from thrift import Thrift
from thrift.transport import TSocket
from thrift.transport import TTransport
from thrift.protocol import TBinaryProtocol

def main():
    transport = TSocket.TSocket('localhost', 9090)
    transport = TTransport.TBufferedTransport(transport)
    protocol = TBinaryProtocol.TBinaryProtocol(transport)
    client = UptimeService.Client(protocol)

    transport.open()
    print("Server uptime: {} days".format(client.getUptimeInDays()))
    
    transport.close()

if __name__ == '__main__':
    main()

Of course, this client implementation only works when run from the server itself, since it connects to localhost. If I wanted to get the uptime remotely, I would just change 'localhost' to my domain name.

And that's it! All of the networking and serialization details are handled by Thrift. Of course, a big portion of this post complained about JSON being too big and too heavy, and if you compare the performance of various Thrift-like libraries, you will notice that Thrift is definitely not the fastest. Libraries like Avro, Protobuf, and Cap'n Proto are faster and more compact.

However, a major pain point for me over the years has been implementing the actual interface layer. Writing client code to issue an HTTP request and parse the response from the server gets old after a while. This is exactly what Thrift handles for you. As you can see from the example above, Thrift takes care of all serialization, deserialization, and message passing. All you have to worry about is defining a schema; Thrift gives you the rest for free!

Introducing download-sweeper

One of the biggest nuisances on any system of mine is a cluttered Downloads folder. In the modern Internet age, we download a ton of files. Checking right now, the contents of my Downloads folder total about 15 GB. Although that probably isn't a big deal given how cheap storage is today, it makes searching through the files by hand troublesome.

When I only ran Windows, I used Cyber-D's Autodelete to delete my old downloads. This worked perfectly for me, except that it would sometimes delete files I had forgotten I wanted. On Windows, though, I could always recover those files from the Recycle Bin.

Fast forward several years, and I now run Linux as my primary operating system. Without doing much research into whether such a program already existed, I drafted a "spec" for download-sweeper: a program that deletes old files from the Downloads directory, but gives the user a "grace period" during which removed downloads can still be recovered.

Sure, this could probably be written in less than 100 lines of C, but I wanted a robust, portable solution that I could change quickly when needed. Thus, I wrote a clean (~350 line) Python solution that does just that.
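The actual implementation lives in the repository linked at the end of this post; below is only a minimal sketch of the grace-period idea, with placeholder directory names and thresholds rather than the real configuration:

import time
from pathlib import Path

DOWNLOADS = Path.home() / "Downloads"
QUARANTINE = Path.home() / ".download-quarantine"  # grace-period holding area
MOVE_AFTER_DAYS = 30    # downloads older than this get quarantined
PURGE_AFTER_DAYS = 30   # quarantined files older than this get deleted

def age_in_days(path: Path) -> float:
    return (time.time() - path.stat().st_mtime) / 86400

def sweep():
    QUARANTINE.mkdir(exist_ok=True)
    # Move stale downloads into quarantine, where they can still be recovered
    for f in DOWNLOADS.iterdir():
        if f.is_file() and age_in_days(f) > MOVE_AFTER_DAYS:
            f.rename(QUARANTINE / f.name)
    # Permanently delete quarantined files once the grace period has expired
    for f in QUARANTINE.iterdir():
        if f.is_file() and age_in_days(f) > PURGE_AFTER_DAYS:
            f.unlink()

if __name__ == '__main__':
    sweep()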

Further, when integrated with systemd, everything runs automatically. I currently have the application deployed on my laptop, my desktop, and the webserver running this blog. Of course, there aren't any downloads on the webserver, so there I use it as a "virus/malware quarantining tool".
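I won't reproduce the real unit files here, but a hypothetical systemd service/timer pair (the file names and install paths are placeholders) might look like:

# download-sweeper.service
[Unit]
Description=Sweep old files out of the Downloads folder

[Service]
Type=oneshot
ExecStart=/usr/bin/python3 /opt/download-sweeper/download_sweeper.py

# download-sweeper.timer
[Unit]
Description=Run download-sweeper daily

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target

Enabling the timer with systemctl enable --now download-sweeper.timer then runs the sweep once a day.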

You can view the project on its GitHub page, here: https://github.com/brandonio21/download-sweeper

What Over-Mocking Revealed to Me

Recently, I took the idea of "units" in code to the extreme. After researching several programming methodologies and weighing their advantages and disadvantages, I found myself very keen on the "unit" methodology: all functions and methods are their own functional "unit", with their own unit tests and functionality. All of these functional units should be "unit-testable", meaning they should not rely on a long list of other functions being called first. From my understanding, code like this also fits the Clean Code methodology, although I have not read the hailed guide.

In writing code that follows the "unit" methodology, one simply assumes that all functions and methods besides the one in question work properly. That is, there should be no unexpected bugs or kinks within them; they are assumed to have complete test coverage and no special failure cases. Thus, when testing a unit of code, one can safely mock out all external functions (even those from within the same class, module, or project) and ensure that the code still follows the intended flow of logic.

When one attempts to write these tests, however, they will quickly notice, as I did, that the tests are no longer testing specific input and output; only the code flow is being tested. That is, the only thing being ensured is that the expected lines are executed. Although it is good to verify that code flows as intended, such a test doesn't exercise any corner cases or check that output is as expected (a big problem).

If there exists a test where all references are mocked and only code flow is tested, it must then follow that there is another test that exercises input and output, ensuring that given input generates the proper output. After all, these are the important tests that ensure the user will not be surprised when they pass a string to a multiply method.

However, it seems extremely tedious to design and write two types of tests for every unit. Not only does the programmer need to think of proper ways to mock references, but he also has to think about possible corner cases that may break his code. A StackOverflow discussion [1] and long sessions of thinking allowed me to come to a conclusion.

Since the tests where references are mocked require knowledge of the actual code, these are called white-box tests: it is easy to see what goes on in the "box", the unit of code. The tests where only input and output are checked are known as black-box tests, because the test writer shouldn't care what goes on in the box, only that certain input produces certain output.
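As a minimal sketch of the difference (process_order and apply_discount are hypothetical functions, and this assumes Python's built-in unittest.mock):

import unittest
from unittest import mock

def apply_discount(total):   # hypothetical collaborator
    return total * 0.9

def process_order(total):    # hypothetical unit under test
    return round(apply_discount(total), 2)

class WhiteBoxTest(unittest.TestCase):
    # White-box: mock the collaborator and verify only the flow of logic
    @mock.patch(__name__ + '.apply_discount', return_value=90.0)
    def test_calls_apply_discount(self, mock_discount):
        process_order(100)
        mock_discount.assert_called_once_with(100)

class BlackBoxTest(unittest.TestCase):
    # Black-box: nothing is mocked; only input and output are checked
    def test_known_input_produces_expected_output(self):
        self.assertEqual(process_order(100), 90.0)

if __name__ == '__main__':
    unittest.main()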

Given the requirements of both white-box and black-box tests, it is easy to see who should write each. White-box tests should be written by the developer himself at the time of writing the code. These tests ensure that no variable is used where a different one was intended, that all necessary code executes, and that nothing is left out. Writing these white-box tests also gets the developer thinking about possible problematic inputs.

When the white-box test-assisted code is complete, the code is then given to a quality engineer, who writes the black-box tests to ensure that all inputs, no matter how wacky, generate expected results. This ensures that the end-user (whether it be other developers, clients, or simply other functions within the same module) doesn’t get stuck on any unexpected behavior. The quality engineer is the perfect person to write these tests, as he doesn’t know how the code works on a technical level, only what it is supposed to do and how it should react to certain inputs.

This makes the idea of functional "units" a bit more understandable. Someone who knows the code should write tests ensuring that the flow is as intended, and someone unaffiliated should make sure given input produces the expected output. Of course, on a single-developer team, both jobs fall to that single developer.

With that being said, white-box tests are not always necessary. If a method is simple enough, as in get_first_elem_of_array(int* arr) -> int, it doesn't need a white-box test associated with it; it is easy to see that the code functions as required. However, if a function is more complicated, a white-box test should be written.

White-box tests are special, however. Since they are written against the specific flow of code in a functional unit, their passing relies entirely on the code that was in place when the test was written. If the code in the function changes, the test will fail. This may strike some as a bad thing; however, it forces the developer to design easily testable code, even when making just a small change. After all, small changes can indeed break things, so small changes should be tested. As long as the same functionality is maintained, the black-box tests should not fail.

I am applying this newfound understanding of functional units while working on PyCFramework, and so far it has produced very high-quality, modular, extensible code. Although writing tests takes a large chunk of time, the process has forced me to think about the design of my code, how it could be improved, and what mistakes I may have made while coding.


[1] https://stackoverflow.com/questions/32622040/python-unit-testing-should-other-classmethods-be-mocked/32624367?noredirect=1#comment53142597_32624367

Instantly Filling a Progressbar in .NET

I ran into a problem the other day while working on an application: sometimes the ProgressBar monitoring an event would not fill in as fast as the event completed (on Windows Vista and later, increases to a ProgressBar's value are animated gradually, while decreases take effect immediately). To get the progress bar to fill instantly, simply step the value below the current value and then back up.

For example,

progressbar.Value = 100; // jump toward the target; increases animate slowly
progressbar.Value = 99;  // drop one step below; decreases are applied immediately
progressbar.Value = 100; // the one-step increase back to the target finishes near-instantly

will instantly fill the progress bar to 100%. Hope this helps!