
The Power of Markup Languages

An Anecdote

Recently, I’ve been teaching myself to write rich documents in plaintext files. I want to do this for a few reasons:

  • I can write everything in vim. Let’s face it: word processors are pretty ugly. I really don’t like working on documents in Microsoft Word or LibreOffice when I can instead work on them in vim. It’s not just the look, but also the speed; I can edit so much faster in vim than in any other editor on the market.
  • Portability is key. Have you ever received a paper from a colleague only to find that it is in some format you can’t read? (Cough, Pages, cough.) I really hate when this happens. To solve the problem, why not just write everything in plaintext and distribute it that way?
  • It prepares for the future. Someday, Microsoft Office, Pages, LibreOffice, and the like will probably be out the window. In fact, Windows, Mac OS X, UNIX, or anything else you hold dear in the computing world may be thrown out with them. If that happens, the text we write needs to be unformatted, so that it can be read on any operating system with any text editor.

The Solution

The solution, I have found, is to write everything using Markdown (specifically, the CommonMark spec) and LaTeX. This way, I can express all of my thoughts in a clear, succinct, portable, and easy-to-read fashion.

One of the great things about these two markup languages is that they are everywhere. You don’t notice until you start using them without thinking, but a surprising number of programs support them. It’s great being able to bold important words or italicize others when chatting in Skype or GChat. What a good way to express your thoughts!

Because of my recent change of heart, I will be writing all of my blog posts in plaintext from now on, leaving it to the WordPress text rendering system to (hopefully) render everything properly. There are also many, many plugins that make LaTeX and Markdown rendering possible. It’s really great.

Look how easy it is to express my favorite finite automaton:

$M_{favorite} = \{Q, \Sigma, \delta, q_{0}, F\}$

I will probably report back about this venture as I learn more.

Instantly Filling a Progressbar in .NET

I ran into a problem the other day while working on an application. Sometimes the ProgressBar that monitored an event would not fill in as fast as the event completed, because the control animates increases in its value. To make the progress bar fill instantly, briefly set a value below the current value: decreasing the value skips the animation.

For example,

progressbar.Value = 100; // set the target value (animates slowly)
progressbar.Value = 99;  // step down: decreasing is applied instantly
progressbar.Value = 100; // step back up to the target

This will instantly fill the progressbar object to 100%. Hope this helps!

Building an Offline Program with Online Syncing

As I become more and more experienced with software development, I am very quickly discovering the limitations of the web. Having a desktop application that constantly needs to retrieve large amounts of data from the Internet is very, very slow. Sometimes, when network speeds are spotty, a simple interaction can take up to 30 seconds, which is definitely unacceptable for the end-user.

In order to solve this problem, I decided to restrict myself from looking at online solutions. I wanted to create a solution on my own.

First, I wanted to create a robust system for managing a database for an offline program. I looked into several different options, including Microsoft’s SQL stack and a local MongoDB server. However, I found that most of these solutions were either too difficult to learn (I was used to setting up a MySQL server using cPanel and phpMyAdmin) or over-complicated the issue. Then I came across SQLite, which happened to be exactly what I was looking for: a lightweight, speedy, local alternative to MySQL. After spending about an hour reading the documentation, I realized that it was also very easy to set up. I began working on a library that would make working with a local SQLite database just as easy.

Thus, SQLiteDatabaseManager was born. The project was meant to house functionality that would be used in all of my projects, and here are some of the key features that were packed into it:

  • The ability to quickly create a local database
  • The ability to quickly insert, delete, and update records given only the table names, the fields, and the new values
  • A small table-verification system that lets the user define table schemas and check the existing SQLite tables against them for integrity verification
  • The ability to retry operations when they fail (a failure could be the result of a transiently locked file, and a retry can fix everything)

SQLiteDatabaseManager worked well because all you needed to do was create a new instance of it, and everything worked out of the box: a new SQLite database was set up and managed by the library. After rigorous testing, it did what it was intended to do, though it still needs major work and many more features. My offline data management system worked.
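As a rough illustration of those features, here is a minimal sketch using Python’s built-in sqlite3 module. The names quick_insert and with_retry are my own inventions for this sketch, not SQLiteDatabaseManager’s actual API.

```python
# Sketch of quick inserts plus retry-on-failure against a local SQLite database.
import sqlite3
import time

def with_retry(fn, attempts=3, delay=0.1):
    """Retry an operation that may fail on a transiently locked file."""
    for i in range(attempts):
        try:
            return fn()
        except sqlite3.OperationalError:
            if i == attempts - 1:
                raise
            time.sleep(delay)

def quick_insert(conn, table, fields, values):
    """Insert a record given only the table name, the fields, and the values."""
    placeholders = ", ".join("?" for _ in values)
    sql = f"INSERT INTO {table} ({', '.join(fields)}) VALUES ({placeholders})"
    with_retry(lambda: conn.execute(sql, values))
    conn.commit()

conn = sqlite3.connect(":memory:")  # a local database file would go here
conn.execute("CREATE TABLE notes (id INTEGER PRIMARY KEY, body TEXT)")
quick_insert(conn, "notes", ["body"], ["hello"])
```

A real library would also build the CREATE TABLE statements from user-defined schemas, but the shape of the API is the same.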

Now the problem was not just managing data offline, but getting it to sync to an online counterpart when a network was available. (In a perfect world, all operations would be done on the local SQLite database (fast) and then replicated on the duplicate remote MySQL database (slower) on a background thread.) The idea was simple: the SQLite database and the MySQL database would communicate with each other via a simple REST API. The databases would be identical, and the ID numbers created by the SQLite database would simply be moved to the MySQL database.


Creating the REST API for the remote MySQL portion of this project was rather simple. Relying on the methods of the LAMP stack, I used a set of PHP files that would receive GET, POST, and DELETE requests so that data could be managed and manipulated fairly easily. Using PHP’s json_encode() function also proved very useful for creating readable responses.
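For illustration, here is a sketch of the client side of such an API in Python. The endpoint URL and resource name are hypothetical; the real server is the set of PHP files described above.

```python
# Build GET/POST/DELETE requests against a hypothetical REST endpoint.
# The server-side PHP would decode the JSON body and reply via json_encode().
import json
import urllib.request

API_ROOT = "https://example.com/api"  # hypothetical endpoint

def build_request(method, resource, payload=None):
    """Prepare a request; urllib.request.urlopen(req) would send it."""
    data = json.dumps(payload).encode() if payload is not None else None
    req = urllib.request.Request(API_ROOT + "/" + resource,
                                 data=data, method=method)
    req.add_header("Content-Type", "application/json")
    return req

req = build_request("POST", "records", {"id": 1, "body": "synced row"})
```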

Thus, with the creation of a working SQLite local database and a working and accessible remote MySQL database, I went ahead with the first iteration of implementing this idea. So far, I have run into a few problems.


An easily-solvable problem arose when thinking about how to handle many clients. Since there was only one remote MySQL database and many instances of the local SQLite database, it was a pain to track whether a local client’s changes actually had to be applied or whether the remote version was newer. This was solved by adding a dateAltered field to both the remote and local databases, containing the last modification date of the data it pertains to. The record with the latest dateAltered value is the one kept in the remote MySQL database.
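The dateAltered rule can be sketched in a few lines of Python. The field name follows the post; the timestamp format and record shape are assumptions for the sketch.

```python
# Last-write-wins merge: the copy with the later dateAltered survives.
from datetime import datetime

FMT = "%Y-%m-%d %H:%M:%S"  # assumed timestamp format

def resolve(local_row, remote_row):
    """Return whichever copy of the record was modified most recently."""
    local_t = datetime.strptime(local_row["dateAltered"], FMT)
    remote_t = datetime.strptime(remote_row["dateAltered"], FMT)
    return local_row if local_t > remote_t else remote_row

local = {"id": 7, "body": "edited offline", "dateAltered": "2015-03-02 10:00:00"}
remote = {"id": 7, "body": "older copy", "dateAltered": "2015-03-01 09:00:00"}
```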

But what about when there are many, many clients creating information? Surely some clients’ SQLite databases will assign ID numbers that coincide with those of other clients’ databases, so when information is uploaded to the remote server, different pieces of information may carry the same ID numbers. And since all data is modifiable, how can this be detected? I have not yet worked out a solution, since the first implementation of this design has a one-client, one-server relationship. At the moment, however, I am hypothesizing that each data entry should contain a unique hash of its original state. Database operations can then be performed by referencing the ID of an element, while conflict resolution uses the hash, so we can always tell whether a given piece of information has been modified from its original form. When uploading to the remote server, the server will perform any conflict resolution and return the ID number of the object, and the local client will then reassign the object’s ID to match the remote MySQL database.
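Under that hypothesis, the hash might be computed over the record’s state minus the locally assigned ID, along these lines (a sketch of the idea, not a settled design):

```python
# Two records that merely collide on an auto-assigned local ID still
# hash differently, so the server can tell them apart while merging.
import hashlib
import json

def origin_hash(record):
    """Hash a record's field values, excluding the local ID."""
    state = {k: v for k, v in record.items() if k != "id"}
    canonical = json.dumps(state, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

a = {"id": 1, "body": "client A's note"}
b = {"id": 1, "body": "client B's note"}  # same local ID, different data
```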


I will continue to work on this project in the coming weeks, and I plan on making some detailed blog posts about it, so stay tuned! Eventually, I hope to have this process wrapped up in an easy-to-install and easy-to-implement library. We will see how that turns out.


I Have Finally Discovered the Meaning of Cookies

Everyone who browses the web has heard the term cookies thrown around every once in a while. Whether they picture data, evil little programs that hackers use, or annoying things that cause popups, everyone has heard of cookies.

I, too, was once one of the people who thought they knew what cookies were. I thought cookies were data that the web browser stored to keep track of things; for example, remembered usernames, passwords, and addresses. Cookies were my friend, and sometimes I needed to clear them to get a website to function properly.

I was only half-right, however. The other day I was assigned the task of making a page that automatically refreshed itself every 5 seconds. Easy enough. I created the HTML layout, added in a few dynamic Java calls to get some data, and then added the JavaScript that would refresh the page. For reference, it looked a little something like this:

function StartTime() {
    // refresh the page after 5 seconds
    setTimeout(RefreshPage, 5000);
}

function RefreshPage() {
    // only reload if the "auto refresh" checkbox is checked
    if (document.Refresh.auto.checked)
        window.location.reload();
}


All was well and the page functioned as it should. That is, until I needed to add a toggle switch that would hide or display a <div> HTML tag. That’s when things got tricky. The problem was not building the toggle button or the JavaScript function that changed the tag’s style.display property; rather, after building this, I realized that every time the page refreshed, the toggle was reset.

So how could the webpage memorize the user’s choice on whether or not the toggle should be flipped? My initial thought was to make something like a global boolean variable that kept track of the state. Something like so:

<%! boolean hideDiv = false; %>

Then the thought occurred to me. What if many, many users are visiting the page at one time and they are all toggling this variable? Since the page is being rendered on the server, a toggling of the variable means that all users are affected by any toggle that one user makes. So how could I store the user’s preference in their browser locally without affecting any other user?

A-ha! So that’s what cookies are for! After some research, I learned that a website’s cookies are essentially stored in a semicolon-delimited string of key-value pairs. For example, a cookie string could look like:

fruit=apples;vegetable=corn

This information is stored locally by the browser, specifically for the website that set the cookies, and the website can read it back later. So, to solve my problem, I created a cookie that tracked whether or not the div should be displayed. Something like:

displayDiv=false

This cookie was then saved every time the user pressed the toggle button (with its respective value, of course) and was retrieved every time the page needed to refresh.

Cookies are surprisingly fast, too. I expected the page load times to slow down significantly since data was being retrieved from the disk, but there didn’t seem to be much slowdown. Of course, this could be because we are running machines with huge amounts of memory and processing power, and all we are retrieving is a few bytes of text (don’t quote me on that; it’s not the exact size).

So, until very, very recently I did not know exactly what cookies were. Hopefully this helps you discover their true meaning. In short,

cookies are a set of key-value pairs, specific to a website, that are saved and managed by the user’s browser. They come in a semicolon-delimited list and have the ability to expire after a certain amount of time. They are used to store data locally that is usually specific to the user.

For those that are curious, my implementation of saving and getting cookies looked a little something like this:

function setCookie(cname, cvalue) {
    // expire 50 days from now
    var d = new Date();
    d.setTime(d.getTime() + (50 * 24 * 60 * 60 * 1000));
    var expires = "expires=" + d.toUTCString();
    document.cookie = cname + "=" + cvalue + "; " + expires;
}

function getCookie(cname) {
    var name = cname + "=";
    var ca = document.cookie.split(';');
    for (var i = 0; i < ca.length; i++) {
        var c = ca[i];
        // trim the leading space left by the "; " separator
        while (c.charAt(0) == ' ') c = c.substring(1);
        // match only when the pair starts with our key
        if (c.indexOf(name) == 0) {
            return c.substring(name.length);
        }
    }
    return "";
}

Note that by default, my setCookie method sets the cookie to expire after 50 days.

The Idea of Locking Programs

There have been several times during the application development phase where my clients have decided to back out on payments or abandon projects altogether. At that point, they usually have a working copy of the application that I have worked on for the past few months. When this occurs, what is the right thing to do?

Is it acceptable to lock the user out of their program simply because they missed a payment or chose not to continue with the mutual agreement of the software? If it is not acceptable to lock out the user, is it acceptable to disable certain features since they did not follow through with their end of the agreement?

I think these problems fall under the category of software politics, the same politics that involve questions such as “Should software be free?”, “Should all code be free?”, and “Is the commercialization of software beneficial to the industry?”. Usually I avoid software politics, except when it comes to crediting software authors, which I have no trouble supporting.

I believe that the problem at hand, while indeed a software politics problem, is also a problem of a voided agreement. The purchaser of the software agreed to follow through with payments or whatever else was at stake. Since they did not follow through, the software developer, being the owner of the software, has the right to deny or allow access as they see fit.

Thus, I believe that it is perfectly acceptable to lock the client out of the software entirely. In fact, that is exactly the process that I have followed. Usually I implement this by reading an online text file that contains a boolean allowing or denying any user access. This text file is usually read when the program starts up.

Of course, this is a little inconvenient, as it requires the user to have internet access in order to reach the meat of the program. This is something that I don’t really endorse, especially when it comes to the gaming industry.

But I digress. In order to avoid this problem, I think that it is only right to still allow the user into the program if they don’t have internet access. Thus, the lockout check may be avoided by simply flipping the switch on the internet connection.
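This startup check can be sketched in a few lines. The sketch is in Python rather than .NET, and the flag URL is a hypothetical stand-in for the online text file described above.

```python
# Fetch a remote lockout flag at startup, but let the user in when
# the network is unreachable.
import urllib.request

FLAG_URL = "https://example.com/lockout.txt"  # hypothetical flag file ("true"/"false")

def is_allowed(fetch=None):
    """Return False only when the remote flag explicitly denies access."""
    if fetch is None:
        fetch = lambda: urllib.request.urlopen(FLAG_URL, timeout=5).read()
    try:
        return fetch().decode().strip().lower() != "false"
    except OSError:  # urllib's URLError subclasses OSError
        return True  # no internet: fall back to allowing access
```

The fetch parameter just makes the network call swappable; a real implementation would bake the request in.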

But there is a caveat to all of this: updates. When the user does not have an internet connection when launching the program, the program cannot check for updates. Thus, this small workaround leaves the user running an older version of their software. Although this may not be a problem, the user will have to deal with bugs that exist due to older versions of the code and a lack of features that come along with the newer versions.

Since I feel that locking the user out of their own software is justified, I think I will publish a .NET library to do just this pretty soon. Since a separate library would require an extra GET request, it would be best to integrate this lockout library with my update library (currently update.net) so that a single, streamlined call fetches everything from the internet.

So, be on the lookout for that!