
Building an Offline Program with Online Syncing

As I become more and more experienced with software development, I am very quickly discovering the limitations of the web. Having a desktop application that constantly needs to retrieve large amounts of data from the Internet is very, very slow. Sometimes, when network speeds are spotty, a simple interaction can take up to 30 seconds, which is definitely unacceptable for the end-user.

In order to solve this problem, I restricted myself from looking at existing online solutions. I wanted to create a solution of my own.

First, I wanted to create a robust system for managing a database for an offline program. I looked into several different services, including Microsoft’s SQL stack and a local MongoDB server. However, I found that most of these solutions were either too difficult to learn (I was used to setting up a MySQL server using cPanel and phpMyAdmin) or that they over-complicated the issue. Then, I came across SQLite, which happened to be exactly what I was looking for: a lightweight, speedy, local alternative to MySQL. After spending about an hour reading the documentation, I realized that it was also super easy to set up. I began working on a library that would make working with a local SQLite database just as easy.

Thus, SQLiteDatabaseManager was born. The project was meant to house functionality that would be used in all of my projects, and here are some of the key features that were packed into it:

  • The ability to quickly create a local database
  • The ability to quickly insert, delete, and update records given the table names, the fields, and the new values.
  • A small table-verification system that lets the user define table schemas and check the existing SQLite tables against them for integrity verification.
  • The ability to retry operations when they fail (a failure could be the result of a spontaneously locked file, and a retry could fix everything)

SQLiteDatabaseManager worked well because all you needed to do was create a new instance of it, and everything worked out of the box: a new SQLite database was set up and managed by the library. It held up under some fairly rigorous testing. Of course, the library still needs major work and many more features, but it does what it was intended to do at the moment. My offline data management system worked.
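
Just to give a taste of the plumbing involved, here is a rough sketch of the kind of raw System.Data.SQLite calls a wrapper like this has to make. To be clear, this is not SQLiteDatabaseManager’s actual API, and the table and value names are made up:

using System.Data.SQLite; // from the System.Data.SQLite NuGet package

class LocalDbDemo
{
    static void Main()
    {
        // Opening the connection creates the database file if it does not exist
        using (var conn = new SQLiteConnection("Data Source=local.db"))
        {
            conn.Open();

            using (var create = new SQLiteCommand(
                "CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT)", conn))
                create.ExecuteNonQuery();

            // A parameterized insert, roughly what a quick-insert helper would wrap
            using (var insert = new SQLiteCommand(
                "INSERT INTO notes (body) VALUES (@body)", conn))
            {
                insert.Parameters.AddWithValue("@body", "hello, offline world");
                insert.ExecuteNonQuery();
            }
        }
    }
}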

Now the problem was not just managing data offline, but getting it to sync to an online counterpart when a network was available. In a perfect world, all operations would be done on the local SQLite database (fast) and then replicated on the duplicate remote MySQL database (slower) on a background thread. The idea was simple: the SQLite database and the MySQL database would communicate with each other via a simple REST API. The databases would be identical, and the ID numbers created by the SQLite database would simply be moved over to the MySQL database.
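
To make the idea concrete, here is a minimal sketch of what that background replication might look like in C#. Everything here is hypothetical (the class name, the endpoint, and the idea of serializing operations as plain strings), and retries and error handling are omitted:

using System.Collections.Concurrent;
using System.Net;
using System.Threading.Tasks;

class ReplicationQueue
{
    // Operations already applied to the fast local SQLite copy,
    // waiting to be replayed against the remote MySQL API
    private readonly BlockingCollection<string> _pending = new BlockingCollection<string>();

    public void Enqueue(string serializedOperation)
    {
        _pending.Add(serializedOperation);
    }

    // Drains the queue on a background thread; blocks while the queue is empty
    public void Start(string apiUrl)
    {
        Task.Run(() =>
        {
            using (var wc = new WebClient())
            {
                foreach (var op in _pending.GetConsumingEnumerable())
                    wc.UploadString(apiUrl, "POST", op);
            }
        });
    }
}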


Creating the REST API for the remote MySQL portion of this project was rather simple. Relying on the methods of the LAMP stack, I used a set of PHP files that would receive GET, POST and DELETE requests so that data could be managed and manipulated fairly easily. Using PHP’s json_encode() function also proved very useful for creating readable responses.
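
From the client’s side, talking to those PHP files is just a matter of issuing the right HTTP verb. Here is a hedged sketch of the three operations from C#, with a made-up endpoint and field names:

using System;
using System.Collections.Specialized;
using System.Net;

class RestDemo
{
    static void Main()
    {
        string api = "https://example.com/api/records.php"; // placeholder endpoint

        using (var wc = new WebClient())
        {
            // GET: fetch a record; the PHP side responds with json_encode() output
            string json = wc.DownloadString(api + "?id=42");
            Console.WriteLine(json);

            // POST: create or update a record
            var fields = new NameValueCollection { { "id", "42" }, { "body", "hello" } };
            wc.UploadValues(api, "POST", fields);

            // DELETE: remove a record
            wc.UploadString(api + "?id=42", "DELETE", "");
        }
    }
}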

Thus, with the creation of a working SQLite local database and a working and accessible remote MySQL database, I went ahead with the first iteration of implementing this idea. So far, I have run into a few problems.


An easily-solvable problem arose when thinking about how to handle many, many clients. Since there was only one remote MySQL database and many, many instances of the local SQLite database, it was a pain to track whether a local client’s changes actually had to be applied or whether the remote version was newer. This problem was solved by creating a dateAltered field in both the remote and local databases, containing the last date of modification of the data it pertains to. The data with the latest dateAltered value wins and is put permanently into the remote MySQL database.
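
The rule itself is tiny. Here is a sketch with a hypothetical record type (the field names are mine, not a real schema):

using System;

class Record
{
    public long Id;
    public string Body;
    public DateTime DateAltered; // last modification time, kept on both sides
}

class ConflictResolver
{
    // Last write wins: whichever copy was modified most recently becomes
    // the authoritative version in the remote MySQL database
    public static Record Resolve(Record local, Record remote)
    {
        return local.DateAltered > remote.DateAltered ? local : remote;
    }
}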

But what about when there are many, many clients creating information? Surely, some clients’ SQLite databases will assign ID numbers that coincide with those in other clients’ SQLite databases, so when information is uploaded to the remote server, different pieces of information may share the same ID number. And since all data is modifiable, how is this detectable? This is a problem I have not yet worked out a solution to, since the first implementation of this design has a one-client, one-server relationship. At the moment, however, I am hypothesizing that each data entry should contain a unique hash of its original state. Database operations can then be performed by referencing the ID of the element, while conflict resolution uses this hash, so we can always tell whether a certain piece of information has been modified from its original form. When uploading to the remote server, the server will handle conflict resolution and return the ID number of the object, and the local client will then reassign the object’s local ID number to match the remote MySQL database.
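
As a sketch of that hypothesis (again, none of this is implemented yet), the hash could be computed once from the record’s original field values and then never touched again:

using System;
using System.Security.Cryptography;
using System.Text;

static class OriginHash
{
    // Computed once when the record is created and never updated, so two
    // rows that merely collided on an auto-assigned ID stay distinguishable
    public static string Compute(string originalFieldValues)
    {
        using (var sha = SHA256.Create())
        {
            byte[] digest = sha.ComputeHash(Encoding.UTF8.GetBytes(originalFieldValues));
            return BitConverter.ToString(digest).Replace("-", "");
        }
    }
}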


I will continue to work on this project in the coming weeks, and I plan on making some detailed blog posts about it, so stay tuned! Eventually, I hope to have this process wrapped up in an easy-to-install and easy-to-implement library. We will see how that turns out.


Visualizing my Operating System Use

In my work, home, and play life, I constantly switch operating systems. If I want to play games, I use Windows. If I want to get some hardcore development done, I use Linux. If I want to casually browse the Internet, I use my Mac. But the question remains: how often do I use each operating system?

The Tool

Being a software developer, I thought that the best approach to answering this question would be to create a tool to do it for me. I would call it OSTracker (short for Operating Systems Tracker – an extremely advanced acronym that definitely required this parenthetical explanation). Originally, the plan was to have a service that could run on any operating system and track how long I was on the computer. These metrics would then be uploaded to a centralized remote repository and would be parsed and analyzed by some sort of scripting language that would then display the results in a semi-pretty format.

In order to create something that could easily run on any operating system that I would pick up, I thought of creating the service in Java. The service could then be called with the following calling convention:

java ostracker <login/logoff>

This would be extremely useful, as it would also allow me to easily port the project from Linux to Windows, Android, or any other modern operating system that I may come across.

The Unfortunate Lack of Direction

Unfortunately, my laziness got the best of me. After all, I did not need to worry about having a constantly running service, as all I needed to keep track of was the name of the operating system, the time of the event, and whether or not the event was a log off or a log on. Since all I needed to do was push these metrics once, I decided that it would be better if everything was handled on the remote repository itself.

Thus, I created a PHP servlet that would process all events. The servlet is pretty simple: it essentially accepts the aforementioned details as parameters and plops them into a file on the server if they should be there.

How does the servlet know if the information should be there? It checks whether the previously tracked action makes sense in relation to the new action. For instance:

  • You just logged off Windows, your last action was logging off Windows. This pair of events doesn’t really make sense.
  • You just logged off Windows, your last action was logging on Windows. This pair of events makes perfect sense.
  • You just logged onto Linux, your last action was logging off of Windows. Again, this makes perfect sense. I just switched operating systems.

Now, I wanted to give the system the ability to track when I was on many operating systems at the same time (assuming multiple computers, I suppose), so the servlet handles that case well.

  • You just logged onto Linux, your last action was logging onto Windows. No problem.

The data file is arranged in the format <operatingSystem><time><action>, and different events are delimited with a newline character. Thus, the PHP file just has to use the information from the last event to determine whether the next event can be processed. If so, a 1 is returned. If not, a 0 is returned. Here’s the code:

// Grab the last recorded event (the file ends with a newline, so the
// final array element is empty -- hence the -2)
$actions = explode("\n", file_get_contents($filename));
$actions_count = count($actions) - 2;
$lastAction = $actions[$actions_count];

// Record the event only if it differs from the last one in either the
// operating system or the action; logging off the same OS twice in a
// row, for example, makes no sense
if (strstr($lastAction, $os) === FALSE || strstr($lastAction, $action) === FALSE)
{
  file_put_contents($filename, "$os$time$action\n", FILE_APPEND);
  echo 1;
}
else
{
  echo 0;
}

As you can see, it’s extremely simple.

The Client-Side

Linux

Now that I had something running on the server, I needed something to actually call the PHP file in order to push the metrics to the server. Again, I considered writing this in Java, as it would provide ease of portability. However, laziness got the best of me. I decided to write platform-specific scripts instead.

For UNIX, I wrote a bash script. The script is extremely simple. In order to account for a possible lack of internet connection, I decided to essentially “spam” the server until I got a proper response. The PHP file is requested with the curl utility (in silent mode, so that I don’t notice any spam whatsoever).

Again, nothing too hefty at all. All of the meat is handled by the curl utility.

#!/bin/bash

# Placeholders -- point these at the real servlet URL and platform name
url="https://example.com/ostracker.php"
operatingsystem="linux"

time=`date +%s`
action=$1

# Keep retrying until the server accepts (1) or rejects (0) the event;
# this covers the case where the network is not up yet
response=`curl -s "${url}?system=${operatingsystem}&time=${time}&action=${action}"`
while [[ ${response} != 1 ]]; do
        if [[ ${response} == 0 ]]; then
                exit 0
        fi
        response=`curl -s "${url}?system=${operatingsystem}&time=${time}&action=${action}"`
done
exit 0

Windows

On Windows, it is essentially the same story. In order to recover from a lack of internet connection, I decided to plop the whole thing in a while loop so that it ran until it found a response. Not sure whether or not a broken pipe would throw an exception, I also wrapped the whole thing in a try/catch that, when an exception was caught, simply looped back to the beginning of the method.

All of this was implemented in C#.NET and uses a WebClient to do the heavy lifting and post the data to the server. The code is almost identical to that of the bash script:

using System;
using System.Net;

class Program
{
    // Placeholder for wherever the PHP servlet actually lives
    static string url = "https://example.com/ostracker.php";

    static void Main(string[] args)
    {
        try
        {
            string action = "logout";
            WebClient wc = new WebClient();
            string time = ConvertToUnixTimestamp(DateTime.Now).ToString();
            string resp = wc.DownloadString(url + "?system=windows&time=" + time + "&action=" + action);

            // Retry until the server accepts the event (1); a 0 means the
            // servlet rejected it as a duplicate, so stop asking
            while (int.Parse(resp) != 1)
            {
                if (int.Parse(resp) == 0)
                    break;
                resp = wc.DownloadString(url + "?system=windows&time=" + time + "&action=" + action);
            }
        }
        catch (Exception)
        {
            // A dropped connection throws, so just start over
            Main(args);
        }
    }

    public static double ConvertToUnixTimestamp(DateTime date)
    {
        DateTime origin = new DateTime(1970, 1, 1, 0, 0, 0, 0, DateTimeKind.Utc);
        TimeSpan diff = date.ToUniversalTime() - origin;
        return Math.Floor(diff.TotalSeconds);
    }
}

Getting the Scripts To Run

Getting the scripts to run at the right time was probably the most challenging part. I didn’t have to worry about running the scripts many, many times, since the PHP servlet took care of any situations that didn’t make sense (if we logged in twice, say); however, automatically executing the scripts when the computer is turned on or off is a daunting task.

Linux

In order to get this to work on Linux, I called the script from my /etc/rc.local file, which launched the script at startup with no problems whatsoever.

Shutdown was a different story, however. Since during shutdown the network adapter is shut down very early, I had to make sure that my script would run before anything else in the shutdown process. I learned that in order to get something to run on shutdown, there is a small process to go through:

1) Copy your script to /etc/init.d
2) Make a symlink of it in /etc/rc0.d
3) Make a symlink of it in /etc/rc6.d

Then, within rc?.d, the symlinks are actually executed in ASCIIbetical order, so to make sure that my scripts executed first, I inserted ‘A01’ at the beginning of the symlink names. I think that did the trick, since everything works properly!

Windows

Getting things to automatically start in Windows is a bit trickier. There were some suggestions on the internet to use the Group Policy Editor to add my programs as startup/shutdown scripts; however, these did not work for me for whatever reason.

To solve the startup issue, I made a shortcut to my script’s executable and plopped it into the startup folder in order to make sure that it got launched when Windows started.

Shutdown, however, is a bit more complicated. Since there is no way that I know of to hook into the Windows shutdown process, I decided to make a simple wrapper script for shutting down my computer. Instead of simply shutting the machine down, it calls my OSTracker script and then tells the computer to shut down. An easy, two-line batch script. The only caveat? It’s sitting on my desktop, and I need to be sure to double-click it whenever I need to shut down my Windows machine. Shouldn’t be too hard to remember!

The Display

Now comes the fun part: how do I display this information to the public? I decided to go with a line graph. Although it’s probably not the best choice, it looks cool, and with a ‘1’ representing logged in and a ‘0’ representing logged off, such a line graph is definitely easy to decipher.

In order to implement the graph, I used Chart.js. As a bonus, Chart.js comes with some pretty cool animations, so it made my page look just a tad nicer than I had originally intended.

Basically, the JavaScript page makes use of regular expressions to find the operating system, time, and action taken on each line of the statistics file. Nothing too interesting, so I’ll just skip the ramble.
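
For the curious, the parsing leans on the fact that the timestamp is the only run of digits in each line, since the fields have no delimiters. The real code is JavaScript, but the same idea expressed as a .NET regex (with a made-up sample line) looks like this:

using System;
using System.Text.RegularExpressions;

class ParseDemo
{
    static void Main()
    {
        // A sample line in the <operatingSystem><time><action> format
        string line = "windows1419000000logoff";

        // Letters, then the digit-only timestamp, then letters again
        var match = Regex.Match(line, @"^([A-Za-z]+)(\d+)([A-Za-z]+)$");
        if (match.Success)
        {
            string os = match.Groups[1].Value;
            string time = match.Groups[2].Value;
            string action = match.Groups[3].Value;
            Console.WriteLine(os + " " + action + " at " + time);
        }
    }
}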

The Product

As you could probably deduce if you actually read any of my rambling, this system is extremely hacked together. Nothing was really planned out and it took me about 6 hours to complete. However, this is my first spur-of-the-moment project in quite some time. It feels great.

Here are the results: https://brandonio21.com/OSTrackerStats/