Category Archives: Tired Rambling

I write not because I wish for others to read what I might say, but rather because I do not wish for my thoughts to vanish with time, as I will.

I say this as if I am a writer.

Segregation In Technology: A Short Rant

For the past few weeks, I have been running Arch Linux on both of my machines. I made the switch because I was hoping for a little more adventure with my machines and a little more understanding of how Linux is set up. I have been very pleased with the decision. Not only is there a constant feeling that I made the right choice, but every single problem that I have run into has been fixable, and fixing it feels great.

Now, I will give a more detailed explanation of just how big an Arch Linux fan I am sometime in the future. For now, I would like to travel about 3 weeks into the past and take a look at when I was first installing Arch Linux. I decided to dual boot Windows 8.1 and Arch Linux so that I could still use my new gaming machine to play all of the newest games on Windows. During this install, I ran into minimal problems.

After everything was set up on both my laptop and my desktop, I noticed quite an unfortunate problem. On my laptop, when I booted into Windows, the system time was incorrect (it was 5 hours ahead of my local time). On my desktop, the same problem appeared in reverse: my Arch time was 5 hours ahead of my local time. For the first few days, my thought process was something along the lines of “No big deal; I can manage.” But after switching back and forth between my desktop and laptop, I found it hard to keep track of which operating system had the incorrect time on which machine.

After doing some Googling, it wasn’t hard to find that this problem was due to the fact that Linux expects the BIOS system time to be set to UTC, while Windows expects the BIOS system time to be set to local time. In order to fix the problem, one has to make either Windows or Linux convert the BIOS time into its expected format, and neither method is entirely reliable. Personally, after experiencing some problems with getting Windows to recognize the BIOS time as UTC, I decided to set my BIOS time to my local time and force Linux to handle the conversion.
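For what it’s worth, on a systemd-based distro like Arch, one way to make Linux treat the hardware clock as local time is a single command (a minimal sketch, assuming systemd’s timedatectl is available):

# Tell systemd that the hardware clock stores local time instead of UTC.
# This records the setting in /etc/adjtime.
timedatectl set-local-rtc 1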

After this small delay, I spent a good amount of time thinking about how many problems of this type exist: two different, yet very similar, things decide to use different standards, causing problems when you try to set them up in coordination. Even though these technologies share the same functionality, they make their architectural design choices in completely different ways.

I like to call this the “Segregation in Technology” problem. As technology evolves, every creator of new technology will attempt to make things better by making new decisions; however, any decision they make may go against already established standards, and any time this happens, confusion undoubtedly follows.

Another example of this type of problem that comes to mind is filesystems. Several popular filesystems exist (FAT, NTFS, EXT4); however, operating systems can’t seem to agree on which one to use. What does this mean? When switching between systems, some type of conversion must occur in order to access files from another type of system. This becomes incredibly annoying, for instance, when trying to do large file transfers from my Windows machine to my Android phone, which causes problems because of the incompatibility between Windows’ NTFS filesystem and the SD card’s FAT filesystem.

This very same problem nearly occurred recently when Google announced that ChromeOS would drop support for the EXT4 filesystem (which is essentially the standard on Linux and other UNIX-like systems). Of course, there was a large uproar from the community, and Google walked back the announcement. Although the problem never materialized, the fact that Google even considered dropping filesystem support shows how easily the “Segregation in Technology” problem can manifest itself.

As time goes on, take time to notice pieces of technology that do not share standards (hint: pre-Lollipop Android). If I were an old man who didn’t have the time to learn the ins and outs of the many forms of technology that exist today, I would not even attempt to use these devices in my daily life. I can see why most elderly folks don’t. I think this problem of segregation is one of the largest problems in modern technology, especially given the recent boom of the “Internet of Things”. What do you think?

It’s Just Everlasting Negotiation.

There was a time in my life not too long ago when I made several very large mistakes. I betrayed the trust of everyone around me, thinking that I knew what was best for myself. Turns out, I was wrong. While I thought I was happy, everyone I knew constantly berated me for my decisions.

When I came to admit my mistakes, none were as quick to forgive me as she was. She was perfect. She smiled when I cried, she tended to my every need and always made sure that I was okay. She helped me grow. I came to rely on her to be happy, as she was the only one who supported me with everything I did and didn’t do.

She then went on to make some very bad decisions. She betrayed my trust in the worst ways imaginable. Returning the favor, I made sure that I was the one that was quickest to forgive her. What’s in the past is in the past, and I know she would never make those mistakes again.

At this time, I really wish I could sit down with her and tell her that I miss her and that only when she is around is everything at its happiest. She is far away, both physically and mentally, and I am often left alone for days while she goes on with her life. I really miss her.

Then, when she returns, she wants to negotiate. She wants to tell me to stop relying on her; yet, when she needs something, she is so quick to return to me. And so the cycle continues. It is just everlasting negotiation. I wish that we could just settle down and stick to what makes us both the happiest.

Again, I wish that I could tell her this, but I know that nothing good would come of it. It will always be just everlasting negotiation.

If you’re reading this, I love you.

Visualizing my Operating System Use

In my work, home, and play life, I constantly switch operating systems. If I want to play games, I use Windows. If I want to get some hardcore development done, I use Linux. If I want to casually browse the internet, I use my Mac. But the question remains: how often do I use each operating system?

The Tool

Being a software developer, I thought the best approach to answering this question would be to create a tool to do it for me. I would call it OSTracker (short for Operating Systems Tracker – an extremely advanced acronym that definitely required this parenthetical explanation). Originally, the plan was to have a service that could run on any operating system and track how long I was on the computer. These metrics would then be uploaded to a centralized remote repository and would be parsed and analyzed by some sort of scripting language that would then display the results in a semi-pretty format.

In order to create something that could easily run on any operating system I might pick up, I thought of creating the service in Java. The service could then be invoked with the following calling convention:

java ostracker <login/logoff>

This would be extremely useful because it would allow me to easily port the project from Linux to Windows, Android, or any other modern operating system that I may come across.

The Unfortunate Lack of Direction

Unfortunately, my laziness got the best of me. After all, I did not need to worry about having a constantly running service, as all I needed to keep track of was the name of the operating system, the time of the event, and whether or not the event was a log off or a log on. Since all I needed to do was push these metrics once, I decided that it would be better if everything was handled on the remote repository itself.

Thus, I created a PHP servlet that would process all events. The servlet is pretty simple: it essentially accepts the aforementioned details as parameters and plops them into a file on the server if they belong there.

How does the servlet know if the information belongs there? It checks whether the previously tracked action makes sense in relation to the new action. For instance:

  • You just logged off Windows, and your last action was logging off Windows. This pair of events doesn’t really make sense.
  • You just logged off Windows, and your last action was logging on Windows. This pair of events makes perfect sense.
  • You just logged onto Linux, and your last action was logging off of Windows. Again, this makes perfect sense: you just switched operating systems.

Now, I wanted to give the system the ability to track when I was on multiple operating systems at the same time (assuming multiple computers, I suppose), so the servlet handles that case as well:

  • You just logged onto Linux, your last action was logging onto Windows. No problem.

The datafile is arranged in a format of <operatingSystem><time><action>, and different events are delineated with a newline character. Thus, the PHP file just has to use the information from the last event to determine whether the next event should be processed. If so, a 1 is returned. If not, a 0 is returned. Here’s the code:

<?php
// Hypothetical parameter handling; the original listing begins below it.
$filename = 'ostracker.dat';
$os       = $_GET['system'];
$time     = $_GET['time'];
$action   = $_GET['action'];

// Grab the last recorded event. The file ends with a newline, so the
// final array element is empty and the last event sits one before it.
$actions = explode("\n", file_get_contents($filename));
$lastAction = $actions[count($actions) - 2];

// Only record the event if it isn't a repeat of the last event for this OS.
if (strstr($lastAction, $os) === FALSE || strstr($lastAction, $action) === FALSE)
{
  // FILE_APPEND keeps the full history instead of overwriting the file.
  file_put_contents($filename, "$os$time$action\n", FILE_APPEND);
  echo 1;
}
else
{
  echo 0;
}

As you can see, it’s extremely simple.
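For illustration, a few hypothetical lines of the datafile would look like this (the OS name, Unix timestamp, and action are simply concatenated):

windows1421441400logon
windows1421445000logoff
linux1421445060logon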

The Client-Side

Linux

Now that I had something running on the server, I needed something to actually call the PHP file and push the metrics to the server. Again, I considered writing this in Java, as it would provide me with ease of portability. However, laziness got the best of me. I decided to write platform-specific scripts instead.

For UNIX, I wrote a bash script. The script is extremely simple. In order to account for a possible lack of internet connection, I decided to essentially “spam” the server until I got a proper response. The PHP file is requested using the curl utility (in silent mode, so that I don’t see any progress spam whatsoever).

Again, nothing too hefty at all. All of the meat is handled by the curl utility.

#!/bin/bash
# Hypothetical values; the original script defines these elsewhere.
url="http://example.com/ostracker.php"
operatingsystem="linux"

time=$(date +%s)
action=$1   # "logon" or "logoff"

# Keep asking until the server answers: 1 = recorded, 0 = rejected as
# nonsensical. An empty response means no connection yet, so keep trying.
response=$(curl -s "${url}?system=${operatingsystem}&time=${time}&action=${action}")
while [[ ${response} != 1 ]]; do
        if [[ ${response} == 0 ]]; then
                exit 0
        fi
        response=$(curl -s "${url}?system=${operatingsystem}&time=${time}&action=${action}")
done
exit 0

Windows

On Windows, it is essentially the same story. In order to recover from a lack of internet connection, I decided to plop the whole thing in a while loop so that it runs until it gets a response. Not sure whether or not a broken pipe would throw an exception, I also wrapped the whole thing in a try/catch that, when triggered, simply loops back to the beginning of the method.

All of this was implemented in C# and uses a WebClient to do the heavy lifting and post the data to the server. The code is almost identical to that of the bash script:

using System;
using System.Net;

class Program
{
    static void Main(string[] args)
    {
        // Hypothetical endpoint; the real URL isn't shown in the post.
        const string url = "http://example.com/ostracker.php";

        try
        {
            string action = "logout";
            WebClient wc = new WebClient();
            string time = ConvertToUnixTimestamp(DateTime.Now).ToString();
            string query = url + "?system=windows&time=" + time + "&action=" + action;

            // Keep asking until the server answers: 1 = recorded,
            // 0 = rejected as nonsensical (so stop retrying).
            string resp = wc.DownloadString(query);
            while (int.Parse(resp) != 1)
            {
                if (int.Parse(resp) == 0)
                    return;
                resp = wc.DownloadString(query);
            }
        }
        catch (Exception)
        {
            // No connection yet; loop back to the beginning of the method.
            Main(args);
        }
    }

    public static double ConvertToUnixTimestamp(DateTime date)
    {
        DateTime origin = new DateTime(1970, 1, 1, 0, 0, 0, 0);
        TimeSpan diff = date.ToUniversalTime() - origin;
        return Math.Floor(diff.TotalSeconds);
    }
}

Getting the Scripts To Run

Getting the scripts to run at the right time was probably the most challenging part. I didn’t have to worry about running the scripts many, many times, since the PHP servlet took care of any situations that didn’t make sense (logging in twice in a row, for instance); however, automatically executing the scripts when the computer is turned on or off is a daunting task.

Linux

In order to get this to work on Linux, I called the script from my /etc/rc.local file, which launched the script on the computer’s start with no problems whatsoever.

Shutdown was a different story, however. Since the network adapter is brought down very early during shutdown, I had to make sure that my script would run before anything else in the shutdown process. I learned that in order to get something to run on shutdown, there is a small process to go through:

1) Copy your script to /etc/init.d
2) Make a symlink of it in /etc/rc0.d
3) Make a symlink of it in /etc/rc6.d

Then, within rc?.d, the symlinks are actually executed in ASCIIbetical order, so to make sure my script executed first, I inserted ‘A01’ at the beginning of the symlink names. I think that did the trick, since everything works properly!
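Assuming the script is named ostracker.sh (a hypothetical name), the whole process looks something like this:

# Install the script and symlink it into the halt (rc0.d) and reboot
# (rc6.d) runlevels; the A01 prefix makes it run before everything else.
cp ostracker.sh /etc/init.d/ostracker.sh
chmod +x /etc/init.d/ostracker.sh
ln -s /etc/init.d/ostracker.sh /etc/rc0.d/A01ostracker
ln -s /etc/init.d/ostracker.sh /etc/rc6.d/A01ostracker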

Windows

Getting things to automatically start in Windows is a bit trickier. There were some suggestions on the internet to use the Group Policy Editor to add my programs as startup/shutdown scripts; however, these did not work for me for whatever reason.

To solve the startup issue, I made a shortcut to my script’s executable and plopped it into the startup folder in order to make sure that it got launched when Windows started.

Shutdown, however, is a bit more complicated. Since there is no way that I know of to hook into Windows’ shutdown process, I decided to make a simple wrapper script for shutting down my computer. Instead of simply shutting the machine down, it calls my OSTracker script and then tells the computer to shut down. An easy, two-line batch script. The only caveat? It’s sitting on my desktop, and I need to be sure to double-click it whenever I need to shut down my Windows machine. Shouldn’t be too hard to remember!
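For reference, the wrapper looks something like this (the executable name is hypothetical):

rem Record the logout event, then actually shut the machine down.
OSTracker.exe
shutdown /s /t 0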

The Display

Now comes the fun part: how do I display this information to the public? I decided to go with a line graph. Although it’s probably not the best choice, it looks cool, and with a ‘1’ representing logged in and a ‘0’ representing logged off, a line graph like that is definitely easy to decipher.

In order to implement the graph, I used Chart.js. As a bonus, Chart.js comes with some pretty cool animations, so it made my page look just a tad nicer than I had originally intended.

Basically, the JavaScript page makes use of regular expressions to find the operating system, time, and action on each line of the statistics file. Nothing too interesting, so I’ll just leave a quick sketch below and skip the ramble.
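A rough sketch of the parsing (the regex and variable names here are hypothetical, assuming the concatenated <operatingSystem><time><action> format from earlier):

// statsFile holds the raw text of the statistics file (name assumed).
var pattern = /^([a-z]+)(\d+)(logon|logoff|logout)$/;
var events = [];
statsFile.split("\n").forEach(function (line) {
    var m = pattern.exec(line);
    if (m) {
        events.push({ os: m[1], time: parseInt(m[2], 10), action: m[3] });
    }
});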

The Product

As you could probably deduce if you actually read any of my rambling, this system is extremely hacked together. Nothing was really planned out and it took me about 6 hours to complete. However, this is my first spur-of-the-moment project in quite some time. It feels great.

Here are the results: http://brandonio21.com/OSTrackerStats/

A Rant About Startup Growth

A bit of background

I very recently began working at a software development company that has grown very, very fast. Its profits have grown tremendously, its employee count has grown, its project count has grown, and even the number of buildings it owns has grown. From an outsider’s perspective, this is a great thing. When talking to those in charge of the business side of things (sales, management, hiring, etc.), everything is going just according to plan; however, the tech side of the company definitely has a different story to tell. The company has grown so rapidly, in fact, that its technology and routines cannot keep up with the ever-changing environment the company lives in.

A Quick Example

Take, for instance, version control. This is probably my biggest gripe about how the company operates. The company consists of many, many teams doing many, many different things, and none of these teams can agree on a single method of version control. Each team does something different with its own little twist: many teams use an internal clone of GitHub, while many others use Perforce. This doesn’t seem so bad at first; however, each method of version control comes with its own integration of code review, submittal processes, and checkin/checkout procedures. There have been several occasions where I needed to grab code from another team that was stored in GitHub and attempt to sync it with my code stored in Perforce. It’s a logistical nightmare.

It doesn’t stop there, though. Each method of version control has its own story. I won’t go too far in depth, but let’s just say that the goal of the company was once something along the lines of “We need to push code that works.” Now, the company is leaning more towards a philosophy of “We need to push code that works and is efficient, modular, and understandable.” This makes fixing bugs in older products or adding features to them a logistical nightmare.

A Resolution?

One of the problems I have found with Perforce is that its local repository management is more than complicated. There have been several times when I wanted to quickly branch a directory full of code so that I could play around with a possible new feature, yet still be able to quickly switch back to the main devline. With Perforce, this is much more complicated than a simple git checkout -b feature.

So, in order to solve this problem, I have resorted to keeping all of my copies of the company’s source code within my own local git repositories. I sync all of the Perforce checkouts into one branch, then merge that branch into my feature branches, resolving all conflicts along the way.
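In rough commands, the workflow looks something like this (the branch names are hypothetical):

# Pull the latest company code from Perforce into the working directory.
p4 sync
# Commit the fresh checkout to the branch that mirrors the Perforce devline.
git checkout perforce-main
git add -A
git commit -m "Sync from Perforce"
# Merge the updated devline into a feature branch, resolving conflicts once.
git checkout feature-x
git merge perforce-main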

The Message

Although this post seems to have gone nowhere at all and made no constructive points whatsoever (making constructive points was never the purpose), it did point out something that I never expected: the internal infrastructure of large companies can be a serious mess. This being my first position at a large software development company, I honestly did not expect things to be as messy as the code directory on my home computer. In reality, they are much, much worse. I believe the problem arose from the fact that the company expanded very, very fast and the internal tech employees were unable to adapt to newer tools quickly enough.

What does this mean? It means that now developers are stuck at a crossroads of bad naming conventions and commented-out blocks of code. They’re forced to use project-specific version control and they’re constantly deciding on which of the hundreds of branches they should push their code to.

Honestly, in situations like this, I think it is best for the company to stop working on new features for a little while and fix its current infrastructure disaster.