Results of Stripping Cookies from all non-GET Cross-Origin HTTP Requests

Inspired by a class I took on computer security, and particularly moved by the literature on Cross-Site Request Forgery, namely a paper by Zeller and Felten [1], I decided to begin work on a Chrome extension that would locally aid in the prevention of CSRF attacks by stripping cookies from potentially dangerous requests.

The Princeton paper mentions that this can be done if the following two rules are implemented:

  1. Allow any GET request to go through, since HTTP/1.1 dictates that GET requests should not perform actions.
  2. Strip cookies from any non-GET request that violates the same-origin policy.
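
As a rough sketch of the policy these two rules describe (the extension itself is JavaScript; the function and parameter names below are mine for illustration, not No-CSRF's actual code):

```rust
// Sketch of the two-rule cookie-stripping policy. Origins here are
// simplified to plain strings; a real check compares scheme, host, and port.
fn should_strip_cookies(method: &str, request_origin: &str, target_origin: &str) -> bool {
    // Rule 1: GET requests pass through untouched.
    if method.eq_ignore_ascii_case("GET") {
        return false;
    }
    // Rule 2: strip cookies from any non-GET request that crosses origins.
    request_origin != target_origin
}

fn main() {
    // Same-origin POST keeps its cookies.
    assert!(!should_strip_cookies("POST", "https://a.example.com", "https://a.example.com"));
    // Cross-origin POST (even between sub-domains) is stripped.
    assert!(should_strip_cookies("POST", "https://a.example.com", "https://b.example.com"));
    // Any GET is allowed through.
    assert!(!should_strip_cookies("GET", "https://evil.test", "https://bank.test"));
}
```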

I went ahead and did just that in the development of No-CSRF, which shamelessly used a very similar extension by avlidienbrunn as a starting point.

After about a week of testing the extension myself and with some colleagues, we discovered that this method of preventing CSRF actually broke many websites, including (but not limited to) Google Drive, Facebook Messenger, and a login page for the University of California, San Diego.

The reason for this breakage was simple: these websites relied on functionality that sent cross-origin non-GET requests to different parts of the product, each with its own sub-domain. The sub-domain, of course, is part of the origin, so differing sub-domains mean differing origins.

One thing to note is that the paper by Zeller and Felten suggests that cookies should not be stripped if the request is allowed under Adobe's cross-domain policy. However, even when permitting the websites listed in crossdomain.xml, functionality was still broken.

There are a few results that follow from these observations:

  1. The local CSRF protection scheme outlined by Zeller and Felten cannot be implemented without breaking several popular websites.
  2. To resolve this, the policy may be relaxed so that cookies are only stripped from requests that are cross-domain, where a domain is defined as the host without its sub-domains (for instance, the domain of drive.google.com is google.com; thus, a non-GET cross-domain request from calendar.google.com to drive.google.com would not be stripped).
  3. On a shared system where different users control different sub-domains, it is therefore possible for attacker.website.com to mount a CSRF attack against private.website.com without its cookies being stripped.
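
The laxer policy in point 2, and the hole described in point 3, can be sketched as follows. This is illustrative code, not the extension's: a real implementation would need the Public Suffix List rather than this naive last-two-labels heuristic, which misjudges hosts under eTLDs like co.uk.

```rust
// Naive approximation of the "registrable domain": keep only the last
// two labels of the host. Good enough to illustrate the policy's gap.
fn naive_domain(host: &str) -> String {
    let labels: Vec<&str> = host.split('.').collect();
    if labels.len() <= 2 {
        host.to_string()
    } else {
        labels[labels.len() - 2..].join(".")
    }
}

fn is_cross_domain(source_host: &str, target_host: &str) -> bool {
    naive_domain(source_host) != naive_domain(target_host)
}

fn main() {
    // Sub-domain traffic within google.com is no longer treated as cross-domain...
    assert!(!is_cross_domain("calendar.google.com", "drive.google.com"));
    // ...but neither is a CSRF attempt between users' sub-domains on a shared host.
    assert!(!is_cross_domain("attacker.website.com", "private.website.com"));
    // Genuinely different domains are still caught.
    assert!(is_cross_domain("evil.test", "website.com"));
}
```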

In fact, this laxer “cross-domain stripping” is exactly what avlidienbrunn’s extension implemented before I removed it and replaced it with the stricter “cross-origin stripping” in No-CSRF.

As a result, I propose that major websites, by sending cross-origin non-GET requests, are limiting the success of local CSRF prevention by forcing such tools to be less strict about which requests have their cookies stripped.

References
1 Zeller, William and Felten, Edward W. Cross-Site Request Forgeries: Exploitation and Prevention. Princeton University, 2008.

Complacency RE: TSA

This past weekend I was on a trip to San Francisco. I was only staying for the weekend and hadn’t packed much. Everything I needed fit into a carry-on. This also meant that everything I needed would have to be loaded onto the TSA conveyor-belt to ensure that I wouldn’t be bringing anything dangerous into the passenger cabin of the plane. To my surprise, my bag was flagged and my tube of toothpaste was deemed “too large” and had to be taken away from me.

Upon returning home a few days later, I relayed the story to a colleague, who said something along the lines of “The only one to blame is yourself.” After unsuccessfully trying to argue my point that the TSA should not be able to take away toothpaste that is obviously toothpaste [1], I realized that my colleague had simply grown complacent with the TSA.

I believe strongly that the TSA is a good example of the people giving up freedoms for “protection,” something that should not happen. [2] This blog post, therefore, serves as a reminder to the reader that the people should not be willing to sacrifice freedoms for safety. The people should only be willing to accept safety precautions that do not encroach upon their freedoms. The TSA and NSA, unfortunately, do not fall into this category and should not be accepted. [3]

References
1 I had asked the TSA agent if I could squeeze toothpaste out such that, when folded, the tube would meet the maximum size restrictions. They said this was not allowed.
2 A fantastic explanation of this can be seen in Glenn Greenwald’s TED Talk about privacy: https://www.ted.com/talks/glenn_greenwald_why_privacy_matters?language=en
3 Unfortunately, I am not aware of a good way to fight the TSA. As a side thought, however, I wonder how many innocent tubes of toothpaste must be taken away until a tube of toothpaste is considered to be a safe item.

Rust: You probably meant to use .chars().count()

After reading a fantastic article by Ian Whitney, it came to my attention that there is some confusion regarding the “length” of a string in Rust. According to the documentation, std::string::String::len() returns the number of bytes in the given string. On a technical level, there is nothing confusing about this definition. However, it is widely accepted in other languages (like Java and Ruby) that the “length” of a string is the number of characters it contains.

The problem with this difference in definition is brought to light in a playpen by respeccing, which shows that Rust’s std::string::String::len() produces counter-intuitive results: two strings with the same character count return different “lengths” because they contain different numbers of bytes.

The solution is instead to use the string’s character iterator and count its elements, i.e. .chars().count().
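
A minimal demonstration of the difference:

```rust
fn main() {
    let plain = "hello";
    let accented = "héllo"; // 'é' takes two bytes in UTF-8

    // .len() counts bytes, so the two strings disagree.
    assert_eq!(plain.len(), 5);
    assert_eq!(accented.len(), 6);

    // .chars().count() counts characters (Unicode scalar values),
    // which matches the intuitive notion of "length".
    assert_eq!(plain.chars().count(), 5);
    assert_eq!(accented.chars().count(), 5);
}
```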

Reddit Discussion

Introducing download-sweeper

One of the biggest issues on any system of mine is the cluttering of the Downloads folder. In the modern Internet age, we download a ton of files. Checking right now, the contents of my Downloads folder sum to about 15 GB. Although that probably isn’t too much of an issue given how cheap storage is today, it remains troublesome when searching through the files by hand.

When I only ran Windows, I used Cyber-D’s Autodelete to delete my old downloads. This worked perfectly for me, except that it would sometimes delete files I had forgotten I wanted. But on Windows, I could always find those files in the Recycle Bin.

Fast forward several years, and I now run Linux as my primary operating system. Without doing much research to see if such a program already existed, I drafted a “spec” for download-sweeper, a program that would delete old files in the Downloads directory, but also allow the user a “grace period” during which they could still recover removed files.

Sure, this could probably be done in fewer than 100 lines of C, but I was looking for a robust and portable solution that I could quickly change if needed. Thus, I created a clean (~350-line) Python solution that does just that.
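
The core of the idea is a small three-way decision per file. Here is a sketch of it (illustrative only; download-sweeper itself is Python, and these names and thresholds are not its actual configuration):

```rust
use std::time::Duration;

// Possible fates for a file, based on how long ago it was downloaded.
enum Action {
    Keep,       // still fresh, leave it in Downloads
    Quarantine, // moved aside, but still recoverable
    Delete,     // grace period expired, remove for good
}

fn decide(age: Duration, sweep_after: Duration, grace: Duration) -> Action {
    if age < sweep_after {
        Action::Keep
    } else if age < sweep_after + grace {
        Action::Quarantine
    } else {
        Action::Delete
    }
}

fn main() {
    let sweep_after = Duration::from_secs(30 * 24 * 60 * 60); // e.g. 30 days
    let grace = Duration::from_secs(14 * 24 * 60 * 60);       // e.g. 14 more days

    assert!(matches!(decide(Duration::from_secs(0), sweep_after, grace), Action::Keep));
    assert!(matches!(decide(sweep_after, sweep_after, grace), Action::Quarantine));
    assert!(matches!(decide(sweep_after + grace, sweep_after, grace), Action::Delete));
}
```

The quarantine stage is what distinguishes this from a plain auto-deleter: it plays the same role the Recycle Bin did on Windows.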

Further, when integrated with systemd, everything works perfectly. I currently have the application deployed on my laptop, my desktop, and the web server running this blog. Of course, there aren’t any downloads on the web server, so there I use it as a “virus/malware quarantining tool”.

You can view the project on its GitHub page, here: https://github.com/brandonio21/download-sweeper

Ditch SLiM, Replace it with SDDM

My laptop, which usually boasts a twelve hour battery life, was reporting that it would only last for a few hours on a fresh charge. Something was obviously wrong. Checking the output of htop, it became clear that journald was the culprit.

It seemed that journald was writing the same log messages over and over, and running journalctl -xe revealed why: I had recently updated my system, which included new updates to Xrandr and many other packages, and SLiM, my beloved simple and clean display manager, was no longer compatible.

Consulting the ArchWiki page for SLiM makes it immediately clear that the display manager has been unmaintained for some time. Thus, in order to save battery life and keep my system up to date, I have chosen to replace it with SDDM. The transition is clean and simple, and SDDM is just as nice, so I urge anyone still using SLiM to do the same.

All that needs to be done is to disable, stop, and uninstall SLiM, then install, configure, and enable SDDM, not necessarily in that order. Here is the raw list of commands:

sudo pacman -S sddm
sddm --example-config | sudo tee /etc/sddm.conf
sudo systemctl disable slim
sudo systemctl enable sddm
sudo systemctl stop slim
# You should now be logged out of your XSession. Log in to the tty and do the following:
sudo pacman -R slim
sudo systemctl start sddm

Everything should now be good! Of course, if you’d like to make SDDM pretty, you’ll have to change the theme (there are two Arch Linux-based themes in the AUR).

There are some caveats, however. For one, I was a heavy user of slimlock. Obviously, this is not available with SDDM. I remedied the situation by switching to i3lock.