Category Archives: Security

Do Not Give Users Reason to Uninstall

Before I begin, I want to point out that this post is not about some evil method of user retention, nor is it about uninstalling software at the operating-system level. This post is scoped to any single piece of security software and its owner.

During the development of my first piece of security-related software, No-CSRF, I began to notice that a lot of my thoughts were along the lines of “But what if the user forgets to re-enable it after they disable it?” I was spending a lot of time making a Chrome extension that would make users safer as they browsed the World Wide Web; however, the extension was only useful while the user had it enabled.

I never did much about this issue, however. My only attempt at solving the problem was to ensure that the user could re-enable the extension just as easily as they had disabled it, so that when they disabled it, they would know they could just as easily turn it back on. [1]

When the first version of the extension was released, however, it was a little too strict. Sites that posed no security threat had their functionality broken, and some sites were blocked completely. In fact, one of my colleagues, who was excited to try the extension and make his browser a bit safer, ran into many of these cases. After complaining that he was unable to pay an online bill because of the extension, he uninstalled it from Chrome completely and urged me to fix it.

This isn’t an issue in itself; however, it exemplifies a problematic attitude amongst users – something along the lines of “If it breaks what I need, then I don’t need it.” Even though No-CSRF was protecting users from dangerous Cross-Site Request Forgeries, users chose to uninstall it, essentially choosing convenience over security.

Just as those users uninstalled the extension, others will take far more drastic steps to make things more convenient. I had another friend who, upon having port issues with a dedicated server, decided to disable his firewall entirely. These choices, which jeopardize a user’s security, are made without much thought.

Thus, when it comes to security software, a careful balance should be sought. Although security-software developers would probably like their software to protect users from as many attacks as possible, they should instead balance protection with usability. A security tool that many people will actually keep using is one that makes them more secure without changing their workflow.

If developers make their security software too strict, users may uninstall it, nullifying its benefits entirely. If users would not uninstall a version with less protection, but protection nonetheless, then that is the version that should be built.

As soon as users have a reason to uninstall, they will. Do not give them that reason.

Finding the balance between security and usability is difficult. Although it may be tempting to solve the problem by making it difficult to disable the security software as a whole, users should never be stripped of that freedom. Thus, I propose that the following tenets be followed when making security-related software:

  1. Do not alter the difficulty of disabling/uninstalling. User freedom is just as important as security.
  2. Do not give the user an unavoidable reason to uninstall your software.
  3. Make disabling on a per-case basis easier than uninstalling.

If these three points are achieved, users should choose to disable the protection only in the specific cases where a broken workflow is not worth the increased security, rather than disabling the security software as a whole.
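As a concrete illustration of the third tenet, here is a minimal TypeScript sketch of a per-site disable list for a browser extension. The names (disabledHosts, toggleSite, shouldProtect) and the in-memory storage are hypothetical and are not taken from No-CSRF; the point is simply that opting one troublesome site out of protection should be a single action, far easier than uninstalling.

```typescript
// Hypothetical sketch of a per-site "disable" list for a browser security
// extension. Names and storage are illustrative, not taken from No-CSRF.

const disabledHosts = new Set<string>();

// Toggle protection for a single site, e.g. from a toolbar popup.
// This is meant to be one click: easier than uninstalling the extension.
function toggleSite(url: string): boolean {
  const host = new URL(url).host;
  if (disabledHosts.has(host)) {
    disabledHosts.delete(host);
    return true;  // protection re-enabled for this host
  }
  disabledHosts.add(host);
  return false;   // protection disabled for this host only
}

// Called before applying any protective behavior to a request or page.
function shouldProtect(url: string): boolean {
  return !disabledHosts.has(new URL(url).host);
}

// Example usage:
toggleSite("https://billing.example.com/pay");                  // user opts this site out
console.log(shouldProtect("https://billing.example.com/pay"));  // false
console.log(shouldProtect("https://other.example.org/"));       // true
```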


1. This refers to a custom in-extension disable rather than the browser-level disable.

Results of Stripping Cookies from all non-GET Cross-Origin HTTP Requests

Inspired by a class I took on computer security, and particularly moved by the literature written on Cross-Site Request Forgery, namely a paper by Zeller et al., [1] I decided to begin work on a Chrome extension that would locally aid in the prevention of CSRF attacks by stripping cookies from potentially dangerous requests.

The Princeton paper mentions that this can be done if the following two rules are implemented:

  1. Allow any GET request to go through, since HTTP/1.1 dictates that GET requests should not perform actions.
  2. Block (strip cookies from) any non-GET request that violates the same-origin policy (see the sketch after this list).
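To make the policy concrete, here is a minimal TypeScript sketch of the decision it describes, assuming the extension can see each outgoing request's method, the origin of the page that initiated it, and the target URL. The helper name and its wiring are illustrative; this is not No-CSRF's actual source.

```typescript
// Sketch of the two-rule policy from Zeller et al.: let GETs through,
// strip cookies from any cross-origin non-GET request.
// (Hypothetical helper, not the actual No-CSRF code.)

function shouldStripCookies(
  method: string,
  initiatorOrigin: string, // origin of the page making the request
  targetUrl: string        // URL the request is going to
): boolean {
  if (method.toUpperCase() === "GET") {
    return false; // rule 1: GETs are assumed to be side-effect free
  }
  // rule 2: compare full origins (scheme + host + port)
  return new URL(targetUrl).origin !== new URL(initiatorOrigin).origin;
}

// In a Chrome extension, this check would run inside a blocking webRequest
// listener that removes the Cookie header whenever it returns true.
console.log(shouldStripCookies("POST", "https://evil.example", "https://bank.example/transfer")); // true
console.log(shouldStripCookies("GET",  "https://evil.example", "https://bank.example/balance"));  // false
console.log(shouldStripCookies("POST", "https://bank.example", "https://bank.example/transfer")); // false
```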

I went ahead and did just that in the development of No-CSRF, which shamelessly used a very similar extension by avlidienbrunn as a starting point.

After my colleagues and I spent about a week testing the extension, we discovered that this method of preventing CSRF actually broke many websites, including (but not limited to) Google Drive, Facebook Messenger, and a login page for the University of California, San Diego.

The reason for this breakage was simple: these websites relied on functionality that sent cross-origin non-GET requests between different parts of the product, each served from its own sub-domain. The sub-domain is, of course, part of the origin, so differing sub-domains mean different origins.
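A quick way to see this is to compare the origins directly; the specific sub-domains and paths below are only illustrative.

```typescript
// The host is part of the origin, so two sub-domains of the same site are
// different origins as far as the same-origin policy is concerned.
const a = new URL("https://drive.google.com/upload").origin; // "https://drive.google.com"
const b = new URL("https://docs.google.com/create").origin;  // "https://docs.google.com"
console.log(a === b); // false -> a POST between them is cross-origin and would have its cookies stripped
```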

One thing to note is that the paper by Zeller et al. suggested that cookies should not be stripped if the request is permitted under Adobe's cross-domain policy. However, even when allowing the sites listed in crossdomain.xml, functionality was still broken.
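For reference, that exemption can be sketched roughly as follows, assuming the extension has already fetched the target site's /crossdomain.xml. This simplification only looks at the domain attribute of <allow-access-from> entries and ignores ports, the secure flag, and meta-policies; the policy file shown is made up.

```typescript
// Rough sketch of the crossdomain.xml exemption: if the target site's policy
// file explicitly allows the requesting host, do not strip cookies.
// Simplified on purpose; real policies also involve ports and "secure" flags.
function allowedByCrossDomainPolicy(policyXml: string, requestingHost: string): boolean {
  // Pull out every domain="..." attribute from <allow-access-from> entries.
  const domains = Array.from(
    policyXml.matchAll(/<allow-access-from[^>]*\bdomain="([^"]+)"/g),
    (m) => m[1]
  );
  return domains.some((d) => {
    if (d === "*") return true;                 // wildcard: everyone allowed
    if (d.startsWith("*.")) {                   // e.g. *.example.com
      return requestingHost === d.slice(2) || requestingHost.endsWith(d.slice(1));
    }
    return requestingHost === d;
  });
}

// Example with a made-up policy file:
const policy = `<cross-domain-policy>
  <allow-access-from domain="*.google.com"/>
</cross-domain-policy>`;
console.log(allowedByCrossDomainPolicy(policy, "drive.google.com")); // true
console.log(allowedByCrossDomainPolicy(policy, "evil.example"));     // false
```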

There are a few results that follow from these observations:

  1. The local CSRF-protection scheme outlined in Zeller et al. cannot be implemented without breaking several popular websites.
  2. To resolve this, the policy may be relaxed so that cookies are only stripped from requests that are cross-domain, where a domain is defined as the host without its sub-domains (for instance, the domain of drive.google.com would be google.com; thus, a non-GET cross-domain request from calendar.google.com to drive.google.com would not be stripped). A sketch of this comparison follows the list.
  3. On a shared system where different users control different sub-domains, it therefore becomes possible for attacker.website.com to send a malicious CSRF request to private.website.com without having its cookies stripped.
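The laxer comparison from the second point, and the weakness noted in the third, can be sketched as follows. The last-two-labels heuristic is a deliberate simplification; a real implementation would consult the Public Suffix List instead, and the function names are illustrative.

```typescript
// Sketch of the laxer "cross-domain" comparison: only strip cookies when the
// registrable domains differ. The last-two-labels heuristic below is a
// simplification (e.g. it mishandles hosts under "co.uk").
function registrableDomain(host: string): string {
  const labels = host.split(".");
  return labels.slice(-2).join(".");
}

function isCrossDomain(initiatorUrl: string, targetUrl: string): boolean {
  return registrableDomain(new URL(initiatorUrl).host) !==
         registrableDomain(new URL(targetUrl).host);
}

// Under this policy a request between two google.com sub-domains is allowed...
console.log(isCrossDomain("https://calendar.google.com/", "https://drive.google.com/")); // false
// ...but so is one between two users' sub-domains on a shared host (point 3).
console.log(isCrossDomain("https://attacker.website.com/", "https://private.website.com/")); // false
```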

In fact, this proposed lax “cross-domain stripping” is exactly what avlidienbrunn’s extension implemented before I removed it and replaced it with the stricter “cross-origin stripping” in No-CSRF.

Therefore, I propose that, by sending cross-origin non-GET requests, major websites are limiting the success of local CSRF prevention, forcing such tools to be less strict about which requests have their cookies stripped.


1. Zeller, William, and Felten, Edward W. Cross-Site Request Forgeries: Exploitation and Prevention. Princeton University, 2008.