Category Archives: Web

Results of Stripping Cookies from all non-GET Cross-Origin HTTP Requests

Inspired by a class I took on computer security, and particularly moved by the literature on Cross-Site Request Forgery (CSRF), namely a paper by Zeller et al. [1], I decided to begin work on a Chrome extension that would locally aid in preventing CSRF attacks by stripping cookies from potentially dangerous requests.

The Princeton paper mentions that this can be done if the following two rules are implemented:

  1. Allow any GET request to go through, since HTTP/1.1 dictates that GET requests should not perform actions (i.e., they should be free of side effects).
  2. Strip cookies from any non-GET request that violates the same-origin policy.
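The two rules above amount to a small decision function. Here is an illustrative sketch of that logic; the function name and signature are my own and this is not the actual No-CSRF code.

```javascript
// Illustrative sketch of the two rules above (not the actual No-CSRF code).
// Rule 1: GET requests always pass through with cookies intact.
// Rule 2: any other method has its cookies stripped when the request's
// target origin differs from the origin that initiated it.
function shouldStripCookies(method, targetUrl, initiatorOrigin) {
  if (method.toUpperCase() === "GET") return false;     // rule 1
  return new URL(targetUrl).origin !== initiatorOrigin; // rule 2
}
```

In a Chrome extension, a check like this would run inside a webRequest listener, removing the Cookie header whenever it returns true.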

I went ahead and did just that in the development of No-CSRF, which shamelessly used a very similar extension by avlidienbrunn as a starting point.

After about a week of testing the extension myself and with some colleagues, we discovered that this method of preventing CSRF actually broke many websites, including (but not limited to) Google Drive, Facebook Messenger, and a login page for the University of California, San Diego.

The reason for this breakage was simple. These websites relied on functionality that sent cross-origin non-GET requests to different parts of the product, each with its own sub-domain. The sub-domain is, of course, part of the origin, so differing sub-domains constitute different origins.

One thing to note is that the paper by Zeller et al. suggested that cookies should not be stripped if the request is permitted under Adobe’s cross-domain policy. However, even when allowing the websites listed in crossdomain.xml, functionality was still broken.

There are a few results that follow from these observations:

  1. The local CSRF protection scheme outlined in Zeller et al. cannot be implemented without breaking several popular websites.
  2. To resolve this, the policy could be relaxed so that cookies are only stripped from requests that are cross-domain, where a domain is defined as the registrable name without its sub-domains. Under this laxer policy, a non-GET request from one sub-domain to another sub-domain of the same domain would not have its cookies stripped.
  3. On a shared system where different users control different sub-domains, therefore, it is possible for one user’s sub-domain to mount a CSRF attack against another user’s sub-domain without the cookies being stripped.
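To make the origin-versus-domain distinction in point 2 concrete, here is a naive sketch of a “same registrable domain” check. It simply takes the last two labels of each hostname; a real implementation would need the Public Suffix List to handle names like co.uk, and both function names here are my own.

```javascript
// Naive sketch: treat the last two dot-separated labels as the "domain",
// so mail.example.com and drive.example.com share the domain example.com.
// (A real implementation needs the Public Suffix List for names like co.uk.)
function registrableDomain(hostname) {
  return hostname.split(".").slice(-2).join(".");
}

// Under the laxer cross-domain policy, only requests between *different*
// registrable domains would have their cookies stripped, so sibling
// sub-domains can freely send non-GET requests to each other.
function isCrossDomain(urlA, urlB) {
  return registrableDomain(new URL(urlA).hostname) !==
         registrableDomain(new URL(urlB).hostname);
}
```

The sibling-sub-domain case is exactly the loophole that point 3 above warns about on shared hosts.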

In fact, this laxer “cross-domain stripping” is exactly what avlidienbrunn’s extension implemented before I removed it and replaced it with the stricter “cross-origin stripping” in No-CSRF.

I therefore conclude that by sending cross-origin non-GET requests, major websites are limiting the effectiveness of local CSRF prevention, forcing such tools to be less strict about which requests have their cookies stripped.


1. Zeller, William and Felten, Edward W. Cross-Site Request Forgeries: Exploitation and Prevention. Princeton University, 2008.

Announcing SourceGetter

Recently, during the University of California, San Diego’s quarterly Beginner’s Programming Competition, we were asked whether we could automate the scoring process: we hand out balloons to contestants who complete a certain number of problems. Using HackerRank as our competition management solution, we wanted to know if there was a way to get contestants’ scores from the HackerRank leaderboard. HackerRank, having an outdated and incomplete API, was a perfect candidate for web scraping.

Basically, web scraping is useful whenever you need information from a webpage but the site does not offer an official way to access that information. So, a script is created that acts like a normal user browsing the web and “scrapes” information from the page. Usually, this is not a problem: send an HTTP GET request with a utility like cURL and then parse the returned source code.
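As a toy illustration of the static case, here is a sketch of the parsing step once the HTML has been fetched. The HTML string stands in for a cURL response, and the regex is a simplification (real scrapers should use a proper HTML parser).

```javascript
// Toy static-scraping step: given raw HTML (as returned by curl or fetch),
// pull out the page title with a regular expression. Real scrapers should
// use a proper HTML parser; this is just a sketch of the idea.
function extractTitle(html) {
  const match = html.match(/<title>([^<]*)<\/title>/);
  return match ? match[1] : null;
}

const html = "<html><head><title>Leaderboard</title></head><body></body></html>";
console.log(extractTitle(html)); // "Leaderboard"
```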

The problem arises when the website is rendered dynamically with a language such as JavaScript. Tools like cURL do not wait for the page to render; they just return the static content that would always be displayed. This is where SourceGetter comes into play.

SourceGetter is a simple Node.js servlet running PhantomJS that renders a page passed via a URL parameter and returns the rendered source code. Thus, if you route your cURL request through the SourceGetter servlet, you will get the rendered code.

In order to use it, take a look at the following examples:

Getting the source code of

Getting the source code of

Basically, the servlet runs on port 3000 of this website, and thus all HTTP GET requests need to be sent through there. Don’t worry, we’re not tracking any of your data. Everything is serve-and-forget. Here is a bit of useful information:


Getting the rendered source code of a URL


Getting the rendered source code of a URL under HTTPS

Putting a “+” in front of the URL indicates that you want the web page to be fetched over HTTPS.

Getting the rendered source code of a web-page with slashes

Simply replace the slashes with “+” signs.
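Based on the conventions above (a leading “+” for HTTPS, remaining “+” signs standing in for slashes), the server-side decoding might look like the sketch below. The function name is my own; this is not SourceGetter’s actual code.

```javascript
// Decode a SourceGetter-style URL parameter as described above:
// a leading "+" selects HTTPS, and any remaining "+" signs stand in
// for slashes in the path. (Sketch only, not the actual implementation.)
function decodeTarget(param) {
  let scheme = "http";
  if (param.startsWith("+")) {
    scheme = "https";
    param = param.slice(1);
  }
  return scheme + "://" + param.replace(/\+/g, "/");
}
```

For example, a parameter of `+example.com+path+page` would decode to `https://example.com/path/page`.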



Getting the rendered source for


Getting the rendered source for



Hopefully you enjoy this product. If you have any questions or comments, please ask!

Reverse DNS (rDNS) and DigitalOcean

As you may know from my previous post, I recently made the switch to DigitalOcean for my VPS needs. So far, everything has been great. However, the other day I encountered a problem with Reverse DNS (rDNS) for my VPS. The error occurred when I attempted to send someone an email: I got a reply containing a failure message with details about an invalid rDNS name.

After searching the DigitalOcean forums, I learned that by default, DigitalOcean configures rDNS for all hosting accounts. I couldn’t, for the life of me, figure out why mine was not working.

Then I read a small post that stated something along the lines of:

In order for rDNS to be configured properly, the name of your droplet must be the same name as the URL that is being used to point to the droplet’s IP.

After changing the name of the droplet to match the primary domain name, everything worked out and emails started sending.

So now you know: name your droplets after their corresponding domain names.