Cross-Site XMLHttpRequest
Cross-Site XMLHttpRequest allows a web page to read information from other web servers using normal XMLHttpRequest. In the past this has not been permitted, since the other server may be sitting inside a corporate firewall or may be a server where the user is logged in.
To solve this problem it is suggested that the accessed server can signal back to the browser that it is OK for other sites to access certain pages on the server. Firefox checks for this and only returns the response to the page if the server explicitly allows it. Otherwise the browser discards the response from the server and throws an exception.
Details
There are currently two draft specs from the W3C for how this should work. The signaling for when a document is accessible is specified in the access-control draft spec [1]. This states that a site can insert <?access-control?> processing instructions into XML files that say which sites can access the file. It also allows HTTP headers to be added so that access to any file type can be controlled.
The PI contains lists of URL patterns that describe which URLs can access the file. These patterns can contain wildcards, but follow strict parsing rules rather than being general URLs.
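For illustration, access could be granted either inside the XML file itself or via a response header. The exact attribute names, header tokens and pattern syntax are whatever the draft [1] specifies; the two lines below are only a hedged approximation of that syntax, not verbatim spec text:

    <?access-control allow="*.example.org" deny="private.example.org"?>

    Content-Access-Control: allow <*.example.org> exclude <private.example.org>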
Additionally, [2] is a draft spec for how XMLHttpRequest should interact with the access-control spec. This spec describes some headers that should be included when making a cross-site request. (Though I personally wonder if this part should be moved into the access-control spec.) It also describes how to deal with HTTP methods other than GET and POST.
What about [3]?
Suggested Implementation
A goal of the implementation is that it should be reusable for things other than XMLHttpRequest. For example, document.load should be able to do the same cross-site loads with the same restrictions. As should XSLT and XBL.
To do this we'll set up an nsIStreamListener that sits between the normal nsIStreamListener and the nsIChannel. Once onStartRequest is called we check for access control headers. If the headers deny access we cancel the channel with a network error failure. If headers allow access we pass through all calls to the outer caller.
If the headers don't say either way and the content type is an XML one (do we have a good way to determine that?) we set up a parser with ourselves as its sink. We'll then listen to notifications until the first start-element notification. At the same time we have to store all incoming data that is fed to the parser. If the access-control PIs don't indicate that access should be granted we cancel the channel.
If access control is granted we forward calls to the outer caller and stream the buffered data to it.
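To make this concrete, here is a rough C++ sketch of such an intermediate listener. This is a sketch only: the XPCOM boilerplate (QueryInterface/AddRef/Release) is elided, and CheckAccessControlHeaders(), StartPIScan() and OnPIScanDone() are hypothetical helpers standing in for the real header check and <?access-control?> scan; only the nsIStreamListener plumbing is meant to be accurate.

    #include "nsIStreamListener.h"
    #include "nsIRequest.h"
    #include "nsIInputStream.h"
    #include "nsCOMPtr.h"
    #include "nsString.h"
    #include "nsStreamUtils.h"   // NS_ConsumeStream
    #include "nsStringStream.h"  // NS_NewCStringInputStream
    #include "nsDOMError.h"      // NS_ERROR_DOM_BAD_URI

    // Sits between the nsIChannel and the real listener, and withholds
    // all callbacks until the access-control decision has been made.
    class nsCrossSiteListenerProxy : public nsIStreamListener
    {
    public:
      explicit nsCrossSiteListenerProxy(nsIStreamListener* aOuter)
        : mOuter(aOuter), mDecision(ePending) {}

      NS_IMETHOD OnStartRequest(nsIRequest* aRequest, nsISupports* aContext)
      {
        // Hypothetical: inspect the Content-Access-Control response
        // headers and return eAllowed, eDenied or ePending.
        mDecision = CheckAccessControlHeaders(aRequest);
        if (mDecision == eDenied) {
          // Cancel with a generic network error so the caller can't tell
          // an access-control failure from an ordinary network failure.
          return aRequest->Cancel(NS_ERROR_DOM_BAD_URI);
        }
        if (mDecision == eAllowed) {
          return mOuter->OnStartRequest(aRequest, aContext);
        }
        // Headers were silent: defer the outer OnStartRequest and start
        // scanning the XML body for <?access-control?> PIs.
        mDeferredContext = aContext;
        StartPIScan();  // hypothetical: set up a parser with us as sink
        return NS_OK;
      }

      NS_IMETHOD OnDataAvailable(nsIRequest* aRequest, nsISupports* aContext,
                                 nsIInputStream* aStream,
                                 PRUint32 aOffset, PRUint32 aCount)
      {
        if (mDecision == eAllowed) {
          return mOuter->OnDataAvailable(aRequest, aContext, aStream,
                                         aOffset, aCount);
        }
        // Decision still pending: buffer the data (the PI scanner is fed
        // from this buffer too) instead of exposing it to the caller.
        return NS_ConsumeStream(aStream, aCount, mBuffer);
      }

      // Hypothetical callback fired by the PI scan at the first decisive
      // PI, or at the first start-element notification.
      void OnPIScanDone(nsIRequest* aRequest, PRBool aAllowed)
      {
        if (!aAllowed) {
          mDecision = eDenied;
          aRequest->Cancel(NS_ERROR_DOM_BAD_URI);
          return;
        }
        mDecision = eAllowed;
        // Deliver the deferred OnStartRequest, then replay the buffered
        // bytes to the outer listener.
        if (NS_FAILED(mOuter->OnStartRequest(aRequest, mDeferredContext)))
          return;
        nsCOMPtr<nsIInputStream> replay;
        if (NS_SUCCEEDED(NS_NewCStringInputStream(getter_AddRefs(replay),
                                                  mBuffer))) {
          mOuter->OnDataAvailable(aRequest, mDeferredContext, replay,
                                  0, mBuffer.Length());
        }
      }

      NS_IMETHOD OnStopRequest(nsIRequest* aRequest, nsISupports* aContext,
                               nsresult aStatus)
      {
        if (mDecision != eAllowed) {
          // Denied, or the stream ended while still undecided: deliver
          // the deferred OnStartRequest to keep the listener contract
          // intact, and report only the generic error.
          mOuter->OnStartRequest(aRequest, aContext);
          aStatus = NS_ERROR_DOM_BAD_URI;
        }
        return mOuter->OnStopRequest(aRequest, aContext, aStatus);
      }

    private:
      enum Decision { ePending, eAllowed, eDenied };

      // Hypothetical helpers, not shown.
      Decision CheckAccessControlHeaders(nsIRequest* aRequest);
      void StartPIScan();

      nsCOMPtr<nsIStreamListener> mOuter;
      nsCOMPtr<nsISupports> mDeferredContext;
      nsCString mBuffer;   // data withheld until access is decided
      Decision mDecision;
    };

Because the outer listener only ever sees either the full, approved stream or the generic error, the concerns below about premature notifications and .responseText are handled by construction.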
Issues
- We have to check that the code in onStartRequest in the original stream listener doesn't do things that are too late to do once the delayed onStartRequest is called.
- Is it possible to cancel with a network error if we get a 404 or 401 or similar? This would be a good way to keep a page from probing for the existence of files on the server, or from checking whether the user is logged in. (A sketch of such a check follows below.)
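If recancelling on HTTP errors turns out to be possible, the check could live in the proxy's onStartRequest. A sketch under that assumption (nsIHttpChannel and its responseStatus attribute are real; the policy itself is the open question):

    // Fold HTTP-level failures (404, 401, ...) into the same generic
    // cancellation used for access-control denials, so a page cannot
    // probe another server for file existence or login state.
    static PRBool ShouldMaskHttpError(nsIRequest* aRequest)
    {
      nsCOMPtr<nsIHttpChannel> http = do_QueryInterface(aRequest);
      PRUint32 status = 0;
      return http &&
             NS_SUCCEEDED(http->GetResponseStatus(&status)) &&
             status >= 400;
    }

    // ...inside nsCrossSiteListenerProxy::OnStartRequest:
    if (ShouldMaskHttpError(aRequest)) {
      return aRequest->Cancel(NS_ERROR_DOM_BAD_URI);
    }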
Security worries
- The first thing that worries me is that you can make POST submissions to any URL and include XML data as the payload. It is already possible to make POST submissions to any URL, but the only possible payloads are text/plain encoded form data or multipart/form-data encoded files and form data. With Cross-Site XMLHttpRequest it would be possible to send XML data. In particular there is a worry that this would make it possible to do SOAP requests to any server. Note that while the page would be unable to access the data returned by the SOAP request, that isn't necessary if the request itself is "transfer all the user's money to account 12345-67". To avoid this we could either use the same model as for non-GET-non-POST requests defined in the XHR spec [4], or we could use something like [5]
- It is already possible to POST arbitrary data using <form enctype="text/plain">
- We should still investigate whether this will mess up SOAP servers
- Using a magic URL is bad if you want different policies for different files in the same directory.
- Caching should prevent excessive extra GETs even in the case of POST
- Should we try to follow these specs even when accessing files on the same domain? From the site's point of view it can't rely on that anyway, since not all browsers support the access-control spec (and old versions never will).
- No. It'll just trick developers into thinking they are protected against things they really aren't.
- We have to make sure not to notify the onreadystatechange listener or any other listeners until we've done all access control checks. Otherwise it would be possible to check for the availability of files on other servers even though you couldn't actually read the content.
- Should be taken care of by the inner nsIStreamListener approach
- We have to make sure not to put data in .responseText until we've passed the access control checks, even for XML files.
- Should be taken care of by the inner nsIStreamListener approach
- We have to make it impossible to distinguish between an access-control-failed error and network errors such as 404s. Can the implementation "recancel" a canceled channel?
- Might be possible to recancel, have to check implementations.
- An alternative might be to make sure that clients of the new code don't use the error code on the channel but rather the one passed in to onStartRequest/onStopRequest
- Should we check for PIs even if HTTP headers have said that access is granted? It'll always be possible to circumvent those headers using overrideMimeType(), which will make us not treat the document as XML, so we won't even look for PIs. Alternatively we could ignore overrideMimeType() when checking for PIs, but that might be a problem with poorly configured servers (which are the whole reason overrideMimeType() exists)
- Do not pay attention to overrideMimeType() when checking for PIs (it's OK to require that servers are properly configured; that may not work for everyone, but it's safer)
- If headers grant access, do check for PIs
- If headers deny access, don't check for PIs
- This is so that it's easy to deny access everywhere at the server level (a sketch of the resulting decision order follows)
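As a sketch, one reading of these bullets, reusing the Decision enum from the proxy sketch above. The assumption that a PI can revoke header-granted access is mine; the bullets don't settle that combination:

    // Headers are consulted first: a header-level deny is final and the
    // PI is never looked at; a header-level grant can still be revoked by
    // a PI; silent headers leave the decision to the PI alone.
    Decision ComputeDecision(Decision aHeaderDecision, Decision aPIDecision)
    {
      if (aHeaderDecision == eDenied) {
        return eDenied;
      }
      if (aHeaderDecision == eAllowed) {
        return aPIDecision == eDenied ? eDenied : eAllowed;
      }
      // Headers said nothing: only a decisive PI in an XML body grants.
      return aPIDecision == eAllowed ? eAllowed : eDenied;
    }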
- We should make it impossible to set authentication headers, since that would make it easier for a site to attempt (distributed) brute-force hacking against authenticated servers. Note though that such hacking would be significantly complicated by the fact that the server must be password protected but still have files that it grants a 3rd-party server access to, which doesn't really make a lot of sense.
- Also disallow passing login arguments to .open()
- Timeless left some comments at [6]
- Should we send authentication information with the first GET request in the case where we do two requests? Should we send cookies? An alternative is to prefix authentication and cookie headers with 'xmlhttprequest-' or similar, to avoid affecting existing servers while allowing aware servers to look at the relevant headers.
- We might as well include authentication headers and cookies in the original GET, since that request can be made by any third party anyway.
- Not including the authentication header makes it harder on CGIs since the webserver might deny access before the CGI even gets a chance to react.
- Do NOT send custom headers or cookies when talking to external sites -- this risks exposing sensitive IDs, usernames, and passwords when talking to third party services.
- We'll only include the cookie headers for the external site, not the headers of the requesting site. It should be OK to include the cookie headers for the external site since such requests can already be created.
- Might be a good idea to disallow custom headers when talking to external sites, since such headers could confuse the server in unpredictable ways (a sketch of this request-side policy follows)
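A minimal sketch of the policy in the last few bullets, assuming we strip caller-supplied headers when the request is cross-site. PrepareCrossSiteChannel is a hypothetical helper; nsIHttpChannel::SetRequestHeader is real, and setting an empty value removes the header:

    // For a cross-site request, drop any caller-supplied custom headers
    // and let Necko attach only the target site's own cookies and
    // authentication data, as for any other request to that site.
    nsresult PrepareCrossSiteChannel(nsIHttpChannel* aChannel,
                                     const nsTArray<nsCString>& aCustomHeaders)
    {
      for (PRUint32 i = 0; i < aCustomHeaders.Length(); ++i) {
        // An empty value removes the header from the request.
        nsresult rv = aChannel->SetRequestHeader(aCustomHeaders[i],
                                                 EmptyCString(), PR_FALSE);
        NS_ENSURE_SUCCESS(rv, rv);
      }
      return NS_OK;
    }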
- I don't see an adequate threat model described here -- what are the kinds of activities that a potential attacker might use this channel for, and what are some ways to prevent them? For example, how will cross-site XHR be used in conjunction with cross-site scripting attacks?
- Good point. We should create a real threat model.
- My main concern is statements like: "make it impossible to distinguish between an access-control-failed error and network errors such as 404s."
- How will we be able to eliminate timing attacks? There are 4 events which might abort a cross-domain XHR request:
- Name lookup failed (hostname does not exist or is offline)
- Real 404
- Rejection based upon the Content-Access-Control header
- Rejection based upon the XML <?access-control?> PI
An attacker can measure the time it takes before a request is rejected and, based upon this, conclude whether a certain server is running (inside a corporate firewall).
Threats/risks:
- Functional attacks
- DDoS: Most requests could already be made with img tags etc. Crafting POST requests becomes easier (better control over POST data)
- Messes up SOAP: should be researched/tested
- XSS/CSRF: If website A.com is vulnerable to an XSS exploit, then all the data of all other domains that have granted access to *.A.com is suddenly vulnerable
- Propagation of XSS: Suppose we have 3 domains: A.com, B.com and C.com. B.com retrieves data from A.com and renders this data in a <pre> environment. Domain C.com retrieves the content of the <pre> block from B.com. The user cannot control any values of C.com, thus C.com claims to be safe against XSS exploits.
Now suppose we can control the data of A.com; B.com will not have an XSS exploit since the data is in a <pre> tag. Unfortunately C.com has an XSS exploit and will render the code from A.com. This code now runs in the context of C.com and is able to request other data from B.com. The conclusion is that the statement in the previous bullet might have more implications than one thought.
- Implementation attacks
- premature loading of data (fixed by the inner nsIStreamListener)
- side channel attacks (e.g. timing, computational load, measuring network speed/usage)