Cross-Origin Resource Sharing In Simple Words

CORS - Cross-Origin Resource Sharing

CORS, or Cross-Origin Resource Sharing, may be a new concept to you: it is a mechanism that allows a restricted resource on a web page to be requested from a domain other than the one the page itself was served from.

While websites can freely embed (request) cross-origin assets such as CSS stylesheets, images, videos and scripts, other kinds of request are restricted. Cross-domain Ajax requests, for example, are generally blocked, because browsers apply a same-origin security policy that limits scripts to requests against their own origin. Cross-origin requests are barred by default.

CORS is a way for servers and browsers to determine whether a given cross-origin access to a resource is safe to allow. Because safe cross-origin access can be negotiated, developers have more freedom to implement website functionality than they would if they were restricted to same-origin requests, and CORS is more secure than simply permitting every cross-origin request without any limits or restrictions.

The Fetch Living Standard, specified by WHATWG, contains the CORS specification and explains how a CORS implementation should work. CORS was also included in an earlier Recommendation from the W3C.

Understanding the functionality of CORS

In essence, CORS defines a new set of HTTP headers that lets a browser request a remote URL when it has permission to do so. Some validation and authorization can be performed server-side, but it is usually the browser that takes responsibility for supporting the CORS headers and honouring any restrictions they specify.

Where a request could modify data (an Ajax or other HTTP request using a method other than a simple GET, or a POST with certain MIME types), the CORS specification requires the browser to perform a “preflight”. The browser first asks the server which methods it supports using the HTTP OPTIONS method, and once the answer arrives it sends the actual request with the intended HTTP method. A server can also advise the client whether credentials, such as HTTP authentication information or cookies, should be sent along with the request.

An example of CORS in action

Let’s say someone visits the website http://www.supersite.com, and the web page then tries to make a cross-origin HTTP request for data from http://help.supersite.com. If the user’s browser supports CORS, it will attempt a valid cross-origin HTTP request to help.supersite.com as follows:

  1. The browser sends a GET request with an extra Origin HTTP header to help.supersite.com. This header contains the domain of the parent page that made the request:
     Origin: http://www.supersite.com
  2. The server at help.supersite.com can respond in one of the following ways:
    1. With the data that was requested, alongside an ACAO header (that’s an Access-Control-Allow-Origin header) indicating that it accepts requests from that origin, which could look like:
       Access-Control-Allow-Origin: http://www.supersite.com
    2. With the data, alongside an ACAO header saying that it accepts requests from any domain:
       Access-Control-Allow-Origin: *
    3. With an error page, stating that it does not accept cross-origin requests
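
To make this concrete, here is a minimal sketch of the first kind of response – in TypeScript, using Node’s built-in http module, with the hypothetical host names from the example above – showing how help.supersite.com might decide whether to attach the ACAO header:

```typescript
import * as http from "http";

// Origins we are willing to share this resource with (hypothetical
// values taken from the example above).
const allowedOrigins = new Set(["http://www.supersite.com"]);

http.createServer((req, res) => {
  const origin = req.headers.origin;

  // Echo the origin back in Access-Control-Allow-Origin only if we
  // recognise it; with no CORS header in the response, the browser
  // will refuse to expose the data to the requesting page.
  if (origin && allowedOrigins.has(origin)) {
    res.setHeader("Access-Control-Allow-Origin", origin);
  }

  res.setHeader("Content-Type", "application/json");
  res.end(JSON.stringify({ message: "Hello from help.supersite.com" }));
}).listen(8080);
```

Note that for a simple GET like this the server sends the data either way – the ACAO header only controls whether the browser lets the page read it – which is why CORS complements, rather than replaces, server-side access control.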

Sometimes an API response or a page is considered public content that anyone may access, and that’s where the wildcard response is used, so that code on any site can read it. Google Fonts, for example, serves its freely available web fonts to the public this way.

The object-capability model is another place where wildcard origin policies appear, because its pages have URLs that simply cannot be guessed – the idea being that anyone who knows a unique URL should be able to access it.

The wildcard value “*” is special in that it does not allow requests to supply credentials: client-side SSL certificates, HTTP authentication and even cookies are all prohibited in a cross-domain request sent under the wildcard.

Note that the ACAO header must be set by the server hosting the requested resource – not by the server that serves the page making the request. In the example above, it is help.supersite.com that uses CORS to permit the browser to send it requests from www.supersite.com.

An example of CORS preflight

Modern CORS-supporting browsers require an additional “preflight” request before certain cross-domain Ajax requests, to determine whether they have permission to take the intended action. Preflights exist because such requests can have implications for user data. A preflight looks like this:

OPTIONS /
Host: help.supersite.com
Origin: http://www.supersite.com

If help.supersite.com is happy to allow the requested action, it can send back these HTTP headers:

Access-Control-Allow-Origin: http://www.supersite.com
Access-Control-Allow-Methods: PUT, DELETE

The browser then makes the request it originally wanted to make. On the flipside, if help.supersite.com does not allow cross-site HTTP requests, it simply returns an error to the OPTIONS request – and, as a consequence, the browser never makes the request it intended.
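
On the client side, no special code is needed to trigger a preflight – the browser adds it automatically for non-simple requests. A hedged sketch in TypeScript, using the standard fetch API with the hypothetical host from the example (the /data path is an assumption for illustration):

```typescript
// Because PUT is not a "simple" method (and application/json is not a
// simple content type), the browser first sends OPTIONS /data with an
// Origin header. Only if the preflight response allows PUT from this
// origin does the browser send the PUT itself.
async function updateRecord(): Promise<void> {
  const response = await fetch("http://help.supersite.com/data", {
    method: "PUT",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ status: "updated" }),
  });
  if (!response.ok) {
    throw new Error(`Request failed: ${response.status}`);
  }
}
```

The preflight is invisible to this code: if help.supersite.com rejects the OPTIONS request, the fetch call simply rejects with a network error.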

Headers specific to CORS

The HTTP headers used by CORS include:

Request Headers

  • Origin
  • Access-Control-Request-Method
  • Access-Control-Request-Headers

Response Headers

  • Access-Control-Allow-Origin
  • Access-Control-Allow-Credentials
  • Access-Control-Expose-Headers
  • Access-Control-Max-Age
  • Access-Control-Allow-Methods
  • Access-Control-Allow-Headers
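
As a brief sketch of how two of these headers work together: a cross-origin request only carries cookies if the client explicitly asks for credentials and the server explicitly allows them. In TypeScript with the standard fetch API (the URL is hypothetical):

```typescript
async function loadProfile(): Promise<unknown> {
  // Opt in to sending cookies with this cross-origin request.
  const response = await fetch("http://help.supersite.com/profile", {
    credentials: "include",
  });
  // For the browser to expose this response, the server must reply with
  // both headers below – and the allowed origin must be explicit, since
  // the wildcard "*" is rejected for credentialed requests:
  //
  //   Access-Control-Allow-Origin: http://www.supersite.com
  //   Access-Control-Allow-Credentials: true
  return response.json();
}
```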

Which browsers support CORS?

Given that CORS is a relatively new technology that depends on browser support, it’s worth noting that you need a reasonably up-to-date browser to use a website that relies on CORS. Chrome 28+ will work, as will Opera 15+ and Android 4.4+. Some Presto-based browsers – that’s Opera – will work, while Gecko supports CORS from 1.9.1 (Firefox 3.5) onwards. Every version of Microsoft Edge also supports CORS.

The history of CORS

Three staff members at Tellme Networks – Michael Bodell, Matt Oshry and Brad Porter – originally started working on CORS in March 2004, intending it for inclusion in VoiceXML version 2.1. The idea was to ensure that VoiceXML browsers could safely make cross-origin data requests.

However, the mechanism they developed clearly had wider applications beyond VoiceXML, which is why it was subsequently moved into an implementation NOTE. The W3C’s web applications working group, alongside the major browser developers, then began formalising the specification into a working draft, which was eventually put on track to become a formal W3C Recommendation.

The first W3C working draft was submitted in May 2006, and nearly three years later the draft was given its name: CORS. It was only at the beginning of 2014 that CORS was formally accepted as a Recommendation by the W3C.

Differences between JSONP and CORS

It’s worth noting that CORS can be seen as a better, more contemporary alternative to the JSONP pattern. CORS offers a number of advantages, including support for the full range of HTTP request methods – it is not restricted to GET, like JSONP.

Furthermore, XMLHttpRequest can be used in the context of CORS, which gives CORS better error handling than JSONP. JSONP is also susceptible to cross-site scripting (XSS) problems if the external website is compromised; in contrast, CORS lets a site parse responses manually, which improves overall security.

However, JSONP does have the advantage of working in legacy browsers that lack CORS support – but this is of little consequence today, because CORS is now widely supported by everyday browsers.
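
The contrast is easy to see in code. A hedged sketch in TypeScript – the URLs, the callback query parameter and the global callback name are assumptions for illustration:

```typescript
// JSONP: inject a <script> tag and let the remote server wrap its data
// in a call to a global callback. Only GET is possible, errors surface
// poorly, and the remote script runs with full page privileges.
function jsonpFetch(url: string, callbackName: string): void {
  (window as any)[callbackName] = (data: unknown) => console.log(data);
  const script = document.createElement("script");
  script.src = `${url}?callback=${callbackName}`;
  document.head.appendChild(script);
}

// CORS: an ordinary fetch/XMLHttpRequest call. Any HTTP method works,
// failures are reported as normal errors, and the response is inert
// data that the page parses itself.
async function corsFetch(url: string): Promise<unknown> {
  const response = await fetch(url);
  if (!response.ok) throw new Error(`HTTP ${response.status}`);
  return response.json();
}
```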

Understanding Accelerated Mobile Pages (AMP)

AMP - Accelerated Mobile Pages

AMP, or Accelerated Mobile Pages, is an open-source framework commonly used on mobile devices – you’ve probably used it while searching for content without even knowing it. AMP was developed by Google, with involvement from Twitter. Its aim is to give mobile users better, faster experiences: it gives developers the option to simplify both CSS and HTML so that mobile users get a more “lightweight” page.

Facebook started the trend with its Instant Articles and Google responded with AMP. However, AMP has gained a bigger foothold and today it is a commonly used way to deliver mobile content from search results at a much faster speed – compared to serving a standard page. In fact, AMP has become so prominent and so popular that Google has been pushing to get AMP included in the web standards framework.

How AMP works its magic

There are basically three main components to AMP. Two are code-related – AMP’s version of HTML, and AMP JS – and there is also a content delivery network (CDN) working behind the scenes. Let’s take a look at each:

  • HTML. Called AMP HTML, consider it a slimmed-down version of standard hypertext markup language (HTML). In essence, AMP HTML restricts which HTML tags developers can use, and the goal of these restrictions is to improve page-load speeds. The same goes for CSS: AMP limits the CSS you can use, too. A full list of AMP HTML tags is available, and most seasoned developers will recognize it straight away.
  • JavaScript. Developers cannot use their own JavaScript when coding AMP pages – again, like the HTML restrictions, the goal is to cut out code that can make a page load slowly. Instead, developers use AMP scripts, which are optimized so that pages load quickly. So, even though you cannot write arbitrary JavaScript with AMP, you can rely on AMP’s library of components – for animations, dynamic content and layout changes – and you’re even covered for data compliance.
  • Content delivery. AMP has a content delivery network (CDN) that speeds up the delivery of web content – it’s called AMP Cache. It’s proxy-based and acts as a cache, storing all valid AMP content – you cannot, by default, opt out of using AMP Cache. But don’t worry, if you have your own CDN you can still use it by layering it on top of AMP Cache.

Is AMP a good idea for your site?

It depends on what your website is geared to do. If you serve news and other mostly static stories, AMP can work really well: AMP is known to generate more organic search referrals, particularly for media sites. Furthermore, media sites can make their content stand out using Google’s Rich Cards.

E-commerce operators might want to think twice as there’s currently no settled opinion on the value of AMP for retailers. At issue is the dynamic nature of these pages – e-commerce sites involve lots of user interaction such as sorting, filtering and adding goods to a cart.

Nonetheless, there is a general agreement that AMP, used correctly, can achieve the following for website owners:

  • Deliver a big boost for organic search, with much more traffic sent from Google
  • Improve conversion thanks to a much-improved mobile experience; AMP can also boost engagement with mobile users
  • AMP CDN also reduces server load because content is served from AMP Cache
  • Implementing AMP can boost your prominence in mobile search results as your site will feature in the AMP carousel

Things to watch out for with AMP

Perhaps the biggest issue with AMP is that it is quite an involved process – setting your website up to serve AMP pages requires a lot of work. While serving better mobile pages should always be a priority, you should weigh the benefits against the potential costs of implementing AMP. In fact, putting AMP in place may mean running your site “in parallel”, with one set of assets for normal content and one set for AMP.

AMP also makes it harder to measure website traffic. Because of the AMP CDN, you won’t be able to rely on counting server requests to measure traffic; instead, you will need other methods of tracking users to get a true view of CTRs, and it will be a bit trickier to measure engagement on the AMP version of your site.

Another point to keep in mind is the user experience. Because AMP is stripped-down HTML, you will not be able to deliver some content types – think of rotating image galleries, or a navigable map. UX-heavy parts of your site will need to be rebuilt for AMP, so in essence you’re running two sites.

Finally, site owners should also beware of the fact that the nature of AMP means that users are more likely to head back to search results after they view your page, rather than engage further with your site. This has a negative impact on engagement and conversion.

Using the opportunities AMP provides

AMP has its drawbacks but many site owners will benefit from looking into the mobile traffic benefits that AMP can bring. You can start off by building an AMP version of your site so that you can feature more prominently in mobile web searches. Next, consider developing ways to easily route mobile users who land on an AMP page straight to your mobile app. Do this and you’ll mitigate the loss of engagement while also getting access to improved analytics. It’s also worth trying to get a full view of how your user journeys from AMP content to your app or to your website. Do AMP users convert? It’s worth trying to find this out as you experiment with an AMP version of your site.

REST – All You Have To Know About Representational State Transfer

REST

REST, short for representational state transfer, is a type of software architecture that was designed to ensure interoperability between different Internet computer systems. REST works by putting in place very strict constraints for the development of web services. Services that conform to the REST architecture can more easily communicate with one another.

RESTful services can request and manipulate textual representations of web resources via a predefined set of operations that are uniform – and stateless. REST is just one style of architecture – SOAP web services are another example – and each comes with its own specific rules and operations.

The roots of REST

The original definition of a web resource was basically a document or file that could be accessed via its URL. As the web became more complex, however, the term “web resource” was expanded: today, a web resource is essentially anything that can be found, named, reached or manipulated on the web.

When a client requests a resource from a RESTful web service via its URI, it gets a response containing a “payload” formatted in one of a number of set formats, ranging from HTML and XML through to JSON. A response to a RESTful request can also state that something about the resource has changed, and it can deliver hypertext links to resources related to the requested one – or to entire collections. When using HTTP (usually the protocol of choice), the developer has access to a range of operations including GET, POST, HEAD and even DELETE and TRACE.

REST developers use these standard operations alongside a stateless protocol. The resulting RESTful system is more reliable, performs faster and lets developers reuse REST components. Components can be updated or managed on the fly without affecting the underlying system as a whole – you can change RESTful system components while the system is running.
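
As a minimal sketch of these ideas in TypeScript, using Node’s built-in http module (the /notes/1 resource and its shape are assumptions for illustration), here is one resource exposed through uniform, stateless operations:

```typescript
import * as http from "http";

// A single web resource, identified by the URI /notes/1.
let note = { id: 1, text: "first draft" };

http.createServer((req, res) => {
  if (req.url !== "/notes/1") {
    res.writeHead(404).end();
    return;
  }
  switch (req.method) {
    case "GET": // retrieve a representation of the resource
      res.writeHead(200, { "Content-Type": "application/json" });
      res.end(JSON.stringify(note));
      break;
    case "PUT": { // replace the resource with the supplied representation
      let body = "";
      req.on("data", (chunk) => (body += chunk));
      req.on("end", () => {
        note = JSON.parse(body);
        res.writeHead(200, { "Content-Type": "application/json" });
        res.end(JSON.stringify(note));
      });
      break;
    }
    case "DELETE": // remove the resource
      res.writeHead(204).end();
      break;
    default:
      res.writeHead(405).end();
  }
}).listen(8080);
```

Every request here carries everything the server needs to answer it, and nothing is remembered between calls – which is exactly the statelessness constraint discussed below.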

Roy Fielding developed REST in 2000 as part of his doctorate. His dissertation set out REST principles in the context of the HTTP object model, which dates from 1994; these principles were used in developing HTTP 1.1, and also in developing the standards around the URI – the uniform resource identifier.

Why the name REST?

As you can imagine, the term REST was chosen deliberately: it was meant to make developers reflect on how a properly designed web application behaves. Such an app acts as a network of web resources (also called a virtual “state machine”) in which the user steps through the application by picking resource identifiers (a specific page or URL) and then applying an HTTP operation such as POST – the “applicable” state transitions. The representation of the resource is then sent to the end user, and this is the next application “state”.

The 2000 PhD dissertation, and what Fielding said

Roy Fielding, a student at the University of California, Irvine, completed his PhD dissertation, “Architectural Styles and the Design of Network-based Software Architectures”, in 2000. Although it was published in 2000, he had in fact been working on REST for a number of years – arguably in parallel with HTTP 1.1, which was developed between 1996 and 1999 and was of course based on the HTTP 1.0 design of 1996.

In 2009, looking back at how REST was developed, Fielding said that he was involved in the HTTP standardization process and was often asked to defend the choices made in designing how the Web works. He described how challenging that was, given that anyone could submit a suggestion on how the Web should work, and given that the Web was becoming absolutely central to an industry growing at breakneck pace.

Fielding says that at the time he had over five hundred developers, many of them highly experienced engineers, commenting on the development of the Web, and that part of his job was to explain all of the Web’s concepts from the ground up. Doing so, he says, helped him settle on some important principles, and helped outline the constraints and properties that are now part of the REST architecture.

What are the key architectural principles of the REST architecture?

We explained earlier that REST works in large part through the constraints it places on web architecture. These constraints affect web architecture in the following ways:

  • By placing requirements on how quickly component interactions are performed – a key determinant in the way users perceive network performance, and a key factor in overall network efficiency.
  • By enforcing scalability – in other words, so that large numbers of components are supported, alongside interactions between these components.
  • Demanding a simple, uniform interface
  • Ensuring that components can be easily modified when user requirements change, even while the app is running
  • Making sure that the communication between service agents and components is totally visible
  • Ensuring that components are portable as the code behind components can be moved with data intact
  • Finally, REST demands that the entire system is resilient, no single connector or component failure should result in the entire system collapsing

It’s worth noting that Fielding made some specific comments about the scalability aspects of the REST architecture. He pointed to one of its distinguishing characteristics: the separation of concerns, which ensures that components serve distinct purposes. This makes components easier to implement, makes connectors simpler, makes applications easier to tune for performance, and makes server components more scalable. And because system constraints are layered under REST, an intermediary such as a firewall or a gateway can be added to an application without changing the way components interface with one another. Such intermediaries can therefore easily assist in translating communications, or in improving performance – Fielding pointed to large-scale, shared caches as an example.

Fielding said that, overall, REST makes intermediate processing easier: it requires that messages be self-descriptive, it ensures that interactions are stateless between requests, it imposes standard methods and standard media types when exchanging information, and its responses are cache-friendly.

The architectural constraints used by REST

Looking beyond architectural principles, it’s important to understand the constraints that define which systems qualify as RESTful. There are six, and they are designed to limit the way servers respond to and process client requests. By imposing these limits, the system gains positive characteristics: it performs faster and is simpler, more scalable and more portable; it is easier to modify while also being more visible. Overall, a RESTful system is more reliable too – thanks to the formal REST constraints:

Architecture based on a client-server approach

We touched on this point before: the separation of concerns. With this approach, the user interface tasks (or “concerns”) are kept separate from the data storage tasks. In doing so, the user interface becomes more portable and can be carried across to a different platform.

At the same time the system is more scalable, because the server components are simplified. In the context of the web, this separation also means that distinct web components can grow and evolve on their own – a requirement for Internet-scale operation across multiple organizational domains.

REST is stateless

This is an important constraint: no client context may be stored on the server between requests. Each request must contain all the information needed to answer it fully, and the client itself holds the session state. The session state can, however, be transferred to, say, a database, so that it persists for a period of time – this can enable authentication, for example.

When the client is ready to transition to the next state, it starts sending requests; while one or more requests are outstanding, the client is considered to be in transition. To start another state transition, the client uses the links contained in the representation of the current application state.
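
In practice, this means each request carries its own credentials and context instead of pointing at a server-side session. A sketch assuming a token-based scheme, in TypeScript (the URL and token handling are hypothetical):

```typescript
// The token travels with every request, so each request is fully
// self-contained and any server replica can answer it without
// consulting shared session state.
async function getOrders(token: string): Promise<unknown> {
  const response = await fetch("https://api.example.com/orders", {
    headers: { Authorization: `Bearer ${token}` },
  });
  return response.json();
}
```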

Content can be cached

In RESTful architecture, both clients and intermediaries can cache the responses to requests. However, every response must classify itself as either cacheable or non-cacheable – this ensures that stale data, or data that is not appropriate for a cache, is never served from one. Thanks to caching there are fewer interactions between client and server, which means apps perform better and scale better.
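
With HTTP, this classification is expressed through standard response headers. A minimal sketch in TypeScript (the ten-minute lifetime is an arbitrary illustrative value):

```typescript
import type { ServerResponse } from "http";

// Declare a response either safe to reuse for ten minutes, or as
// something that must never be stored. RESTful responses must make
// this choice explicit one way or the other.
function markCacheable(res: ServerResponse, cacheable: boolean): void {
  res.setHeader(
    "Cache-Control",
    cacheable ? "public, max-age=600" : "no-store"
  );
}
```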

Layered architecture

Clients usually cannot tell whether they are connected directly to the server or through an intermediary. Where load balancers or proxies act as intermediaries, they cause no issues with communication; nor, under a layered architecture, does adding a proxy or load balancer require any changes to code on the client or on the server.

Servers acting as intermediaries can help scale apps by working as a shared cache, or as a load balancer. Security is another benefit of the layered approach: a security layer can be added on top of a web app so that the security logic runs separately from the application logic, letting developers enforce security policies this way. Note also that the layered approach means a server can call several other servers in order to deliver a response to a client.

Code-on-demand

Finally, servers can hand code-execution responsibilities to clients by sending them code – for example, a server can send Java applets or JavaScript to a client so that the code is executed on the client side.

Uniform interfaces

Perhaps the most fundamental aspect of RESTful systems is the uniform interface, a way to decouple and simplify application architecture so that each component can develop and change on its own. There are four further constraints involved in uniform interfaces:

  • Requests must contain resource identifiers. For example, a URI contained in a request identifies the resource. Note that, conceptually, resources are distinct from the representations returned to the client: no matter how the server stores its own data, the data sent in answer to a request could be anything from XML to HTML or even JSON.
  • Resources are manipulated through representations. A client holding a representation of a resource – including any metadata – has sufficient information to change the resource, or to delete it.
  • Messages are self-descriptive. Every message sent under a RESTful architecture contains sufficient information for the recipient to process it – for example, a message can use a media type to specify which parser should be used.
  • HATEOAS. Short for “hypermedia as the engine of application state”, this principle works much like a human landing on the home page of a website: the client should be able to use the links provided by the server to discover, dynamically, which activities and resources are available to it (see the sketch after this list).
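
A hedged sketch of what a HATEOAS-style representation can look like, in TypeScript (the resource, its fields and the link relations are assumptions for illustration, not a prescribed format):

```typescript
// A representation that tells the client, via links, which state
// transitions are available next: the client discovers "self" and
// "cancel" instead of hard-coding those URIs.
interface Link {
  rel: string;  // relation, e.g. "self" or "cancel"
  href: string; // URI the client can follow
}

interface OrderRepresentation {
  id: number;
  status: string;
  links: Link[];
}

const order: OrderRepresentation = {
  id: 42,
  status: "processing",
  links: [
    { rel: "self", href: "/orders/42" },
    { rel: "cancel", href: "/orders/42/cancellation" },
  ],
};
```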

A few other points to note about REST

SOAP has specific standards that define SOAP web services, but in the case of RESTful applications there is no officially issued standards document. In a way, you can see REST as a style of web architecture rather than a specification or a protocol.

So, REST is not a standard, but implementations that would objectively qualify as “RESTful” depend heavily on standards – indeed, they are standardised to a degree, using HTTP, XML and JSON. Note, too, that developers sometimes describe the APIs they create as complying with REST architecture even when those APIs skirt around some of the constraints RESTful architecture imposes.