The Servo Book

Servo is a web browser engine written in the Rust programming language, and currently developed on 64-bit Linux, 64-bit macOS, 64-bit Windows, and Android.

Work is still ongoing to make Servo consumable as a webview library, so for now, the only supported way to use Servo is via servoshell, our winit- and egui-based example browser.

Screenshot of servoshell

This book will be your guide to building and running servoshell, hacking on and contributing to Servo, the architecture of Servo, and how to consume Servo and its libraries.

This book is still in early development! In the table of contents, * denotes chapters that have been imported from our existing docs, and still need to be copyedited or reworked. During this early stage of development, we’ll be frequently rewriting the Git history, so please be patient with us if you make a pull request.

Need help?

Join the Servo Zulip if you have any questions. Everyone is welcome!

Getting Servo

You can download the latest prebuilt version of servoshell from the Downloads section on our website. Older nightly releases are available in the servo/servo-nightly-builds repo.

Downloading the source

To make changes to Servo or build servoshell yourself, clone the main repo with Git:

$ git clone
$ cd servo

Windows users: you will need to clone the Servo repo into the same drive as your CARGO_HOME folder (#28530).

Servo’s main repo is pretty big! If you have an unreliable network connection or limited disk space, consider making a shallow clone.

Running servoshell

Assuming you’re in the directory containing servo, you can run servoshell with:

$ ./servo [url] [options]

Use --help to list the available command line options:

$ ./servo --help

Use --pref to configure Servo’s behaviour, including to enable experimental web platform features. For example, to run our Conway’s Game of Life demo with WebGPU enabled:

$ ./servo --pref dom.webgpu.enabled

Use --devtools=6080 to enable support for debugging pages with Firefox devtools:

$ ./servo --devtools=6080

Note: devtools support is currently broken (#29831).

Runtime dependencies

On Linux, servoshell requires:

  • GStreamer ≥ 1.18
  • gst-plugins-base ≥ 1.18
  • gst-plugins-good ≥ 1.18
  • gst-plugins-bad ≥ 1.18
  • gst-plugins-ugly ≥ 1.18
  • libXcursor
  • libXrandr
  • libXi
  • libxkbcommon
  • vulkan-loader

Contributing to Servo

Servo welcomes contributions from everyone. Here are the guidelines if you are thinking of helping us:


Contributions to Servo or its dependencies should be made in the form of GitHub pull requests. Each pull request will be reviewed by a core contributor (someone with permission to land patches) and either landed in the main tree or given feedback for changes that would be required. All contributions should follow this format, even those from core contributors.

Should you wish to work on an issue, please claim it first by commenting on the GitHub issue that you want to work on it. This is to prevent duplicated efforts from contributors on the same issue.

Head over to Servo Starters to find good tasks to start with. If you come across words or jargon that do not make sense, please check the glossary first. If there's no matching entry, please make a pull request to add one with the content TODO so we can correct that!

See the following chapters for more information on how to start working on Servo.

Pull request checklist

  • Branch from the main branch and, if needed, rebase to the current main branch before submitting your pull request. If it doesn't merge cleanly with main you may be asked to rebase your changes.

  • Commits should be as small as possible, while ensuring that each commit is correct independently (i.e., each commit should compile and pass tests).

  • Commits should be accompanied by a Developer Certificate of Origin (DCO) sign-off, which indicates that you (and your employer if applicable) agree to be bound by the terms of the project license. In Git, this is the -s option to git commit.

  • If your patch is not getting reviewed or you need a specific person to review it, you can @-reply a reviewer asking for a review in the pull request or a comment, or you can ask for a review in the Servo chat.

  • Add tests relevant to the fixed bug or new feature. For a DOM change this will usually be a web platform test; for layout, a reftest. See our testing guide for more information.

For specific git instructions, see GitHub workflow 101.

Running tests in pull requests

When you push to a pull request, GitHub automatically checks that your changes have no compilation, lint, or tidy errors.

To run unit tests or Web Platform Tests against a pull request, add one or more of the labels below to your pull request. If you do not have permission to add labels, add a comment on your pull request requesting that they be added.

Label              Runs unit tests on   Runs web tests on
T-full             All platforms        Linux
T-linux-wpt-2013   Linux                Linux (only legacy layout)
T-linux-wpt-2020   Linux                Linux (skip legacy layout)


The Servo Code of Conduct is published on the Servo website.

Technical Steering Committee

Technical oversight of the Servo Project is provided by the Technical Steering Committee.


mach

mach is a Python program that does plenty of things to make working on Servo easier, like building and running Servo, running tests, and updating dependencies.

Windows users: you will need to replace ./mach with .\mach in the commands in this book.

Use --help to list the subcommands, or get help with a specific subcommand:

$ ./mach --help
$ ./mach build --help

When you use mach to run another program, such as servoshell, that program may have its own options with the same names as mach options. You can use --, surrounded by spaces, to tell mach not to touch any subsequent options and leave them for the other program.

$ ./mach run --help         # Gets help for `mach run`.
$ ./mach run -d --help      # Still gets help for `mach run`.
$ ./mach run -d -- --help   # Gets help for the debug build of servoshell.

This also applies to the Servo unit tests, where there are three layers of options: mach options, cargo test options, and libtest options.

$ ./mach test-unit --help           # Gets help for `mach test-unit`.
$ ./mach test-unit -- --help        # Gets help for `cargo test`.
$ ./mach test-unit -- -- --help     # Gets help for the test harness (libtest).

Work is ongoing to make it possible to build Servo without mach. Where possible, consider whether you can use native Cargo functionality before adding new functionality to mach.

Building Servo

If this is your first time building Servo, be sure to set up your environment before continuing with the steps below.

To build servoshell for your machine:

$ ./mach build -d

To build servoshell for Android (armv7):

$ ./mach build --android

Sometimes the tools or dependencies needed to build Servo will change. If you start encountering build problems after updating Servo, try running ./mach bootstrap again, or set up your environment from the beginning.

You are not alone! If you have problems building Servo that you can’t solve, you can always ask for help in the build issues chat on Zulip.

Build profiles

There are three main build profiles, which you can build and use independently of one another:

  • debug builds, which allow you to use a debugger (lldb)
  • release builds, which are slower to build but more performant
  • production builds, which are used for official releases only
                            debug    release   production
mach option                 -d       -r        --profile production
optimised?                  no       yes       yes
debug info?                 yes      no        no
debug assertions?           yes      yes(!)    no
maximum RUST_LOG level      trace    info      info
SpiderMonkey debug build?   no, unless you ./mach build --debug-mozjs
finds resources in
current working dir?        yes      yes       no

You can change these settings in a servobuild file (see servobuild.example) or in the root Cargo.toml.

Setting up your environment

Before you can build Servo, you will need to:

  1. Install the tools for your platform: Windows, macOS, Linux
  2. If you are on NixOS, no further action is needed!
  3. Install the other dependencies with one of the methods below:

Sometimes the tools or dependencies needed to build Servo will change. If you start encountering build problems after updating Servo, try running ./mach bootstrap again, or set up your environment from the beginning.

You are not alone! If you have problems setting up your environment that you can’t solve, you can always ask for help in the build issues chat on Zulip.

Checking if you have the tools installed

  • curl --version should print a version like 7.83.1 or 8.4.0
    • On Windows, type curl.exe --version instead, to avoid getting the PowerShell alias for Invoke-WebRequest
  • python --version should print 3.11.0 or newer (3.11.1, 3.12.0, …)
  • rustup --version should print a version like 1.26.0
  • (Windows only) choco --version should print a version like 2.2.2
  • (macOS only) brew --version should print a version like 4.2.17

Tools for Windows

Note that curl will already be installed on Windows 10 version 1804 or newer.

  • Download and install python from the Python website
  • Download and install choco from the Chocolatey website
  • If you already have rustup, download the Community edition of Visual Studio 2022
  • If you don’t have rustup, download and run the rustup installer: rustup-init.exe
    • Be sure to select Quick install via the Visual Studio Community installer
  • In the Visual Studio installer, ensure the following components are installed:
    • Windows 10 SDK (10.0.19041.0)
    • MSVC v143 - VS 2022 C++ x64/x86 build tools (Latest)
    • C++ ATL for latest v143 build tools (x86 & x64)
    • C++ MFC for latest v143 build tools (x86 & x64)

We don’t recommend having more than one version of Visual Studio installed. Servo will try to search for the appropriate version of Visual Studio, but having only a single version installed means fewer things can go wrong.

Tools for macOS

Note that curl will already be installed on macOS.

Tools for Linux

  • Install curl and python:
    • Arch: sudo pacman -S --needed curl python python-pip
    • Debian, Ubuntu: sudo apt install curl python3-pip python3-venv
    • Fedora: sudo dnf install curl python3 python3-pip python3-devel
    • Gentoo: sudo emerge net-misc/curl dev-python/pip
  • Download and install rustup from the rustup website

On NixOS, type nix-shell to enter a shell with all of the necessary tools and dependencies. You can install python3 to allow mach to run nix-shell automatically:

  • environment.systemPackages = [ pkgs.python3 ]; (install globally)
  • home.packages = [ pkgs.python3 ]; (install with Home Manager)

Dependencies for any Linux distro, using Nix

  • Make sure you have curl and python installed (see Tools for Linux)
  • Make sure you have the runtime dependencies installed as well
  • Install Nix, the package manager — the easiest way is to use the installer, with either the multi-user or single-user installation (your choice)
  • Tell mach to use Nix: export MACH_USE_NIX=

Dependencies for Arch

(including Manjaro)

  • sudo pacman -S --needed curl python python-pip

  • sudo pacman -S --needed base-devel git mesa cmake libxmu pkg-config ttf-fira-sans harfbuzz ccache llvm clang autoconf2.13 gstreamer gstreamer-vaapi gst-plugins-base gst-plugins-good gst-plugins-bad gst-plugins-ugly vulkan-icd-loader

Dependencies for Debian

(including elementary OS, KDE neon, Linux Mint, Pop!_OS, Raspbian, TUXEDO OS, Ubuntu)

  • sudo apt install curl python3-pip python3-venv
  • sudo apt install build-essential ccache clang cmake curl g++ git gperf libdbus-1-dev libfreetype6-dev libgl1-mesa-dri libgles2-mesa-dev libglib2.0-dev gstreamer1.0-plugins-good libgstreamer-plugins-good1.0-dev gstreamer1.0-plugins-bad libgstreamer-plugins-bad1.0-dev gstreamer1.0-plugins-ugly gstreamer1.0-plugins-base libgstreamer-plugins-base1.0-dev gstreamer1.0-libav libgstrtspserver-1.0-dev gstreamer1.0-tools libges-1.0-dev libharfbuzz-dev liblzma-dev libudev-dev libunwind-dev libvulkan1 libx11-dev libxcb-render0-dev libxcb-shape0-dev libxcb-xfixes0-dev libxmu-dev libxmu6 libegl1-mesa-dev llvm-dev m4 xorg-dev

Dependencies for Fedora

  • sudo dnf install python3 python3-pip python3-devel
  • sudo dnf install libtool gcc-c++ libXi-devel freetype-devel libunwind-devel mesa-libGL-devel mesa-libEGL-devel glib2-devel libX11-devel libXrandr-devel gperf fontconfig-devel cabextract ttmkfdir expat-devel rpm-build cmake libXcursor-devel libXmu-devel dbus-devel ncurses-devel harfbuzz-devel ccache clang clang-libs llvm python3-devel gstreamer1-devel gstreamer1-plugins-base-devel gstreamer1-plugins-good gstreamer1-plugins-bad-free-devel gstreamer1-plugins-ugly-free libjpeg-turbo-devel zlib libjpeg vulkan-loader

Dependencies for Gentoo

  • sudo emerge net-misc/curl media-libs/freetype media-libs/mesa dev-util/gperf dev-python/pip dev-libs/openssl media-libs/harfbuzz dev-util/ccache sys-libs/libunwind x11-libs/libXmu x11-base/xorg-server sys-devel/clang media-libs/gstreamer media-libs/gst-plugins-base media-libs/gst-plugins-good media-libs/gst-plugins-bad media-libs/gst-plugins-ugly media-libs/vulkan-loader

Dependencies for openSUSE

  • sudo zypper install libX11-devel libexpat-devel Mesa-libEGL-devel Mesa-libGL-devel cabextract cmake dbus-1-devel fontconfig-devel freetype-devel gcc-c++ git glib2-devel gperf harfbuzz-devel libXcursor-devel libXi-devel libXmu-devel libXrandr-devel libopenssl-devel python3-pip rpm-build ccache llvm-clang libclang autoconf213 gstreamer-devel gstreamer-plugins-base-devel gstreamer-plugins-good gstreamer-plugins-bad-devel gstreamer-plugins-ugly vulkan-loader libvulkan1

Dependencies for Void Linux

  • sudo xbps-install libtool gcc libXi-devel freetype-devel libunwind-devel MesaLib-devel glib-devel pkg-config libX11-devel libXrandr-devel gperf bzip2-devel fontconfig-devel cabextract expat-devel cmake libXcursor-devel libXmu-devel dbus-devel ncurses-devel harfbuzz-devel ccache glu-devel clang gstreamer1-devel gst-plugins-base1-devel gst-plugins-good1 gst-plugins-bad1-devel gst-plugins-ugly1 vulkan-loader

Troubleshooting your build

(on Linux)
error: getting status of /nix/var/nix/daemon-socket/socket: Permission denied

If you get this error and you’ve installed Nix with your system package manager:

  • Add yourself to the nix-users group
  • Log out and log back in
(on Linux)
error: file 'nixpkgs' was not found in the Nix search path (add it using $NIX_PATH or -I)

This error is harmless, but you can fix it as follows:

  • Run sudo nix-channel --add nixpkgs
  • Run sudo nix-channel --update
(on Windows)
Cannot run mach in a path on a case-sensitive file system on Windows.
  • Open a command prompt or PowerShell as administrator (Win+X, A)
  • Disable case sensitivity for your Servo repo:
    fsutil file SetCaseSensitiveInfo X:\path\to\servo disable
(on Windows)
Could not find DLL dependency: api-ms-win-crt-runtime-l1-1-0.dll
DLL file `api-ms-win-crt-runtime-l1-1-0.dll` not found!

Find the path to Redist\ucrt\DLLs\x64\api-ms-win-crt-runtime-l1-1-0.dll, e.g. C:\Program Files (x86)\Windows Kits\10\Redist\ucrt\DLLs\x64\api-ms-win-crt-runtime-l1-1-0.dll.

Then set the WindowsSdkDir environment variable to the path that contains Redist, e.g. C:\Program Files (x86)\Windows Kits\10.

(on Windows)
thread 'main' panicked at 'Unable to find libclang: "couldn\'t find any valid shared libraries matching: [\'clang.dll\', \'libclang.dll\'], set the `LIBCLANG_PATH` environment variable to a path where one of these files can be found (invalid: [(C:\\Program Files\\LLVM\\bin\\libclang.dll: invalid DLL (64-bit))])"', C:\Users\me\.cargo\registry\src\...

rustup may have been installed with the 32-bit default host, rather than the 64-bit default host needed by Servo. Check your default host with rustup show, then set the default host:

> rustup set default-host x86_64-pc-windows-msvc

(on Windows)
ERROR: GetShortPathName returned a long path name: `C:/PROGRA~2/Windows Kits/10/`. Use `fsutil file setshortname' to create a short name for any components of this path that have spaces.

SpiderMonkey (mozjs) requires 8.3 filenames to be enabled on Windows (#26010).

  • Open a command prompt or PowerShell as administrator (Win+X, A)
  • Enable 8.3 filename generation: fsutil behavior set disable8dot3 0
  • Uninstall and reinstall whatever contains the failing paths, such as Visual Studio or the Windows SDK — this is easier than adding 8.3 filenames by hand

Building for Android

Support for Android is currently in-progress and these instructions might change frequently.

Get Android tools

After cloning the repository and installing dependencies common to all targets, you should obtain the Android SDK, either using the Android Studio IDE or via the sdkmanager CLI (which requires Java 17 or greater to be installed separately).

To install the NDK and SDK using Android Studio, refer to the guidelines on the website. For the SDK, install the Android 33 platform. The NDK must be version r25c. Versions before and after change the layout of the NDK and add or remove files.

If you are using the sdkmanager tool, you can do:

tools/bin/sdkmanager platform-tools "platforms;android-33" "build-tools;33.0.2" "ndk;25.2.9519653"

Set the following environment variables while building. (You may want to export them from a configuration file like ~/.bashrc, or ~/.bash_profile on macOS.)


NOTE: If you are using Nix, you don't need to install the tools or set up the ANDROID_* environment variables manually. Simply enable Android build support in your shell session before invoking ./mach commands.

Build Servo

In the following sub-commands, the --android flag is short for --target armv7-linux-androideabi.

# Replace "--release" with "--dev" to create an unoptimized debug build.
./mach build --release --android

For running in an emulator however, you’ll likely want to build for Android x86 instead:

./mach build --release --target i686-linux-android

Installing and running on-device

To install Servo on a hardware device, first set up your device for development.

Run this command to install the Servo package on your device. Replace --release with --dev if you are building in debug mode.

./mach install --release --android

To start Servo, tap the "Servo" icon in your launcher screen, or run this command:

./mach run --android


To stop Servo, force-stop the activity:

adb shell am force-stop org.mozilla.servo/org.mozilla.servo.MainActivity

If the above doesn't work, try this:

adb shell am force-stop org.mozilla.servo


To uninstall Servo:

adb uninstall org.mozilla.servo

NOTE: The following instructions are outdated and might not apply any longer. They are retained here for reference until the Android build is fully functional and the below instructions are reviewed.


We are currently using a Nexus 9 for profiling, because it has an NVidia chipset and supports the NVidia System Profiler. First, install the profiler.

You will then need to root your Nexus 9. There are a variety of options, but I found the CF-Auto-Root version the easiest. Just follow the instructions on that page (download, do the OEM unlock, adb reboot bootloader, fastboot boot image/CF-Auto-Root-flounder-volantis-nexus9.img) and you should end up with a rooted device.

If you want reasonable stack backtraces, you should add the flags -fno-omit-frame-pointer -marm -funwind-tables to the CFLAGS (simplest place to do so is in the mach python scripts that set up the env for Android). Also, remember to build with -r for release!

Installing and running in the emulator

To set up the emulator, use the avdmanager tool installed with the SDK. Create a default Pixel 4 device with an SDCard larger than 100MB. After creating it, open the device's config file under ~/.android/avd/ (for example nexus7.avd/config.ini) and change the hw.dPad and hw.mainKeys configuration options to yes.
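Those two edits can also be scripted with sed. This is only a sketch demonstrated on a sample file; point the path at the config.ini of the AVD you actually created:

```shell
# Stand-in for ~/.android/avd/<your avd>.avd/config.ini
cd "$(mktemp -d)"
printf 'hw.dPad = no\nhw.mainKeys = no\n' > config.ini
# Flip both settings to yes.
sed -i -e 's/^hw.dPad.*/hw.dPad = yes/' \
       -e 's/^hw.mainKeys.*/hw.mainKeys = yes/' config.ini
cat config.ini
# hw.dPad = yes
# hw.mainKeys = yes
```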


To install Servo in the emulator:

./mach install --android --release


To run Servo:

./mach run --android


To force-stop Servo:

adb shell am force-stop org.mozilla.servo


To uninstall Servo:

adb uninstall org.mozilla.servo

Viewing logs and stdout

By default, only libsimpleservo logs are sent to adb logcat. println output is also sent to logcat (see #21637). Log macros (warn!, debug!, …) are sent to logcat under the tag simpleservo.

To show all the Servo logs, remove the log filters in the Android port's logging setup.

Getting meaningful symbols out of crash reports

Copy the logcat output that includes the crash information and unresolved symbols to a temporary log file. Run ndk-stack -sym target/armv7-linux-androideabi/debug/apk/obj/local/armeabi-v7a/lib/ -dump log. This should resolve any symbols that appear in the output. The ndk-stack tool is found in the NDK root directory.

Debugging on-device

First, you will need to enable debugging in the project files by adding android:debuggable="true" to the application tag in servo/support/android/apk/app/src/main/AndroidManifest.xml.

~/android-ndk-r9c/ndk-gdb \
    --adb=/Users/larsberg/android-sdk-macosx/platform-tools/adb \
    --launch=org.mozilla.servo.MainActivity

To get symbols resolved, you may need to provide additional library paths (at the gdb prompt):
set solib-search-path /Users/larsberg/servo/support/android/apk/obj/local/armeabi/:/Users/larsberg/servo/support/android/apk/libs/armeabi

OR you may need to enter the same path names as above in the support/android/apk/libs/armeabi/gdb.setup file.

If you are not able to get past the "Application Servo (process com.mozilla.servo) is waiting for the debugger to attach." prompt after entering continue at (gdb) prompt, you might have to set Servo as the debug app (Use the "Select debug app" option under "Developer Options" in the Settings app). If this doesn't work, Stack Overflow will help you.

The ndk-gdb debugger may complain about ... function not defined when you try to set breakpoints. Just answer y when it asks you to set breakpoints on future library loads. You will be able to catch your breakpoints during execution.

x86 build

To build an x86 version, follow the above instructions, but replace --android with --target=i686-linux-android. The x86 emulator will need to support GLES v3 (use an AVD from Android Studio v3+).

WebVR support

  • Enable WebVR preference: "dom.webvr.enabled": true
  • ./mach build --release --android --features googlevr
  • ./mach package --release --android --flavor googlevr
  • ./mach install --release --android
  • ./mach run --android (warning: the first run loads the default url sometimes after a clean APK install)


If you are using a PandaBoard, Servo is known to run on Android with the instructions above, using a suitable build of Android for PandaBoard.

Important notices

Unlike on Linux or macOS, Servo's program entry point on Android is in a library, not an executable, so we can't pass command line arguments directly. As a workaround, you can put command-line arguments, one per line, in the file /sdcard/servo/android_params on your device. You can find a default android_params file under resources in the Servo repo.

Default settings:

  • default font directory: /system/fonts
  • default resource path: /sdcard/servo
  • default font configuration: /sdcard/servo/.fcconfig
  • default fallback font: Roboto

Working on the user interface without building Servo

We provide nightly builds of a Servo library for Android, so it's not necessary to do a full build to work on the user interface.

  • Download the latest AAR (this is an armv7 build)
  • In your local copy of Servo, create a file in support/android/apk/ specifying the path to the AAR you just downloaded: servoViewLocal=/tmp/servo-latest.aar
  • Open support/android/apk with Android Studio
  • Important: in the project list, you should see 2 projects: servoapp and servoview-local. If you see servoapp and servoview, Android Studio didn't correctly read the settings; re-sync Gradle or restart Android Studio
  • Select the build variant mainArmv7Debug or mainArmv7Release
  • Plug in your phone
  • Press the Play button

Alternatively, you can generate the APK with the command line:

wget -O /tmp/servo-latest.aar
cd SERVO_REPO/support/android/apk
echo 'servoViewLocal=/tmp/servo-latest.aar' >
./gradlew clean
./gradlew :servoapp:assembleMainArmv7Release
cd ../../..
adb install -r target/armv7-linux-androideabi/release/servoapp.apk

The relevant files for tweaking the UI include activity_main.xml.

Running servoshell

Once built, servoshell will be in target/debug/servo or target/release/servo. You can run it directly, but we recommend using mach instead.

To run servoshell with mach, replace ./servo with ./mach run -d -- or ./mach run -r --, depending on the build profile you want to run. For example, both of the commands below run the debug build of servoshell with the same options:

$ target/debug/servo
$ ./mach run -d --

Some basic Rust

Even if you have never seen any Rust code, it's not too hard to read Servo's code. But there are some basic things one must know.

This won't be enough to do any serious work at first, but if you want to navigate the code and fix basic bugs, that should do it. It's a good starting point, and as you dig into Servo source code, you'll learn more.

For more exhaustive documentation, see the official Rust documentation.

Cargo and crates

A Rust library is called a crate. Servo uses plenty of crates. These crates are dependencies. They are listed in files called Cargo.toml. Servo is split into components and ports (see components and ports directories). Each has its own dependencies, and each has its own Cargo.toml file.

Cargo.toml files list the dependencies, and you can edit them.

For example, components/net_traits/Cargo.toml includes:

 git = ""

But because the rust-stb-image API might change over time, it's not safe to compile against the HEAD of rust-stb-image. The Cargo.lock file complements the Cargo.toml files by pinning each dependency to an exact revision, ensuring everybody is always compiling with the same configuration:

name = "stb_image"
source = "git+"

This file should not be edited by hand. In a normal Rust project, to update the git revision, you would use cargo update -p stb_image, but in Servo, use ./mach cargo-update -p stb_image. Other arguments to cargo are also understood, e.g. use --precise '0.2.3' to update that crate to version 0.2.3.

See Cargo's documentation about Cargo.toml and Cargo.lock files.

Working on a crate

As explained above, Servo depends on a lot of libraries, which makes it very modular. While working on a bug in Servo, you'll often end up in one of its dependencies. You will then want to compile your own version of the dependency (and maybe compiling against the HEAD of the library will fix the issue!).

For example, I'm trying to bring some cocoa events to Servo. The Servo window on Desktop is constructed with a library named winit. winit itself depends on a cocoa library named cocoa-rs. When building Servo, magically, all these dependencies are downloaded and built for you. But because I want to work on this cocoa event feature, I want Servo to use my own version of winit and cocoa-rs.

This is how my projects are laid out:


Both folders are git repositories.

To make it so that servo uses ~/my-projects/cocoa-rs/, first ascertain which version of the crate Servo is using and whether it is a git dependency or one from

Both pieces of information can be found using, in this example, cargo pkgid cocoa (cocoa is the name of the package, which doesn't necessarily match the repo folder name).

If the output contains a git URL, you are dealing with a git dependency, and you will have to edit the ~/my-projects/servo/Cargo.toml file and add at the bottom:

"" = { path = '../cocoa-rs' }

If the output contains a registry URL, you are dealing with a dependency, and you will have to edit ~/my-projects/servo/Cargo.toml in the following way:

"cocoa:0.0.0" = { path = '../cocoa-rs' }

Both tell any Cargo project to use your local clone instead of the online version of the dependency.

For more details about overriding dependencies, see Cargo's documentation.

Editor support

Visual Studio Code

By default, rust-analyzer tries to run cargo without mach, which will cause problems! For example, you might get errors in the rust-analyzer extension about build scripts:

The style crate requires enabling one of its 'servo' or 'gecko' feature flags and, in the 'servo' case, one of 'servo-layout-2013' or 'servo-layout-2020'.

If you are on NixOS, you might get errors about pkg-config or crown:

thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: "Could not run `PKG_CONFIG_ALLOW_SYSTEM_CFLAGS=\"1\" PKG_CONFIG_ALLOW_SYSTEM_LIBS=\"1\" \"pkg-config\" \"--libs\" \"--cflags\" \"fontconfig\"`

[ERROR rust_analyzer::main_loop] FetchWorkspaceError: rust-analyzer failed to load workspace: Failed to load the project at /path/to/servo/Cargo.toml: Failed to read Cargo metadata from Cargo.toml file /path/to/servo/Cargo.toml, Some(Version { major: 1, minor: 74, patch: 1 }): Failed to run `cd "/path/to/servo" && "cargo" "metadata" "--format-version" "1" "--manifest-path" "/path/to/servo/Cargo.toml" "--filter-platform" "x86_64-unknown-linux-gnu"`: `cargo metadata` exited with an error: error: could not execute process `crown -vV` (never executed)

mach passes different RUSTFLAGS to the Rust compiler than plain cargo, so if you try to build Servo with cargo, it will undo all the work done by mach (and vice versa).

Because of this, and because Servo can currently only be built with mach, you need to configure the rust-analyzer extension to use mach in .vscode/settings.json:

    "rust-analyzer.check.overrideCommand": [
        "./mach", "check", "--message-format=json" ],
    "rust-analyzer.cargo.buildScripts.overrideCommand": [
        "./mach", "check", "--message-format=json" ],
    "rust-analyzer.rustfmt.overrideCommand": [ "./mach", "fmt" ],

If having your editor open still causes unwanted rebuilds on the command line, then you can try configuring the extension to use an alternate target directory. This will require more disk space.

    "rust-analyzer.checkOnSave.overrideCommand": [
        "./mach", "check", "--message-format=json", "--target-dir", "target/lsp" ],
    "rust-analyzer.cargo.buildScripts.overrideCommand": [
        "./mach", "check", "--message-format=json", "--target-dir", "target/lsp" ],
    "rust-analyzer.rustfmt.overrideCommand": [ "./mach", "fmt" ],

NixOS users

If you are on NixOS, you should also set CARGO_BUILD_RUSTC in .vscode/settings.json as follows, where /nix/store/.../crown is the output of nix-shell etc/shell.nix --run 'command -v crown'.

    "rust-analyzer.server.extraEnv": {
        "CARGO_BUILD_RUSTC": "/nix/store/.../crown",

These settings should be enough to not need to run code . from within a nix-shell etc/shell.nix, but it wouldn’t hurt to try that if you still have problems.

When enabling rust-analyzer’s proc macro support, you may start to see errors like

proc macro `MallocSizeOf` not expanded: Cannot create expander for /path/to/servo/target/debug/deps/ unsupported ABI `rustc 1.69.0-nightly (dc1d9d50f 2023-01-31)` rust-analyzer(unresolved-proc-macro)

This means rust-analyzer is using the wrong proc macro server, and you will need to configure the correct one manually. Use mach to query the current sysroot path, and copy the last line of output:

$ ./mach rustc --print sysroot
NOTE: Entering nix-shell etc/shell.nix
info: component 'llvm-tools' for target 'x86_64-unknown-linux-gnu' is up to date

Then configure either your sysroot path or proc macro server path in .vscode/settings.json:

    "rust-analyzer.procMacro.enable": true,
    "rust-analyzer.cargo.sysroot": "[paste what you copied]",
    "rust-analyzer.procMacro.server": "[paste what you copied]/libexec/rust-analyzer-proc-macro-srv",


Debugging

One of the simplest ways to debug Servo is to print interesting variables with the println!, eprintln!, or dbg! macros. In general, these should only be used temporarily; you’ll need to remove them or convert them to proper debug logging before your pull request will be merged.

Debug logging with log and RUST_LOG

Servo uses the log crate for long-term debug logging and error messages:

fn main() {
    log::error!("hello");
    log::warn!("hello");
    log::info!("hello");
    log::debug!("hello");
    log::trace!("hello");
}

Unlike macros like println!, log adds a timestamp and tells you where the message came from:

[2024-05-01T09:07:42Z ERROR servoshell::app] hello
[2024-05-01T09:07:42Z WARN  servoshell::app] hello
[2024-05-01T09:07:42Z INFO  servoshell::app] hello
[2024-05-01T09:07:42Z DEBUG servoshell::app] hello
[2024-05-01T09:07:42Z TRACE servoshell::app] hello

You can use RUST_LOG to filter the output of log by level (off, error, warn, info, debug, trace) and/or by where the message came from, also known as the “target”. Usually the target is a Rust module path like servoshell::app, but there are some special targets too (see § Event tracing). To set RUST_LOG, prepend it to your command or use export:

$ RUST_LOG=warn ./mach run -d test.html     # Uses the prepended RUST_LOG.
$ export RUST_LOG=warn
$ ./mach run -d test.html                   # Uses the exported RUST_LOG.

See the env_logger docs for more details, but here are some examples:

  • to enable all messages up to and including debug level, but not trace:
    RUST_LOG=debug
  • to enable all messages from servo::*, servoshell::*, or any target starting with servo:
    RUST_LOG=servo=trace (or just RUST_LOG=servo)
  • to enable all messages from any target starting with style, but only error and warn messages from style::rule_tree:
    RUST_LOG=style,style::rule_tree=warn

Note that even when a log message is filtered out, it can still impact runtime performance, albeit only slightly. Some builds of Servo, including official nightly releases, remove DEBUG and TRACE messages at compile time, so enabling them with RUST_LOG will have no effect.

Event tracing

In the constellation, the compositor, and servoshell, we log messages sent to and received from other components, using targets of the form component>other@Event or component<other@Event. This means you can select which event types to log at runtime with RUST_LOG!
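Since filtering is by target prefix, a directive like constellation&lt;script@ selects every event received from script. The target format can be illustrated with a short parser; this is a hypothetical sketch, not code from the Servo tree:

```rust
/// Decompose an event-tracing target like
/// "constellation<compositor@ReadyToPresent" into
/// (component, direction, other component, event type).
/// Hypothetical sketch; not code from the Servo tree.
fn parse_target(target: &str) -> Option<(&str, char, &str, &str)> {
    let dir_pos = target.find(|c| c == '<' || c == '>')?;
    let direction = target.as_bytes()[dir_pos] as char; // '<' received, '>' sent
    let component = &target[..dir_pos];
    let rest = &target[dir_pos + 1..];
    let (other, event) = match rest.find('@') {
        Some(at) => (&rest[..at], &rest[at + 1..]),
        None => (rest, ""), // a bare prefix matches all events from `other`
    };
    Some((component, direction, other, event))
}

fn main() {
    let t = parse_target("constellation<compositor@ReadyToPresent");
    println!("{t:?}");
}
```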

For example, in the constellation (more details):

  • to trace only events from script:
    RUST_LOG='constellation<script@'
  • to trace all events except for ReadyToPresent events:
    RUST_LOG='constellation<,constellation>,constellation<compositor@ReadyToPresent=off'
  • to trace only script InitiateNavigateRequest events:
    RUST_LOG='constellation<script@InitiateNavigateRequest'

In the compositor (more details):

  • to trace only MoveResizeWebView events:
    RUST_LOG='compositor<constellation@MoveResizeWebView'
  • to trace all events except for Forwarded events:
    RUST_LOG='compositor<,compositor>,compositor<constellation@Forwarded=off'

In servoshell (more details):

  • to trace only events from servo:
    RUST_LOG='servoshell<servo@'
  • to trace all events except for AxisMotion events:
    RUST_LOG='servoshell<,servoshell>,servoshell<winit@WindowEvent(AxisMotion)=off'
  • to trace only winit window moved events:
    RUST_LOG='servoshell<winit@WindowEvent(Moved)'

Event tracing can generate an unwieldy amount of output. In general, we recommend the following config to keep things usable:

  • constellation<,constellation>,constellation<compositor@ForwardEvent(MouseMoveEvent)=off,constellation<compositor@LogEntry=off,constellation<compositor@ReadyToPresent=off,constellation<script@LogEntry=off
  • compositor<,compositor>
  • servoshell<,servoshell>,servoshell<winit@DeviceEvent=off,servoshell<winit@MainEventsCleared=off,servoshell<winit@NewEvents(WaitCancelled)=off,servoshell<winit@RedrawEventsCleared=off,servoshell<winit@RedrawRequested=off,servoshell<winit@UserEvent(WakerEvent)=off,servoshell<winit@WindowEvent(CursorMoved)=off,servoshell<winit@WindowEvent(AxisMotion)=off,servoshell<servo@EventDelivered=off,servoshell<servo@ReadyToPresent=off,servoshell>servo@Idle=off,servoshell>servo@MouseWindowMoveEventClass=off

Other debug logging

mach run does this automatically, but to print a backtrace when Servo panics:

$ RUST_BACKTRACE=1 target/debug/servo test.html

Servo has some other kinds of debug logging with -Z (--debug):

$ ./mach run -d -- --debug help
$ ./mach run -d -- --debug dump-style-tree test.html

On macOS, you can also add some Cocoa-specific debug options, after an extra --:

$ ./mach run -d -- test.html -- -NSShowAllViews YES

Running servoshell with a debugger

To run servoshell with a debugger, use --debugger-cmd. Note that if you choose gdb or lldb, we automatically use rust-gdb and rust-lldb.

$ ./mach run --debugger-cmd=gdb test.html   # Same as `--debugger-cmd=rust-gdb`.
$ ./mach run --debugger-cmd=lldb test.html  # Same as `--debugger-cmd=rust-lldb`.

To pass extra options to the debugger, you’ll need to run the debugger yourself:

$ ./mach run --debugger-cmd=gdb -ex=r test.html         # Passes `-ex=r` to servoshell.
$ rust-gdb -ex=r --args target/debug/servo test.html    # Passes `-ex=r` to gdb.

$ ./mach run --debugger-cmd=lldb -o r test.html         # Passes `-o r` to servoshell.
$ rust-lldb -o r -- target/debug/servo test.html        # Passes `-o r` to lldb.

$ ./mach run --debugger-cmd=rr -M test.html             # Passes `-M` to servoshell.
$ rr record -M target/debug/servo test.html             # Passes `-M` to rr.

Many debuggers need extra options to separate servoshell’s arguments from their own options, and --debugger-cmd will pass those options automatically for a few debuggers, including gdb and lldb. For other debuggers, --debugger-cmd will only work if the debugger needs no extra options:

$ ./mach run --debugger-cmd=rr test.html                    # Good, because it’s...
#  servoshell arguments        ^^^^^^^^^
$ rr target/debug/servo test.html                           # equivalent to this.
#  servoshell arguments ^^^^^^^^^

$ ./mach run --debugger-cmd=renderdoccmd capture test.html  # Bad, because it’s...
#                renderdoccmd arguments? ^^^^^^^
#                  servoshell arguments          ^^^^^^^^^
$ renderdoccmd target/debug/servo capture test.html         # equivalent to this.
# => target/debug/servo is not a valid command.

$ renderdoccmd capture target/debug/servo test.html         # Good.
#              ^^^^^^^ renderdoccmd arguments
#                    servoshell arguments ^^^^^^^^^

Debugging with gdb or lldb

To search for a function by name or regex:

(lldb) image lookup -r -n <name>
(gdb) info functions <name>

To list the running threads:

(lldb) thread list
(gdb) info threads

Other commands for gdb or lldb include:

(gdb) b a_servo_function    # Add a breakpoint.
(gdb) run                   # Run until breakpoint is reached.
(gdb) bt                    # Print backtrace.
(gdb) frame n               # Choose the stack frame by its number in `bt`.
(gdb) next                  # Run one line of code, stepping over function calls.
(gdb) step                  # Run one line of code, stepping into function calls.
(gdb) print varname         # Print a variable in the current scope.

See this gdb tutorial or this lldb tutorial for more details.

To inspect variables in lldb, you can also type gui, then use the arrow keys to expand variables:

(lldb) gui
│ ◆─(&mut gfx::paint_task::PaintTask<Box<CompositorProxy>>) self = 0x000070000163a5b0    │
│ ├─◆─(msg::constellation_msg::PipelineId) id                                            │
│ ├─◆─(url::Url) _url                                                                    │
│ │ ├─◆─(collections::string::String) scheme                                             │
│ │ │ └─◆─(collections::vec::Vec<u8>) vec                                                │
│ │ ├─◆─(url::SchemeData) scheme_data                                                    │
│ │ ├─◆─(core::option::Option<collections::string::String>) query                        │
│ │ └─◆─(core::option::Option<collections::string::String>) fragment                     │
│ ├─◆─(std::sync::mpsc::Receiver<gfx::paint_task::LayoutToPaintMsg>) layout_to_paint_port│
│ ├─◆─(std::sync::mpsc::Receiver<gfx::paint_task::ChromeToPaintMsg>) chrome_to_paint_port│

If lldb crashes on certain lines involving the profile() function, this is a known issue. Comment out the profiling code, keeping only the inner function, and the crash should go away.

Reversible debugging with rr (Linux only)

rr is like gdb, but lets you rewind. Start by running servoshell via rr:

$ ./mach run --debugger-cmd=rr test.html    # Either this...
$ rr target/debug/servo test.html       # ...or this.

Then replay the trace, using gdb commands or rr commands:

$ rr replay
(rr) continue
(rr) reverse-cont

To run one or more tests repeatedly until the result is unexpected:

$ ./mach test-wpt --chaos path/to/test [path/to/test ...]

Traces recorded by rr can take up a lot of space. To delete them, go to ~/.local/share/rr.

OpenGL debugging with RenderDoc (Linux or Windows only)

RenderDoc lets you debug Servo’s OpenGL activity. Start by running servoshell via renderdoccmd:

$ renderdoccmd capture -d . target/debug/servo test.html

While servoshell is running, run qrenderdoc, then choose File > Attach to Running Instance. Once attached, you can press F12 or Print Screen to capture a frame.

Automated testing

This is boring, but your PR won't be accepted without tests. Tests live in the tests directory. There are a lot of files in there, so finding the right location for your test is not always obvious.

First, look at the "Testing" section in ./mach --help to understand the different test categories. You'll also find some update-* commands. It's used to update the list of expected results.

To run a test:

./mach test-wpt tests/wpt/yourtest

For your PR to get accepted, source code also has to satisfy certain tidiness requirements.

To check code tidiness:

./mach test-tidy

Updating a test

In some cases, extensive tests for the feature you're working on already exist under tests/wpt:

  • Make a release build
  • Run ./mach test-wpt --release --log-raw=/path/to/some/logfile
  • Run ./mach update-wpt /path/to/some/logfile

This may create a new commit with changes to expectation ini files. If there are lots of changes, it's likely that your feature had tests in wpt already.

Include this commit in your pull request.

Add a new test

If you need to create a new test file, it should be located in tests/wpt/mozilla/tests or in tests/wpt/web-platform-tests if it's something that doesn't depend on servo-only features. You'll then need to update the list of tests and the list of expected results:

./mach test-wpt --manifest-update

Debugging a test

See the debugging guide to get started with debugging Servo.

Web tests (tests/wpt)

This folder contains the web platform tests and the code required to integrate them with Servo. To learn how to write tests, go here.

Contents of tests/wpt

In particular, this folder contains:

  • config.ini: some configuration for the web-platform-tests.
  • include.ini: the subset of web-platform-tests we currently run.
  • tests: copy of the web-platform-tests.
  • meta: expected failures for the web-platform-tests we run.
  • mozilla: web-platform-tests that cannot be upstreamed.

Running web tests

The simplest way to run the web-platform-tests in Servo is ./mach test-wpt in the root directory. This will run the subset of JavaScript tests defined in include.ini and log the output to stdout.

A subset of tests may be run by providing positional arguments to the mach command, either as filesystem paths or as test urls e.g.

./mach test-wpt tests/wpt/tests/dom/historical.html

to run the dom/historical.html test, or

./mach test-wpt dom

to run all the DOM tests.

There are also a large number of command line options accepted by the test harness; these are documented by running with --help.

Running all web tests

Running all the WPT tests in debug mode results in a lot of timeouts. To run all the tests, build with ./mach build -r and test with ./mach test-wpt --release.

Running web tests with an external WPT server

Normally wptrunner starts its own WPT server, but occasionally you might want to run multiple instances of mach test-wpt, such as when debugging one test while running the full suite in the background, or when running a single test many times in parallel (--processes only works across different tests).

This would lead to “Failed to start HTTP server” errors, because only one WPT server can run at a time. To fix this:

  1. Follow the steps in Running the tests manually
  2. Add a break to start_servers as follows:
--- a/tests/wpt/tests/tools/serve/
+++ b/tests/wpt/tests/tools/serve/
@@ -746,6 +746,7 @@ def start_servers(logger, host, ports, paths, routes, bind_address, config,
                   mp_context, log_handlers, **kwargs):
     servers = defaultdict(list)
     for scheme, ports in ports.items():
+        break
         assert len(ports) == {"http": 2, "https": 2}.get(scheme, 1)
         # If trying to start HTTP/2.0 server, check compatibility
  3. Run mach test-wpt as many times as needed

If you get unexpected TIMEOUT in testharness tests, then the custom testharnessreport.js may have been installed incorrectly (see Running the tests manually for more details).

Running web tests manually

(See also the relevant section of the upstream README.)

It can be useful to run a test without the interference of the test runner, for example when using a debugger such as gdb. To do this, we need to start the WPT server manually, which requires some extra configuration.

To do this, first add the following to the system's hosts file:

127.0.0.1   www.web-platform.test
127.0.0.1   www1.web-platform.test
127.0.0.1   www2.web-platform.test
127.0.0.1   web-platform.test
127.0.0.1   xn--n8j6ds53lwwkrqhv28a.web-platform.test
127.0.0.1   xn--lve-6lad.web-platform.test

Navigate to tests/wpt/web-platform-tests for the remainder of this section.

Normally wptrunner installs Servo’s version of testharnessreport.js, but when starting the WPT server manually, we get the default version, which won’t report test results correctly. To fix this:

  1. Create a directory local-resources
  2. Copy tools/wptrunner/wptrunner/testharnessreport-servo.js to local-resources/testharnessreport.js
  3. Edit local-resources/testharnessreport.js to substitute the variables as follows:
  • %(output)d
    • 1 if you want to play with the test interactively (≈ pause-after-test)
    • 0 if you don’t care about that (though it’s also ok to use 1 always)
  • %(debug)s: true
  4. Create a ./config.json as follows (see tools/wave/config.default.json for defaults):
{"aliases": [{
    "url-path": "/resources/testharnessreport.js",
    "local-dir": "local-resources"
}]}

Then start the server with ./wpt serve. To check if testharnessreport.js was installed correctly:

  • The standard output of curl http://web-platform.test:8000/resources/testharnessreport.js should look like testharnessreport-servo.js, not like the default testharnessreport.js
  • The standard output of target/release/servo http://web-platform.test:8000/css/css-pseudo/highlight-pseudos-computed.html (or any testharness test) should contain lines starting with:

To prevent browser SSL warnings when running HTTPS tests locally, you will need to run Servo with --certificate-path resources/cert-wpt-only.

Running web tests in Firefox

When working with tests, you may want to compare Servo's result with Firefox's. You can supply --product firefox along with the path to a Firefox binary (as well as a few more odds and ends) to run tests in Firefox from your Servo checkout:

./mach test-wpt dom --product firefox --binary $GECKO_BINS/firefox --certutil-binary $GECKO_BINS/certutil --prefs-root $GECKO/testing/profiles

Updating web test expectations

When fixing a bug that causes the result of a test to change, the expected results for that test need to be changed. This can be done manually, by editing the .ini file under the meta folder that corresponds to the test. In this case, remove the references to tests whose expectation is now PASS, and remove .ini files that no longer contain any expectations.
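For example, a metadata file such as meta/dom/historical.html.ini (a hypothetical path and subtest name, shown here only to illustrate the format) marks individual subtests as expected to fail:

```ini
[historical.html]
  [window.attachEvent should not exist]
    expected: FAIL
```

If the fix makes that subtest pass, delete its section; if no expectations remain, delete the .ini file itself.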

When a larger number of changes is required, this process can be automated. This first requires saving the raw, unformatted log from a test run, for example by running ./mach test-wpt --log-raw /tmp/servo.log. Once the log is saved, run from the root directory:

./mach update-wpt /tmp/servo.log

Writing new web tests

The simplest way to create a new test is to use the following command:

./mach create-wpt tests/wpt/path/to/new/test.html

This will create test.html in the appropriate directory using the WPT template for JavaScript tests. Tests are written using testharness.js. Documentation can be found here. To create a new reference test instead, use the following:

./mach create-wpt --reftest tests/wpt/path/to/new/reftest.html --reference tests/wpt/path/to/reference.html

reference.html will be created if it does not exist, and reftest.html will be created using the WPT reftest template. To know more about reftests, check this. These new tests can then be run in the following manner like any other WPT test:

./mach test-wpt tests/wpt/path/to/new/test.html
./mach test-wpt tests/wpt/path/to/new/reftest.html
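For reference, the body of a testharness.js test looks something like the following. This is an illustrative sketch; ./mach create-wpt generates the boilerplate for you:

```html
<!DOCTYPE html>
<meta charset="utf-8">
<title>Example synchronous test</title>
<script src="/resources/testharness.js"></script>
<script src="/resources/testharnessreport.js"></script>
<script>
// test() runs a synchronous test; assertions come from testharness.js.
test(() => {
  assert_equals(document.title, "Example synchronous test");
}, "document.title reflects the <title> element");
</script>
```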

Editing web tests

web-platform-tests may be edited in-place and the changes committed to the servo tree. These changes will be upstreamed when the tests are next synced.

Updating the upstream tests

In order to update the tests from upstream use the same mach update commands. e.g. to update the web-platform-tests:

./mach update-wpt --sync
./mach test-wpt --log-raw=update.log
./mach update-wpt update.log

This should create two commits in your servo repository with the updated tests and updated metadata.

Servo-specific tests

The mozilla directory contains tests that cannot be upstreamed for some reason (e.g. because they depend on Servo-specific APIs), as well as some legacy tests that should be upstreamed at some point. When run they are mounted on the server under /_mozilla/.

Analyzing reftest results

Reftest results can be analyzed from a raw log file. To generate this run with the --log-raw option e.g.

./mach test-wpt --log-raw wpt.log

This file can then be fed into the reftest analyzer, which will show all failing tests (not just those with unexpected results). Note that this tool ingests logs in a different format to the original version written for Gecko reftests.

The reftest analyzer allows pixel-level comparison of the test and reference screenshots. Tests that both fail and have an unexpected result are marked with a !.

Updating the WPT manifest

MANIFEST.json can be regenerated automatically with the mach command update-manifest e.g.

./mach update-manifest

This is equivalent to running

./mach test-wpt --manifest-update SKIP_TESTS

Architecture overview

Servo is a project to develop a new Web browser engine. Our goal is to create an architecture that takes advantage of parallelism at many levels while eliminating common sources of bugs and security vulnerabilities associated with incorrect memory management and data races.

Because C++ is poorly suited to preventing these problems, Servo is written in Rust, a new language designed specifically with Servo's requirements in mind. Rust provides a task-parallel infrastructure and a strong type system that enforces memory safety and data race freedom.

When making design decisions we will prioritize the features of the modern web platform that are amenable to high-performance, dynamic, and media-rich applications, potentially at the cost of features that cannot be optimized. We want to know what a fast and responsive web platform looks like, and to implement it.


flowchart TB
    subgraph Content Process
    ScriptA[Script Thread A]-->LayoutA[Layout A]
    ScriptB[Script Thread B]-->LayoutB[Layout B]
    end

    subgraph Main Process
    direction TB
    Embedder-->Constellation[Constellation Thread]
    Constellation-->Font[Font Cache]
    Constellation-->Image[Image Cache]
    Constellation-->Resource[Resource Manager]
    end


This diagram shows the architecture of Servo in the case of only a single content process. Servo is designed to support multiple content processes at once. A single content process can have multiple script threads running at the same time with their own layout. These have their own communication channels with the main process. Solid lines here indicate communication channels.


Each constellation instance can for now be thought of as a single tab or window, and manages a pipeline of tasks that accepts input, runs JavaScript against the DOM, performs layout, builds display lists, renders display lists to tiles and finally composites the final image to a surface.

The pipeline consists of three main tasks:

  • Script—Script's primary mission is to create and own the DOM and to run the JavaScript engine. It receives events from multiple sources, including navigation events, and routes them as necessary. When the content task needs to query information about layout, it must send a request to the layout task.
  • Layout—Layout takes a snapshot of the DOM, calculates styles, and constructs the two main layout data structures, the box tree and the fragment tree. The fragment tree is used to compute the untransformed position of nodes and, from there, to build a display list, which is sent to the compositor.
  • Compositor—The compositor forwards display lists to WebRender, which is the content rasterization and display engine used by both Servo and Firefox. It uses the GPU to render to the final image of the page. As the UI thread, the compositor is also the first receiver of UI events, which are generally immediately sent to content for processing (although some events, such as scroll events, can be handled initially by the compositor for responsiveness).

Concurrency and parallelism

Implementation Strategy

Concurrency is the separation of tasks to provide interleaved execution. Parallelism is the simultaneous execution of multiple pieces of work in order to increase speed. Here are some ways that we take advantage of both:

  • Task-based architecture. Major components in the system should be factored into actors with isolated heaps, with clear boundaries for failure and recovery. This will also encourage loose coupling throughout the system, enabling us to replace components for the purposes of experimentation and research.
  • Concurrent rendering. Both rendering and compositing are separate threads, decoupled from layout in order to maintain responsiveness. The compositor thread manages its memory manually to avoid garbage collection pauses.
  • Tiled rendering. We divide the screen into a grid of tiles and render each one in parallel. Tiling is needed for mobile performance regardless of its benefits for parallelism.
  • Layered rendering. We divide the display list into subtrees whose contents can be retained on the GPU and render them in parallel.
  • Selector matching. This is an embarrassingly parallel problem. Unlike Gecko, Servo does selector matching in a separate pass from flow tree construction so that it is more easily parallelized.
  • Parallel layout. We build the flow tree using a parallel traversal of the DOM that respects the sequential dependencies generated by elements such as floats.
  • Text shaping. A crucial part of inline layout, text shaping is fairly costly and has potential for parallelism across text runs. Not implemented.
  • Parsing. We have written a new HTML parser in Rust, focused on both safety and compliance with the specification. We have not yet added speculation or parallelism to the parsing.
  • Image decoding. Decoding multiple images in parallel is straightforward.
  • Decoding of other resources. This is probably less important than image decoding, but anything that needs to be loaded by a page can be done in parallel, e.g. parsing entire style sheets or decoding videos.
  • GC JS concurrent with layout. Under almost any design with concurrent JS and layout, JS will sometimes (perhaps often) be waiting to query layout. That will be the most opportune time to run the GC.

For information on the design of WebXR see the in-tree documentation.

Challenges

  • Parallel-hostile libraries. Some third-party libraries we need don't play well in multi-threaded environments. Fonts in particular have been difficult. Even if libraries are technically thread-safe, often thread safety is achieved through a library-wide mutex lock, harming our opportunities for parallelism.
  • Too many threads. If we throw maximum parallelism and concurrency at everything, we will end up overwhelming the system with too many threads.

JavaScript and DOM bindings

We are currently using SpiderMonkey, although pluggable engines is a long-term, low-priority goal. Each content task gets its own JavaScript runtime. DOM bindings use the native JavaScript engine API instead of XPCOM. Automatic generation of bindings from WebIDL is a priority.

Multi-process architecture

Similar to Chromium and WebKit2, we intend to have a trusted application process and multiple, less trusted engine processes. The high-level API will in fact be IPC-based, likely with non-IPC implementations for testing and single-process use-cases, though it is expected most serious uses would use multiple processes. The engine processes will use the operating system sandboxing facilities to restrict access to system resources.

At this time we do not intend to go to the same extreme sandboxing ends as Chromium does, mostly because locking down a sandbox constitutes a large amount of development work (particularly on low-priority platforms like Windows XP and older Linux) and other aspects of the project are higher priority. Rust's type system also adds a significant layer of defense against memory safety vulnerabilities. This alone does not make a sandbox any less important to defend against unsafe code, bugs in the type system, and third-party/host libraries, but it does reduce the attack surface of Servo significantly relative to other browser engines. Additionally, we have performance-related concerns regarding some sandboxing techniques (for example, proxying all OpenGL calls to a separate process).

I/O and resource management

Web pages depend on a wide variety of external resources, with many mechanisms of retrieval and decoding. These resources are cached at multiple levels—on disk, in memory, and/or in decoded form. In a parallel browser setting, these resources must be distributed among concurrent workers.

Traditionally, browsers have been single-threaded, performing I/O on the "main thread", where most computation also happens. This leads to latency problems. In Servo there is no "main thread" and the loading of all external resources is handled by a single resource manager task.

Browsers have many caches, and Servo's task-based architecture means that it will probably have more than extant browser engines (e.g. we might have both a global task-based cache and a task-local cache that stores results from the global cache to save the round trip through the scheduler). Servo should have a unified caching story, with tunable caches that work well in low-memory environments.

Further reading


Important research and accumulated knowledge about browser implementation, parallel layout, etc:

Directory structure

  • components
    • bluetooth — Implementation of the bluetooth thread.
    • bluetooth_traits — APIs to the bluetooth crate for crates that don't want to depend on the bluetooth crate for build speed reasons.
    • canvas — Implementation of painting threads for 2D and WebGL canvases.
    • canvas_traits — APIs to the canvas crate for crates that don't want to depend on the canvas crate for build speed reasons.
    • compositing — Integration with OS windowing/rendering and event loop.
    • constellation — Management of resources for a top-level browsing context (ie. tab).
    • devtools — In-process server to allow manipulating browser instances via a remote Firefox developer tools client.
    • devtools_traits — APIs to the devtools crate for crates that don't want to depend on the devtools crate for build speed reasons.
    • gfx — Draws the result of laying out a page, and sends the result to the compositor.
    • gfx_traits — APIs to the gfx crate for crates that don't want to depend on the gfx crate for build speed reasons.
    • layout — Converts page content into positioned, styled boxes and passes the result to the renderer.
    • layout_thread — Runs the threads for layout, communicates with the script thread, and calls into the layout crate to do the layout.
    • msg — Shared APIs for communicating between specific threads and crates.
    • net — Network protocol implementations, and state and resource management (caching, cookies, etc.).
    • net_traits — APIs to the net crate for crates that don't want to depend on the net crate for build speed reasons.
    • plugins — Syntax extensions, custom attributes, and lints.
    • profile — Memory and time profilers.
    • profile_traits — APIs to the profile crate for crates that don't want to depend on the profile crate for build speed reasons.
    • script — Implementation of the DOM (native Rust code and bindings to SpiderMonkey).
    • script_layout_interface — The API the script crate provides for the layout crate.
    • script_traits — APIs to the script crate for crates that don't want to depend on the script crate for build speed reasons.
    • selectors — CSS selector matching.
    • servo — Entry points for the servo application and libservo embedding library.
    • style — APIs for parsing CSS and interacting with stylesheets and styled elements.
    • style_traits — APIs to the style crate for crates that don't want to depend on the style crate for build speed reasons.
    • util — Assorted utility methods and types that are commonly used throughout the project.
    • webdriver_server — In-process server to allow manipulating browser instances via a WebDriver client.
    • webgpu — Implementation of threads for the WebGPU API.
  • etc — Useful tools and scripts for developers.
  • mach — A command-line tool to help with developer tasks.
  • ports
    • winit — Embedding implementation for the winit windowing library.
  • python
    • servo — Implementations of servo-specific mach commands.
    • tidy — Python package of code lints that are automatically run before merging changes.
  • resources — Files used at run time. Need to be included somehow when distributing binary builds.
  • support
    • android — Libraries that require special handling for building for Android platforms
    • rust-task_info — Library for obtaining information about memory usage for a process
  • target
    • debug — Build artifacts generated by ./mach build --debug.
    • doc — Documentation is generated here by the rustdoc tool when running ./mach doc
    • release — Build artifacts generated by ./mach build --release.
  • tests
    • dromaeo — Harness for automatically running the Dromaeo testsuite.
    • html — Manual tests and experiments.
    • jquery — Harness for automatically running the jQuery testsuite.
    • power — Tools for measurement of power consumption.
    • unit — Unit tests using rustc’s built-in test harness.
    • wpt — W3C web-platform-tests and csswg-tests along with tools to run them and expected failures.

TODO: Update foo_traits and ports/winit.

Major dependencies




For the current state of, and outlook on, Servo's integration of SpiderMonkey, see the Servo and SpiderMonkey report.

DOM Bindings

Script Thread

Layout DOM

Servo's style system overview

This document provides an overview of Servo's style system. For more extensive details, refer to the style doc comments, or the Styling Overview in the wiki, which includes a conversation between Boris Zbarsky and Patrick Walton about how style sharing works.

Selector Implementation

To ensure compatibility with Stylo (a project integrating Servo's style system into Gecko), selectors must be consistent.

This consistency is implemented in selectors' SelectorImpl, which contains the logic related to parsing pseudo-elements and pseudo-classes apart from the tree-structural ones.

Servo extends the selector implementation trait in order to allow a few more things to be shared between Stylo and Servo.

The main Servo implementation (the one that is used in regular builds) is SelectorImpl.

DOM glue

In order to keep DOM, layout and style in different modules, there are a few traits involved.

Style's dom traits (TDocument, TElement, TNode, TRestyleDamage) are the main "wall" between layout and style.

Layout's wrapper module makes sure that layout traits have the required traits implemented.

The Stylist

The stylist structure holds all the selectors and device characteristics for a given document.

The stylesheets' CSS rules are converted into Rules. They are then introduced in a SelectorMap depending on the pseudo-element (see PerPseudoElementSelectorMap), stylesheet origin (see PerOriginSelectorMap), and priority (see the normal and important fields in PerOriginSelectorMap).

This structure is effectively created once per pipeline, in the corresponding LayoutThread.
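As a rough sketch (with made-up types, not Servo's actual SelectorMap or PerOriginSelectorMap), the bucketing might look like:

```rust
use std::collections::HashMap;

// Toy model of the stylist's rule buckets (illustrative only, not
// Servo's real types): rules are filed under pseudo-element, stylesheet
// origin, and importance, so matching only scans the relevant bucket.

#[derive(Clone, Copy, PartialEq, Eq, Hash)]
enum Origin { UserAgent, User, Author }

#[derive(Clone, Copy, PartialEq, Eq, Hash)]
enum Pseudo { None, Before, After }

#[derive(Default)]
struct Stylist {
    // Key: (pseudo-element, origin, important?).
    buckets: HashMap<(Pseudo, Origin, bool), Vec<String>>,
}

impl Stylist {
    fn add_rule(&mut self, pseudo: Pseudo, origin: Origin, important: bool, selector: &str) {
        self.buckets
            .entry((pseudo, origin, important))
            .or_default()
            .push(selector.to_owned());
    }

    fn rules_for(&self, pseudo: Pseudo, origin: Origin, important: bool) -> &[String] {
        self.buckets
            .get(&(pseudo, origin, important))
            .map(Vec::as_slice)
            .unwrap_or(&[])
    }
}
```

A real selector map also buckets by things like id and class to reduce the number of selectors tested per element, but the per-pseudo, per-origin, per-importance split is the part described above.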

The properties module

The properties module is a mako template. Its complexity comes from the code that stores properties, the cascade function, and the computation logic for the returned value, which is exposed in the main function.

Pseudo-Element resolution

Pseudo-elements are a tricky part of the style system. Not all pseudo-elements are common, so some of them might want to skip the cascade.

Servo has, as of right now, five pseudo-elements:

  • ::before and ::after.
  • ::selection: This one is only partially implemented, and only works for text inputs and textareas as of right now.
  • ::-servo-details-summary: This pseudo-element represents the <summary> of a <details> element.
  • ::-servo-details-content: This pseudo-element represents the contents of a <details> element.

Both ::-servo-details-* pseudo-elements are private (i.e. they are only parsed from User-Agent stylesheets).

Servo has three different ways of cascading a pseudo-element, which are defined in PseudoElementCascadeType:

"Eager" cascading

This mode computes the computed values of a given node's pseudo-element over the first pass of the style system.

This is used for all public pseudo-elements, and is, as of right now, the only way a public pseudo-element should be cascaded (the explanation for this is below).

"Precomputed" cascading

Or, better said, no cascading at all. A pseudo-element marked as such is not cascaded.

The only rules that apply to the styles of that pseudo-element are universal rules (rules with a *|* selector), and they are applied directly over the element's style if present.

::-servo-details-content is an example of this kind of pseudo-element: all the rules in the UA stylesheet with the selector *|*::-servo-details-content (and only those) are evaluated over the element's style (except the display value, which is overwritten by layout).

This should be the preferred type for private pseudo-elements (although some of them might need selectors, see below).

"Lazy" cascading

Lazy cascading computes pseudo-element styles lazily, that is, only when needed.

Currently (in Servo; this applies less to Stylo), the selectors supported for this kind of pseudo-element are only the subset of selectors that can be matched on the layout tree, which does not hold all the data from the DOM tree.

This subset includes tags and attribute selectors, enough for making ::-servo-details-summary a lazy pseudo-element (that only needs to know if it is in an open details element or not).

Since no other selectors would apply to it, this is (at least for now) not an acceptable type for public pseudo-elements, but should be considered for private pseudo-elements.


Servo has two layout systems:

  • Layout (Layout 2020): This is a new layout system for Servo which doesn't yet have all the features of the legacy layout, but will have support for proper fragmentation. For more on the benefits of the new layout system, see Layout 2020. The architecture described below refers to the new layout system. For more information about why we wrote a new layout engine, see the Servo Layout Engines Report.
  • Legacy layout (Layout 2013): This is the original layout engine written for Servo. This layout engine is currently in maintenance mode.

Layout happens in three phases: box tree construction, fragment tree construction, and display list construction. Once a display list is generated, it is sent to WebRender for rendering. When possible during tree construction, layout will try to use parallelism with Rayon. Certain CSS features, such as floats and counters, prevent parallelism. The same code is used for both parallel and serial layout.

Box Tree

The box tree is a tree that represents the nested formatting contexts as described in the CSS specification. There are various kinds of formatting contexts, such as block formatting contexts (for block flow), inline formatting contexts (for inline flow), table formatting contexts, and flex formatting contexts. Each formatting context has different rules for how boxes inside that context are laid out. Servo represents this tree of contexts using nested enums, which ensure that the content inside each context can only be the sort of content described in the specification.

The box tree is just the initial representation of the layout state and generally speaking the next phase is to run the layout algorithm on the box tree and produce a fragment tree. Fragments in CSS are the results of splitting elements in the box tree into multiple fragments due to things like line breaking, columns, and pagination. Additionally during this layout phase, Servo will position and size the resulting fragments relative to their containing blocks. The transformation generally takes place in a function called layout(...) on the different box tree data structures.
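The nested-enum idea can be sketched in plain Rust. The type names below are invented for illustration (they are not Servo's actual box tree types), but they show how the nesting makes invalid content impossible to represent:

```rust
// Illustrative sketch only, not Servo's real types. The point is that
// nesting enums makes invalid box-tree shapes unrepresentable: a block
// container holds either block-level or inline-level children, never a mix.

enum BlockLevelBox {
    BlockContainer(BlockContainerContents),
    // An independent formatting context (e.g. a flex or table root)
    // lays out its own subtree by its own rules.
    Independent(IndependentFormattingContext),
}

enum BlockContainerContents {
    BlockLevel(Vec<BlockLevelBox>),
    InlineLevel(Vec<InlineLevelBox>),
}

enum InlineLevelBox {
    TextRun(String),
    InlineBox(Vec<InlineLevelBox>),
    // An atomic inline (e.g. an inline-block) wraps an independent context.
    Atomic(IndependentFormattingContext),
}

enum IndependentFormattingContext {
    Flex(Vec<BlockLevelBox>),
    Table(Vec<Vec<BlockLevelBox>>), // rows of cells, greatly simplified
}

fn describe(b: &BlockLevelBox) -> &'static str {
    match b {
        BlockLevelBox::BlockContainer(_) => "block container",
        BlockLevelBox::Independent(_) => "independent formatting context",
    }
}
```

Because each variant carries only its permitted children, a block container mixing block-level and inline-level boxes simply cannot be constructed.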

Layout of the box tree into the fragment tree is done in parallel, until a section of the tree with floats is encountered. In those sections, a sequential pass is done and parallel layout can commence again once the layout algorithm moves across the boundaries of the block formatting context that contains the floats, whether by descending into an independent formatting context or finishing the layout of the float container.

Fragment Tree

The product of the layout step is a fragment tree. In this tree, elements that were split into different pieces due to line breaking, columns, or pagination have a fragment for every piece. In addition, each fragment is positioned relative to the fragment corresponding to its containing block. For positioned fragments, an extra placeholder fragment, AbsoluteOrFixedPositioned, is left in the original tree position. This placeholder is used to build the display list in the proper order according to the CSS painting order.

Display List Construction

Once layout has created a fragment tree, it can move on to the next phase of rendering which is to produce a display list for the tree. During this phase, the fragment tree is transformed into a WebRender display list which consists of display list items (rectangles, lines, images, text runs, shadows, etc). WebRender does not need a large variety of display list items to represent web content.

In addition to normal display list items, WebRender also uses a tree of spatial nodes to represent transformations, scrollable areas, and sticky content. This tree is essentially a description of how to apply post-layout transformations to display list items. When the page is scrolled, the offset on the root scrolling node can be adjusted without immediately doing a layout. Likewise, WebRender has the capability to apply transformations, including 3D transformations to web content with a type of spatial node called a reference frame.

Clipping, whether from CSS clipping or from the clipping introduced by the CSS overflow property, is handled by another tree of clip nodes. These nodes also have spatial nodes assigned to them so that clips stay in sync with the rest of web content. WebRender decides how best to apply a series of clips to each item.

Once the display list is constructed it is sent to the compositor which forwards it to WebRender.



How WebXR works in Servo


Servo's WebXR implementation involves three main components:

  1. The script thread (runs all JS for a page)
  2. The WebGL thread (maintains WebGL canvas data and invokes GL operations corresponding to WebGL APIs)
  3. The compositor (AKA the main thread)

Additionally, there are a number of WebXR-specific concepts:

  • The discovery object (i.e. how Servo discovers whether a device can provide a WebXR session)
  • The WebXR registry (the compositor's interface to WebXR)
  • The layer manager (manages WebXR layers for a given session and frame operations on those layers)
  • The layer grand manager (manages all layer managers for WebXR sessions)

Finally, there are graphics-specific concepts that are important for the low-level details of rendering with WebXR:

  • surfman is a crate that abstracts away platform-specific details of OpenGL hardware-accelerated rendering
  • surface is a hardware buffer that is tied to a specific OpenGL context
  • surface texture is an OpenGL texture that wraps a surface. Surface textures can be shared between OpenGL contexts.
  • surfman context represents a particular OpenGL context, and is backed by platform-specific implementations (such as EGL on Unix-based platforms)
  • ANGLE is an OpenGL implementation on top of Direct3D which is used in Servo to provide a consistent OpenGL backend on Windows-based platforms

How Servo's compositor starts

The embedder is responsible for creating a window and triggering the rendering context creation appropriately. Servo creates the rendering context by creating a surfman context which will be used by the compositor for all web content rendering operations.

How a session starts

When a webpage invokes navigator.xr.requestSession(..) through JS, this corresponds to the XrSystem::RequestSession method in Servo. This method sends a message to the WebXR message handler that lives on the main thread, under the control of the compositor.

The WebXR message handler iterates over all known discovery objects and attempts to request a session from each of them. The discovery objects encapsulate creating a session for each supported backend.

As of Feb 3, 2024, there are five WebXR backends:

  • magicleap - supports Magic Leap 1.0 devices
  • googlevr - supports Google VR
  • headless - supports a window-less, device-less device for automated tests
  • glwindow - supports a GL-based window for manual testing in desktop environments without real devices
  • openxr - supports devices that implement the OpenXR standard

WebXR sessions need to create a layer manager at some point in order to be able to create and render to WebXR layers. This happens in several steps:

  1. some initialization happens on the main thread
  2. the main thread sends a synchronous message to the WebGL thread
  3. the WebGL thread receives the message
  4. some backend-specific, graphics-specific initialization happens on the WebGL thread, hidden behind the layer manager factory abstraction
  5. the new layer manager is stored in the WebGL thread
  6. the main thread receives a unique identifier representing the new layer manager

This cross-thread dance is important because the device performing the rendering often has strict requirements for the compatibility of any WebGL context that is used for rendering, and most GL state is only observable on the thread that created it.
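The shape of that handshake can be modelled with standard-library channels. This is a toy sketch, not Servo's actual code: the "main thread" blocks until the "WebGL thread" has finished initialization and replies with an opaque identifier, so the GL state never leaves the thread that owns it.

```rust
use std::sync::mpsc;
use std::thread;

// Toy model of the layer-manager handshake: the WebGL thread owns the
// graphics state; the main thread only ever sees an opaque identifier.

enum Msg {
    CreateLayerManager { reply: mpsc::Sender<u32> },
}

fn spawn_webgl_thread() -> mpsc::Sender<Msg> {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        let mut next_id = 0u32;
        // GL state would live here and is never sent across threads.
        for msg in rx {
            match msg {
                Msg::CreateLayerManager { reply } => {
                    // ... backend-specific initialization happens here ...
                    let id = next_id;
                    next_id += 1;
                    let _ = reply.send(id);
                }
            }
        }
    });
    tx
}

fn create_layer_manager(webgl: &mpsc::Sender<Msg>) -> u32 {
    let (reply_tx, reply_rx) = mpsc::channel();
    webgl.send(Msg::CreateLayerManager { reply: reply_tx }).unwrap();
    // Block until the WebGL thread has stored the new layer manager
    // and handed back its unique identifier (step 6 above).
    reply_rx.recv().unwrap()
}
```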

How an OpenXR session is created

The OpenXR discovery process starts at OpenXrDiscovery::request_session. The discovery object only has access to whatever state was passed in its constructor, as well as a SessionBuilder object that contains values required to create a new session.

Creating an OpenXR session first creates an OpenXR instance, which allows configuring which extensions are in use. There are different extensions used to initialize OpenXR on different platforms; on Windows the D3D extension can be used, since Servo relies on ANGLE for its OpenGL implementation.

Once an OpenXR instance exists, the session builder is used to create a new WebXR session that runs in its own thread. All WebXR sessions can either run in a thread or have Servo run them on the main thread. This choice has implications for how the graphics for the WebXR session can be set up, based on what GL state must be available for sharing.

OpenXR's new session thread initializes an OpenXR device, which is responsible for creating the actual OpenXR session. This session object is created on the WebGL thread as part of creating the OpenXR layer manager, since it relies on sharing the underlying GPU device that the WebGL thread uses.

Once the session object has been created, the main thread can obtain a copy and resume initializing the remaining properties of the new device.

Docs for older versions

Some parts of this book may not apply to older versions of Servo.

If you need to work with a very old version of Servo, you may find these docs helpful:

Style guide

  • Use sentence case for chapter and section headings, e.g. “Getting started” rather than “Getting Started”
  • Use permalinks when linking to source code repos — press Y in GitHub to get a permanent URL

Markdown source

  • Use one sentence per line with no column limit, to make diffs and history easier to understand

To help split sentences onto separate lines, you can replace `([.!?]) ` with `$1\n`, but watch out for cases like “e.g.”. Then to fix indentation of simple lists, you can replace `^([*-] .*\n([* -].*\n)*)([^\n* -])` with `$1  $3`, but this won’t work for nested or more complex lists.
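For illustration, the sentence-splitting replacement can be mimicked with chained str::replace calls in Rust; like the regex, this sketch shares the “e.g.” caveat:

```rust
// Rough equivalent of the editor find/replace, for illustration only.
// Like the regex, it wrongly splits after abbreviations such as "e.g.".
fn one_sentence_per_line(text: &str) -> String {
    text.replace(". ", ".\n")
        .replace("! ", "!\n")
        .replace("? ", "?\n")
}
```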

  • For consistency, indent nested lists with two spaces, and use - for unordered lists

Error messages

  • Where possible, always include a link to documentation, Zulip chat, or source code — this helps preserve the original context, and helps us check and update our advice over time

The remaining rules for error messages are designed to ensure that the text is as readable as possible, and that the reader can paste their error message into find-in-page with minimal false negatives, without the rules being too cumbersome to follow.

Wrap the error message in <pre><samp>, with <pre> at the start of the line. If you want to style the error message as a quote, wrap it in <pre><blockquote><samp>.

<pre> treats newlines as line breaks, and at the start of the line, it prevents Markdown syntax from accidentally taking effect when there are blank lines in the error message.

<samp> marks the text as computer output, where we have CSS that makes it wrap like it would in a terminal. Code blocks (<pre><code>) don’t wrap, so they can make long errors hard to read.

Replace every & with &amp;, then replace every < with &lt;. Text inside <pre> will never be treated as Markdown, but it’s still HTML markup, so it needs to be escaped.

Always check the rendered output to ensure that all of the symbols were preserved. You may find that you still need to escape some Markdown with \, to avoid rendering called `Result::unwrap()` on an `Err` value as called Result::unwrap() on an Err value.
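The order of those two replacements matters: escaping & second would corrupt the &lt; sequences produced by the first step. A minimal sketch (illustrative only, not part of any Servo tooling):

```rust
// Escape an error message for inclusion in <pre><samp>.
// `&` must be escaped first: doing it after `<` would turn the
// freshly produced `&lt;` into `&amp;lt;`.
fn escape_for_pre(raw: &str) -> String {
    raw.replace('&', "&amp;").replace('<', "&lt;")
}
```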

For example:

Error message:

thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: "Could not run `PKG_CONFIG_ALLOW_SYSTEM_CFLAGS=\"1\" PKG_CONFIG_ALLOW_SYSTEM_LIBS=\"1\" \"pkg-config\" \"--libs\" \"--cflags\" \"fontconfig\"`

Markdown:

<pre><samp>thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: "Could not run `PKG_CONFIG_ALLOW_SYSTEM_CFLAGS=\"1\" PKG_CONFIG_ALLOW_SYSTEM_LIBS=\"1\" \"pkg-config\" \"--libs\" \"--cflags\" \"fontconfig\"`</samp></pre>

Error message:

error[E0765]: ...
 --> src/
2 |       println!("```);
  |  ______________^
3 | | }
  | |__^

Markdown:

<pre><samp>error[E0765]: ...
 --> src/
2 |       println!("```);
  |  ______________^
3 | | }
  | |__^</samp></pre>


The Servo Parallel Browser Engine Project

See and for help getting started.

Build Setup

If these instructions fail or you would like to install dependencies manually, try the manual build setup.


Linux and macOS

  • Run ./mach bootstrap
    Note: This will install the recommended version of GStreamer globally on your system.


Windows

  • Run mach bootstrap
    • This will install CMake, Git, and Ninja via choco in an Administrator console. Allow the scripts to run and once the operation finishes, close the new console.
  • Run refreshenv

See also Windows Troubleshooting Tips.


Android

  • Ensure that the following environment variables are set:
    • ANDROID_NDK_ROOT: $ANDROID_SDK_ROOT/ndk/25.2.9519653/
      ANDROID_SDK_ROOT can be any directory (such as ~/android-sdk). All of the Android build dependencies will be installed there.
  • Install the latest version of the Android command-line tools to $ANDROID_SDK_ROOT/cmdline-tools/latest.
  • Run the following command to install the necessary components:
    sudo $ANDROID_SDK_ROOT/cmdline-tools/latest/bin/sdkmanager --install
     "build-tools;33.0.2" \
     "emulator" \
     "ndk;25.2.9519653" \
     "platform-tools" \
     "platforms;android-33" \

For information about building and running the Android build, see the Android documentation.

The Rust compiler

Servo's build system uses rustup to automatically download a Rust compiler. This is a specific version of Rust Nightly determined by the rust-toolchain.toml file.

Checking for build errors, without building

If you’re making changes to one crate that cause build errors in another crate, consider this instead of a full build:

./mach check

It will run cargo check, which runs the analysis phase of the compiler (and so shows build errors if any) but skips the code generation phase. This can be a lot faster than a full build, though of course it doesn’t produce a binary you can run.

Commandline Arguments

  • -p INTERVAL turns on the profiler and dumps info to the console every INTERVAL seconds
  • -s SIZE sets the tile size for painting; defaults to 512
  • -z disables all graphical output; useful for running JS / layout tests
  • -Z help displays useful output for debugging Servo

Keyboard Shortcuts

  • Ctrl+L opens URL prompt (Cmd+L on Mac)
  • Ctrl+R reloads current page (Cmd+R on Mac)
  • Ctrl+- zooms out (Cmd+- on Mac)
  • Ctrl+= zooms in (Cmd+= on Mac)
  • Alt+left arrow goes backwards in the history (Cmd+left arrow on Mac)
  • Alt+right arrow goes forwards in the history (Cmd+right arrow on Mac)
  • Esc or Ctrl+Q exits Servo (Cmd+Q on Mac)


The generated documentation can be found on

TODO: docs/

Style Guide

The majority of our style recommendations are automatically enforced via our automated linters. This document has guidelines that are less easy to lint for.

Shell scripts

Shell scripts are suitable for small tasks or wrappers, but in general it's preferable to use Python for anything with a hint of complexity.

Shell scripts should be written using bash, starting with this shebang:

#!/usr/bin/env bash

Note that the version of bash available on macOS by default is quite old, so be careful when using new features.

Scripts should enable a few options at the top for robustness:

set -o errexit
set -o nounset
set -o pipefail

Remember to quote all variables, using the full form: "${SOME_VARIABLE}".

Use "$(some-command)" instead of backticks for command substitution. Note that these should be quoted as well.

TODO: docs/

How to use this glossary

This is a collection of common terms that have a specific meaning in the context of the Servo project. The goal is to provide high-level definitions and useful links for further reading, rather than complete documentation about particular parts of the code.

If there is a word or phrase used in Servo's code, issue tracker, mailing list, etc. that is confusing, please make a pull request that adds it to this file with a body of TODO. This will signal more knowledgeable people to add a more meaningful definition.



Compositor

The thread that receives input events from the operating system and forwards them to the constellation. It is also in charge of compositing complete renders of web content and displaying them on the screen as fast as possible.


Constellation

The thread that controls a collection of related web content. This could be thought of as an owner of a single tab in a tabbed web browser; it encapsulates session history, knows about all frames in a frame tree, and is the owner of the pipeline for each contained frame.

Display list

A list of concrete rendering instructions. The display list is post-layout, so all items have stacking-context-relative pixel positions, and z-index has already been applied, so items later in the display list will always be on top of items earlier in it.
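As an illustrative sketch (with invented types, not Servo's display list structures), "z-index has already been applied" can be thought of as a stable sort over the items:

```rust
// Toy display list, for illustration: applying z-index means stably
// sorting the items, after which "later in the list" is exactly
// "painted on top".
#[derive(Debug, Clone, PartialEq)]
struct DisplayItem {
    name: &'static str,
    z_index: i32,
}

fn apply_z_index(mut items: Vec<DisplayItem>) -> Vec<DisplayItem> {
    // sort_by_key is stable, so tree order breaks z-index ties.
    items.sort_by_key(|item| item.z_index);
    items
}
```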

Layout thread

A thread that is responsible for laying out a DOM tree into layers of boxes for a particular document. Receives commands from the script thread to lay out a page and either generate a new display list for use by the renderer thread, or return the results of querying the layout of the page for use by script.


Pipeline

A unit encapsulating a means of communication with the script, layout, and renderer threads for a particular document. Each pipeline has a globally-unique id which can be used to access it from the constellation.

Renderer thread (alt. paint thread)

A thread which translates a display list into a series of drawing commands that render the contents of the associated document into a buffer, which is then sent to the compositor.

Script thread (alt. script task)

A thread that executes JavaScript and stores the DOM representation of all documents that share a common origin. This thread translates input events received from the constellation into DOM events per the specification, invokes the HTML parser when new page content is received, and evaluates JS for events like timers and <script> elements.

TODO: wiki/Building


  1. Install Python:

    • Install Python 3.11.
    • After installation ensure the PYTHON3 environment variable is set properly, e.g., to 'C:\Python11\python.exe' by doing:
      setx PYTHON3 "C:\Python11\python.exe" /m
      The /m will set it system-wide for all future command windows.
  2. Install the following tools:

    Make sure all of these tools are on your PATH.

  3. Install GStreamer:

    Install the MSVC (not MinGW) binaries from the GStreamer site. The currently recommended version is 1.16.0.

    Note that you should ensure that all components are installed from gstreamer, as we require many of the optional libraries that are not installed by default.


Please see Building for Android.

Windows Tips

Using LLVM to Speed Up Linking

You may experience much faster builds on Windows by following these steps. (Related Rust issue:

  1. Download the latest version of LLVM (
  2. Run the installer and choose to add LLVM to the system PATH.
  3. Add the following to your Cargo config (Found at %USERPROFILE%\.cargo\config). You may need to change the triple to match your environment.
[target.x86_64-pc-windows-msvc]
linker = "lld-link.exe"

Troubleshooting the Windows Build

If you have trouble with the x64 type prompt that mach.bat sets by default, you may need to choose and launch the prompt type manually, such as x86_x64 Cross Tools Command Prompt for VS 2019 in the Windows menu. Then:

  1. cd to/the/path/servo
  2. python mach build -d