The Servo Book
Servo is a web browser engine written in the Rust programming language, and currently developed on 64-bit Linux, 64-bit macOS, 64-bit Windows, and Android.
Work is still ongoing to make Servo consumable as a webview library, so for now, the only supported way to use Servo is via servoshell, our winit- and egui-based example browser.
This book will be your guide to building and running servoshell, hacking on and contributing to Servo, the architecture of Servo, and how to consume Servo and its libraries.
This book is a work in progress! In the table of contents, * denotes chapters that were recently added or imported from older docs, and still need to be copyedited or reworked.
Contributions are always welcome. Click the pencil button in the top right of each page to propose changes, or go to servo/book for more details.
Need help?
Join the Servo Zulip if you have any questions. Everyone is welcome!
Getting Servo
You can download the latest prebuilt version of servoshell from the Downloads section on our website. Older nightly releases are available in the servo/servo-nightly-builds repo.
Downloading the source
To make changes to Servo or build servoshell yourself, clone the main repo with Git:
$ git clone https://github.com/servo/servo.git
$ cd servo
Windows users: you will need to clone the Servo repo into the same drive as your CARGO_HOME folder (#28530).
Servo’s main repo is pretty big! If you have an unreliable network connection or limited disk space, consider making a shallow clone.
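For example, a shallow clone that fetches only the most recent commit looks like this (the `--depth` value is a choice, not a requirement):

```shell
# Clone only the latest commit instead of the full history.
git clone --depth 1 https://github.com/servo/servo.git
cd servo

# If you later need the full history (e.g. for bisecting), deepen the clone:
git fetch --unshallow
```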
Running servoshell
Assuming you’re in the directory containing `servo`, you can run servoshell with:
$ ./servo [url] [options]
Use `--help` to list the available command line options:
$ ./servo --help
Use `--pref` to configure Servo’s behaviour, including enabling experimental web platform features.
For example, to run our Conway’s Game of Life demo with WebGPU enabled:
$ ./servo --pref dom.webgpu.enabled https://demo.servo.org/experiments/webgpu-game-of-life/
Use `--devtools=6080` to enable support for debugging pages with Firefox devtools:
$ ./servo --devtools=6080
Built servoshell yourself?
When you build it yourself, servoshell will be in `target/debug/servo` or `target/release/servo`.
You can run it directly as shown above, but we recommend using mach instead.
To run servoshell with mach, replace `./servo` with `./mach run -d --` or `./mach run -r --`, depending on the build profile you want to run.
For example, both of the commands below run the debug build of servoshell with the same options:
$ target/debug/servo https://demo.servo.org
$ ./mach run -d -- https://demo.servo.org
Runtime dependencies
On Linux, servoshell requires:
- GStreamer ≥ 1.18
- gst-plugins-base ≥ 1.18
- gst-plugins-good ≥ 1.18
- gst-plugins-bad ≥ 1.18
- gst-plugins-ugly ≥ 1.18
- libXcursor
- libXrandr
- libXi
- libxkbcommon
- vulkan-loader
Keyboard shortcuts
- Ctrl+Q (⌘Q on macOS) exits servoshell
- Ctrl+L (⌘L on macOS) focuses the location bar
- Ctrl+R (⌘R on macOS) reloads the page
- Alt+← (⌘← on macOS) goes back in history
- Alt+→ (⌘→ on macOS) goes forward in history
- Ctrl+= (⌘= on macOS) increases the page zoom
- Ctrl+- (⌘- on macOS) decreases the page zoom
- Ctrl+0 (⌘0 on macOS) resets the page zoom
- Esc exits fullscreen mode
Contributing to Servo
Servo welcomes contributions from everyone. Here are the guidelines if you are thinking of helping us:
Contributions
Contributions to Servo or its dependencies should be made in the form of GitHub pull requests. Each pull request will be reviewed by a core contributor (someone with permission to land patches) and either landed in the main tree or given feedback for changes that would be required. All contributions should follow this format, even those from core contributors.
Should you wish to work on an issue, please claim it first by commenting on the GitHub issue that you want to work on it. This is to prevent duplicated efforts from contributors on the same issue.
Head over to Servo Starters to find good tasks to start with.
If you come across words or jargon that do not make sense, please check the glossary first.
If there's no matching entry, please make a pull request to add one with the content `TODO` so we can correct that!
See Hacking on Servo for more information on how to start working on Servo.
Pull request checklist
- Branch from the main branch and, if needed, rebase to the current main branch before submitting your pull request. If it doesn't merge cleanly with main, you may be asked to rebase your changes.
- Commits should be as small as possible, while ensuring that each commit is correct independently (i.e., each commit should compile and pass tests).
- Commits should be accompanied by a Developer Certificate of Origin (http://developercertificate.org) sign-off, which indicates that you (and your employer if applicable) agree to be bound by the terms of the project license. In git, this is the `-s` option to `git commit`.
- If your patch is not getting reviewed or you need a specific person to review it, you can @-reply a reviewer asking for a review in the pull request or a comment, or you can ask for a review in the Servo chat.
- Add tests relevant to the fixed bug or new feature. For a DOM change this will usually be a web platform test; for layout, a reftest. See our testing guide for more information.
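The sign-off itself is just a trailer in the commit message. A minimal demonstration in a throwaway repository (the paths, name, and message are illustrative):

```shell
# Demonstrate a DCO sign-off in a throwaway repository.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.name "Example Contributor"
git config user.email "contributor@example.com"
echo "hello" > file.txt
git add file.txt
# -s appends a "Signed-off-by: Example Contributor <contributor@example.com>" trailer.
git commit -q -s -m "Add file.txt"
git log -1 --format=%B   # the last line shows the Signed-off-by trailer
```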
For specific git instructions, see GitHub workflow 101.
Running tests in pull requests
When you push to a pull request, GitHub automatically checks that your changes have no compilation, lint, or tidy errors.
To run unit tests or Web Platform Tests against a pull request, add one or more of the labels below to your pull request. If you do not have permission to add labels, add a comment on your pull request asking for them to be added.
| Label | Runs unit tests on | Runs web tests on |
|---|---|---|
| T-full | All platforms | Linux |
| T-linux-wpt-2013 | Linux | Linux (only legacy layout) |
| T-linux-wpt-2020 | Linux | Linux (skip legacy layout) |
| T-macos | macOS | (none) |
| T-windows | Windows | (none) |
AI contributions
Contributions must not include content generated by large language models or other probabilistic tools, including but not limited to Copilot or ChatGPT. This policy covers code, documentation, pull requests, issues, comments, and any other contributions to the Servo project.
For now, we’re taking a cautious approach to these tools due to their effects — both unknown and observed — on project health and maintenance burden. This field is evolving quickly, so we are open to revising this policy at a later date, given proposals for particular tools that mitigate these effects. Our rationale is as follows:
Maintainer burden: Reviewers depend on contributors to write and test their code before submitting it. We have found that these tools make it easy to generate large amounts of plausible-looking code that the contributor does not understand, is often untested, and does not function properly. This is a drain on the (already limited) time and energy of our reviewers.
Correctness and security: Even when code generated by AI tools does seem to function, there is no guarantee that it is correct, and no indication of what security implications it may have. A web browser engine is built to run in hostile execution environments, so all code must take into account potential security issues. Contributors play a large role in considering these issues when creating contributions, something that we cannot trust an AI tool to do.
Copyright issues: Publicly available models are trained on copyrighted content, both accidentally and intentionally, and their output often includes that content verbatim. Since the legality of this is uncertain, these contributions may violate the licenses of copyrighted works.
Ethical issues: AI tools require an unreasonable amount of energy and water to build and operate, their models are built with heavily exploited workers in unacceptable working conditions, and they are being used to undermine labor and justify layoffs. These are harms that we do not want to perpetuate, even if only indirectly.
Note that aside from generating code or other contributions, AI tools can sometimes answer questions you may have about Servo, but we’ve found that these answers are often incorrect or very misleading.
In general, do not assume that AI tools are a source of truth regarding how Servo works. Consider asking your questions on Zulip instead.
Conduct
The Servo Code of Conduct is published at https://servo.org/coc/.
Technical Steering Committee
Technical oversight of the Servo Project is provided by the Technical Steering Committee.
Requesting crate releases
In addition to creating the Servo browser engine, the Servo project also publishes modular components when they can benefit the wider community of Rust developers.
An example of one of these crates is `rust-url`.
While we strive to be good maintainers, managing both a browser engine and a collection of external libraries is a lot of work, so we make no guarantees about regular releases for these modular crates. However, if you feel that a release for one of these crates is due, we will respond to requests. The process for requesting a new release is:
- Create one or more pull requests that prepare the crate for the new release.
- Create a pull request that increases the version number in the repository, taking care to work out which component of the version should increase given the changes since the last release. In particular, note whether there are any breaking changes.
- In the pull request, ask that a new version be released. The person landing the change is responsible for publishing the new version when the pull request lands, or for explaining why it cannot be published.
mach
`mach` is a Python program that does plenty of things to make working on Servo easier, like building and running Servo, running tests, and updating dependencies.
Windows users: you will need to replace `./mach` with `.\mach` in the commands in this book.
Use `--help` to list the subcommands, or to get help with a specific subcommand:
$ ./mach --help
$ ./mach build --help
When you use mach to run another program, such as servoshell, that program may have its own options with the same names as mach options.
You can use `--`, surrounded by spaces, to tell mach not to touch any subsequent options and to leave them for the other program.
$ ./mach run --help # Gets help for `mach run`.
$ ./mach run -d --help # Still gets help for `mach run`.
$ ./mach run -d -- --help # Gets help for the debug build of servoshell.
This also applies to the Servo unit tests, where there are three layers of options: mach options, `cargo test` options, and libtest options.
$ ./mach test-unit --help # Gets help for `mach test-unit`.
$ ./mach test-unit -- --help # Gets help for `cargo test`.
$ ./mach test-unit -- -- --help # Gets help for the test harness (libtest).
Work is ongoing to make it possible to build Servo without mach. Where possible, consider whether you can use native Cargo functionality before adding new functionality to mach.
Building Servo
If this is your first time building Servo, be sure to set up your environment before continuing with the steps below.
To build servoshell for your machine:
$ ./mach build -d
To build servoshell for Android (armv7):
$ ./mach build --android
To check your code for compile errors, without a full build:
$ ./mach check
Sometimes the tools or dependencies needed to build Servo will change.
If you start encountering build problems after updating Servo, try running `./mach bootstrap` again, or set up your environment from the beginning.
You are not alone! If you have problems building Servo that you can’t solve, you can always ask for help in the build issues chat on Zulip.
Build profiles
There are three main build profiles, which you can build and use independently of one another:
- debug builds, which allow you to use a debugger (lldb)
- release builds, which are slower to build but more performant
- production builds, which are used for official releases only
| | debug | release | production |
|---|---|---|---|
| mach option | `-d` | `-r` | `--prod` |
| optimised? | no | yes | yes, more than in release |
| maximum `RUST_LOG` level | `trace` | `info` | `info` |
| debug assertions? | yes | yes(!) | no |
| debug info? | yes | no | no |
| symbols? | yes | no | yes |
| finds resources in current working dir? | yes | yes | no(!) |
There are also two special variants of production builds for performance-related use cases:
- `production-stripped` builds are ideal for benchmarking Servo over time, with debug symbols stripped for faster initial startup
- `profiling` builds are ideal for profiling and troubleshooting performance issues; they behave like a debug or release build, but have the same performance as a production build
| | production | production-stripped | profiling |
|---|---|---|---|
| mach `--profile` | `production` | `production-stripped` | `profiling` |
| debug info? | no | no | yes |
| symbols? | yes | no | yes |
| finds resources in current working dir? | no | no | yes(!) |
You can change these settings in a servobuild file (see servobuild.example) or in the root Cargo.toml.
Optional build settings
Some build settings can only be enabled manually:
- AddressSanitizer builds are enabled with `./mach build --with-asan`
- crown linting is recommended when hacking on DOM code, and is enabled with `./mach build --use-crown`
- SpiderMonkey debug builds are enabled with `./mach build --debug-mozjs`, or `[build] debug-mozjs = true` in your servobuild file
Setting up your environment
Before you can build Servo, you will need to:
- Check if you have the necessary tools. If not, install them: Windows, macOS, Linux
- Check your tools again. If anything is still missing, you may need to restart your shell, or log out and log back in.
- If you are on NixOS, you can stop here; no further action is needed!
- Install the other dependencies by running `./mach bootstrap` (or `.\mach bootstrap` on Windows). If you prefer not to do that, or your Linux distro is unsupported by mach, you can instead follow the steps below:
  - Try the Nix method or a distro-specific method: Arch, Debian, elementary OS, Fedora, Gentoo, KDE neon, Linux Mint, Manjaro, openSUSE, Pop!_OS, Raspbian, TUXEDO OS, Ubuntu, Void Linux
  - Install `taplo` and `crown` by running `./mach bootstrap --skip-platform`
Sometimes the tools or dependencies needed to build Servo will change.
If you start encountering build problems after updating Servo, try running `./mach bootstrap` again, or set up your environment from the beginning.
You are not alone! If you have problems setting up your environment that you can’t solve, you can always ask for help in the build issues chat on Zulip.
Checking if you have the tools installed
- `curl --version` should print a version like 7.83.1 or 8.4.0
  - On Windows, type `curl.exe --version` instead, to avoid getting the PowerShell alias for `Invoke-WebRequest`
- `python --version` should print 3.11.0 or newer (3.11.1, 3.12.0, …)
- `rustup --version` should print a version like 1.26.0
- (Windows only) `choco --version` should print a version like 2.2.2
- (macOS only) `brew --version` should print a version like 4.2.17
Tools for Windows
Note that `curl` will already be installed on Windows 1804 or newer.
- Download and install `python` from the Python website
- Download and install `choco` from the Chocolatey website
- If you already have `rustup`, download the Community edition of Visual Studio 2022
- If you don’t have `rustup`, download and run the `rustup` installer: rustup-init.exe
  - Be sure to select Quick install via the Visual Studio Community installer
- In the Visual Studio installer, ensure the following components are installed:
  - Windows 10 SDK (10.0.19041.0) (`Microsoft.VisualStudio.Component.Windows10SDK.19041`)
  - MSVC v143 - VS 2022 C++ x64/x86 build tools (Latest) (`Microsoft.VisualStudio.Component.VC.Tools.x86.x64`)
  - C++ ATL for latest v143 build tools (x86 & x64) (`Microsoft.VisualStudio.Component.VC.ATL`)
  - C++ MFC for latest v143 build tools (x86 & x64) (`Microsoft.VisualStudio.Component.VC.ATLMFC`)
We don’t recommend having more than one version of Visual Studio installed. Servo will try to search for the appropriate version of Visual Studio, but having only a single version installed means fewer things can go wrong.
Tools for macOS
Note that `curl` will already be installed on macOS.
- Download and install `python` from the Python website
- Download and install Xcode
- Download and install `brew` from the Homebrew website
- Download and install `rustup` from the rustup website
Tools for Linux
- Install `curl` and `python`:
  - Arch: `sudo pacman -S --needed curl python python-pip`
  - Debian, Ubuntu: `sudo apt install curl python3-pip python3-venv`
  - Fedora: `sudo dnf install curl python3 python3-pip python3-devel`
  - Gentoo: `sudo emerge net-misc/curl dev-python/pip`
- Download and install `rustup` from the rustup website
On NixOS, type `nix-shell` to enter a shell with all of the necessary tools and dependencies.
You can install `python3` to allow mach to run `nix-shell` automatically:
- `environment.systemPackages = [ pkgs.python3 ];` (install globally)
- `home.packages = [ pkgs.python3 ];` (install with Home Manager)
Dependencies for any Linux distro, using Nix
- Make sure you have `curl` and `python` installed (see Tools for Linux)
- Make sure you have the runtime dependencies installed as well
- Install Nix, the package manager. The easiest way is to use the installer, with either the multi-user or single-user installation (your choice)
- Tell mach to use Nix: `export MACH_USE_NIX=`
Dependencies for Arch (including Manjaro)
- `sudo pacman -S --needed curl python python-pip`
- `sudo pacman -S --needed base-devel git mesa cmake libxmu pkg-config ttf-fira-sans harfbuzz ccache llvm clang autoconf2.13 gstreamer gstreamer-vaapi gst-plugins-base gst-plugins-good gst-plugins-bad gst-plugins-ugly vulkan-icd-loader`
Dependencies for Debian (including elementary OS, KDE neon, Linux Mint, Pop!_OS, Raspbian, TUXEDO OS, Ubuntu)
sudo apt install curl python3-pip python3-venv
sudo apt install build-essential ccache clang cmake curl g++ git gperf libdbus-1-dev libfreetype6-dev libgl1-mesa-dri libgles2-mesa-dev libglib2.0-dev gstreamer1.0-plugins-good libgstreamer-plugins-good1.0-dev gstreamer1.0-plugins-bad libgstreamer-plugins-bad1.0-dev gstreamer1.0-plugins-ugly gstreamer1.0-plugins-base libgstreamer-plugins-base1.0-dev gstreamer1.0-libav libgstrtspserver-1.0-dev gstreamer1.0-tools libges-1.0-dev libharfbuzz-dev liblzma-dev libudev-dev libunwind-dev libvulkan1 libx11-dev libxcb-render0-dev libxcb-shape0-dev libxcb-xfixes0-dev libxmu-dev libxmu6 libegl1-mesa-dev llvm-dev m4 xorg-dev
Dependencies for Fedora
sudo dnf install python3 python3-pip python3-devel
sudo dnf install libtool gcc-c++ libXi-devel freetype-devel libunwind-devel mesa-libGL-devel mesa-libEGL-devel glib2-devel libX11-devel libXrandr-devel gperf fontconfig-devel cabextract ttmkfdir expat-devel rpm-build cmake libXcursor-devel libXmu-devel dbus-devel ncurses-devel harfbuzz-devel ccache clang clang-libs llvm python3-devel gstreamer1-devel gstreamer1-plugins-base-devel gstreamer1-plugins-good gstreamer1-plugins-bad-free-devel gstreamer1-plugins-ugly-free libjpeg-turbo-devel zlib libjpeg vulkan-loader
Dependencies for Gentoo
sudo emerge net-misc/curl media-libs/freetype media-libs/mesa dev-util/gperf dev-python/pip dev-libs/openssl media-libs/harfbuzz dev-util/ccache sys-libs/libunwind x11-libs/libXmu x11-base/xorg-server sys-devel/clang media-libs/gstreamer media-libs/gst-plugins-base media-libs/gst-plugins-good media-libs/gst-plugins-bad media-libs/gst-plugins-ugly media-libs/vulkan-loader
Dependencies for openSUSE
sudo zypper install libX11-devel libexpat-devel Mesa-libEGL-devel Mesa-libGL-devel cabextract cmake dbus-1-devel fontconfig-devel freetype-devel gcc-c++ git glib2-devel gperf harfbuzz-devel libXcursor-devel libXi-devel libXmu-devel libXrandr-devel libopenssl-devel python3-pip rpm-build ccache llvm-clang libclang autoconf213 gstreamer-devel gstreamer-plugins-base-devel gstreamer-plugins-good gstreamer-plugins-bad-devel gstreamer-plugins-ugly vulkan-loader libvulkan1
Dependencies for Void Linux
sudo xbps-install libtool gcc libXi-devel freetype-devel libunwind-devel MesaLib-devel glib-devel pkg-config libX11-devel libXrandr-devel gperf bzip2-devel fontconfig-devel cabextract expat-devel cmake cmake libXcursor-devel libXmu-devel dbus-devel ncurses-devel harfbuzz-devel ccache glu-devel clang gstreamer1-devel gst-plugins-base1-devel gst-plugins-good1 gst-plugins-bad1-devel gst-plugins-ugly1 vulkan-loader
Troubleshooting your build
See the style guide for how to format error messages.
(on Linux) `error: getting status of /nix/var/nix/daemon-socket/socket: Permission denied`
If you get this error and you’ve installed Nix with your system package manager:
- Add yourself to the `nix-users` group
- Log out and log back in
(on Linux) `error: file 'nixpkgs' was not found in the Nix search path (add it using $NIX_PATH or -I)`
This error is harmless, but you can fix it as follows:
- Run `sudo nix-channel --add https://nixos.org/channels/nixpkgs-unstable nixpkgs`
- Run `sudo nix-channel --update`
(on Windows) `Cannot run mach in a path on a case-sensitive file system on Windows.`
- Open a command prompt or PowerShell as administrator (Win+X, A)
- Disable case sensitivity for your Servo repo: `fsutil file SetCaseSensitiveInfo X:\path\to\servo disable`
(on Windows) `Could not find DLL dependency: api-ms-win-crt-runtime-l1-1-0.dll` or `DLL file 'api-ms-win-crt-runtime-l1-1-0.dll' not found!`
Find the path to `Redist\ucrt\DLLs\x64\api-ms-win-crt-runtime-l1-1-0.dll`, e.g. `C:\Program Files (x86)\Windows Kits\10\Redist\ucrt\DLLs\x64\api-ms-win-crt-runtime-l1-1-0.dll`. Then set the `WindowsSdkDir` environment variable to the path that contains `Redist`, e.g. `C:\Program Files (x86)\Windows Kits\10`.
(on Windows) thread 'main' panicked at 'Unable to find libclang: "couldn\'t find any valid shared libraries matching: [\'clang.dll\', \'libclang.dll\'], set the `LIBCLANG_PATH` environment variable to a path where one of these files can be found (invalid: [(C:\\Program Files\\LLVM\\bin\\libclang.dll: invalid DLL (64-bit))])"', C:\Users\me\.cargo\registry\src\...
rustup may have been installed with the 32-bit default host, rather than the 64-bit default host needed by Servo.
Check your default host with `rustup show`, then set the default host:
> rustup set default-host x86_64-pc-windows-msvc
(on Windows) ERROR: GetShortPathName returned a long path name: `C:/PROGRA~2/Windows Kits/10/`. Use `fsutil file setshortname' to create a short name for any components of this path that have spaces.
SpiderMonkey (mozjs) requires 8.3 filenames to be enabled on Windows (#26010).
- Open a command prompt or PowerShell as administrator (Win+X, A)
- Enable 8.3 filename generation: `fsutil behavior set disable8dot3 0`
- Uninstall and reinstall whatever contains the failing paths, such as Visual Studio or the Windows SDK — this is easier than adding 8.3 filenames by hand
Building for Android
Support for Android is currently in progress, and these instructions might change frequently.
Get Android tools
After cloning the repository and installing dependencies common to all targets, you should obtain the Android SDK, either using the Android Studio IDE or via the `sdkmanager` CLI (which requires Java 17 or greater to be installed separately).
To install the NDK and SDK using Android Studio, refer to the guidelines on the website. For the SDK, install the Android 33 platform. The NDK must be version r26c. Versions before and after change the layout of the NDK and add or remove files.
If you are using the `sdkmanager` tool, you can do:
tools/bin/sdkmanager platform-tools "platforms;android-33" "build-tools;34.0.0" "ndk;26.2.11394342"
Set the following environment variables while building. (You may want to export them from a configuration file like `.bashrc`, or `~/.bash_profile` on macOS.)
ANDROID_SDK_ROOT="/path/to/sdk"
ANDROID_NDK_ROOT="/path/to/ndk"
PATH=$PATH:$ANDROID_SDK_ROOT/platform-tools
NOTE: If you are using Nix, you don't need to install the tools or set up the ANDROID_* environment variables manually. Simply enable Android build support by running `export SERVO_ANDROID_BUILD=1` in the shell session before invoking `./mach` commands.
Build Servo
In the following sub-commands, the `--android` flag is short for `--target aarch64-linux-android`:
# Replace "--release" with "--dev" to create an unoptimized debug build.
./mach build --release --android
For running in an emulator however, you’ll likely want to build for Android x86-64 instead:
./mach build --release --target x86_64-linux-android
Installing and running on-device
To install Servo on a hardware device, first set up your device for development.
Run this command to install the Servo package on your device.
Replace `--release` with `--dev` if you are building in debug mode.
./mach install --release --android
To start Servo, tap the "Servo" icon in your launcher screen, or run this:
./mach run --android https://www.servo.org/
Force-stop:
adb shell am force-stop org.servo.servoshell/org.servo.servoshell.MainActivity
If the above doesn't work, try this:
adb shell am force-stop org.servo.servoshell
Uninstall:
adb uninstall org.servo.servoshell
NOTE: The following instructions are outdated and might not apply any longer. They are retained here for reference until the Android build is fully functional and the below instructions are reviewed.
Profiling
We are currently using a Nexus 9 for profiling, because it has an NVidia chipset and supports the NVidia System Profiler. First, install the profiler.
You will then need to root your Nexus 9. There are a variety of options, but we found the CF-Auto-Root version the easiest. Just follow the instructions on that page (download, do the OEM unlock, `adb reboot bootloader`, `fastboot boot image/CF-Auto-Root-flounder-volantis-nexus9.img`) and you should end up with a rooted device.
If you want reasonable stack backtraces, you should add the flags `-fno-omit-frame-pointer -marm -funwind-tables` to the `CFLAGS` (the simplest place to do so is in the mach python scripts that set up the env for Android). Also, remember to build with `-r` for release!
Installing and running in the emulator
To set up the emulator, use the `avdmanager` tool installed with the SDK. Create a default Pixel 4 device with an SDCard of size greater than 100MB. After creating it, open the file ~/.android/avd/nexus7.avd/config.ini and change the `hw.dPad` and `hw.mainKeys` configuration options to `yes`.
Installation:
./mach install --android --release
Running:
./mach run --android https://www.mozilla.org/
Force-stop:
adb shell am force-stop org.servo.servoshell
Uninstall:
adb uninstall org.servo.servoshell
Viewing logs and stdout
By default, only libsimpleservo logs are sent to `adb logcat`. `println!` output is also sent to logcat (see #21637). Log macros (`warn!`, `debug!`, …) are sent to logcat under the tag `simpleservo`.
To show all the servo logs, remove the log filters in jniapi.rs.
Getting meaningful symbols out of crash reports
Copy the logcat output that includes the crash information and unresolved symbols to a temporary file named `log`. Run `ndk-stack -sym target/armv7-linux-androideabi/debug/apk/obj/local/armeabi-v7a/lib/ -dump log`. This should resolve any symbols from libsimpleservo.so that appear in the output. The `ndk-stack` tool is found in the NDK root directory.
Debugging on-device
First, you will need to enable debugging in the project files by adding `android:debuggable="true"` to the `application` tag in servo/support/android/apk/app/src/main/AndroidManifest.xml.
~/android-ndk-r9c/ndk-gdb \
    --adb=/Users/larsberg/android-sdk-macosx/platform-tools/adb \
    --project=servo/support/android/apk/app/src/main/ \
    --launch=org.servo.servoshell.MainActivity \
    --verbose
To get symbols resolved, you may need to provide additional library paths (at the gdb prompt):
set solib-search-path /Users/larsberg/servo/support/android/apk/obj/local/armeabi/:/Users/larsberg/servo/support/android/apk/libs/armeabi
Alternatively, you may need to enter the same path names as above in the support/android/apk/libs/armeabi/gdb.setup file.
If you are not able to get past the "Application Servo (process org.servo.servoshell) is waiting for the debugger to attach." prompt after entering `continue` at the `(gdb)` prompt, you might have to set Servo as the debug app (use the "Select debug app" option under "Developer Options" in the Settings app). If this doesn't work, Stack Overflow will help you.
The ndk-gdb debugger may complain about `... function not defined` when you try to set breakpoints. Just answer `y` when it asks you to set breakpoints on future library loads. You will be able to catch your breakpoints during execution.
x86 build
To build an x86 version, follow the above instructions, but replace `--android` with `--target=i686-linux-android`. The x86 emulator will need to support GLES v3 (use an AVD from Android Studio v3+).
WebVR support
- Enable the WebVR preference: `"dom.webvr.enabled": true`
- `./mach build --release --android --features googlevr`
- `./mach package --release --android --flavor googlevr`
- `./mach install --release --android`
- `./mach run --android https://threejs.org/examples/webvr_rollercoaster.html` (warning: after a clean APK install, the first run sometimes loads the default URL)
PandaBoard
If you are using a PandaBoard, Servo is known to run on Android with the instructions above using the following build of Android for PandaBoard: http://releases.linaro.org/12.10/android/leb-panda
Important notices
Unlike on Linux or macOS, Servo's program entry point on Android is in a library, not an executable, so we can't pass command line arguments directly. To approximate command line arguments, we have a hack: you can put command-line arguments, one per line, in the file `/sdcard/servo/android_params` on your device.
You can find a default `android_params` file under `resources` in the Servo repo.
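For example, an `android_params` file that enables a pref and sets a start page might look like this (the pref and URL are illustrative, reusing the examples from earlier in this book):

```
--pref dom.webgpu.enabled
https://demo.servo.org
```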
Default settings:
default font directory: /system/fonts (on Android)
default resource path: /sdcard/servo (on Android)
default font configuration: /sdcard/servo/.fcconfig (on Android)
default fallback font: Roboto
Working on the user interface without building Servo
We provide nightly builds of a Servo library for Android, so it's not necessary to do a full build to work on the user interface.
- Download the latest AAR: https://download.servo.org/nightly/android/servo-latest.aar (this is an armv7 build)
- In your local copy of Servo, create a file
support/android/apk/user.properties
and specify the path to the AAR you just downloaded:servoViewLocal=/tmp/servo-latest.aar
- open
support/android/apk
with Android Studio - important: in the project list, you should see 2 projects:
servoapp
andservoview-local
. If you seeservoapp
andservoview
that means Android Studio didn't correctly read the settings. Re-sync gradle or restart Android Studio - select the build variant
mainArmv7Debug
ormainArmv7Release
- plug your phone
- press the Play button
Alternatively, you can generate the APK with the command line:
wget https://download.servo.org/nightly/android/servo-latest.aar -O /tmp/servo-latest.aar
cd SERVO_REPO/support/android/apk
echo 'servoViewLocal=/tmp/servo-latest.aar' > user.properties
./gradlew clean
./gradlew :servoapp:assembleMainArmv7Release
cd ../../..
adb install -r target/armv7-linux-androideabi/release/servoapp.apk
The relevant files for tweaking the UI are MainActivity.java and activity_main.xml.
Building for OpenHarmony
Get the OpenHarmony tools
Building for OpenHarmony requires the following:
- The OpenHarmony SDK. This is sufficient to compile Servo as a shared library for OpenHarmony.
- The `hvigor` build tool, which compiles an application into an app bundle and signs it.
Setting up the OpenHarmony SDK
The OpenHarmony SDK is required to compile applications for OpenHarmony. The minimum version of the SDK that Servo currently supports is v5.0 (API-12).
Downloading via DevEco Studio
DevEco Studio is an IDE for developing applications for HarmonyOS NEXT and OpenHarmony. It supports Windows and macOS. You can manage installed OpenHarmony SDKs by opening File > Settings and selecting "OpenHarmony SDK". After setting a suitable installation path, you can select the components you want to install for each available API version. DevEco Studio will automatically download and install the components for you.
Manual installation of the OpenHarmony SDK (e.g. on Linux)
- Go to the OpenHarmony release notes and select the version you want to compile for.
- Scroll down to the section "Acquiring Source Code from Mirrors" and click the download link for the version of "Public SDK package for the standard system" matching your host system.
- Extract the archive to a suitable location.
- Switch into the SDK folder with `cd <sdk_folder>/<your_operating_system>`.
- Create a sub-folder with the same name as the API version (e.g. 12 for SDK v5.0) and switch into it.
- Unzip the zip files of the individual components into the folder created in the previous step. Preferably use the `unzip` command on the command line, or manually ensure that the unzipped bundles are called e.g. `native` and not `native-linux-x64-5.x.y.z`.
The following snippet can be used as a reference for steps 4-6:
cd ~/ohos-sdk/linux
for COMPONENT in native toolchains ets js previewer; do
    echo "Extracting component ${COMPONENT}"
    unzip ${COMPONENT}-*.zip
    API_VERSION=$(jq -r '.apiVersion' "${COMPONENT}/oh-uni-package.json")
    mkdir -p "${API_VERSION}"
    mv "${COMPONENT}" "${API_VERSION}/"
done
On Windows, it is recommended to use 7-Zip to unzip the archives, since the Windows Explorer unzip tool is extremely slow.
Manual installation of the HarmonyOS NEXT commandline tools
The HarmonyOS NEXT commandline tools contain the OpenHarmony SDK and the following additional tools:
- `codelinter` (linter)
- `hstack` (crash dump stack analysis tool)
- `hvigor` / `hvigorw` (build tool)
- `ohpm` (package manager)

Currently, the commandline tools package is not publicly available and requires a Chinese Huawei account to download.
Manual installation of hvigor without the commandline tools
`hvigor` (not the wrapper `hvigorw`) is also available via `npm`.
- Install the same Node.js version as the commandline tools ship. For HarmonyOS NEXT, Node 18 is shipped. Ensure that the `node` binary is in `PATH`.
- Install Java using the recommended installation method for your OS. The build steps are known to work with OpenJDK v17, v21 and v23. On macOS, if you install Homebrew's OpenJDK formula, the following additional command may need to be run after the installation, so that the system Java wrappers can find this JDK:
  sudo ln -sfn $HOMEBREW_PREFIX/opt/openjdk/libexec/openjdk.jdk /Library/Java/JavaVirtualMachines/openjdk.jdk
- Edit your `.npmrc` to contain the following line:
  @ohos:registry=https://repo.harmonyos.com/npm/
- Install hvigor and the hvigor-ohos-plugin. This will create a `node_modules` directory in the current directory:
  npm install @ohos/hvigor
  npm install @ohos/hvigor-ohos-plugin
- Now you should be able to run `hvigor.js` in your OpenHarmony project to build a hap bundle:
  /path/to/node_modules/@ohos/hvigor/bin/hvigor.js assembleHap
Configuring hdc on Linux
`hdc` is the equivalent of `adb` for OpenHarmony devices. You can find it in the `toolchains` directory of your SDK. For convenience, you might want to add `toolchains` to your `PATH`. Among other things, `hdc` can be used to open a shell or transfer files between a device and the host system.

`hdc` needs to connect to a physical device via USB, which requires that the user has permission to access the device. It's recommended to add a udev rule allowing hdc to access the corresponding device without needing to run `hdc` as root. This stackoverflow answer also applies to `hdc`: run `lsusb` and check the vendor id of your device, then create the corresponding udev rule. Please note that your user should be a member of the group you specify with `GROUP="xxx"`. Depending on your Linux distribution, you may want to use a different group.

To check if `hdc` is now working, run `hdc list targets`; it should show your device's serial number. If it doesn't work, try rebooting.
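As a sketch, such a rule might look like the following (the vendor id `2717` and the group `plugdev` are placeholders; substitute the vendor id reported by `lsusb` and a group your user belongs to):

```
# /etc/udev/rules.d/90-hdc.rules (hypothetical example)
SUBSYSTEM=="usb", ATTR{idVendor}=="2717", MODE="0660", GROUP="plugdev"
```

After creating the rule, reload the rules with `sudo udevadm control --reload-rules` and replug the device.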
Please note that your device needs to be in "Developer mode" with USB debugging enabled. The process here is exactly the same as one android:
- Tap the build number multiple times to enable developer mode.
- Then navigate to the developer options and enable USB debugging.
- When you connect your device for the first time, confirm the pop-up asking you if you want to trust the computer you are connecting to.
Signing configuration
Most devices require that the HAP is digitally signed by the developer before it can be installed. When using the `hvigor` tool, this can be accomplished by setting a static `signingConfigs` object in the `build-profile.json5` file, or by dynamically creating the `signingConfigs` array on the application context object in the `hvigorfile.ts` script. The `signingConfigs` property is an array of objects with the following structure:
{
"name": "default",
"type": "<OpenHarmony or HarmonyOS>",
"material": {
"certpath": "/path/to/app-signing-certificate.cer",
"storePassword": "<encrypted password>",
"keyAlias": "debugKey",
"keyPassword": "<encrypted password>",
"profile": "/path/to/signed-profile-certificate.p7b",
"signAlg": "SHA256withECDSA",
"storeFile": "/path/to/java-keystore-file.p12"
}
}
Here `<encrypted password>` is a hexadecimal string representation of the plaintext password after being encrypted. The key and salt used to encrypt the passwords are generated by the DevEco Studio IDE and are stored on disk alongside the certificate files and keystore, usually under `<USER HOME>/.ohos/config/openharmony`. You can use the IDE to generate the information needed for password encryption, the required application and profile certificate files, and the keystore itself.
- Open the Project Structure dialog from the File > Project Structure menu.
- Under the 'Signing Config' tab, enable the 'Automatically generate signature' checkbox.
NOTE: The signature autogenerated above is intended only for development and testing. For production builds and distribution via an App Store, the relevant configuration needs to be obtained from the App Store provider.
Once generated, it is necessary to point `mach` to the above "signing material" configuration using the `SERVO_OHOS_SIGNING_CONFIG` environment variable. The value of the variable must be a file path to a valid `.json` file with the same structure as the `signingConfigs` property given above, but with `certPath`, `storeFile` and `profile` given as paths relative to the JSON file, instead of absolute paths.
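As an illustrative sketch (the file names here are made up), such a file might look like:

```
[{
  "name": "default",
  "type": "OpenHarmony",
  "material": {
    "certpath": "./servo-app.cer",
    "storePassword": "<encrypted password>",
    "keyAlias": "debugKey",
    "keyPassword": "<encrypted password>",
    "profile": "./servo-profile.p7b",
    "signAlg": "SHA256withECDSA",
    "storeFile": "./servo-keystore.p12"
  }
}]
```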
Building servoshell
Before building Servo you will need to set some environment variables. direnv is a convenient tool that can automatically set these variables based on an `.envrc` file, but you can also use any other method to set the required environment variables.

`.envrc`:
export OHOS_SDK_NATIVE=/path/to/openharmony-sdk/platform/api-version/native
# Required if the HAP must be signed. See the signing configuration section above.
export SERVO_OHOS_SIGNING_CONFIG=/path/to/signing-configs.json
# Required only when building for HarmonyOS:
export DEVECO_SDK_HOME=/path/to/command-line-tools/sdk # OR /path/to/DevEcoStudio/sdk OR on MacOS /Applications/DevEco-Studio.app/Contents/sdk
# Required only when building for OpenHarmony:
# Note: The openharmony sdk is under ${DEVECO_SDK_HOME}/default/openharmony
# Presumably you would need to replicate this directory structure
export OHOS_BASE_SDK_HOME=/path/to/openharmony-sdk/platform
# If you have the command-line tools installed:
export PATH="${PATH}:/path/to/command-line-tools/bin/"
export NODE_HOME=/path/to/command-line-tools/tool/node
# Alternatively, if you do NOT have the command-line tools installed:
export HVIGOR_PATH=/path/to/parent/directory/containing/node_modules # Not required if `hvigorw` is in $PATH
If you use direnv and an `.envrc` file, please remember to run `direnv allow .` after modifying the `.envrc` file. Otherwise, the environment variables will not be loaded.
The following command can then be used to compile the servoshell application for a 64-bit ARM device or emulator:
./mach build --ohos --release [--flavor=harmonyos]
In the `mach build`, `mach install` and `mach package` commands, `--ohos` is an alias for `--target aarch64-unknown-linux-ohos`. To build for an emulator running on an x86-64 host, use `--target x86_64-unknown-linux-ohos`.

The default `ohos` build / package / install targets OpenHarmony. If you want to build for HarmonyOS, add `--flavor=harmonyos`. Please check the Signing configuration section and add a configuration with `"name": "hos"` and `"type": "HarmonyOS"` and the respective signing certificates.
Installing and running on-device
The following command can be used to install the previously built servoshell application on a 64-bit ARM device or emulator:
./mach install --ohos --release [--flavor=harmonyos]
Further reading
Some basic Rust
Even if you have never seen any Rust code, it's not too hard to read Servo's code. But there are some basic things you must know:
- Match and patterns
- Options
- Expressions
- Traits
- That doesn't sound important, but be sure to understand how `println!()` works, especially the formatting traits
This won't be enough to do any serious work at first, but if you want to navigate the code and fix basic bugs, that should do it. It's a good starting point, and as you dig into Servo source code, you'll learn more.
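To make those items concrete, here is a small standalone example (not Servo code) combining an `Option`, a `match`, and `println!` with different formatting traits:

```rust
fn describe(value: Option<i32>) -> String {
    // `match` must cover every variant of the Option;
    // the `if n < 0` part is a match guard.
    match value {
        Some(n) if n < 0 => format!("negative: {n}"),
        Some(n) => format!("non-negative: {n}"),
        None => "nothing".to_string(),
    }
}

fn main() {
    println!("{}", describe(Some(-3)));
    // `{}` uses the Display trait, `{:?}` uses Debug,
    // and `{:04}` zero-pads to a width of 4.
    println!("{:?}", Some(42));
    println!("{:04}", 7);
}
```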
For more exhaustive documentation:
Cargo and crates
A Rust library is called a crate. Servo uses plenty of crates. These crates are dependencies, and they are listed in files called `Cargo.toml`. Servo is split into components and ports (see the `components` and `ports` directories). Each has its own dependencies, and each has its own `Cargo.toml` file listing them. You can edit this file. For example, `components/net_traits/Cargo.toml` includes:
[dependencies.stb_image]
git = "https://github.com/servo/rust-stb-image"
But because the `rust-stb-image` API might change over time, it's not safe to compile against the `HEAD` of `rust-stb-image`. A `Cargo.lock` file is a snapshot of a `Cargo.toml` file which includes a reference to an exact revision, ensuring everybody is always compiling with the same configuration:
[[package]]
name = "stb_image"
source = "git+https://github.com/servo/rust-stb-image#f4c5380cd586bfe16326e05e2518aa044397894b"
This file should not be edited by hand. In a normal Rust project, to update the git revision you would use `cargo update -p stb_image`, but in Servo, use `./mach cargo-update -p stb_image`. Other arguments to cargo are also understood, e.g. use `--precise '0.2.3'` to update that crate to version 0.2.3.

See Cargo's documentation about Cargo.toml and Cargo.lock files.
Working on a crate
As explained above, Servo depends on a lot of libraries, which makes it very modular. While working on a bug in Servo, you'll often end up in one of its dependencies. You will then want to compile your own version of the dependency (and maybe compiling against the HEAD of the library will fix the issue!).
For example, I'm trying to bring some cocoa events to Servo. The Servo window on Desktop is constructed with a library named winit. winit itself depends on a cocoa library named cocoa-rs. When building Servo, magically, all these dependencies are downloaded and built for you. But because I want to work on this cocoa event feature, I want Servo to use my own version of winit and cocoa-rs.
This is how my projects are laid out:
~/my-projects/servo/
~/my-projects/cocoa-rs/
Both folders are git repositories. To make Servo use `~/my-projects/cocoa-rs/`, first ascertain which version of the crate Servo is using, and whether it is a git dependency or one from crates.io. Both pieces of information can be found using, in this example, `cargo pkgid cocoa` (`cocoa` is the name of the package, which doesn't necessarily match the repo folder name).

If the output is in the format `https://github.com/servo/cocoa-rs#cocoa:0.0.0`, you are dealing with a git dependency, and you will have to edit the `~/my-projects/servo/Cargo.toml` file and add at the bottom:

[patch]
"https://github.com/servo/cocoa-rs#cocoa:0.0.0" = { path = '../cocoa-rs' }

If the output is in the format `https://github.com/rust-lang/crates.io-index#cocoa#0.0.0`, you are dealing with a crates.io dependency, and you will have to edit `~/my-projects/servo/Cargo.toml` in the following way:

[patch]
"cocoa:0.0.0" = { path = '../cocoa-rs' }

Both will tell any cargo project not to use the online version of the dependency, but your local clone.
For more details about overriding dependencies, see Cargo's documentation.
Editor support
Visual Studio Code
By default, rust-analyzer tries to run `cargo` without `mach`, which will cause problems! For example, you might get errors in the rust-analyzer extension about build scripts:
The style crate requires enabling one of its 'servo' or 'gecko' feature flags and, in the 'servo' case, one of 'servo-layout-2013' or 'servo-layout-2020'.
If you are on NixOS, you might get errors about `pkg-config` or `crown`:
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: "Could not run `PKG_CONFIG_ALLOW_SYSTEM_CFLAGS=\"1\" PKG_CONFIG_ALLOW_SYSTEM_LIBS=\"1\" \"pkg-config\" \"--libs\" \"--cflags\" \"fontconfig\"`
[ERROR rust_analyzer::main_loop] FetchWorkspaceError: rust-analyzer failed to load workspace: Failed to load the project at /path/to/servo/Cargo.toml: Failed to read Cargo metadata from Cargo.toml file /path/to/servo/Cargo.toml, Some(Version { major: 1, minor: 74, patch: 1 }): Failed to run `cd "/path/to/servo" && "cargo" "metadata" "--format-version" "1" "--manifest-path" "/path/to/servo/Cargo.toml" "--filter-platform" "x86_64-unknown-linux-gnu"`: `cargo metadata` exited with an error: error: could not execute process `crown -vV` (never executed)
`mach` passes different RUSTFLAGS to the Rust compiler than plain `cargo`, so if you try to build Servo with `cargo`, it will undo all the work done by `mach` (and vice versa). Because of this, and because Servo can currently only be built with `mach`, you need to configure the rust-analyzer extension to use `mach` in `.vscode/settings.json`:
{
"rust-analyzer.rustfmt.overrideCommand": [ "./mach", "fmt" ],
"rust-analyzer.check.overrideCommand": [
"./mach", "cargo-clippy", "--message-format=json" ],
"rust-analyzer.cargo.buildScripts.overrideCommand": [
"./mach", "cargo-clippy", "--message-format=json" ],
}
If having your editor open still causes unwanted rebuilds on the command line, then you can try configuring the extension to use an alternate target directory. This will require more disk space.
{
"rust-analyzer.checkOnSave.overrideCommand": [
"./mach", "cargo-clippy", "--message-format=json", "--target-dir", "target/lsp" ],
"rust-analyzer.cargo.buildScripts.overrideCommand": [
"./mach", "cargo-clippy", "--message-format=json", "--target-dir", "target/lsp" ],
}
To enable optional build settings, append each mach option separately:
{
"rust-analyzer.checkOnSave.overrideCommand": [
"./mach", "check", "--message-format=json", "--target-dir", "target/lsp",
"--debug-mozjs", "--use-crown" ],
"rust-analyzer.cargo.buildScripts.overrideCommand": [
"./mach", "check", "--message-format=json", "--target-dir", "target/lsp",
"--debug-mozjs", "--use-crown" ],
}
NixOS users
If you are on NixOS and using `--use-crown`, you should also set `CARGO_BUILD_RUSTC` in `.vscode/settings.json` as follows, where `/nix/store/.../crown` is the output of `nix-shell --run 'command -v crown'`.
{
"rust-analyzer.server.extraEnv": {
"CARGO_BUILD_RUSTC": "/nix/store/.../crown",
},
}
These settings should be enough to not need to run `code .` from within a `nix-shell`, but it wouldn’t hurt to try that if you still have problems.
When enabling rust-analyzer’s proc macro support, you may start to see errors like
proc macro `MallocSizeOf` not expanded: Cannot create expander for /path/to/servo/target/debug/deps/libfoo-0781e5a02b945749.so: unsupported ABI `rustc 1.69.0-nightly (dc1d9d50f 2023-01-31)` rust-analyzer(unresolved-proc-macro)
This means rust-analyzer is using the wrong proc macro server, and you will need to configure the correct one manually. Use mach to query the current sysroot path, and copy the last line of output:
$ ./mach rustc --print sysroot
NOTE: Entering nix-shell /path/to/servo/shell.nix
info: component 'llvm-tools' for target 'x86_64-unknown-linux-gnu' is up to date
/home/me/.rustup/toolchains/nightly-2023-02-01-x86_64-unknown-linux-gnu
Then configure either your sysroot path or proc macro server path in `.vscode/settings.json`:
{
"rust-analyzer.procMacro.enable": true,
"rust-analyzer.cargo.sysroot": "[paste what you copied]",
"rust-analyzer.procMacro.server": "[paste what you copied]/libexec/rust-analyzer-proc-macro-srv",
}
Debugging
One of the simplest ways to debug Servo is to print interesting variables with the `println!`, `eprintln!`, or `dbg!` macros. In general, these should only be used temporarily; you’ll need to remove them or convert them to proper debug logging before your pull request will be merged.
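For example, `dbg!` prints the file, line, expression, and value to stderr, and then returns the value, so it can wrap an expression in place (a standalone sketch, not Servo code):

```rust
fn main() {
    let width = 4;
    // Prints something like `[src/main.rs:5:19] width * 2 = 8` to stderr,
    // and still evaluates to the value, so the binding works as before.
    let doubled = dbg!(width * 2);
    println!("doubled = {doubled}");
}
```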
Debug logging with `log` and `RUST_LOG`
Servo uses the `log` crate for long-term debug logging and error messages:
log::error!("hello");
log::warn!("hello");
log::info!("hello");
log::debug!("hello");
log::trace!("hello");
Unlike macros like `println!`, `log` adds a timestamp and tells you where the message came from:
[2024-05-01T09:07:42Z ERROR servoshell::app] hello
[2024-05-01T09:07:42Z WARN servoshell::app] hello
[2024-05-01T09:07:42Z INFO servoshell::app] hello
[2024-05-01T09:07:42Z DEBUG servoshell::app] hello
[2024-05-01T09:07:42Z TRACE servoshell::app] hello
You can use `RUST_LOG` to filter the output of `log` by level (`off`, `error`, `warn`, `info`, `debug`, `trace`) and/or by where the message came from, also known as the “target”. Usually the target is a Rust module path like `servoshell::app`, but there are some special targets too (see § Event tracing). To set `RUST_LOG`, prepend it to your command or use `export`:
$ RUST_LOG=warn ./mach run -d test.html # Uses the prepended RUST_LOG.
$ export RUST_LOG=warn
$ ./mach run -d test.html # Uses the exported RUST_LOG.
See the `env_logger` docs for more details, but here are some examples:
- to enable all messages up to and including `debug` level, but not `trace`: `RUST_LOG=debug`
- to enable all messages from `servo::*`, `servoshell::*`, or any target starting with `servo`: `RUST_LOG=servo=trace` (or just `RUST_LOG=servo`)
- to enable all messages from any target starting with `style`, but only `error` and `warn` messages from `style::rule_tree`: `RUST_LOG=style,style::rule_tree=warn`
Note that even when a log message is filtered out, it can still impact runtime performance, albeit only slightly.
Some builds of Servo, including official nightly releases, remove DEBUG and TRACE messages at compile time, so enabling them with `RUST_LOG` will have no effect.
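This compile-time filtering comes from the `log` crate's level-capping Cargo features. For illustration only (this is not Servo's actual configuration), a crate can cap release builds at `info` level like this:

```
[dependencies]
log = { version = "0.4", features = ["release_max_level_info"] }
```

With that feature enabled, `debug!` and `trace!` calls compile to no-ops in release builds, regardless of `RUST_LOG`.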
Event tracing
In the constellation, the compositor, and servoshell, we log messages sent to and received from other components, using targets of the form `component>other@Event` or `component<other@Event`. This means you can select which event types to log at runtime with `RUST_LOG`!
For example, in the constellation (more details):
- to trace only events from script: `RUST_LOG='constellation<=off,constellation<script@'`
- to trace all events except for ReadyToPresent events: `RUST_LOG='constellation<,constellation<compositor@ReadyToPresent=off'`
- to trace only script InitiateNavigateRequest events: `RUST_LOG='constellation<=off,constellation<script@InitiateNavigateRequest'`
In the compositor (more details):
- to trace only MoveResizeWebView events: `RUST_LOG='compositor<constellation@MoveResizeWebView'`
- to trace all events except for Forwarded events: `RUST_LOG='compositor<,compositor<constellation@Forwarded=off'`
In servoshell (more details):
- to trace only events from servo: `RUST_LOG='servoshell<=off,servoshell>=off,servoshell<servo@'`
- to trace all events except for AxisMotion events: `RUST_LOG='servoshell<,servoshell>,servoshell<winit@WindowEvent(AxisMotion)=off'`
- to trace only winit window moved events: `RUST_LOG='servoshell<=off,servoshell>=off,servoshell<winit@WindowEvent(Moved)'`
Event tracing can generate an unwieldy amount of output. In general, we recommend the following config to keep things usable:
constellation<,constellation>,constellation<compositor@ForwardEvent(MouseMoveEvent)=off,constellation<compositor@LogEntry=off,constellation<compositor@ReadyToPresent=off,constellation<script@LogEntry=off
compositor<,compositor>
servoshell<,servoshell>,servoshell<winit@DeviceEvent=off,servoshell<winit@MainEventsCleared=off,servoshell<winit@NewEvents(WaitCancelled)=off,servoshell<winit@RedrawEventsCleared=off,servoshell<winit@RedrawRequested=off,servoshell<winit@UserEvent(WakerEvent)=off,servoshell<winit@WindowEvent(CursorMoved)=off,servoshell<winit@WindowEvent(AxisMotion)=off,servoshell<servo@EventDelivered=off,servoshell<servo@ReadyToPresent=off,servoshell>servo@Idle=off,servoshell>servo@MouseWindowMoveEventClass=off
Other debug logging
`mach run` does this automatically, but to print a backtrace when Servo panics:
$ RUST_BACKTRACE=1 target/debug/servo test.html
Use `-Z` (`-- --debug`) to enable debug options. For example, to print the stacking context tree after each layout, or to get help about these options:
$ ./mach run -Z dump-stacking-context-tree test.html
$ ./mach run -Z help # Lists available debug options.
$ ./mach run -- --debug help # Same as above: lists available debug options.
$ ./mach run --debug # Not the same! This just chooses target/debug.
On macOS, you can also add some Cocoa-specific debug options, after an extra `--`:
$ ./mach run -- test.html -- -NSShowAllViews YES
Running servoshell with a debugger
To run servoshell with a debugger, use `--debugger-cmd`. Note that if you choose `gdb` or `lldb`, we automatically use `rust-gdb` and `rust-lldb`.
$ ./mach run --debugger-cmd=gdb test.html # Same as `--debugger-cmd=rust-gdb`.
$ ./mach run --debugger-cmd=lldb test.html # Same as `--debugger-cmd=rust-lldb`.
To pass extra options to the debugger, you’ll need to run the debugger yourself:
$ ./mach run --debugger-cmd=gdb -ex=r test.html # Passes `-ex=r` to servoshell.
$ rust-gdb -ex=r --args target/debug/servo test.html # Passes `-ex=r` to gdb.
$ ./mach run --debugger-cmd=lldb -o r test.html # Passes `-o r` to servoshell.
$ rust-lldb -o r -- target/debug/servo test.html # Passes `-o r` to lldb.
$ ./mach run --debugger-cmd=rr -M test.html # Passes `-M` to servoshell.
$ rr record -M target/debug/servo test.html # Passes `-M` to rr.
Many debuggers need extra options to separate servoshell’s arguments from their own options, and `--debugger-cmd` will pass those options automatically for a few debuggers, including `gdb` and `lldb`. For other debuggers, `--debugger-cmd` will only work if the debugger needs no extra options:
$ ./mach run --debugger-cmd=rr test.html # Good, because it’s...
# servoshell arguments ^^^^^^^^^
$ rr target/debug/servo test.html # equivalent to this.
# servoshell arguments ^^^^^^^^^
$ ./mach run --debugger-cmd=renderdoccmd capture test.html # Bad, because it’s...
# renderdoccmd arguments? ^^^^^^^
# servoshell arguments ^^^^^^^^^
$ renderdoccmd target/debug/servo capture test.html # equivalent to this.
# => target/debug/servo is not a valid command.
$ renderdoccmd capture target/debug/servo test.html # Good.
# ^^^^^^^ renderdoccmd arguments
# servoshell arguments ^^^^^^^^^
Debugging with gdb or lldb
To search for a function by name or regex:
(lldb) image lookup -r -n <name>
(gdb) info functions <name>
To list the running threads:
(lldb) thread list
(gdb) info threads
Other commands for gdb or lldb include:
(gdb) b a_servo_function # Add a breakpoint.
(gdb) run # Run until breakpoint is reached.
(gdb) bt # Print backtrace.
(gdb) frame n # Choose the stack frame by its number in `bt`.
(gdb) next # Run one line of code, stepping over function calls.
(gdb) step # Run one line of code, stepping into function calls.
(gdb) print varname # Print a variable in the current scope.
See this gdb tutorial or this lldb tutorial for more details.
To inspect variables in lldb, you can also type `gui`, then use the arrow keys to expand variables:
(lldb) gui
┌──<Variables>───────────────────────────────────────────────────────────────────────────┐
│ ◆─(&mut gfx::paint_task::PaintTask<Box<CompositorProxy>>) self = 0x000070000163a5b0 │
│ ├─◆─(msg::constellation_msg::PipelineId) id │
│ ├─◆─(url::Url) _url │
│ │ ├─◆─(collections::string::String) scheme │
│ │ │ └─◆─(collections::vec::Vec<u8>) vec │
│ │ ├─◆─(url::SchemeData) scheme_data │
│ │ ├─◆─(core::option::Option<collections::string::String>) query │
│ │ └─◆─(core::option::Option<collections::string::String>) fragment │
│ ├─◆─(std::sync::mpsc::Receiver<gfx::paint_task::LayoutToPaintMsg>) layout_to_paint_port│
│ ├─◆─(std::sync::mpsc::Receiver<gfx::paint_task::ChromeToPaintMsg>) chrome_to_paint_port│
└────────────────────────────────────────────────────────────────────────────────────────┘
If lldb crashes on certain lines involving the `profile()` function, it’s not just you. Comment out the profiling code, keeping only the inner function, and that should do it.
Reversible debugging with rr (Linux only)
rr is like gdb, but lets you rewind. Start by running servoshell via rr:
$ ./mach run --debugger-cmd=rr test.html # Either this...
$ rr target/debug/servo test.html # ...or this.
Then replay the trace, using gdb commands or rr commands:
$ rr replay
(rr) continue
(rr) reverse-cont
To run one or more tests repeatedly until the result is unexpected:
$ ./mach test-wpt --chaos path/to/test [path/to/test ...]
Traces recorded by rr can take up a lot of space. To delete them, go to `~/.local/share/rr`.
OpenGL debugging with RenderDoc (Linux or Windows only)
RenderDoc lets you debug Servo’s OpenGL activity. Start by running servoshell via renderdoccmd:
$ renderdoccmd capture -d . target/debug/servo test.html
While servoshell is running, run `qrenderdoc`, then choose File > Attach to Running Instance.
Once attached, you can press F12 or Print Screen to capture a frame.
DevTools
Firefox DevTools are a set of web developer tools that can be used to examine, edit, and debug a website's HTML, CSS, and JavaScript. Servo has support for a subset of DevTools functionality, allowing for simple debugging.
Connecting to Servo
- Run servoshell with the DevTools server enabled. The number after the `--devtools` parameter is the port used by the server:
  ./mach run --devtools=6080
- Open Firefox and go to `about:debugging`. If this is your first time using the DevTools integration, go to Setup and add `localhost:6080` as a network location. The port number must be the same as in the previous step.
- Click on Connect in the sidebar next to `localhost:6080`.
- Accept the incoming connection. If no confirmation dialog appears, press `Y` in the terminal that servoshell is running in.
- Back in Firefox, choose a webview and click Inspect. A new window should open with the page's inspector.
Using the inspector
The inspector window is divided into several tabs with different workspaces. At the moment, Inspector and Console are working.
In the Inspector tab there are three columns. From left to right:
- The HTML tree shows the document nodes. This allows you to see, add, or modify attributes by double-clicking on the tag name or attribute.
- The style inspector displays the CSS styles associated with the selected element. The entries here come from the element's style attribute, from matching stylesheet rules, or inherited from other elements. Styles can be added or modified by clicking on a selector or property, or clicking in the empty space below.
- The extra column contains more helpful tools:
  - Layout, which contains information about the box model properties of the element. Note that flex and grid do not work yet.
  - Computed, which contains all of the CSS computed values after resolving things like relative units.
The Console tab contains a JavaScript console that interfaces with the website being displayed in Servo. Errors, warnings, and information that the website produces will be logged here. It can also be used to run JavaScript code directly on the website, for example, changing the document content or reloading the page:
document.write("Hello, Servo!")
location.reload()
Note: support for DevTools features is still a work in progress, and it can break in future versions of Firefox if there are changes to the messaging protocol.
Automated testing
This is boring.
But your PR won't get accepted without a test.
Tests are located in the `tests` directory. You'll see that there are a lot of files in there, so finding the proper location for your test is not always obvious. First, look at the "Testing" section in `./mach --help` to understand the different test categories. You'll also find some `update-*` commands, which are used to update the lists of expected results.
To run a test:
./mach test-wpt tests/wpt/yourtest
For your PR to get accepted, source code also has to satisfy certain tidiness requirements.
To check code tidiness:
./mach test-tidy
Updating a test
In some cases, extensive tests for the feature you're working on already exist under tests/wpt:
- Make a release build
- Run `./mach test-wpt --release --log-raw=/path/to/some/logfile`
- Run `./mach update-wpt` on the logfile
This may create a new commit with changes to expectation ini files. If there are lots of changes, it's likely that your feature had tests in wpt already.
Include this commit in your pull request.
Add a new test
If you need to create a new test file, it should be located in `tests/wpt/mozilla/tests`, or in `tests/wpt/web-platform-tests` if it's something that doesn't depend on Servo-only features. You'll then need to update the list of tests and the list of expected results:
./mach test-wpt --manifest-update
Debugging a test
See the debugging guide to get started in how to debug Servo.
Web tests (`tests/wpt`)
This folder contains the web platform tests and the code required to integrate them with Servo. To learn how to write tests, go here.
Contents of `tests/wpt`
In particular, this folder contains:
- `config.ini`: some configuration for the web-platform-tests.
- `include.ini`: the subset of web-platform-tests we currently run.
- `tests`: a copy of the web-platform-tests.
- `meta`: expected failures for the web-platform-tests we run.
- `mozilla`: web-platform-tests that cannot be upstreamed.
Running web tests
The simplest way to run the web-platform-tests in Servo is `./mach test-wpt` in the root directory. This will run the subset of JavaScript tests defined in `include.ini` and log the output to stdout.

A subset of tests may be run by providing positional arguments to the mach command, either as filesystem paths or as test urls, e.g. `./mach test-wpt tests/wpt/tests/dom/historical.html` to run the dom/historical.html test, or `./mach test-wpt dom` to run all the DOM tests. There are also a large number of command line options accepted by the test harness; these are documented by running with `--help`.
Running all web tests
Running all the WPT tests in debug mode results in a lot of timeouts. If you want to run all the tests, build with `mach build -r` and test with `mach test-wpt --release`.
Running web tests with an external WPT server
Normally wptrunner starts its own WPT server, but occasionally you might want to run multiple instances of `mach test-wpt`, such as when debugging one test while running the full suite in the background, or when running a single test many times in parallel (`--processes` only works across different tests). This would lead to “Failed to start HTTP server” errors, because you can only run one WPT server at a time. To fix this:
- Follow the steps in Running web tests manually
- Add a `break` to start_servers in serve.py as follows:
--- a/tests/wpt/tests/tools/serve/serve.py
+++ b/tests/wpt/tests/tools/serve/serve.py
@@ -746,6 +746,7 @@ def start_servers(logger, host, ports, paths, routes, bind_address, config,
mp_context, log_handlers, **kwargs):
servers = defaultdict(list)
for scheme, ports in ports.items():
+ break
assert len(ports) == {"http": 2, "https": 2}.get(scheme, 1)
# If trying to start HTTP/2.0 server, check compatibility
- Run mach test-wpt as many times as needed
If you get unexpected TIMEOUT in testharness tests, then the custom testharnessreport.js may have been installed incorrectly (see Running web tests manually for more details).
Running web tests manually
(See also the relevant section of the upstream README.)
It can be useful to run a test without the interference of the test runner, for example when using a debugger such as gdb.
To do this, we need to start the WPT server manually, which requires some extra configuration.
First, add the following to the system's hosts file:
127.0.0.1 www.web-platform.test
127.0.0.1 www1.web-platform.test
127.0.0.1 www2.web-platform.test
127.0.0.1 web-platform.test
127.0.0.1 xn--n8j6ds53lwwkrqhv28a.web-platform.test
127.0.0.1 xn--lve-6lad.web-platform.test
Navigate to tests/wpt/web-platform-tests for the remainder of this section.
Normally wptrunner installs Servo’s version of testharnessreport.js, but when starting the WPT server manually, we get the default version, which won’t report test results correctly. To fix this:
- Create a directory local-resources
- Copy tools/wptrunner/wptrunner/testharnessreport-servo.js to local-resources/testharnessreport.js
- Edit local-resources/testharnessreport.js to substitute the variables as follows:
  - %(output)d → 1 if you want to play with the test interactively (≈ pause-after-test), or 0 if you don’t care about that (though it’s also ok to always use 1)
  - %(debug)s → true
- Create a ./config.json as follows (see tools/wave/config.default.json for defaults):

{"aliases": [{
    "url-path": "/resources/testharnessreport.js",
    "local-dir": "local-resources"
}]}
Then start the server with ./wpt serve.
To check if testharnessreport.js was installed correctly:
- The standard output of curl http://web-platform.test:8000/resources/testharnessreport.js should look like testharnessreport-servo.js, not like the default testharnessreport.js
- The standard output of target/release/servo http://web-platform.test:8000/css/css-pseudo/highlight-pseudos-computed.html (or any testharness test) should contain lines starting with: TEST START, TEST STEP, TEST DONE, and ALERT: RESULT:
To prevent browser SSL warnings when running HTTPS tests locally, you will need to run Servo with --certificate-path resources/cert-wpt-only
.
Running web tests in Firefox
When working with tests, you may want to compare Servo's result with Firefox's.
You can supply --product firefox along with the path to a Firefox binary (as well as a few more odds and ends) to run tests in Firefox from your Servo checkout:
GECKO="$HOME/projects/mozilla/gecko"
GECKO_BINS="$GECKO/obj-firefox-release-artifact/dist/Nightly.app/Contents/MacOS"
./mach test-wpt dom --product firefox --binary $GECKO_BINS/firefox --certutil-binary $GECKO_BINS/certutil --prefs-root $GECKO/testing/profiles
Updating web test expectations
When fixing a bug that causes the result of a test to change, the expected results for that test need to be changed.
This can be done manually, by editing the .ini file under the meta folder that corresponds to the test.
When editing manually, remove the references to tests whose expectation is now PASS, and delete .ini files that no longer contain any expectations.
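For example, a metadata file such as meta/dom/historical.html.ini (path and subtest name hypothetical) might record a failing subtest like this; when the subtest starts to pass, delete its block, and delete the file once it is empty:

```ini
[historical.html]
  [DOMException-constants]
    expected: FAIL
```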
When a larger number of changes is required, this process can be automated.
This first requires saving the raw, unformatted log from a test run, for example by running ./mach test-wpt --log-raw /tmp/servo.log.
Once the log is saved, run from the root directory:
./mach update-wpt /tmp/servo.log
Writing new web tests
The simplest way to create a new test is to use the following command:
./mach create-wpt tests/wpt/path/to/new/test.html
This will create test.html in the appropriate directory using the WPT template for JavaScript tests. Tests are written using testharness.js. Documentation can be found here. To create a new reference test instead, use the following:
./mach create-wpt --reftest tests/wpt/path/to/new/reftest.html --reference tests/wpt/path/to/reference.html
reference.html will be created if it does not exist, and reftest.html will be created using the WPT reftest template.
To learn more about reftests, see this.
These new tests can then be run like any other WPT test:
./mach test-wpt tests/wpt/path/to/new/test.html
./mach test-wpt tests/wpt/path/to/new/reftest.html
Editing web tests
web-platform-tests may be edited in-place and the changes committed to the servo tree. These changes will be upstreamed when the tests are next synced.
Updating the upstream tests
To update the tests from upstream, use the same mach update commands; for example, to update the web-platform-tests:
./mach update-wpt --sync
./mach test-wpt --log-raw=update.log
./mach update-wpt update.log
This should create two commits in your servo repository with the updated tests and updated metadata.
Servo-specific tests
The mozilla directory contains tests that cannot be upstreamed for some reason (e.g. because they depend on Servo-specific APIs), as well as some legacy tests that should be upstreamed at some point.
When run, they are mounted on the server under /_mozilla/.
Analyzing reftest results
Reftest results can be analyzed from a raw log file.
To generate one, run with the --log-raw option, e.g.
./mach test-wpt --log-raw wpt.log
This file can then be fed into the reftest analyzer, which will show all failing tests (not just those with unexpected results). Note that this ingests logs in a different format to the original version of the tool written for Gecko reftests.
The reftest analyzer allows pixel-level comparison of the test and reference screenshots.
Tests that both fail and have an unexpected result are marked with a !.
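The raw log is newline-delimited JSON (the mozlog format), so it can also be post-processed directly. The sketch below is a hypothetical helper, assuming that "test_end" entries carry an "expected" field only when the result was unexpected:

```python
import json

def unexpected_results(log_path):
    """List tests whose result differed from the expectation.

    Assumes the mozlog raw format: one JSON object per line, where
    "test_end" entries include an "expected" field only when the
    actual "status" was unexpected.
    """
    unexpected = []
    with open(log_path) as log:
        for line in log:
            entry = json.loads(line)
            if entry.get("action") == "test_end" and "expected" in entry:
                unexpected.append(
                    (entry["test"], entry["status"], entry["expected"])
                )
    return unexpected
```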
Updating the WPT manifest
MANIFEST.json can be regenerated automatically with the update-manifest mach command, e.g.
./mach update-manifest
This is equivalent to running
./mach test-wpt --manifest-update SKIP_TESTS
Profiling
When profiling Servo or troubleshooting performance issues, make sure your build is optimised while still allowing for accurate profiling data.
$ ./mach build --profile profiling --with-frame-pointer
- --profile profiling builds Servo with our profiling configuration
- --with-frame-pointer builds Servo with stack frame pointers on all platforms
There are several ways to get profiling information about Servo's runs:
Interval Profiling
Using the -p option followed by a number (a time period in seconds), you can periodically print profiling information to the terminal. To do so, run Servo on the desired site (URLs and local file paths are both supported) with profiling enabled:
./mach run --release -p 5 https://www.cnn.com/
In the example above, while Servo is still running (and processing new passes), the profiling information is printed to the terminal every 5 seconds.
Once the page has loaded, hit ESC (or close the app) to exit. Profiling output will be provided, broken down by area of the browser and URL. For example, if you would like to profile loading Hacker News, you might get output of the form below:
[larsberg@lbergstrom servo]$ ./mach run --release -p 5 http://news.ycombinator.com/
_category_ _incremental?_ _iframe?_ _url_ _mean (ms)_ _median (ms)_ _min (ms)_ _max (ms)_ _events_
Compositing N/A N/A N/A 0.0440 0.0440 0.0440 0.0440 1
Layout no no https://news.ycombinator.com/ 29.8497 29.8497 29.8497 29.8497 1
Layout yes no https://news.ycombinator.com/ 11.0412 10.9748 10.8149 11.3338 3
+ Style Recalc no no https://news.ycombinator.com/ 22.8149 22.8149 22.8149 22.8149 1
+ Style Recalc yes no https://news.ycombinator.com/ 5.3933 5.2915 5.2727 5.6157 3
+ Restyle Damage Propagation no no https://news.ycombinator.com/ 0.0135 0.0135 0.0135 0.0135 1
+ Restyle Damage Propagation yes no https://news.ycombinator.com/ 0.0146 0.0149 0.0115 0.0175 3
+ Primary Layout Pass no no https://news.ycombinator.com/ 3.3569 3.3569 3.3569 3.3569 1
+ Primary Layout Pass yes no https://news.ycombinator.com/ 2.8727 2.8472 2.8279 2.9428 3
| + Parallel Warmup no no https://news.ycombinator.com/ 0.0002 0.0002 0.0002 0.0002 2
| + Parallel Warmup yes no https://news.ycombinator.com/ 0.0002 0.0002 0.0001 0.0002 6
+ Display List Construction no no https://news.ycombinator.com/ 3.4058 3.4058 3.4058 3.4058 1
+ Display List Construction yes no https://news.ycombinator.com/ 2.6722 2.6523 2.6374 2.7268 3
In this example, when loading the page we performed one full layout and three incremental layout passes, for a total of (29.8497 + 11.0412 * 3) = 62.9733ms.
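As a sanity check, the total above can be reproduced from the table's mean times and event counts (values copied from the example output):

```python
# Total layout time = full layout + incremental layouts (mean × events).
full_layout_ms = 29.8497       # "Layout, incremental? no": mean, 1 event
incremental_mean_ms = 11.0412  # "Layout, incremental? yes": mean
incremental_events = 3

total_ms = full_layout_ms + incremental_mean_ms * incremental_events
print(round(total_ms, 4))  # → 62.9733
```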
TSV Profiling
Using the -p option followed by a file name, you can write profiling information from Servo's execution to a TSV file (tab-separated, because some URLs contain commas). The information is written to the file only upon Servo's termination. This works well with the -x or -o options, so that performance information can be collected during automated runs. Example usage:
./mach run -r -o out.png -p out.tsv https://www.google.com/
The formats of the profiling information in the Interval and TSV Profiling options are essentially the same; the URL names are not truncated in the TSV Profiling option.
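Since the output is plain TSV, it is easy to post-process. Below is a sketch that ranks categories by mean time; it assumes the TSV mirrors the interval-profiling table (a header row, then the mean in milliseconds in the fifth column), which may need adjusting for the actual column layout:

```python
import csv

def slowest_categories(tsv_path, top=5):
    """Rank profiling categories by mean time, slowest first.

    Assumes the TSV mirrors the interval-profiling table: a header
    row, then one row per (category, incremental?, iframe?, url)
    with the mean time in milliseconds as the fifth column.
    """
    rows = []
    with open(tsv_path) as f:
        reader = csv.reader(f, delimiter="\t")
        next(reader)  # skip the header row
        for row in reader:
            rows.append((row[0].strip(), float(row[4])))
    return sorted(rows, key=lambda r: r[1], reverse=True)[:top]
```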
Generating Timelines
Add the --profiler-trace-path /timeline/output/path.html flag to output the profiling data as a self-contained HTML timeline.
Because it is a self-contained file (all CSS and JS is inline), it is easy to share, upload, or link to from bug reports.
$ ./mach run --release -p 5 --profiler-trace-path trace.html https://reddit.com/
Usage:
- Use the mouse wheel or trackpad scrolling, with the mouse focused along the top of the timeline, to zoom the viewport in or out.
- Grab the selected area along the top and drag left or right to side scroll.
- Hover over a trace to show more information.
Hacking
The JS, CSS, and HTML for the timeline comes from fitzgen/servo-trace-dump, and there is a script in that repo for updating Servo's copy.
All other code is in the components/profile/
directory.
Sampling profiler
Servo includes a sampling profiler which generates profiles that can be opened in the Gecko profiling tools. To use it:
- Run Servo, loading the page you wish to profile
- Press Ctrl+P (or Cmd+P on macOS) to start the profiler (the console should show "Enabling profiler")
- Press Ctrl+P (or Cmd+P on macOS) to stop the profiler (the console should show "Stopping profiler")
- Keep Servo running until the symbol resolution is complete (the console should show a final "Resolving N/N")
- Run python etc/profilicate.py samples.json > gecko_samples.json to transform the profile into a format that the Gecko profiler understands
- Load gecko_samples.json into https://perf-html.io/
To control the output filename, set the PROFILE_OUTPUT environment variable.
To control the sampling rate (default 10ms), set the SAMPLING_RATE environment variable.
Memory Profiling
Using the -m option followed by a number (a time period in seconds), you can periodically print memory profiling information to the terminal. To do so, run Servo on the desired site (URLs and local file paths are both supported) with profiling enabled:
./mach run --release -m 5 http://example.com/
In the example above, while Servo is still running (and processing new passes), the memory profiling information is printed to the terminal every 5 seconds.
./mach run --release -m 5 http://example.com/
Begin memory reports 5
|
| 115.15 MiB -- explicit
| 101.15 MiB -- jemalloc-heap-unclassified
| 14.00 MiB -- url(http://example.com/)
| 10.01 MiB -- layout-thread
| 10.00 MiB -- font-context
| 0.00 MiB -- stylist
| 0.00 MiB -- display-list
| 4.00 MiB -- js
| 2.75 MiB -- malloc-heap
| 1.00 MiB -- gc-heap
| 0.56 MiB -- decommitted
| 0.35 MiB -- used
| 0.06 MiB -- unused
| 0.02 MiB -- admin
| 0.25 MiB -- non-heap
| 0.00 MiB -- memory-cache
| 0.00 MiB -- private
| 0.00 MiB -- public
|
| 121.89 MiB -- jemalloc-heap-active
| 111.16 MiB -- jemalloc-heap-allocated
| 203.02 MiB -- jemalloc-heap-mapped
| 272.61 MiB -- resident
|404688.75 MiB -- vsize
|
End memory reports
Using macOS Instruments
Xcode includes Instruments, a tool that makes profiling easy.
First, ensure the Xcode command line tools are installed:
$ xcode-select --install
Second, install cargo-instruments
via Homebrew:
$ brew install cargo-instruments
Then, you can simply run it via CLI:
$ cargo instruments -t Allocations
Here are some links and resources for help with Instruments (some videos may stream only in Safari):
- cargo-instruments on crates.io
- Using Time Profiler in Instruments
- Profiling in Depth
- System Trace in Depth
- Threads, virtual memory, and locking
- Core Data Performance Optimization and Debugging
- Learning Instruments
Profiling Webrender
Use the following command to get some profile data from WebRender:
$ ./mach run -w -Z wr-stats --release http://www.nytimes.com
When you run Servo with this command, you'll be looking at three things:
- CPU (backend): The amount of time WebRender is packing and batching data.
- CPU (Compositor): Amount of time WebRender is issuing GL calls and interacting with the driver.
- GPU: Amount of time the GPU is taking to execute the shaders.
Identifying and fixing bugs observed in real websites
There are two main classes of web compatibility issues that can be observed in Servo. Visual bugs are often caused by missing features or bugs in Servo's CSS and layout support, while interactivity problems and broken content are often caused by bugs or missing features in Servo's DOM and JavaScript implementation.
Diagnosing JS errors
Error messages like the following clearly show that a certain DOM interface has not been implemented in Servo yet:
[2024-08-16T01:56:15Z ERROR script::dom::bindings::error] Error at https://github.githubassets.com/assets/vendors-node_modules_github_mini-throttle_dist_index_js-node_modules_smoothscroll-polyfill_di-75db2e-686488490524.js:1:9976 AbortSignal is not defined
However, error messages like the following do not provide much guidance:
[2024-08-16T01:58:25Z ERROR script::dom::bindings::error] Error at https://github.githubassets.com/assets/react-lib-7b7b5264f6c1.js:25:12596 e is undefined
Opening the JS file linked from the error message often shows a minified, obfuscated JS script that is almost impossible to read.
Let's start by unminifying it. Ensure that the js-beautify binary is in your path, or install it with:
npm install -g js-beautify
Now, run the problem page in Servo again with the built-in unminifying enabled:
./mach run https://github.com/servo/servo/activity --unminify-js
This creates an unminified-js directory in the root Servo repository and automatically persists unminified copies of each external JS script that is fetched over the page's lifetime. Servo also evaluates the unminified versions of the scripts, so the line and column numbers in the error messages also change:
[2024-08-16T02:05:34Z ERROR script::dom::bindings::error] Error at https://github.githubassets.com/assets/react-lib-7b7b5264f6c1.js:3377:66 e is undefined
You'll find react-lib-7b7b5264f6c1.js inside of ./unminified-js/github.githubassets.com/assets/, and if you look at line 3377 you will be able to start reading the surrounding code to (hopefully) determine what's going wrong in the page. If code inspection is not enough, however, Servo also supports modifying the locally-cached unminified JS!
./mach run https://github.com/servo/servo/activity --local-script-source unminified-js
When the --local-script-source argument is used, Servo will look for JS files in the provided directory first before attempting to fetch them from the internet. This allows Servo developers to add console.log(..) statements and other useful debugging techniques to assist in understanding what real webpages are observing. If you need to revert to a pristine version of the page source, just run with the --unminify-js argument again to replace them with new unminified source files.
Architecture overview
Servo is a project to develop a new Web browser engine. Our goal is to create an architecture that takes advantage of parallelism at many levels while eliminating common sources of bugs and security vulnerabilities associated with incorrect memory management and data races.
Because C++ is poorly suited to preventing these problems, Servo is written in Rust, a new language designed specifically with Servo's requirements in mind. Rust provides a task-parallel infrastructure and a strong type system that enforces memory safety and data race freedom.
When making design decisions we will prioritize the features of the modern web platform that are amenable to high-performance, dynamic, and media-rich applications, potentially at the cost of features that cannot be optimized. We want to know what a fast and responsive web platform looks like, and to implement it.
Architecture
flowchart TB
subgraph Content Process
ScriptA[Script Thread A]-->LayoutA[Layout A]
LayoutA
ScriptB[Script Thread B]-->LayoutB
LayoutB[Layout B]
end
subgraph Main Process
direction TB
Embedder-->Constellation[Constellation Thread]
Embedder<-->Compositor[Compositor]
Constellation<-->Compositor
Compositor-->WebRender
Constellation-->Font[Font Cache]
Constellation-->Image[Image Cache]
Constellation-->Resource[Resource Manager]
end
Constellation<-->ScriptA
LayoutA-->Image
LayoutA-->Font
LayoutA-->Compositor
This diagram shows the architecture of Servo in the case of only a single content process. Servo is designed to support multiple content processes at once. A single content process can have multiple script threads running at the same time with their own layout. These have their own communication channels with the main process. Solid lines here indicate communication channels.
Description
Each constellation instance can for now be thought of as a single tab or window, and manages a pipeline of tasks that accepts input, runs JavaScript against the DOM, performs layout, builds display lists, renders display lists to tiles and finally composites the final image to a surface.
The pipeline consists of three main tasks:
- Script—Script's primary mission is to create and own the DOM and execute the JavaScript engine. It receives events from multiple sources, including navigation events, and routes them as necessary. When the content task needs to query information about layout it must send a request to the layout task.
- Layout—Layout takes a snapshot of the DOM, calculates styles, and constructs the two main layout data structures, the box tree and the fragment tree. The fragment tree is used to compute the untransformed position of nodes and from there to build a display list, which is sent to the compositor.
- Compositor—The compositor forwards display lists to WebRender, which is the content rasterization and display engine used by both Servo and Firefox. It uses the GPU to render to the final image of the page. As the UI thread, the compositor is also the first receiver of UI events, which are generally immediately sent to content for processing (although some events, such as scroll events, can be handled initially by the compositor for responsiveness).
Concurrency and parallelism
Implementation Strategy
Concurrency is the separation of tasks to provide interleaved execution. Parallelism is the simultaneous execution of multiple pieces of work in order to increase speed. Here are some ways that we take advantage of both:
- Task-based architecture. Major components in the system should be factored into actors with isolated heaps, with clear boundaries for failure and recovery. This will also encourage loose coupling throughout the system, enabling us to replace components for the purposes of experimentation and research.
- Concurrent rendering. Both rendering and compositing are separate threads, decoupled from layout in order to maintain responsiveness. The compositor thread manages its memory manually to avoid garbage collection pauses.
- Tiled rendering. We divide the screen into a grid of tiles and render each one in parallel. Tiling is needed for mobile performance regardless of its benefits for parallelism.
- Layered rendering. We divide the display list into subtrees whose contents can be retained on the GPU and render them in parallel.
- Selector matching. This is an embarrassingly parallel problem. Unlike Gecko, Servo does selector matching in a separate pass from flow tree construction so that it is more easily parallelized.
- Parallel layout. We build the flow tree using a parallel traversal of the DOM that respects the sequential dependencies generated by elements such as floats.
- Text shaping. A crucial part of inline layout, text shaping is fairly costly and has potential for parallelism across text runs. This is not yet implemented.
- Parsing. We have written a new HTML parser in Rust, focused on both safety and compliance with the specification. We have not yet added speculation or parallelism to the parsing.
- Image decoding. Decoding multiple images in parallel is straightforward.
- Decoding of other resources. This is probably less important than image decoding, but anything that needs to be loaded by a page can be done in parallel, e.g. parsing entire style sheets or decoding videos.
- GC JS concurrent with layout - Under almost any design with concurrent JS and layout, JS is going to be waiting to query layout sometimes, perhaps often. This will be the most opportune time to run the GC.
For information on the design of WebXR see the in-tree documentation.
Challenges
- Parallel-hostile libraries. Some third-party libraries we need don't play well in multi-threaded environments. Fonts in particular have been difficult. Even if libraries are technically thread-safe, often thread safety is achieved through a library-wide mutex lock, harming our opportunities for parallelism.
- Too many threads. If we throw maximum parallelism and concurrency at everything, we will end up overwhelming the system with too many threads.
JavaScript and DOM bindings
We are currently using SpiderMonkey, although pluggable engines are a long-term, low-priority goal. Each content task gets its own JavaScript runtime. DOM bindings use the native JavaScript engine API instead of XPCOM. Automatic generation of bindings from WebIDL is a priority.
Multi-process architecture
Similar to Chromium and WebKit2, we intend to have a trusted application process and multiple, less trusted engine processes. The high-level API will in fact be IPC-based, likely with non-IPC implementations for testing and single-process use-cases, though it is expected most serious uses would use multiple processes. The engine processes will use the operating system sandboxing facilities to restrict access to system resources.
At this time we do not intend to go to the same extreme sandboxing ends as Chromium does, mostly because locking down a sandbox constitutes a large amount of development work (particularly on low-priority platforms like Windows XP and older Linux) and other aspects of the project are higher priority. Rust's type system also adds a significant layer of defense against memory safety vulnerabilities. This alone does not make a sandbox any less important to defend against unsafe code, bugs in the type system, and third-party/host libraries, but it does reduce the attack surface of Servo significantly relative to other browser engines. Additionally, we have performance-related concerns regarding some sandboxing techniques (for example, proxying all OpenGL calls to a separate process).
I/O and resource management
Web pages depend on a wide variety of external resources, with many mechanisms of retrieval and decoding. These resources are cached at multiple levels—on disk, in memory, and/or in decoded form. In a parallel browser setting, these resources must be distributed among concurrent workers.
Traditionally, browsers have been single-threaded, performing I/O on the "main thread", where most computation also happens. This leads to latency problems. In Servo there is no "main thread" and the loading of all external resources is handled by a single resource manager task.
Browsers have many caches, and Servo's task-based architecture means that it will probably have more than extant browser engines (e.g. we might have both a global task-based cache and a task-local cache that stores results from the global cache to save the round trip through the scheduler). Servo should have a unified caching story, with tunable caches that work well in low-memory environments.
Further reading
References
Important research and accumulated knowledge about browser implementation, parallel layout, etc:
- How Browsers Work - basic explanation of the common design of modern web browsers by long-time Gecko engineer Ehsan Akhgari
- More how browsers work article that is dated, but has many more details
- Webkit overview
- Fast and parallel web page layout (2010) - Leo Meyerovich's influential parallel selectors, layout, and fonts. It advocates separating parallel selectors from parallel cascade to improve memory usage. See also the 2013 paper for automating layout and the 2009 paper that touches on speculative lexing/parsing.
- Servo layout on mozilla wiki
- Robert O'Callahan's mega-presentation - Lots of information about browsers
- ZOOMM paper - Qualcomm's network prefetching and combined selectors/cascade
- Strings in Blink
- Incoherencies in Web Access Control Policies - Analysis of the prevalence of document.domain, cross-origin iframes and other weirdness
- A Case for Parallelizing Web Pages -- Sam King's server proxy for partitioning webpages. See also his process-isolation work that reports parallelism benefits.
- High-Performance and Energy-Efficient Mobile Web Browsing on Big/Little Systems - Save power by dynamically switching which core to use based on an automatic workload heuristic
- C3: An Experimental, Extensible, Reconfigurable Platform for HTML-based Applications Browser prototype written in C# at Microsoft Research that provided a concurrent (though not successfully parallelized) architecture
- CSS Inline vertical alignment and line wrapping around floats - dbaron imparts wisdom about floats
- Quark - Formally verified browser kernel
- HPar: A Practical Parallel Parser for HTML
- Gecko HTML parser threading
Directory structure
- components
- bluetooth — Implementation of the bluetooth thread.
- bluetooth_traits — APIs to the bluetooth crate for crates that don't want to depend on the bluetooth crate for build speed reasons.
- canvas — Implementation of painting threads for 2D and WebGL canvases.
- canvas_traits — APIs to the canvas crate for crates that don't want to depend on the canvas crate for build speed reasons.
- compositing — Integration with OS windowing/rendering and event loop.
- constellation — Management of resources for a top-level browsing context (i.e. tab).
- devtools — In-process server to allow manipulating browser instances via a remote Firefox developer tools client.
- devtools_traits — APIs to the devtools crate for crates that don't want to depend on the devtools crate for build speed reasons.
- fonts — Code for dealing with fonts and text shaping.
- fonts_traits — APIs to the fonts crate for crates that don't want to depend on the fonts crate for build speed reasons.
- layout — Converts page content into positioned, styled boxes and passes the result to the renderer.
- layout_thread — Runs the threads for layout, communicates with the script thread, and calls into the layout crate to do the layout.
- msg — Shared APIs for communicating between specific threads and crates.
- net — Network protocol implementations, and state and resource management (caching, cookies, etc.).
- net_traits — APIs to the net crate for crates that don't want to depend on the net crate for build speed reasons.
- plugins — Syntax extensions, custom attributes, and lints.
- profile — Memory and time profilers.
- profile_traits — APIs to the profile crate for crates that don't want to depend on the profile crate for build speed reasons.
- script — Implementation of the DOM (native Rust code and bindings to SpiderMonkey).
- script_layout_interface — The API the script crate provides for the layout crate.
- script_traits — APIs to the script crate for crates that don't want to depend on the script crate for build speed reasons.
- selectors — CSS selector matching.
- servo — Entry points for the servo application and libservo embedding library.
- style — APIs for parsing CSS and interacting with stylesheets and styled elements.
- style_traits — APIs to the style crate for crates that don't want to depend on the style crate for build speed reasons.
- util — Assorted utility methods and types that are commonly used throughout the project.
- webdriver_server — In-process server to allow manipulating browser instances via a WebDriver client.
- webgpu — Implementation of threads for the WebGPU API.
- etc — Useful tools and scripts for developers.
- mach — A command-line tool to help with developer tasks.
- ports
  - winit — Embedding implementation for the winit windowing library.
- python
- servo — Implementations of servo-specific mach commands.
- tidy — Python package of code lints that are automatically run before merging changes.
- resources — Files used at run time. Need to be included somehow when distributing binary builds.
- support
- android — Libraries that require special handling for building for Android platforms
- rust-task_info — Library for obtaining information about memory usage for a process
- target
  - debug — Build artifacts generated by ./mach build --debug.
  - doc — Documentation generated here by the rustdoc tool when running ./mach doc.
  - release — Build artifacts generated by ./mach build --release.
- tests
- dromaeo — Harness for automatically running the Dromaeo testsuite.
- html — Manual tests and experiments.
- jquery — Harness for automatically running the jQuery testsuite.
- power — Tools for measurement of power consumption.
- unit — Unit tests using rustc’s built-in test harness.
- wpt — W3C web-platform-tests and csswg-tests along with tools to run them and expected failures.
TODO: Update foo_traits and ports/winit.
Major dependencies
- https://github.com/servo/rust-mozjs, https://github.com/servo/mozjs: bindings to SpiderMonkey
- https://github.com/hyperium/hyper: an HTTP implementation
- https://github.com/servo/html5ever: an HTML5 parser
- https://github.com/servo/ipc-channel: an IPC implementation
- https://github.com/image-rs/image: image decoders
- https://github.com/rust-windowing/winit: cross-platform windowing and input
- https://github.com/jrmuizel/raqote: a pure Rust 2D graphics library
- https://github.com/servo/rust-cssparser: a CSS parser
- https://github.com/housleyjk/ws-rs: a WebSocket protocol implementation
- https://github.com/servo/rust-url: an implementation of the URL specification
- https://github.com/servo/webrender: a GPU renderer
Script
TODO:
- https://github.com/servo/servo/blob/main/components/script/script_thread.rs
- JavaScript: Servo’s only garbage collector
SpiderMonkey
Current state of, and outlook on, Servo's integration of SpiderMonkey: https://github.com/gterzian/spidermonkey_servo
DOM Bindings
Script Thread
Layout DOM
Microtasks
According to the HTML spec, a microtask is "a colloquial way of referring to a task that was created via the queue a microtask algorithm" (source). Each event loop (meaning window, worker, or worklet) has its own microtask queue. The tasks queued on it are run as part of the perform a microtask checkpoint algorithm, which is called into from various places, the main one being after running a task from a task queue that isn't the microtask queue. Each call to this algorithm drains the microtask queue, running all tasks that have been enqueued up to that point (without re-entrancy).
The microtask queue in Servo
The MicroTaskQueue
is a straightforward implementation based on the spec: a list of tasks and a boolean to prevent re-entrancy at the checkpoint.
One is created for each runtime, matching the spec since a runtime is created per event-loop.
For a window event-loop, which can contain multiple window objects, the queue is shared among all the GlobalScope
objects it contains.
Dedicated workers use a child runtime, but that one still comes with its own microtask queue.
Microtask queueing
A task can be enqueued on the microtask queue both from Rust and from the JS engine.
- From JS: the JS engine will call into
enqueue_promise_job
whenever it needs to queue a microtask to call into promise handlers. This callback mechanism is set up once per runtime. This means that resolving a promise, whether from Rust or from JS, results in this callback being called and a microtask being enqueued. Strictly speaking, the microtask is still enqueued from Rust. - From Rust: there are various places from which microtasks are explicitly enqueued by "native" Rust:
- To implement the await a stable state algorithm, via the script-thread. This appears to happen only via the script-thread, meaning worker event-loops never use this algorithm.
- To implement the dom-queuemicrotask algorithm, both on window and worker event-loops.
- And various other places in the DOM, which can all be traced back to the variants of
Microtask
- A microtask can only ever be enqueued from steps running on a task itself, never from steps running "in-parallel" to an event-loop.
Running Microtask Checkpoints
The perform-a-microtask-checkpoint corresponds to MicrotaskQueue::checkpoint
, and is called into at multiple points:
- In the parser, when encountering a
script
tag as part of tokenizing. This corresponds to #parsing-main-incdata:perform-a-microtask-checkpoint. - Again in the parser, as part of creating an element. This corresponds to #creating-and-inserting-nodes:perform-a-microtask-checkpoint.
- As part of cleaning-up after running a script. This corresponds to #calling-scripts:perform-a-microtask-checkpoint.
- At two points (one, two) in the
CustomElementRegistry
, the spec origin of these calls is unclear: it appears to be "clean-up after script", but there are no references to this in the parts of the spec that the methods are documented with. - In a worker event-loop, as part of step 2.8 of the event-loop-processing-model
- In two places (one, two) in a window event-loop (the
ScriptThread
), again as part of step 2.8 of the event-loop-processing-model. This needs to be consolidated into one call, and what counts as a "task" needs to be clarified (TODO(#32003)). - Our paint worklet implementation does not seem to run this algorithm yet.
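The queue and checkpoint described above can be sketched in stdlib-only Rust. All type and method names below are illustrative stand-ins, not Servo's actual implementation:

```rust
use std::cell::{Cell, RefCell};

// A queued microtask; in Servo this is an enum of variants
// (promise jobs, queueMicrotask callbacks, and so on).
type Microtask = Box<dyn Fn()>;

// One queue per runtime: a list of tasks plus a flag that
// prevents re-entrant checkpoints.
struct MicrotaskQueue {
    pending: RefCell<Vec<Microtask>>,
    performing_checkpoint: Cell<bool>,
}

impl MicrotaskQueue {
    fn new() -> Self {
        MicrotaskQueue {
            pending: RefCell::new(Vec::new()),
            performing_checkpoint: Cell::new(false),
        }
    }

    // Called from Rust, or from the JS engine's enqueue-promise-job callback.
    fn enqueue(&self, task: Microtask) {
        self.pending.borrow_mut().push(task);
    }

    // "Perform a microtask checkpoint": drain the queue, including tasks
    // enqueued while draining, but never re-entrantly.
    fn checkpoint(&self) {
        if self.performing_checkpoint.get() {
            return;
        }
        self.performing_checkpoint.set(true);
        loop {
            // Release the borrow before running the task, since the task
            // may itself enqueue further microtasks.
            let next = {
                let mut pending = self.pending.borrow_mut();
                if pending.is_empty() { None } else { Some(pending.remove(0)) }
            };
            match next {
                Some(task) => task(),
                None => break,
            }
        }
        self.performing_checkpoint.set(false);
    }
}
```

Note how the re-entrancy flag makes nested `checkpoint()` calls no-ops, matching the spec's guard in the real MicrotaskQueue.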
Servo's style system overview
This document provides an overview of Servo's style system. For more extensive details, refer to the style doc comments, or the Styling Overview in the wiki, which includes a conversation between Boris Zbarsky and Patrick Walton about how style sharing works.
Selector Implementation
To ensure compatibility with Stylo (a project integrating Servo's style system into Gecko), selectors must be consistent.
This consistency is implemented in the selectors crate's SelectorImpl, which contains the logic related to parsing pseudo-elements and the pseudo-classes other than tree-structural ones.
Servo extends the selector implementation trait in order to allow a few more things to be shared between Stylo and Servo.
The main Servo implementation (the one that is used in regular builds) is SelectorImpl.
DOM glue
In order to keep DOM, layout and style in different modules, there are a few traits involved.
Style's dom
traits (TDocument
, TElement
, TNode
, TRestyleDamage
) are the main "wall" between layout and style.
Layout's wrapper
module makes sure that layout traits have the required traits implemented.
The Stylist
The stylist
structure holds all the selectors and device characteristics for a given document.
The stylesheets' CSS rules are converted into Rule
s.
They are then introduced in a SelectorMap
depending on the pseudo-element (see PerPseudoElementSelectorMap
), stylesheet origin (see PerOriginSelectorMap
), and priority (see the normal
and important
fields in PerOriginSelectorMap
).
This structure is effectively created once per pipeline, in the corresponding LayoutThread.
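The bucketing described above can be sketched roughly as follows. All types here are illustrative stand-ins for the real Rule, SelectorMap, and PerOriginSelectorMap structures, and the per-pseudo-element level (PerPseudoElementSelectorMap) is elided:

```rust
use std::collections::HashMap;

// Illustrative stand-ins for Servo's real types.
#[derive(Clone, Debug, PartialEq, Eq, Hash)]
enum Origin { UserAgent, User, Author }

#[derive(Clone, Debug)]
struct Rule {
    selector: String,
    declarations: Vec<String>,
}

// Mirrors the normal/important split of PerOriginSelectorMap.
#[derive(Default)]
struct PerOriginSelectorMap {
    normal: Vec<Rule>,
    important: Vec<Rule>,
}

// Sketch of the stylist: rules bucketed by stylesheet origin and priority.
#[derive(Default)]
struct Stylist {
    per_origin: HashMap<Origin, PerOriginSelectorMap>,
}

impl Stylist {
    fn add_rule(&mut self, origin: Origin, rule: Rule, important: bool) {
        let map = self.per_origin.entry(origin).or_default();
        if important {
            map.important.push(rule);
        } else {
            map.normal.push(rule);
        }
    }
}
```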
The properties
module
The properties module is a mako template.
Its complexity derives from the code that stores properties, the cascade
function, and the computation logic for the returned values, which is exposed in the main function.
Pseudo-Element resolution
Pseudo-elements are a tricky section of the style system. Not all pseudo-elements are very common, and so some of them might want to skip the cascade.
Servo has, as of right now, five pseudo-elements:
- `::before` and `::after`.
- `::selection`: This one is only partially implemented, and only works for text inputs and textareas as of right now.
- `::-servo-details-summary`: This pseudo-element represents the `<summary>` of a `<details>` element.
- `::-servo-details-content`: This pseudo-element represents the contents of a `<details>` element.
Both ::-servo-details-*
pseudo-elements are private (i.e. they are only parsed from User-Agent stylesheets).
Servo has three different ways of cascading a pseudo-element, which are defined in PseudoElementCascadeType
:
"Eager" cascading
This mode computes the computed values of a given node's pseudo-element over the first pass of the style system.
This is used for all public pseudo-elements, and is, as of right now, the only way a public pseudo-element should be cascaded (the explanation for this is below).
"Precomputed" cascading
Or, better said, no cascading at all. A pseudo-element marked as such is not cascaded.
The only rules that apply to the styles of that pseudo-element are universal rules (rules with a *|*
selector), and they are applied directly over the element's style if present.
::-servo-details-content
is an example of this kind of pseudo-element, all the rules in the UA stylesheet with the selector *|*::-servo-details-content
(and only those) are evaluated over the element's style (except the display
value, that is overwritten by layout).
This should be the preferred type for private pseudo-elements (although some of them might need selectors, see below).
"Lazy" cascading
Lazy cascading computes pseudo-element styles lazily, that is, only when needed.
Currently (in Servo; this is less of a limitation for Stylo), the selectors supported for this kind of pseudo-element are only a subset of the selectors that can be matched on the layout tree, which does not hold all the data from the DOM tree.
This subset includes tags and attribute selectors, enough for making ::-servo-details-summary
a lazy pseudo-element (that only needs to know if it is in an open
details element or not).
Since no other selectors would apply to it, this is (at least for now) not an acceptable type for public pseudo-elements, but should be considered for private pseudo-elements.
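The three cascade types and the assignments described in this chapter can be summarized in a small sketch. The enum name matches Servo's PseudoElementCascadeType, but the mapping function below is purely illustrative:

```rust
// The three cascade strategies described above.
#[derive(Debug, PartialEq)]
enum PseudoElementCascadeType {
    Eager,       // computed in the first pass; used for all public pseudo-elements
    Precomputed, // no cascade at all; only universal (*|*) rules apply
    Lazy,        // computed on demand, with a restricted selector subset
}

// Illustrative mapping based on the descriptions in this chapter.
fn cascade_type(pseudo: &str) -> PseudoElementCascadeType {
    match pseudo {
        // Public pseudo-elements are eagerly cascaded.
        "::before" | "::after" | "::selection" => PseudoElementCascadeType::Eager,
        // Private, no selectors needed beyond universal rules.
        "::-servo-details-content" => PseudoElementCascadeType::Precomputed,
        // Private, but needs a small selector subset (e.g. [open]).
        "::-servo-details-summary" => PseudoElementCascadeType::Lazy,
        _ => PseudoElementCascadeType::Eager,
    }
}
```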
Layout
Servo has two layout systems:
- Layout (Layout 2020): This is a new layout system for Servo which doesn't yet have all the features of the legacy layout, but will have support for proper fragmentation. For more on the benefits of the new layout system, see Layout 2020. The architecture described below refers to the new layout system. For more information about why we wrote a new layout engine see the Servo Layout Engines Report.
- Legacy layout (Layout 2013): This is the original layout engine written for Servo. This layout engine is currently in maintenance mode.
Layout happens in three phases: box tree construction, fragment tree construction, and display list construction. Once a display list is generated, it is sent to WebRender for rendering. When possible during tree construction, layout tries to use parallelism via Rayon. Certain CSS features, such as floats and counters, prevent parallelism. The same code is used for both parallel and serial layout.
Box Tree
The box tree is a tree that represents the nested formatting contexts as described in the CSS specification. There are various kinds of formatting contexts, such as block formatting contexts (for block flow), inline formatting contexts (for inline flow), table formatting contexts, and flex formatting contexts. Each formatting context has different rules for how boxes inside that context are laid out. Servo represents this tree of contexts using nested enums, which ensure that the content inside each context can only be the sort of content described in the specification.
The box tree is just the initial representation of the layout state and generally speaking the next phase is to run the layout algorithm on the box tree and produce a fragment tree.
Fragments in CSS are the results of splitting elements in the box tree into multiple fragments due to things like line breaking, columns, and pagination.
Additionally during this layout phase, Servo will position and size the resulting fragments relative to their containing blocks.
The transformation generally takes place in a function called layout(...)
on the different box tree data structures.
Layout of the box tree into the fragment tree is done in parallel, until a section of the tree with floats is encountered. In those sections, a sequential pass is done and parallel layout can commence again once the layout algorithm moves across the boundaries of the block formatting context that contains the floats, whether by descending into an independent formatting context or finishing the layout of the float container.
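The float rule above can be illustrated with a small sketch that merely classifies which boxes would need sequential layout. Real Servo uses Rayon to actually run the parallel parts; the types and the counting function here are invented purely for illustration:

```rust
// An illustrative box-tree node; not Servo's actual representation.
struct LayoutBox {
    contains_floats: bool,
    is_independent_formatting_context: bool,
    children: Vec<LayoutBox>,
}

// Counts boxes that would be laid out sequentially: layout goes
// sequential inside a block formatting context containing floats, and
// parallelism resumes when descending into an independent formatting
// context (unless that context itself contains floats).
fn count_sequential(node: &LayoutBox, in_float_bfc: bool) -> usize {
    let sequential = if node.is_independent_formatting_context {
        // Entering an independent formatting context escapes the
        // float-affected region of the ancestor.
        node.contains_floats
    } else {
        in_float_bfc || node.contains_floats
    };
    let here = if sequential { 1 } else { 0 };
    here + node
        .children
        .iter()
        .map(|c| count_sequential(c, sequential))
        .sum::<usize>()
}
```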
Fragment Tree
The product of the layout step is a fragment tree.
In this tree, elements that were split into different pieces due to line breaking, columns, or pagination have a fragment for every piece.
In addition, each fragment is positioned relative to a fragment corresponding to its containing block.
For positioned fragments, an extra placeholder fragment, AbsoluteOrFixedPositioned
, is left in the original tree position.
This placeholder is used to build the display list in the proper order according to the CSS painting order.
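The placeholder mechanism can be sketched as follows. The variant name mirrors AbsoluteOrFixedPositioned, but the fields and the traversal are illustrative, not Servo's actual fragment tree:

```rust
// An illustrative fragment tree node.
enum Fragment {
    Box {
        // Position relative to the containing block's fragment.
        x: f32,
        y: f32,
        children: Vec<Fragment>,
    },
    // Placeholder left in the original tree position for a positioned
    // fragment; it records where, in CSS painting order, the hoisted
    // fragment (identified here by an index) should be painted.
    AbsoluteOrFixedPositioned(usize),
}

// Walk the tree in painting order, resolving placeholders against the
// list of hoisted fragments (represented here by plain labels).
fn paint_order(frag: &Fragment, hoisted: &[&str], out: &mut Vec<String>) {
    match frag {
        Fragment::Box { children, .. } => {
            for child in children {
                paint_order(child, hoisted, out);
            }
        }
        Fragment::AbsoluteOrFixedPositioned(id) => {
            out.push(hoisted[*id].to_string());
        }
    }
}
```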
Display List Construction
Once layout has created a fragment tree, it can move on to the next phase of rendering which is to produce a display list for the tree. During this phase, the fragment tree is transformed into a WebRender display list which consists of display list items (rectangles, lines, images, text runs, shadows, etc). WebRender does not need a large variety of display list items to represent web content.
In addition to normal display list items, WebRender also uses a tree of spatial nodes to represent transformations, scrollable areas, and sticky content. This tree is essentially a description of how to apply post-layout transformations to display list items. When the page is scrolled, the offset on the root scrolling node can be adjusted without immediately doing a layout. Likewise, WebRender has the capability to apply transformations, including 3D transformations to web content with a type of spatial node called a reference frame.
Clipping, whether from CSS clipping or from the clipping introduced by the CSS overflow
property, is handled by another tree of clip nodes.
These nodes also have spatial nodes assigned to them so that clips stay in sync with the rest of web content.
WebRender decides how best to apply a series of clips to each item.
Once the display list is constructed it is sent to the compositor which forwards it to WebRender.
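The idea that scrolling only adjusts a spatial node's offset, without triggering a new layout, can be sketched like this. This is a simplified model, not WebRender's actual API:

```rust
// A node in a simplified spatial tree: scroll nodes carry an offset,
// and items positioned under them move when the offset changes.
struct SpatialNode {
    parent: Option<usize>,
    scroll_offset: (f32, f32),
}

// A display item references the spatial node it lives in and a position
// expressed in that node's coordinate space (fixed at layout time).
struct DisplayItem {
    spatial_node: usize,
    origin: (f32, f32),
}

// Resolve an item's on-screen position by accumulating scroll offsets up
// the spatial tree. Scrolling just mutates an offset; layout is untouched.
fn resolve(item: &DisplayItem, nodes: &[SpatialNode]) -> (f32, f32) {
    let (mut x, mut y) = item.origin;
    let mut current = Some(item.spatial_node);
    while let Some(i) = current {
        x -= nodes[i].scroll_offset.0;
        y -= nodes[i].scroll_offset.1;
        current = nodes[i].parent;
    }
    (x, y)
}
```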
Compositor
TODO: See https://github.com/servo/servo/blob/master/components/compositing/compositor.rs
How WebXR works in Servo
Terminology
Servo's WebXR implementation involves three main components:
- The script thread (runs all JS for a page)
- The WebGL thread (maintains WebGL canvas data and invokes GL operations corresponding to WebGL APIs)
- The compositor (AKA the main thread)
Additionally, there are a number of WebXR-specific concepts:
- The discovery object (i.e. how Servo discovers if a device can provide a WebXR session)
- The WebXR registry (the compositor's interface to WebXR)
- The layer manager (manages WebXR layers for a given session and frame operations on those layers)
- The layer grand manager (manages all layer managers for WebXR sessions)
Finally, there are graphics-specific concepts that are important for the low-level details of rendering with WebXR:
- Surfman is a crate that abstracts away platform-specific details of OpenGL hardware-accelerated rendering.
- A surface is a hardware buffer that is tied to a specific OpenGL context.
- A surface texture is an OpenGL texture that wraps a surface. Surface textures can be shared between OpenGL contexts.
- A surfman context represents a particular OpenGL context, and is backed by platform-specific implementations (such as EGL on Unix-based platforms).
- ANGLE is an OpenGL implementation on top of Direct3D which is used in Servo to provide a consistent OpenGL backend on Windows-based platforms.
How Servo's compositor starts
The embedder is responsible for creating a window and triggering the rendering context creation appropriately. Servo creates the rendering context by creating a surfman context which will be used by the compositor for all web content rendering operations.
How a session starts
When a webpage invokes navigator.xr.requestSession(..)
through JS, this corresponds to the XrSystem::RequestSession method in Servo.
This method sends a message to the WebXR message handler that lives on the main thread, under the control of the compositor.
The WebXR message handler iterates over all known discovery objects and attempts to request a session from each of them. The discovery objects encapsulate creating a session for each supported backend.
As of July 19, 2024, there are three WebXR backends:
- headless - supports a window-less, device-less device for automated tests
- glwindow - supports a GL-based window for manual testing in desktop environments without real devices
- openxr - supports devices that implement the OpenXR standard
WebXR sessions need to create a layer manager at some point in order to be able to create and render to WebXR layers. This happens in several steps:
- Some initialization happens on the main thread
- The main thread sends a synchronous message to the WebGL thread
- The WebGL thread receives the message
- Some backend-specific, graphics-specific initialization happens on the WebGL thread, hidden behind the layer manager factory abstraction
- The new layer manager is stored in the WebGL thread
- The main thread receives a unique identifier representing the new layer manager
This cross-thread dance is important because the device performing the rendering often has strict requirements for the compatibility of any WebGL context that is used for rendering, and most GL state is only observable on the thread that created it.
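This cross-thread dance can be sketched with standard-library channels. All names here are illustrative; Servo uses ipc-channel and its own message types:

```rust
use std::sync::mpsc;
use std::thread;

type LayerManagerId = u32;

// Messages understood by the sketched "WebGL thread".
enum WebGlMsg {
    CreateLayerManager { reply: mpsc::Sender<LayerManagerId> },
}

// Spawn a stand-in WebGL thread. GL state lives on this thread, so the
// layer manager is created and stored here; only an identifier crosses
// thread boundaries.
fn spawn_webgl_thread() -> mpsc::Sender<WebGlMsg> {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        let mut next_id: LayerManagerId = 0;
        for msg in rx {
            match msg {
                WebGlMsg::CreateLayerManager { reply } => {
                    // (backend-specific, graphics-specific initialization
                    // would happen here, behind the factory abstraction)
                    next_id += 1;
                    let _ = reply.send(next_id);
                }
            }
        }
    });
    tx
}

// The "main thread" side: send the request and block until the WebGL
// thread replies with the new layer manager's identifier.
fn create_layer_manager(webgl: &mpsc::Sender<WebGlMsg>) -> LayerManagerId {
    let (reply_tx, reply_rx) = mpsc::channel();
    webgl
        .send(WebGlMsg::CreateLayerManager { reply: reply_tx })
        .expect("WebGL thread is gone");
    reply_rx.recv().expect("WebGL thread dropped the reply channel")
}
```

The blocking `recv` is what makes the message synchronous from the main thread's point of view, matching the step-by-step description above.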
How an OpenXR session is created
The OpenXR discovery process starts at OpenXrDiscovery::request_session. The discovery object only has access to whatever state was passed in its constructor, as well as a SessionBuilder object that contains values required to create a new session.
Creating an OpenXR session first creates an OpenXR instance, which allows configuring which extensions are in use. There are different extensions used to initialize OpenXR on different platforms; for Windows the XR_KHR_D3D11_enable extension is used since Servo relies on ANGLE for its OpenGL implementation.
Once an OpenXR instance exists, the session builder is used to create a new WebXR session that runs in its own thread. All WebXR sessions can either run in a thread or have Servo run them on the main thread. This choice has implications for how the graphics for the WebXR session can be set up, based on what GL state must be available for sharing.
OpenXR's new session thread initializes an OpenXR device, which is responsible for creating the actual OpenXR session. This session object is created on the WebGL thread as part of creating the OpenXR layer manager, since it relies on sharing the underlying GPU device that the WebGL thread uses.
Once the session object has been created, the main thread can obtain a copy and resume initializing the remaining properties of the new device.
Docs for older versions
Some parts of this book may not apply to older versions of Servo.
If you need to work with a very old version of Servo, you may find these docs helpful:
- README.md
- CONTRIBUTING.md
- docs/COMMAND_LINE_ARGS.md
- docs/HACKING_QUICKSTART.md
- docs/ORGANIZATION.md
- docs/STYLE_GUIDE.md
- docs/debugging.md
- docs/glossary.md
- docs/components/style.md
- docs/components/webxr.md
- tests/wpt/README.md
- Building Servo (on the wiki)
- Building for Android (on the wiki)
- Design (on the wiki)
- Profiling (on the wiki)
Style guide
- Use sentence case for chapter and section headings, e.g. “Getting started” rather than “Getting Started”
- Use permalinks when linking to source code repos — press
Y
in GitHub to get a permanent URL
Markdown source
- Use one sentence per line with no column limit, to make diffs and history easier to understand
To help split sentences onto separate lines, you can replace ([.!?])
→ $1\n
, but watch out for cases like “e.g.”.
Then to fix indentation of simple lists, you can replace ^([*-] .*\n([* -].*\n)*)([^\n* -])
→ $1 $3
, but this won’t work for nested or more complex lists.
- For consistency, indent nested lists with two spaces, and use
-
for unordered lists
Notation
- Use bold text when referring to UI elements like menu options, e.g. “click Inspect”
- Use
backticks
when referring to single-letter keyboard keys, e.g. “pressA
or Ctrl+A
”
Error messages
- Where possible, always include a link to documentation, Zulip chat, or source code — this helps preserve the original context, and helps us check and update our advice over time
The remaining rules for error messages are designed to ensure that the text is as readable as possible, and that the reader can paste their error message into find-in-page with minimal false negatives, without the rules being too cumbersome to follow.
Wrap the error message in <pre><samp>
, with <pre>
at the start of the line.
If you want to style the error message as a quote, wrap it in <pre><blockquote><samp>
.
<pre>
treats newlines as line breaks, and at the start of the line, it prevents Markdown syntax from accidentally taking effect when there are blank lines in the error message.
<samp>
marks the text as computer output, where we have CSS that makes it wrap like it would in a terminal.
Code blocks (<pre><code>
) don’t wrap, so they can make long errors hard to read.
Replace every `&` with `&amp;`, then replace every `<` with `&lt;`.
Text inside <pre>
will never be treated as Markdown, but it’s still HTML markup, so it needs to be escaped.
Always check the rendered output to ensure that all of the symbols were preserved.
You may find that you still need to escape some Markdown with \
, to avoid rendering called `Result::unwrap()` on an `Err` value as called Result::unwrap()
on an Err
value.
| Error message | Markdown |
|---|---|
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: "Could not run `PKG_CONFIG_ALLOW_SYSTEM_CFLAGS=\"1\" PKG_CONFIG_ALLOW_SYSTEM_LIBS=\"1\" \"pkg-config\" \"--libs\" \"--cflags\" \"fontconfig\"` |
|
error[E0765]: ... --> src/main.rs:2:14 | 2 | println!("```); | ______________^ 3 | | } | |__^ |
|
TODO: docs/STYLE_GUIDE.md
Style Guide
The majority of our style recommendations are automatically enforced via our automated linters. This document has guidelines that are less easy to lint for.
Shell scripts
Shell scripts are suitable for small tasks or wrappers, but in general it's preferable to use Python for anything with a hint of complexity.
Shell scripts should be written using bash, starting with this shebang:
#!/usr/bin/env bash
Note that the version of bash available on macOS by default is quite old, so be careful when using new features.
Scripts should enable a few options at the top for robustness:
set -o errexit
set -o nounset
set -o pipefail
Remember to quote all variables, using the full form: "${SOME_VARIABLE}"
.
Use "$(some-command)"
instead of backticks for command substitution.
Note that these should be quoted as well.
TODO: docs/glossary.md
How to use this glossary
This is a collection of common terms that have a specific meaning in the context of the Servo project. The goal is to provide high-level definitions and useful links for further reading, rather than complete documentation about particular parts of the code.
If there is a word or phrase used in Servo's code, issue tracker, mailing list, etc. that is confusing, please make a pull request that adds it to this file with a body of TODO
.
This will signal more knowledgeable people to add a more meaningful definition.
Glossary
Compositor
The thread that receives input events from the operating system and forwards them to the constellation. It is also in charge of compositing complete renders of web content and displaying them on the screen as fast as possible.
Constellation
The thread that controls a collection of related web content. This could be thought of as an owner of a single tab in a tabbed web browser; it encapsulates session history, knows about all frames in a frame tree, and is the owner of the pipeline for each contained frame.
Display list
A list of concrete rendering instructions. The display list is post-layout, so all items have stacking-context-relative pixel positions, and z-index has already been applied, so items later in the display list will always be on top of items earlier in it.
Layout thread
A thread that is responsible for laying out a DOM tree into layers of boxes for a particular document. Receives commands from the script thread to lay out a page and either generate a new display list for use by the renderer thread, or return the results of querying the layout of the page for use by script.
Pipeline
A unit encapsulating a means of communication with the script, layout, and renderer threads for a particular document. Each pipeline has a globally-unique id which can be used to access it from the constellation.
Renderer thread (alt. paint thread)
A thread which translates a display list into a series of drawing commands that render the contents of the associated document into a buffer, which is then sent to the compositor.
Script thread (alt. script task)
A thread that executes JavaScript and stores the DOM representation of all documents that share a common origin.
This thread translates input events received from the constellation into DOM events per the specification, invokes the HTML parser when new page content is received, and evaluates JS for events like timers and <script>
elements.
TODO: wiki/Building
Windows
- Install Python:
  - Install Python 3.11.
  - After installation, ensure the `PYTHON3` environment variable is set properly, e.g. to `C:\Python311\python.exe`, by running:
    `setx PYTHON3 "C:\Python311\python.exe" /m`
    The `/m` flag sets the variable system-wide, for all future command windows.
-
Install the following tools:
Make sure all of these tools are on your PATH.
-
Install GStreamer:
Install the MSVC (not MinGW) binaries from the GStreamer site. The currently recommended version is 1.16.0.
Note that you should ensure that all components are installed from GStreamer, as we require many of the optional libraries that are not installed by default.
Android
Please see [[Building for Android]].
Windows Tips
Troubleshooting the Windows Build
If you have trouble with the `x64 type` prompt that `mach.bat` sets by default, you may need to choose and launch the prompt type manually, such as `x86_x64 Cross Tools Command Prompt for VS 2019` in the Windows menu. Then:

    cd to/the/path/servo
    python mach build -d