# WebRender Status (amd)

Last updated: 2019-11-03 08:32:32 UTC

# Executive summary

P1: Blocking beta; P2: Blocking release (being worked on or looking for owners); P3: Wanted for release, but not blocking

## Nightly performance

Performance of the last build with at least 1,000 accumulated usage hours/branch (20190709153742):

| Metric | Median | 95% CI |
|--------|--------|--------|
| slow_content_frame_time_vsync | 109.2% | (36.6%, 142.4%) |
| tab_switch_composite | 106.6% | (82.7%, 120.2%) |
| content_paint_time | 88.3% | (70.1%, 104.6%) |
| content_full_paint_time | 83.8% | (71.0%, 96.8%) |

WebRender performance expressed as percent of Gecko. Lower is better. Confidence intervals are bootstrapped.
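The report doesn't show the bootstrap procedure itself; a minimal percentile-bootstrap sketch for a median confidence interval (the sample data and function name here are illustrative, not the report's actual pipeline) might look like:

```python
import random
import statistics

def bootstrap_median_ci(samples, n_resamples=1000, alpha=0.05, seed=0):
    """Percentile-bootstrap CI for the median: resample with replacement,
    take the median of each resample, and read off the alpha/2 and
    1 - alpha/2 quantiles of the resampled medians."""
    rng = random.Random(seed)
    medians = sorted(
        statistics.median(rng.choices(samples, k=len(samples)))
        for _ in range(n_resamples)
    )
    lo = medians[int(n_resamples * alpha / 2)]
    hi = medians[int(n_resamples * (1 - alpha / 2)) - 1]
    return lo, hi

# Hypothetical per-user ratios (WebRender / Gecko, in percent)
ratios = [88.0, 91.5, 84.2, 89.9, 87.3, 90.1, 85.8, 88.6, 86.4, 92.0]
low, high = bootstrap_median_ci(ratios)
```

The percentile bootstrap makes no distributional assumption, which suits ratio metrics like these that need not be normal.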

## Beta performance

Performance of the last build with at least 10,000 accumulated usage hours/branch (20190701181138):

| Metric | Median | 95% CI |
|--------|--------|--------|
| tab_switch_composite | 126.9% | (114.5%, 139.0%) |
| slow_content_frame_time_vsync | 99.6% | (79.4%, 115.2%) |
| content_full_paint_time | 89.5% | (83.2%, 98.2%) |
| content_paint_time | 86.8% | (75.8%, 98.4%) |

WebRender performance expressed as percent of Gecko. Lower is better. Confidence intervals are bootstrapped.

# Performance

## CONTENT_FRAME_TIME_VSYNC

#### All builds

Error bars reflect bootstrapped 95% confidence intervals for the median.

CONTENT_FRAME_TIME_VSYNC is expressed in percent of a vsync. Since display updates only occur at vsync intervals, all updates that take between 100% and 200% of a vsync appear identical to the user. 200% is therefore a critical threshold, so it’s important to know how often frames are slower than 200%.
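A direct way to express that "fraction of frames slower than a threshold" statistic, using illustrative raw frame times (the real metric is computed from Telemetry histograms, not raw samples), is:

```python
def fraction_slow(frame_times_pct_vsync, threshold=200.0):
    """Fraction of frames taking longer than `threshold` percent of a vsync.

    Frames between 100% and 200% of a vsync each miss exactly one vsync,
    so they look identical to the user; 200% is where a second vsync is
    missed and the frame becomes visibly slower.
    """
    slow = sum(1 for t in frame_times_pct_vsync if t > threshold)
    return slow / len(frame_times_pct_vsync)

# Hypothetical frame times, in percent of a vsync
frames = [95.0, 130.0, 180.0, 210.0, 99.0, 450.0, 150.0, 205.0]
# 3 of these 8 frames exceed 200% of a vsync
```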

## Tab switch

#### All builds

Error bars reflect 95% confidence intervals for the geometric mean of the distribution of per-user means, treating the per-user means as log-normally distributed.
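Under the stated log-normal assumption, a confidence interval for the geometric mean can be formed by taking a normal CI for the mean of the logs and exponentiating back. A sketch (the data and function name are hypothetical; the report's exact computation may differ):

```python
import math
import statistics

def geomean_ci(per_user_means, z=1.96):
    """Approximate 95% CI for the geometric mean, treating the per-user
    means as log-normally distributed: a normal CI on the log scale,
    exponentiated back to the original scale."""
    logs = [math.log(x) for x in per_user_means]
    mu = statistics.fmean(logs)
    se = statistics.stdev(logs) / math.sqrt(len(logs))
    return math.exp(mu - z * se), math.exp(mu), math.exp(mu + z * se)

# Hypothetical per-user mean tab-switch times (ms)
means = [12.0, 15.5, 9.8, 14.1, 11.3, 13.7, 10.6, 16.2]
lo, gm, hi = geomean_ci(means)
```

The geometric mean is a natural summary for right-skewed timing data, since it corresponds to the median of a log-normal distribution.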

## CONTENT_FULL_PAINT_TIME

#### Recent builds

Error bars reflect 95% confidence intervals for the geometric mean of the distribution of per-user means, treating the per-user means as log-normally distributed.

Error bars reflect bootstrapped 95% confidence intervals for the median.

## CONTENT_PAINT_TIME

#### Recent builds

Error bars reflect 95% confidence intervals for the geometric mean of the distribution of per-user means, treating the per-user means as log-normally distributed.

Error bars reflect 95% confidence intervals for the geometric mean of the distribution of per-user p99s, treating the per-user p99s as log-normally distributed.

Error bars reflect bootstrapped 95% confidence intervals for the median.

## COMPOSITE_TIME

Error bars reflect 95% confidence intervals for the geometric mean of the distribution of per-user means, treating the per-user means as log-normally distributed.

Error bars reflect bootstrapped 95% confidence intervals for the median.

## CONTENT_FRAME_TIME

#### All builds

Error bars reflect bootstrapped 95% confidence intervals for the median.

CONTENT_FRAME_TIME is expressed in percent of a vsync. Since display updates only occur at vsync intervals, all updates that take between 100% and 200% of a vsync appear identical to the user. 200% is therefore a critical threshold, so it’s important to know how often frames are slower than 200%. We actually measure the fraction of events slower than 192% of a vsync because, given how the histogram buckets are defined, 192% is the closest bucket edge to 200%.
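Because Telemetry reports this probe as a binned histogram rather than raw samples, the statistic is computed from bucket counts, with the 192% bucket edge standing in for the 200% threshold. A sketch with hypothetical bucket edges and counts (the real probe's bucket layout may differ):

```python
def fraction_at_or_above(histogram, edge):
    """Fraction of samples falling in buckets whose lower edge is >= `edge`.

    `histogram` maps each bucket's lower edge (percent of a vsync) to its
    count. Since samples are binned, we can only threshold at a bucket
    edge, hence 192% in place of 200%.
    """
    total = sum(histogram.values())
    slow = sum(n for lower, n in histogram.items() if lower >= edge)
    return slow / total

# Hypothetical binned CONTENT_FRAME_TIME data (bucket lower edge -> count)
hist = {84: 40, 107: 25, 136: 15, 151: 8, 173: 6, 192: 4, 245: 2}
# 6 of 100 samples fall in buckets at or above the 192% edge
```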

# Crash summary

Be cautious when interpreting crash rates from recent builds. We receive pings that tell us about crashes before we receive pings that tell us about usage, so crash rate estimates run much higher than the true rate for the first few days a build is in the field.

### Nightly

Stability of the last 14 builds with at least 1,000 usage-hours/branch, combined:

Error bars reflect a 95% confidence interval for the ratio of Poisson rates adjusted for total usage hours.
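One common way to interval-estimate a ratio of Poisson rates is a normal approximation on the log scale, with standard error sqrt(1/count_a + 1/count_b); the report's exact method may differ (e.g. an exact conditional interval). A sketch with hypothetical counts:

```python
import math

def rate_ratio_ci(crashes_a, hours_a, crashes_b, hours_b, z=1.96):
    """Approximate 95% CI for the ratio of two Poisson rates (crashes per
    usage hour), via the standard log-scale normal approximation."""
    ratio = (crashes_a / hours_a) / (crashes_b / hours_b)
    se = math.sqrt(1 / crashes_a + 1 / crashes_b)
    return ratio * math.exp(-z * se), ratio, ratio * math.exp(z * se)

# Hypothetical counts: WebRender branch vs Gecko branch
lo, ratio, hi = rate_ratio_ci(120, 50_000, 100, 52_000)
```

Dividing each count by its branch's usage hours is what "adjusted for total usage hours" amounts to: branches with different exposure are compared on a per-hour basis.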

### Beta

Stability of the last build with at least 10,000 usage-hours/branch:

### All builds

Crash rate error bars reflect 95% confidence intervals for rates, assuming that crashes are Poisson-distributed, and based on received usage. Error bars do not account for the reporting delay between crashes and non-crash usage.
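Under the stated Poisson assumption, the variance of a crash count equals the count itself, which gives a simple approximate interval for the rate. A sketch using a normal approximation (the report may instead use an exact Poisson interval; the counts here are hypothetical):

```python
import math

def crash_rate_ci(crashes, usage_hours, z=1.96):
    """Approximate 95% CI for a crash rate, in crashes per 1,000 usage
    hours, assuming the crash count is Poisson so Var(count) = count."""
    rate = crashes / usage_hours * 1000
    half_width = z * math.sqrt(crashes) / usage_hours * 1000
    return rate - half_width, rate, rate + half_width

lo, rate, hi = crash_rate_ci(crashes=85, usage_hours=40_000)
```

Note this interval inherits the caveat above: during a build's first days, the usage-hours denominator lags the crash counts, so both the point estimate and the interval sit too high.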

# Engagement

Error bars reflect bootstrapped 95% confidence intervals for the median.

Error bars reflect 95% confidence intervals for the geometric mean of the distribution of per-user means, treating the per-user means as log-normally distributed.

# Bug burndown detail

P1 = Blocking beta

P2 = Blocking release (being worked on or looking for owners)

P3 = Wanted for release, but not blocking

Bugzilla: P2 blockers

Bugzilla: P3 bugs

# Enrollment

Counts of users submitting pings considered for performance and crash metrics:

# Colophon

Please direct questions to tdsmith or the Product Data Science team.

This report follows users enrolled in the experiments prefflip-webrender-v1-2-1492568 and prefflip-webrender-v1-3-1492568.

Data are collected from Spark with this Databricks notebook.

Notebook runs are kicked off at 11am and 11pm UTC and rendered on hala at noon and midnight. The RMarkdown script that renders this page lives on GitHub. The “last updated” timestamp reflects the time the ETL task terminated.

Database: webrender_amd.sqlite3