Last updated: 2019-04-25 11:58:21 UTC

**P1**: Blocking beta; **P2**: Blocking release (being worked on or looking for owners); **P3**: Wanted for release, but not blocking

Performance of the last build with at least 1,000 accumulated usage hours/branch (`20190424095359`):

Metric | Median | 95% CI |
---|---|---|
page_load_ms | 106.2% | (93.8%, 113.4%) |
tab_switch_composite | 101.4% | (96.6%, 107.9%) |
slow_content_frame_time_vsync | 82.1% | (62.6%, 95.0%) |
content_full_paint_time | 81.1% | (73.7%, 85.9%) |
content_paint_time | 70.7% | (63.4%, 77.5%) |

WebRender performance expressed as percent of Gecko. Lower is better. Confidence intervals are bootstrapped.
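As a rough illustration of how such a comparison could be derived, the sketch below bootstraps a percentile confidence interval for the ratio of branch medians. The function, data, and parameters are hypothetical, not the report's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(42)

def median_ratio_ci(webrender, gecko, n_boot=10_000, alpha=0.05):
    """Percentile-bootstrap CI for 100 * median(webrender) / median(gecko).

    Lower is better for WebRender. Hypothetical sketch, not the report's
    actual analysis code.
    """
    wr = np.asarray(webrender)
    gk = np.asarray(gecko)
    ratios = np.empty(n_boot)
    for i in range(n_boot):
        # Resample each branch independently, then take the ratio of medians.
        wr_s = rng.choice(wr, size=wr.size, replace=True)
        gk_s = rng.choice(gk, size=gk.size, replace=True)
        ratios[i] = np.median(wr_s) / np.median(gk_s)
    point = 100 * np.median(wr) / np.median(gk)
    lo, hi = 100 * np.quantile(ratios, [alpha / 2, 1 - alpha / 2])
    return point, lo, hi

# Simulated page-load times (ms), log-normal like most latency data:
wr_times = rng.lognormal(mean=6.0, sigma=0.4, size=2000)
gk_times = rng.lognormal(mean=6.0, sigma=0.4, size=2000)
point, lo, hi = median_ratio_ci(wr_times, gk_times)
```

Resampling the two branches independently reflects that they are separate user populations; a paired bootstrap would not apply here.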

Performance of the last build with at least 10,000 accumulated usage hours/branch (`20190422163745`):

Metric | Median | 95% CI |
---|---|---|
content_full_paint_time | 101.2% | (98.8%, 103.9%) |
page_load_ms | 100.3% | (96.2%, 103.9%) |
tab_switch_composite | 97.9% | (95.8%, 99.8%) |
slow_content_frame_time_vsync | 93.7% | (86.6%, 99.2%) |
content_paint_time | 92.6% | (89.6%, 95.3%) |

WebRender performance expressed as percent of Gecko. Lower is better. Confidence intervals are bootstrapped.

Error bars reflect bootstrapped 95% confidence intervals for the median.

CONTENT_FRAME_TIME_VSYNC is expressed in percent of a vsync. Since display updates only occur at vsync intervals, all updates that take between 100% and 200% of a vsync appear identical to the user. 200% is therefore a critical threshold, so it’s important to know how often frames are slower than 200%.

Error bars reflect 95% confidence intervals for the geometric mean of the distribution of per-user means, treating the per-user means as log-normally distributed.
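A minimal sketch of that calculation, assuming a normal critical value on the log scale (a close approximation to the t interval for large samples; the report's exact procedure may differ). The data are simulated.

```python
import numpy as np
from statistics import NormalDist

def geometric_mean_ci(per_user_means, alpha=0.05):
    """CI for the geometric mean, treating per-user means as log-normal.

    Sketch only: a symmetric interval for the mean of the logs is built
    with a normal critical value, then exponentiated back.
    """
    logs = np.log(np.asarray(per_user_means, dtype=float))
    n = logs.size
    center = logs.mean()
    se = logs.std(ddof=1) / np.sqrt(n)
    z = NormalDist().inv_cdf(1 - alpha / 2)
    return np.exp(center), np.exp(center - z * se), np.exp(center + z * se)

# Simulated per-user mean paint times, log-normally distributed:
rng = np.random.default_rng(0)
per_user = rng.lognormal(mean=3.0, sigma=0.8, size=5000)
gm, lo, hi = geometric_mean_ci(per_user)
```

The same helper applies to the per-user p99 summaries; only the input statistic changes.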


Error bars reflect 95% confidence intervals for the geometric mean of the distribution of per-user p99s, treating the per-user p99s as log-normally distributed.


CONTENT_FRAME_TIME is expressed in percent of a vsync. Since display updates only occur at vsync intervals, all updates that take between 100% and 200% of a vsync appear identical to the user. 200% is therefore a critical threshold, so it’s important to know how often frames are slower than 200%. We actually measure the fraction of events slower than 192% of a vsync because, the way the histogram is defined, that’s the closest bucket edge to 200%.
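To make the bucket-edge point concrete, here is a toy calculation of the slow-frame fraction from histogram counts. The bucket edges below are invented for illustration and are not the real CONTENT_FRAME_TIME bucket definition.

```python
def fraction_slow(bucket_counts, threshold=192):
    """Fraction of samples in buckets whose lower edge is >= threshold.

    `bucket_counts` maps a bucket's lower edge (percent of a vsync) to its
    count. Because 192 is assumed to be a bucket edge here, the sum cleanly
    captures "slower than 192% of a vsync" without splitting a bucket.
    """
    total = sum(bucket_counts.values())
    slow = sum(c for edge, c in bucket_counts.items() if edge >= threshold)
    return slow / total if total else 0.0

# Invented histogram: 70 of 900 frames land at or above the 192 edge.
hist = {0: 10, 50: 120, 100: 400, 150: 300, 192: 60, 250: 10}
share = fraction_slow(hist)  # 70 / 900
```

If the threshold fell inside a bucket rather than on an edge, the fraction could only be bounded, not computed exactly, which is why the nearest bucket edge is used.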

Be cautious when interpreting crash rates from recent builds. We receive pings that tell us about crashes before we receive pings that tell us about usage, so estimates of crash rates are much higher than the true rate for the first few days builds are in the field.

Stability of the last 14 builds with at least 1,000 usage-hours/branch, combined:

Error bars reflect a 95% confidence interval for the ratio of Poisson rates adjusted for total usage hours.
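One standard way to form such an interval is a Wald interval on the log rate ratio, with usage hours as the exposure. The sketch below uses made-up counts and may not match the report's exact method.

```python
import math
from statistics import NormalDist

def poisson_rate_ratio_ci(crashes_a, hours_a, crashes_b, hours_b, alpha=0.05):
    """Wald CI for the ratio of two Poisson rates, adjusted for exposure.

    On the log scale, the variance of the log rate ratio is approximately
    1/crashes_a + 1/crashes_b; exposure (usage hours) enters only through
    the point estimate. Sketch of one common method.
    """
    ratio = (crashes_a / hours_a) / (crashes_b / hours_b)
    se_log = math.sqrt(1 / crashes_a + 1 / crashes_b)
    z = NormalDist().inv_cdf(1 - alpha / 2)
    return ratio, ratio * math.exp(-z * se_log), ratio * math.exp(z * se_log)

# Made-up counts: 30 crashes in 10,000 hours vs. 20 crashes in 10,000 hours.
rr, lo, hi = poisson_rate_ratio_ci(30, 10_000, 20, 10_000)
```

With small crash counts an exact conditional (binomial) interval would be preferable; the log-scale Wald form is shown here for brevity.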

Stability of the last build with at least 10,000 usage-hours/branch:

Crash rate error bars reflect 95% confidence intervals for rates, assuming that crashes are Poisson-distributed, and based on received usage. Error bars do *not* account for the reporting delay between crashes and non-crash usage.



Counts of users submitting pings considered for performance and crash metrics:

Please direct questions to tdsmith or the Product Data Science team.

This report follows users enrolled in the experiments `prefflip-webrender-v1-2-1492568` and `prefflip-webrender-v1-3-1492568`.

Data are collected from Spark with this Databricks notebook.

Notebook runs are kicked off at 11am and 11pm UTC and rendered on hala at noon and midnight. The RMarkdown script that renders this page lives on GitHub. The “last updated” timestamp reflects the time the ETL task terminated.

Database: webrender_nvidia.sqlite3