Compatibility/Meetings/2023-09-05

From MozillaWiki

Minutes

  • Scribe: James
  • Chair: Honza

Interop 2024 (Honza)

Call for Proposal Period — three weeks (Sep 14 – Oct 5)

Can we compile a list of recommendations, drawing on our experience triaging reported issues and conducting gap analysis (e.g., of Safari release notes)?

Goal: Help the web platform teams, increase attention and foster action.

  • Honza: we should make a list of recommendations. This would help the platform teams.
  • Tom: For the Safari analysis, I don't think there's lots that's an interop nightmare, and some of it is already being taken care of. But I'll look.
  • James: Three interesting things for us to do

1) Provide supporting evidence for proposals that would come up anyway. 2) Flag things that are likely to be interop problems in the future. 3) There might be low-hanging fruit — web compat issues should be proposed too.

The process is a bit more complicated this year. The organization will pick the top proposals and we'll select from that list. We also want to keep it simple for the platform teams (we shouldn't need to weigh in on a large number of proposals).

  • James: Have we filed bugs for other browsers? No.

Interop 2023, Expected failures (Honza)

We've collected a [list of tests](https://docs.google.com/spreadsheets/d/1oEbxBavEC5snlJ8osmycJqv5Dt4f9uu-Wr12CuUMDrk/edit#gid=73940484) that are expected to fail at the end of this year. The request is to calculate the impact on the overall score.

  • Honza: Request from Maire. Collect a list of tests that we expect not to pass at the end of the year for whatever reason (e.g. lack of resources). Being collected in the spreadsheet. Next step is to calculate the impact on the final score.
  • James: Depends on what output we want. It looks like we need to figure out what percentage of the score the collected tests represent. Each test has a different weight (including subtests).
  • Honza: Want to calculate the score we can end up with if all the known failures are excluded. We would treat the reduced score as the target.
  • Tom: We have bugs for ~everything that's failing. We could use those bugs as the data source for figuring out the points.
  • James: The advantage of the spreadsheet is the data's already normalised in a way that makes it easier to reuse.
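The calculation discussed above can be sketched roughly as follows. This is an illustrative Python sketch, not the actual Interop scoring code: the data shape, the field names, and the assumption that each test carries a weight summing to 100% of the overall score are all hypothetical simplifications of the spreadsheet data.

```python
# Hypothetical sketch of the "reduced target" calculation: given tests with
# per-test weights (their share of the overall score, summing to 100) and a
# flag for whether we expect them to still fail at year's end, compute how
# many points the expected failures cost and what target score remains.

def reduced_target(tests):
    """Return (expected_loss_pct, reduced_target_pct) for a list of tests.

    `tests` is a list of dicts: {"name": str, "weight": float,
    "expected_fail": bool}.
    """
    loss = sum(t["weight"] for t in tests if t["expected_fail"])
    return loss, 100.0 - loss

# Toy example with made-up test names and weights: one of three tests is
# expected to fail, costing 25 points, leaving a target of 75.
tests = [
    {"name": "feature-a/basic", "weight": 40.0, "expected_fail": False},
    {"name": "feature-b/focus", "weight": 35.0, "expected_fail": False},
    {"name": "feature-c/codec", "weight": 25.0, "expected_fail": True},
]
loss, target = reduced_target(tests)
print(loss, target)  # 25.0 75.0
```

The real calculation would need the normalised per-test (and per-subtest) weights from the spreadsheet rather than flat percentages, but the shape of the computation is the same: sum the weights of the expected failures and subtract from the maximum score.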

Reports regarding benchmark sites where Firefox is slower compared to other browsers (SV)

In the past, we usually closed such reports as Non-compat. Since Speedometer 3 was a topic at the latest All-Hands, should we now take into consideration reports where Firefox is slower on benchmark sites? Or are there any specific benchmark sites that should be taken into consideration, such as https://browserbench.org/Speedometer2.0/ ?

  • jgraham: I don't think we need to care about this. People are already tracking the scores on these sites in detail.
  • Tom: Agreed. We should only consider functional regressions on the site. Feel free to flag and ask if you have any questions.