We follow a weekly update cycle at the COVID-19 Forecast Hub. Every Tuesday morning, our team assembles the most recent forecasts from all teams that have submitted in the past week. We use these data to generate an ensemble forecast that synthesizes predictions of COVID-19 cases, hospitalizations, and deaths from all eligible models. These forecasts are then reviewed, analyzed, and verified by our collaborators at the US CDC. On Wednesday, the CDC updates its COVID-19 Forecasting webpage with the latest data from the COVID-19 Forecast Hub.
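The combination method itself is not described above; as a rough illustration only, a quantile-wise median ensemble, one common way to synthesize probabilistic forecasts from multiple models, might look like the following sketch (the model names, quantile levels, and values are hypothetical):

```python
import numpy as np

def median_ensemble(model_quantiles: dict[str, dict[float, float]]) -> dict[float, float]:
    """At each quantile level, take the median of the values submitted
    by the component models that forecast that level."""
    levels = sorted({q for quantiles in model_quantiles.values() for q in quantiles})
    return {
        q: float(np.median([m[q] for m in model_quantiles.values() if q in m]))
        for q in levels
    }

# Hypothetical submissions: weekly deaths predicted at three quantile levels.
submissions = {
    "model_A": {0.25: 900.0, 0.50: 1000.0, 0.75: 1150.0},
    "model_B": {0.25: 850.0, 0.50: 980.0, 0.75: 1100.0},
    "model_C": {0.25: 920.0, 0.50: 1050.0, 0.75: 1200.0},
}
print(median_ensemble(submissions))  # {0.25: 900.0, 0.5: 1000.0, 0.75: 1150.0}
```

Taking a median at each quantile level keeps the combined forecast robust to a single outlying model, which is one reason this style of ensemble is popular in epidemic forecasting.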
Forecast Summaries
We also publish a weekly report with top-level summary numbers from the ensemble forecast. These reports summarize the number of expected deaths in the United States over the following four weeks, at the national and state levels. Browse current and past forecast summaries.
Forecast Evaluations
Periodically, we evaluate the accuracy and precision of the ensemble forecast and the component models over recent and historical forecasting periods. Models forecasting incident cases and incident deaths at the national and state levels are evaluated using adjusted relative weighted interval scores (WIS, a measure of distributional accuracy), adjusted relative mean absolute error (MAE), and calibration scores. Scores are aggregated across weeks, locations, and targets. You can read a paper explaining these procedures in more detail, and look at the most recent monthly evaluation reports.
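For orientation, the plain (unadjusted, non-relative) WIS of a single forecast can be written as a weighted sum of the absolute error of the median and the scores of K central prediction intervals, following the formulation in Bracher et al. (2021). The sketch below assumes that formulation; the interval levels and values in the example are hypothetical:

```python
def interval_score(alpha: float, lower: float, upper: float, y: float) -> float:
    """Score of the central (1 - alpha) prediction interval [lower, upper] at observation y."""
    return (
        (upper - lower)                      # interval width: rewards sharpness
        + (2 / alpha) * max(lower - y, 0.0)  # penalty if y falls below the interval
        + (2 / alpha) * max(y - upper, 0.0)  # penalty if y falls above the interval
    )

def weighted_interval_score(median: float, intervals: dict[float, tuple[float, float]], y: float) -> float:
    """WIS: weighted sum of the median's absolute error and K interval scores,
    with weights w_0 = 1/2 and w_k = alpha_k / 2, normalized by K + 1/2."""
    K = len(intervals)
    total = 0.5 * abs(y - median)
    for alpha, (lower, upper) in intervals.items():
        total += (alpha / 2) * interval_score(alpha, lower, upper, y)
    return total / (K + 0.5)

# Hypothetical forecast: median 1000 with central 50% and 90% intervals,
# scored against an observed value of 1230.
wis = weighted_interval_score(
    median=1000.0,
    intervals={0.5: (900.0, 1150.0), 0.1: (800.0, 1400.0)},
    y=1230.0,
)
print(wis)  # 115.0
```

Lower scores are better; the adjusted relative variants used in our reports additionally rescale these scores against a baseline model so that models can be compared across locations and weeks.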
Forecast Evaluation Dashboard
In collaboration with the COVID-19 Forecast Hub, the Delphi Group at Carnegie Mellon University has created a Forecast Evaluation Dashboard that compares the performance of models over time, across different outcome variables (incident cases and deaths), metrics (Weighted Interval Score, Absolute Error, and Coverage), horizons (1- through 4-week-ahead forecasts), and locations (US national and state levels). Learn more about the dashboard and download the code on GitHub.
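As a small illustration of the coverage metric, empirical coverage is simply the fraction of observations that land inside their corresponding prediction intervals; a minimal sketch (array names and values are hypothetical):

```python
import numpy as np

def empirical_coverage(lowers, uppers, observed) -> float:
    """Fraction of observations falling inside [lower, upper]; for a
    well-calibrated 90% prediction interval this should be near 0.90."""
    lowers, uppers, observed = map(np.asarray, (lowers, uppers, observed))
    return float(np.mean((lowers <= observed) & (observed <= uppers)))

# Hypothetical 90% intervals for four forecast weeks and the observed values.
print(empirical_coverage([80, 90, 100, 95], [140, 160, 170, 150], [130, 170, 120, 149]))  # 0.75
```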