Development of New Performance Metrics
- How do we develop a work plan to formalize the new performance measures?
- What is their relationship to existing “operational” performance measures?
- How does NAME coordinate with operational meteorology?
- How long will this take? Do we need to plan a small workshop to discuss specifics?
- Who are the responsible (lead) labs/programs/persons for each proposed new performance measure?
Notes:
- Message
- We are demonstrating skill for seasonal temperature
- Trends have been influenced by climate patterns (signals in 2001 and 2002 were weak compared to 1997-1998)
- There is room for improvement with added supercomputer capacity to run ensembles and coupled models; we anticipate improved skill in this area
- FY01 Goal: 20; FY01 Actual: 20
- Key Data Points
- Measure compares actual observed temperatures with forecasted temperatures from areas around the country
- There are approximately 100 forecast points across the country
- Verification is done at points where the forecast is for other than climatology
- Only use forecast points where there is not an equal chance of the temperature being normal, above, or below (i.e. where the seasonal forecast has predicted above or below normal temperatures)
- This score measures how much better the predictions are than random forecasts
- The score for a random forecaster is zero
- A skill score of 20 is considered good; it means the forecast was correct at almost 50% of the locations forecasted
- Expanded computing capacity on the new NWS supercomputer
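The skill-score arithmetic above can be sketched as follows. The notes do not name the exact formula, so a Heidke-style score is assumed: percent improvement in hits over a random forecaster, which for three equally likely categories (above, near, below normal) is expected to be right at one third of the points and scores zero.

```python
def skill_score(hits: int, total: int, categories: int = 3) -> float:
    """Percent improvement over a random forecaster.

    A random forecaster choosing among `categories` equally likely
    outcomes is expected to be correct at `total / categories` points;
    that expected hit count is defined to score zero.
    """
    expected = total / categories
    return 100.0 * (hits - expected) / (total - expected)

# With roughly 100 verified forecast points, being correct at 47 of
# them gives a score just over 20 -- consistent with the note that a
# score of 20 means being correct at almost 50% of the locations.
print(f"{skill_score(47, 100):.1f}")  # → 20.5
```

Note that `categories=3` and the 100-point count are assumptions taken from the notes; the operational verification may weight points or categories differently.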