One of the challenges facing forecasters is the formulation of an accurate extended forecast, consisting mainly of temperature and precipitation forecasts three to five days (or more) in advance. Along with graphical guidance from several models, MOS-derived numerical guidance, based on output from the National Centers for Environmental Prediction (NCEP) Medium-Range Forecast (MRF) model, is available (TDL 1993). This guidance (henceforth called the FMR) provides max and min temperature forecasts along with other forecast parameters.
As with any guidance, FMR forecasts are subject to error, especially as the forecast projection increases. The purpose of this study was to determine how well the FMR forecast temperatures in West Texas three to five days in advance during the cool season, defined here as the months of October through March.
2. Formulation of FMR guidance
FMR guidance, obtained from 0000 UTC data and available every 24 hr, provides an 8-day (192 hr) forecast of daily max/min temperature, probability of precipitation, mean cloudiness, mean wind speed, and conditional probability of snow (if appropriate), along with climatological values of these parameters. Forecasts for these parameters are generated by applying statistical equations to output from the MRF model (Jensenius et al. 1992). The forecasts are then passed through a calibration procedure that minimizes the forecasts' mean square error, based on previous verification data. The calibration procedure tries to produce forecasts representative of the average values observed for similar MRF forecasts. It should be noted that, as the skill of the objective guidance decreases, the calibration procedure will skew the FMR forecasts more toward climatology (TDL 1993).
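The published description does not give the calibration formula itself, but the behavior described above, weighting guidance toward climatology as skill decreases, can be illustrated with a minimal sketch. The linear blend and the `skill` parameter here are assumptions for illustration; the actual FMR procedure minimizes mean square error against past verification data (TDL 1993).

```python
def calibrate(raw_forecast, climo, skill):
    """Blend a raw guidance temperature toward climatology.

    skill is assumed to range from 0.0 (no skill -> pure climatology)
    to 1.0 (full skill -> pure guidance). This linear blend is only an
    illustration of the tendency described in the text, not the actual
    FMR calibration.
    """
    skill = max(0.0, min(1.0, skill))
    return skill * raw_forecast + (1.0 - skill) * climo

# A Day 3 forecast (higher skill) keeps most of the guidance signal;
# by Day 5 (lower skill) the blend sits closer to climatology.
print(calibrate(40.0, 55.0, 0.8))   # mostly guidance
print(calibrate(40.0, 55.0, 0.2))   # mostly climatology
```

Note how, as skill decreases, the blended value converges on the climatological value, which is exactly the drift toward climatology the text attributes to the longer projections.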
3. Data and analysis procedures

FMR and observed temperature data were collected for the following West Texas NWS offices from October 1994 through March 1995: Amarillo (AMA), Lubbock (LUB), Midland (MAF), San Angelo (SJT), and El Paso (EPZ). Observed max and min temperatures were manually entered into a data file. Computer programs were written to combine the observed and forecast temperatures for each date into monthly max and min data tables.
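The original programs are not reproduced in the text; the merging step might be sketched as follows, with the record layout, dates, and temperature values all hypothetical.

```python
# Hypothetical sketch: pair each date's observed max/min with the
# corresponding Day 3 FMR forecast for one station and month.
observed = {"1994-10-01": (78, 52), "1994-10-02": (74, 49)}   # (max, min) in F
fmr_day3 = {"1994-10-01": (75, 50), "1994-10-02": (77, 51)}

monthly_table = {
    date: {"obs": observed[date], "fmr": fmr_day3[date]}
    for date in sorted(observed)
    if date in fmr_day3      # days without FMR guidance are skipped
}
print(monthly_table["1994-10-01"])
```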
Next, a statistical analysis was performed for each station on a month-by-month, three-month (October-December and January-March), and six-month basis. Statistical values were computed from the total number of max and min data pairs (N) for the Day 3, Day 4, and Day 5 forecasts. These included the average observed temperature (OBSa), the average FMR temperature (FMRa), and their difference (DIFa), defined below as:

OBSa = [SUM(OBSi)]/N

FMRa = [SUM(FMRi)]/N

DIFa = FMRa - OBSa
Also, the Root Mean Square Error (RMS error) was calculated:

RMS error = SQRT( [SUM((OBSi - FMRi)^2)]/N )
Since the calibration procedure tries to minimize this error, and since the RMS error indicates the average "miss," or absolute difference between OBSi and FMRi, one can expect on a long-term basis, this is probably the most important statistic in this study. Other parameters calculated included the standard deviations of the observed temperatures (σOBS) and the FMR temperatures (σFMR), defined below as:
σOBS = SQRT( [SUM((OBSi - OBSa)^2)]/(N-1) )

σFMR = SQRT( [SUM((FMRi - FMRa)^2)]/(N-1) )
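The statistics above can be computed directly. The following sketch uses my own variable names and hypothetical temperature values; the sign convention for DIFa (negative when the FMR is too cold) follows the way the text interprets it later.

```python
import math

def verification_stats(obs, fmr):
    """Compute the study's statistics for paired observed/FMR temperatures.

    OBSa/FMRa are averages, DIFa their difference (negative -> FMR too
    cold), RMS the root mean square error, and the standard deviations
    use the (N-1) sample form, matching the definitions in the text.
    """
    n = len(obs)
    obs_a = sum(obs) / n
    fmr_a = sum(fmr) / n
    dif_a = fmr_a - obs_a
    rms = math.sqrt(sum((o - f) ** 2 for o, f in zip(obs, fmr)) / n)
    sd_obs = math.sqrt(sum((o - obs_a) ** 2 for o in obs) / (n - 1))
    sd_fmr = math.sqrt(sum((f - fmr_a) ** 2 for f in fmr) / (n - 1))
    return {"OBSa": obs_a, "FMRa": fmr_a, "DIFa": dif_a,
            "RMS": rms, "SDobs": sd_obs, "SDfmr": sd_fmr}

# Four hypothetical paired max temperatures (F):
stats = verification_stats(obs=[70, 65, 58, 62], fmr=[68, 66, 61, 60])
print(stats)
```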
It should be noted that on occasion the FMR data were not available for a particular day. When an FMR forecast was not available, the observed data for that day were excluded from all analyses. For the six-month period of 182 days, FMR data were available for at least 176 days, or 97% of the time.
4. Summary of weather conditions for West Texas (October 1994 through March 1995)
Figure 1 shows the temperature departures from normal for the five West Texas stations during the six-month period. Overall, temperatures were above normal through the period, especially from December 1994 through February 1995. The only time any station's temperature departure was more than just slightly below normal was during March at SJT.
As far as precipitation is concerned, rainfall was well below normal for the six-month period at AMA, LUB, and MAF. A slight precipitation deficit resulted at EPZ, while above normal precipitation occurred at SJT. Since precipitation was not studied, there will be no further discussion of this parameter.
5. Results

a. RMS errors
Figures 2 and 3 show the RMS errors for max and min temperatures, respectively. Note how the RMS errors increased with forecast day, especially for max temperatures. Although in every case the RMS errors using the FMR temperatures were lower than those using climatology, by Day 5 the difference between the FMR and climatology was at times less than 1.0F. Also, note that the RMS errors for min temperatures were much lower than those for max temperatures (although RMS errors using a forecast based on climatology were also lower). Although not shown here, RMS errors were anywhere from 1.0-4.0F higher during January-March than during October-December at all stations but EPZ, especially for max temperatures. March tended to have the largest RMS errors, mainly because the FMR failed to bring a cold front through the area in early March. During this event, forecast errors for max temperatures ranged from around 35F for Day 3 to as much as 47F for Days 4 and 5, and reached 24F for min temperatures. However, when this period was eliminated from the calculations, RMS errors decreased by as much as 1.5F for Day 5 max temperatures.
Part of the reason for the increase in RMS errors with forecast day can be explained by comparing the standard deviation of observed temperatures (σOBS) with that of the FMR temperatures (σFMR). Results showed that differences between σOBS and σFMR were much lower for minima than for maxima, which is probably why the RMS errors for min temperatures were lower. Also, the differences between σOBS and σFMR were highest in March and lowest in January, the same two months when the RMS errors were at their largest and smallest, respectively. Finally, σOBS was always higher than σFMR for both temperature extremes, with the difference increasing with forecast day. The FMR's reduced variability, as it tends toward a climatological forecast by Day 5, may in part explain why RMS errors grew as the forecast day increased.
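The connection between the standard deviations and the RMS error can be made explicit with the standard decomposition of the mean square error, which is not derived in the text. Written with population (divide-by-N) moments, and with r the correlation between the forecast and observed temperatures:

```latex
\mathrm{RMS}^2 = (\mathrm{FMR}_a - \mathrm{OBS}_a)^2
               + \sigma_{FMR}^2 + \sigma_{OBS}^2
               - 2\,r\,\sigma_{FMR}\,\sigma_{OBS}
```

As σFMR shrinks toward zero (a purely climatological forecast), the error approaches the squared bias plus the observed variance, so the FMR's reduced variability at longer projections is consistent with the larger Day 5 RMS errors noted above.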
b. Average differences (DIFa)
Figures 4 and 5 show the average differences for max and min temperatures, respectively. With the exception of SJT, the FMR had a slight tendency to be too cold overall, most notably at EPZ and AMA (the latter only for max temperatures). However, note that a forecast based on climatology would have been substantially cooler. Thus, the FMR improved upon a climatological forecast. It should be noted that DIFa was negative during October-December (FMR too cold) but positive during January-March (FMR too warm) at all stations but EPZ, despite the fact that temperatures were well above normal (see Fig. 1). Part of the reason was the frontal passage in early March that the FMR missed; however, eliminating this period still left DIFa positive during January-March (except at AMA, where it remained within 1.0F).
c. FMR forecast tendencies
Tables 1 and 2 show the percentage of time the FMR forecast temperatures too cold, too warm, or correctly for max and min temperatures, respectively. Except at SJT, the FMR was more likely to forecast too cold a temperature over the six-month period, most often at EPZ (see Table 1 for the percentages). Nevertheless, the FMR still outperformed climatology.
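The tallies behind tables of this kind can be sketched as follows; the function name and the example temperatures are mine, not taken from the study.

```python
def tendency_counts(obs, fmr):
    """Percentages of forecasts that were too cold, too warm, or correct."""
    n = len(obs)
    too_cold = sum(1 for o, f in zip(obs, fmr) if f < o)
    too_warm = sum(1 for o, f in zip(obs, fmr) if f > o)
    correct = n - too_cold - too_warm
    return {"too_cold": 100.0 * too_cold / n,
            "too_warm": 100.0 * too_warm / n,
            "correct": 100.0 * correct / n}

# Four hypothetical paired temperatures (F):
print(tendency_counts(obs=[70, 65, 58, 62], fmr=[68, 66, 58, 60]))
```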
d. Rating the FMR forecasts
Table 3 shows the performance of the FMR for max temperatures. As expected, the percentage of time the FMR forecast temperatures within 5F decreased with forecast day. However, the FMR forecast was still superior to one using climatology, especially for Day 3. Although by Day 5 the difference between the percentage of forecasts within 5F using the FMR vs. climatology decreased, the FMR did not have as many busts. However, note that by Day 5 the FMR still busted a substantial percentage of the time at AMA and LUB, and around 30% of the time at MAF and SJT. Thus, it appears that although the FMR outperforms climatology for max temperatures, its forecasts can often provide poor results by Day 5.
Table 4 shows the percentages for min temperatures. When comparing this to Table 3, it is easy to see that the FMR forecast min temperatures much better than max temperatures (except at EPZ) and had fewer busts (mainly for Days 3 and 4). In nearly every case, the FMR did much better than climatology, with significantly fewer busts in particular. Note that for Day 5 at SJT the percentage of forecasts within 3F was HIGHER using climatology than using the FMR, although the FMR was slightly better than climatology for forecasts within 5F. Nevertheless, the FMR can be used with more confidence in forecasting min temperatures than max temperatures, and overall it is superior to a climatological forecast.
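The within-tolerance ratings in Tables 3 and 4 amount to counting forecasts inside an error band; a minimal sketch, with hypothetical temperatures, might look like:

```python
def pct_within(obs, fmr, tol):
    """Percentage of forecasts within +/- tol degrees of the observation."""
    hits = sum(1 for o, f in zip(obs, fmr) if abs(f - o) <= tol)
    return 100.0 * hits / len(obs)

# Hypothetical paired temperatures (F); a miss beyond the wider
# tolerance would count as a bust in the tables' terms.
obs = [70, 65, 58, 62, 55]
fmr = [68, 70, 60, 61, 47]
print(pct_within(obs, fmr, 3))   # within 3F
print(pct_within(obs, fmr, 5))   # within 5F
```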
6. Summary of FMR performance during the warm season (April 1994 through September 1994)
An identical statistical analysis was performed using FMR forecast and observed temperature data for the warm season of April 1994 through September 1994. Overall, the FMR showed the same characteristics during the warm season as during the cool season (e.g., errors/differences increasing with forecast day, smaller errors/differences for min temperatures than for max temperatures, and FMR guidance better than a climatological forecast). However, noteworthy differences in the magnitude of the errors/differences were seen, and these are presented below. Before they are discussed, it should be noted that, as with the cool season, observed temperatures over West Texas were much warmer than normal from April through September 1994. In fact, from June 23 to July 7, max temperatures in excess of 100F were often observed. The main differences were:
7. Conclusions

From the results of this study, the following conclusions can be stated:
It should be remembered that the results of this study were based on two six-month data sets. Further substantiation of any conclusions will require analysis of additional data, especially from periods whose temperature departures from normal contrast with those inherent in these data. Finally, one should always remember that guidance should never be taken blindly. Truncation and round-off errors present in any model forecast, combined with model biases, can result in large discrepancies between predicted and observed temperatures.
Acknowledgments

Thanks to Loren Phillips for his meticulous editing of this document, as well as his comments and suggestions.
References

Jensenius, J. S., Jr., K. K. Hughes, and J. B. Settelmaier, 1992: Calibrated perfect prog temperature and probability of precipitation forecasts for medium-range projections. Preprints, Twelfth Conference on Probability and Statistics in the Atmospheric Sciences, Toronto, Amer. Meteor. Soc., 213-218.

Techniques Development Laboratory, 1993: The MRF-Based Statistical Guidance Message. NWS Technical Procedures Bulletin No. 411, National Oceanic and Atmospheric Administration, U.S. Department of Commerce, 11 pp.