Blog - El Niño project (part 6)

Here is a draft of a post by the statistician Steve Wenner:

Please read it and criticize it! It's actually quite a strong attack on Ludescher's work, though it's phrased in a perfectly polite and pleasant way. It may get some counterattacks. So, we should try to make sure it contains no obvious mistakes... though of course nobody except Steve is "responsible" for his claims.

It's quite interesting.


Comments

  • 1.
    edited August 2014

    I see one weakness that we should try to fix.

    The most standard definition of El Niño uses the Oceanic Niño Index (ONI), which is the running 3-month mean of the Niño 3.4 index. An El Niño occurs when the ONI is over 0.5 °C for at least 5 months in a row. A La Niña occurs when the ONI is below -0.5 °C for at least 5 months in a row.

    Ludescher et al use a nonstandard, less strict definition. They say there's an El Niño when the Niño 3.4 index is over 0.5 °C for at least 5 months.

    Wenner goes further in this direction. He defines an El Niño initiation month to be one where the Niño 3.4 index is over 0.5 °C.

    Perhaps we should make it clear that this is using Ludescher's definition of El Niño, not the standard one.
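    To make the standard rule concrete, here is a minimal Python sketch of the "at least 5 months in a row" test; the ONI values in the example are made up purely for illustration.

    ```python
    def el_nino_months(oni, threshold=0.5, run_length=5):
        """Return indices of months belonging to an El Niño episode, i.e. months
        inside a run of at least `run_length` consecutive ONI values above `threshold`."""
        flagged, run = set(), []
        for i, value in enumerate(oni):
            if value > threshold:
                run.append(i)
            else:
                if len(run) >= run_length:
                    flagged.update(run)
                run = []
        if len(run) >= run_length:   # a qualifying run may extend to the end of the series
            flagged.update(run)
        return sorted(flagged)

    # Made-up ONI values (°C) for 12 months, just to illustrate the rule:
    oni = [0.2, 0.6, 0.7, 0.8, 0.9, 0.6, 0.4, 0.1, -0.2, -0.6, -0.7, -0.3]
    print(el_nino_months(oni))   # -> [1, 2, 3, 4, 5]: one 5-month run above 0.5 °C
    ```

    The same function applied to the Niño 3.4 index itself, rather than the ONI, would implement Ludescher's less strict definition.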

  • 2.

    I will also ask Steve to write a short paragraph introducing himself and mentioning his qualifications.

  • 3.

    Perhaps I should also write a short post addressing the definition of El Niño. It would be nice to see a graph like this:

    [image: http://www.azimuthproject.org/azimuth/files/ludescher-replication-v2.png]

    but with the Oceanic Niño Index replacing the Niño 3.4.

  • 4.
    edited July 2014

    Ludescher et al have supplementary material which should be read before criticising. In particular, there are zoomed in portions of the graph for the borderline decisions.

    Where can I download data for the Oceanic Niño Index? And how many flavours does it come in, and which one would you like?

    The link in comment 1 doesn't work. Hope this does. http://www.azimuthproject.org/azimuth/show/Blog+-+El+Ni%C3%B1o+project+%28part+6%29.

  • 5.

    Perhaps I should also write a short post addressing the definition of El Niño

    Dear John (I know this is a tall order)

    Could you kindly write these definitions in mathematical notation, as you do in your regular physics publications? Possibly matrix notation for grid data. Also, could you kindly provide explicit links to the data? There are too many varying versions out there.

    I'd like to start issuing the Machine Learning forecasts + wavelet analysis on a daily basis. Practice Makes Perfect is what is needed for machine learning training :)

    What do I mean by a Machine Learning forecast: this is a non-cognitive machine forecast, free of any human interpretation and inference, based solely upon well-known algorithms for adaptive non-linear approximation of multivariate functions from one Banach/functional space to another, which you define as the basis for the weather conditions of interest, e.g. El Niño.

    Dara

  • 6.

    John

    We, in cooperation with WebHubTel and others, could then issue interim computations and symbolic expressions using Mathematica and other tools, so you could review the actual history of the computations to fine-tune the forecast algorithms.

    I am thinking these interim computations will be in the form of tech-note reports with live code and data; I will post some samples later on.

    Dara

  • 7.

    The link in the first comment seems to be to a page that hasn't been created yet.

  • 8.

    Todd, see comment 5 - the page exists. I don't understand why the link from comment 1 doesn't work.

  • 9.
    edited July 2014

    Thanks, Todd! I left out the word "Blog - ". So, folks, please read this and criticize it: [[Blog - El Niño project (part 6)]]

    I'm sorry to have taken so long to reply to this and other comments. I had some other kinds of work to do: 3 papers of mine suddenly got accepted for publication, some with corrections required. I'm also trying to finish off another: Operads and phylogenetic trees (http://math.ucr.edu/home/baez/phylo.pdf).

    I asked Steve Wenner to add an introduction but he never replied to me. I'll ask again now, since I want to post this article fairly soon.

  • 10.
    edited July 2014

    Dara wrote:

    Could you kindly write these definitions in mathematical notation, as you do in your regular physics publications? Possibly matrix notation for grid data. Also, could you kindly provide explicit links to the data? There are too many varying versions out there.

    I won't write my blog post like a physics paper. But it should be completely clear and precise.

    I gave a definition that seems fairly clear and precise to me:

    The most standard definition of El Niño uses the Oceanic Niño Index (ONI), which is the running 3-month mean of the Niño 3.4 index. An El Niño occurs when the ONI is over 0.5 °C for at least 5 months in a row. A La Niña occurs when the ONI is below -0.5 °C for at least 5 months in a row.

    There are just two questions:

    1) Where do we get our Niño 3.4 index?

    2) When we define the "running 3-month mean" of a function $f(t)$ (where $t$ is the time in months), do we define it by the formula

    $$ \langle f(t) \rangle = \frac{1}{3} (f(t-1) + f(t) + f(t+1) ) $$

    or perhaps

    $$ \langle f(t) \rangle = \frac{1}{3} (f(t) + f(t-1) + f(t-2) ) $$

    Answers:

    1) The US National Weather Service provides a file of the monthly Niño 3.4 index here: http://www.cpc.ncep.noaa.gov/products/analysis_monitoring/ensostuff/detrend.nino34.ascii.txt

    Unlike some other files, this data takes global warming into account! The Niño 3.4 index is in the column "ANOM".

    2) It seems the US National Weather Service computes the 3-month running mean this way:

    $$ \langle f(t) \rangle = \frac{1}{3} (f(t-1) + f(t) + f(t+1) ) $$

    You can check this by looking at their ONI table: http://www.cpc.ncep.noaa.gov/products/analysis_monitoring/ensostuff/ensoyears.shtml

    Let me check it! They give the Niño 3.4 index for January, February and March 1950 as

    $$ -1.42, -1.31, -1.04 $$

    If we take the mean of these we get

    $$ \frac{1}{3}( -1.42 -1.31 -1.04) = -1.2566... $$

    So, I predict their ONI for February 1950 will be about -1.2566... Looking at their ONI table, they say -1.3. That's okay, since they just give 2 digits.

    You could check more examples, but I think this is how the ONI is defined. And that gives the definition of El Niño.
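    For anyone who wants to repeat this check, here is a minimal Python sketch of the centered 3-month mean, using just the three Niño 3.4 values quoted above (parsing the NOAA file is left out):

    ```python
    def running_mean_3(values):
        """Centered 3-month running mean: <f(t)> = (f(t-1) + f(t) + f(t+1)) / 3.
        The first and last months have no centered mean, so they stay None."""
        out = [None] * len(values)
        for t in range(1, len(values) - 1):
            out[t] = (values[t - 1] + values[t] + values[t + 1]) / 3.0
        return out

    # Niño 3.4 anomalies for January, February, March 1950, from the NOAA file above:
    nino34 = [-1.42, -1.31, -1.04]
    print(running_mean_3(nino34)[1])   # -> -1.2566..., matching the -1.3 in the ONI table
    ```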

  • 11.

    Thank you John.

    Here is one last question, say I coded a forecast algorithm:

    Forecast: TODAY ---> {index1, index2, index3, index4, index5}

    It takes today's date and issues the forecast for the next 5 dates.

    Since the definition is 5 MONTHS IN A ROW, how could I tell whether the algorithm predicts an El Niño? E.g., is TODAY the beginning of the 5 MONTHS or the middle? And should I then count with the future values in mind?

    Dara

  • 12.

    Hi Dara, I don't think I understand the question. Don't 5 months of 3-month rolling averages need 7 monthly values?

  • 13.

    Hello Jim

    The question is: I run the forecast algorithm TODAY, and it predicts, say, 5 values of the index for the coming 5-7 months. So how would I report a forecast of whether there is an El Niño in effect or not, i.e. as of TODAY? By checking the forecast numbers for the next 5-7 months?

    This is a reporting issue, since a range of numbers constitutes the El Niño.

    Dara
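    One possible reporting convention, sketched below purely as an illustration and not as an agreed answer to Dara's question: append the forecast months to the most recent observed months and ask whether TODAY lies inside a qualifying 5-month run above the threshold, counting forecast months as if they were observed.

    ```python
    def el_nino_as_of_today(observed_recent, forecast, threshold=0.5, run_length=5):
        """observed_recent: recent observed monthly index values, ending with TODAY.
        forecast: predicted index values for the months after TODAY.
        Returns True if TODAY lies inside a run of >= run_length consecutive
        months above the threshold, treating forecast months as observed."""
        series = list(observed_recent) + list(forecast)
        today = len(observed_recent) - 1          # position of TODAY in the combined series
        if series[today] <= threshold:
            return False
        start = today
        while start > 0 and series[start - 1] > threshold:
            start -= 1
        end = today
        while end < len(series) - 1 and series[end + 1] > threshold:
            end += 1
        return (end - start + 1) >= run_length

    # TODAY is the third observed month; two warm months behind, three warm forecast months ahead:
    print(el_nino_as_of_today([0.6, 0.7, 0.8], [0.9, 0.7, 0.2]))   # True: a 5-month run
    ```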

  • 14.
    edited October 2014

    I don't think I've seen:

    Ludescher, J., Gozolchiani, A., Bogachev, M. I., Bunde, A., Havlin, S., and Schellnhuber, H. J. (2014). Very Early Warning of Next El Niño, PNAS 111, 2064 (doi:10.1073/pnas.1323058111).

    http://www.pnas.org/content/111/6/2064.abstract

    Abstract: The most important driver of climate variability is the El Niño Southern Oscillation, which can trigger disasters in various parts of the globe. Despite its importance, conventional forecasting is still limited to 6 months ahead. Recently, we developed an approach based on network analysis, which allows projection of an El Niño event about 1 y ahead. Here we show that our method correctly predicted the absence of El Niño events in 2012 and 2013 and now announce that our approach indicated (in September 2013 already) the return of El Niño in late 2014 with a 3-in-4 likelihood. We also discuss the relevance of the next El Niño to the question of global warming and the present hiatus in the global mean surface temperature.

    I see no reason not to email one of the authors (who include H. J. Schellnhuber, founding director of PIK) with any questions or criticisms before publishing.

  • 15.

    One can always issue a forecast that is great for the next 2-3 SPECIFIC units of time; this is possible even by flipping coins.

    When we say forecast, at least in the computing field, you run a BACKTEST of the forecast against historical data and issue a CONFIDENCE level or MEAN SQUARED error of some kind over a long period of time.

    For example, if I do a forecast for El Niño I go back over the past 40 years, test my algorithms on each year/month of the year, and see how accurate the algorithm was.

    Somehow I do not see this done by the authors of that paper.

    D
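    To illustrate the kind of backtest Dara describes, here is a minimal Python sketch using a deliberately naive "persistence" forecaster as a stand-in for a real algorithm. The point is the rolling-origin loop and the error summary, not the forecaster; any of the methods discussed in this thread could be dropped in its place.

    ```python
    import math

    def persistence_forecast(history, horizon):
        """Stand-in forecaster: predict that the last observed value persists."""
        return [history[-1]] * horizon

    def backtest(series, forecaster, horizon=5, start=24):
        """Roll the forecast origin through the historical series, forecasting
        `horizon` steps each time, and return the overall RMSE."""
        sq_errors = []
        for origin in range(start, len(series) - horizon):
            predicted = forecaster(series[:origin], horizon)
            actual = series[origin:origin + horizon]
            sq_errors.extend((p - a) ** 2 for p, a in zip(predicted, actual))
        return math.sqrt(sum(sq_errors) / len(sq_errors))

    # Toy monthly "index": a slow oscillation, standing in for a real index series.
    series = [math.sin(2 * math.pi * t / 48) for t in range(480)]
    print("persistence RMSE:", backtest(series, persistence_forecast))
    ```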

  • 16.
    edited July 2014

    It's some time since I read it, so I'll have to re-read the paper.

    PS: enclosing a term in stars, as in the source of *highlighted term*, does highlighting without the capitals.

  • 17.

    Let me give a real-life example to make my point. Generally speaking, stocks are moving upwards in the US stock markets, so I issue forecasts for their time-series and the results of the forecasts are terrific! And they are so not because the forecast algorithm is so great, but because the movement is quite predictable even by the naked eye looking at the price charts.

    Therefore, to avoid short-term forecasts which could be deceptive, forecasters are asked to run BACKTEST algorithms, i.e. run the forecast algorithms (in my case from 1997 to the present), issue a forecast on every unit of time, and see how far off it was from the actual past value.

    So I am planning to write some code to forecast some of the indices here; obviously I have data all the way from the 1950s or even the 1800s! So I will then BACKTEST my algorithm and issue the error analysis.

    Then John and the other researchers here can look at the results and see how good the algorithm was. One way or the other, new ideas spring up to make changes to improve or explain the results.

    Dara

  • 18.

    Hello Jim, I was not being critical of what you noted, or of this odd paper.

    We need to present a methodology for forecasting.

  • 19.

    No problem. Just a matter of perceived style.

    I agree, I'd expect any forecasting algorithm to be backtested. I don't know how Steve's concerns about sensitivity to parameter settings and different methods could be answered.

  • 20.

    Just a matter of perceived style.

    I have gone back to full-time coding and communicating with really sharp, fast guys, so please excuse my abrupt mannerism; it gets worse around 4 am GMT ;)

    I have to say this: I do not know what forecast these climatologists are boasting about (recall what the fellow told John). Please point me to actual atmospheric forecasts that are anything but guessing where the curve goes next, so I have an idea of prior art.

    Dara

  • 21.

    Please point me to actual atmospheric forecasts that are anything but guessing where the curve goes next, so I have an idea of prior art.

    Sorry I can't help; perhaps somebody else can.

    Best wishes

  • 22.

    I am thinking of coding several forecasters (SVR, NN and kNN) like this. For a few months:

    Forecast: TODAY —> {index1, index2, index3, index4, index5}

    or for a year:

    Forecast: TODAY —> {index1, index2, index3, index4, index5, ..., index12}

    TODAY = {month, year}, with month taken mod 12

    And then compare that to the past and issue an error analysis.

    Then we start adding new params, e.g. the equatorial average temperature of some number of nodes, or whatever:

    Forecast: {TODAY, param1, param2, ...} —> {index1, index2, index3, index4, index5}

    See if the forecast accuracy increases; by trial and error we examine a small set of candidate parameters.

    Dara
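    As a rough illustration of the setup Dara describes (lagged index values plus the calendar month as features, the next 5 months as targets), here is a minimal scikit-learn sketch with a kNN regressor on toy data. The sine wave stands in for the real Niño 3.4 series, and the lag length and neighbour count are arbitrary; SVR would need to be wrapped in MultiOutputRegressor to produce 5 outputs at once.

    ```python
    import numpy as np
    from sklearn.neighbors import KNeighborsRegressor

    # Toy monthly index series; replace with the real Niño 3.4 anomalies.
    t = np.arange(600)
    index = np.sin(2 * np.pi * t / 48)

    LAGS = 12      # how many past months go into the feature vector
    HORIZON = 5    # Forecast: TODAY --> {index1, ..., index5}

    # Each row: the previous LAGS months plus the calendar month ("month mod 12"),
    # predicting the next HORIZON months.
    X, Y = [], []
    for i in range(LAGS, len(index) - HORIZON):
        X.append(np.append(index[i - LAGS:i], i % 12))
        Y.append(index[i:i + HORIZON])
    X, Y = np.array(X), np.array(Y)

    # Train on the early part, test on the later part (a crude backtest split).
    split = int(0.8 * len(X))
    model = KNeighborsRegressor(n_neighbors=5).fit(X[:split], Y[:split])
    pred = model.predict(X[split:])
    print("kNN RMSE on held-out months:", np.sqrt(np.mean((pred - Y[split:]) ** 2)))
    ```

    Extra parameters (param1, param2, ...) would simply be appended to each feature row to see whether the held-out error drops.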

  • 23.

    Dara said

    I have to say this: I do not know what forecast these climatologists are boasting about (recall what the fellow told John). Please point me to actual atmospheric forecasts that are anything but guessing where the curve goes next, so I have an idea of prior art.

    I don't know enough to guide you, but the references here could be a good starting point: http://www.cpc.ncep.noaa.gov/products/precip/CWlink/MJO/enso.shtml#references

  • 24.

    Thanx Graham, I will upload some relevant references. So far I have not seen any backtesting of any of these related forecasts.

  • 25.

    I have a good example of backfitting and backtesting here: http://contextearth.com/2014/01/22/projection-training-intervals-for-csalt-model/

    This is what Dara is referring to -- using historical data as a means for testing models. Any part of the historical data time-series can be used as a training interval to test projections of other parts of the time series.

    With the CSALT model -- which provides a multivariate estimate of the average global temperature -- I need an estimate of the ENSO factor to be able to project the natural variability of temperature. That is actually what got me started on the ENSO El Niño kick. Skeptics argued that the CSALT model was not that good because it needed an accurate forecast of ENSO, but all I had was historical data. So I applied backfitting to demonstrate how well it could work on later intervals, absent the ability to know the future.

    Perhaps this is being too pedantic, but it is important to consider backtesting due to the lack of a controlled system to experiment with. In other words, use the available information in as many ways as you can creatively dream up.
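    As a cartoon of the training-interval idea (not the CSALT model itself), here is a minimal Python sketch: fit a simple stand-in model on one interval of a toy series, project it onto the other interval, and then swap the roles of the two intervals.

    ```python
    import numpy as np

    # Toy "temperature" series: a slow trend plus an oscillation.
    t = np.arange(1200)
    series = 0.002 * t + 0.3 * np.sin(2 * np.pi * t / 60)

    def project(train, test):
        """Fit a quadratic trend (a stand-in for a real model) on the training
        interval and report the RMSE of its projection over the test interval."""
        coeffs = np.polyfit(t[train], series[train], deg=2)
        projection = np.polyval(coeffs, t[test])
        return np.sqrt(np.mean((projection - series[test]) ** 2))

    first, second = slice(0, 600), slice(600, 1200)
    print("train early, project late:", project(first, second))
    print("train late, project early:", project(second, first))
    ```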

  • 26.

    We need backtesting for the new GPM and TRMM data from satellites. The forecasts will be daily if not hourly and they require serious examination. It happens that in some periods the forecast algorithm needs to be shut off due to high errors, and we could measure those errors with backtesting.

    Otherwise it will be all wild claims and conjectures and politics.

    Dara

  • 27.
    edited July 2014

    If there's going to be a contingency table analysis done in the paper, I think at least a Bayesian counterpart ought to be included or replace the analysis, such as section 14.1.10 of Kruschke (2011). I am happy to help with that, and do it. I'll need to read the article more carefully, and will grab the contingency table when it appears stable.

    Um, if Kruschke (2011) is not available, see http://stats.stackexchange.com/questions/90668/bayesian-analysis-of-contingency-tables-how-to-describe-effect-size (sorry, John, the "Help" for the markup below was giving a "404"), or, better still, Bill Press' talk at https://www.youtube.com/watch?v=bHK79WKOX-Y (sorry again, John). The only quibble, from Kruschke himself, is that Press still ends up with p-values.

    To clarify: the number of times an El Niño initiated or did not, and the number of times the Ludescher et al algorithm would indicate an alarm (arrows) or not, are not fixed by experiment but are, rather, random counts, hence random variables. Accordingly, these are draws from distributions presumably having means of the kind "x" and "N-x". Whether a Binomial is a good representation, or whether "x" is Poisson, matters for the details, but is beside the point. Another time, another sample, the margins might be quite different. A proper calculation of the probabilities of getting the particular counts that were observed should consider these uncertainties, and so should the question of whether a credible interval for each cell contains the observed count. Such a treatment regards the margins as nuisance parameters, as Press teaches.

    The Bayesian approach to such tables is the standard hierarchical model, using a Poisson model for the cell counts, where the cell means are exponentials (link functions, in GLM terms) of combinations of factors unique to each cell, which obey multiplicative independence within each cell, but not necessarily across cells. In other words, the model is a Poisson ANOVA.

    Accordingly, one good set of priors for these exponentiated factors are Normals. In Chapter 22, Kruschke recommends hyperpriors of folded-t densities for their precisions, but I've seen him yield to Gelman's recommendation of Gammas for these in another context. We'd need to experiment to see what works best (in terms of Gibbs convergence, for example). Kruschke also has R and JAGS code accompanying his text which goes along with this, and that's where I'd start.
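    This is not Kruschke's R/JAGS code, but a minimal PyMC sketch of the Poisson-ANOVA idea for a 2x2 table. The cell counts are placeholders, and the fixed Normal scales stand in for the folded-t or Gamma hyperpriors on the precisions mentioned above.

    ```python
    import numpy as np
    import pymc as pm

    # Placeholder 2x2 contingency table: rows = alarm / no alarm,
    # columns = El Niño initiated / did not. Replace with the real counts.
    counts = np.array([[9, 3],
                       [2, 36]])
    y = counts.flatten()
    row = np.repeat([0, 1], 2)   # row factor index for each cell
    col = np.tile([0, 1], 2)     # column factor index for each cell

    with pm.Model() as poisson_anova:
        # Normal priors on the log-scale effects (over-parameterized, as in
        # Kruschke's treatment, where effects are recentered after sampling).
        baseline = pm.Normal("baseline", mu=0.0, sigma=5.0)
        a = pm.Normal("row_effect", mu=0.0, sigma=2.0, shape=2)
        b = pm.Normal("col_effect", mu=0.0, sigma=2.0, shape=2)
        ab = pm.Normal("interaction", mu=0.0, sigma=2.0, shape=(2, 2))

        # Exponential link: each cell mean is exp(sum of its effects).
        lam = pm.math.exp(baseline + a[row] + b[col] + ab[row, col])
        pm.Poisson("cells", mu=lam, observed=y)

        trace = pm.sample(2000, tune=1000, target_accept=0.9)
    ```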

  • 28.

    I agree, Dara. A good example of a significant error is with temperature measurements during WWII. A warming bias was definitely introduced during the years from ~1940 to 1945, which becomes evident when trying to fit the entire series. http://contextearth.com/2013/11/16/csalt-and-sst-corrections/

    This image shows how the war resulted in significant patches in spatial coverage, particularly in the ENSO regions of the Pacific:

    [image: spatial coverage, http://img585.imageshack.us/img585/5273/y6w.gif]

    So during WWII, we have the problem of missing data and instrumental bias as military ships took over from commercial vessels in performing the SST measurements.

  • 29.

    Hi Jan,

    Ian Ross sent me Berliner et al on hierarchical Bayesian EOF analysis of El Niños:

    http://ro.uow.edu.au/cgi/viewcontent.cgi?article=9833&context=infopapers

    I'm trying to write a summary of Ian's thesis if you've got the mileage to comment.

    Cheers

  • 30.

    Jim,

    Not sure exactly what you are asking: commenting on your summary? On Ian's thesis? On Berliner, Wikle, Cressie? But I'm happy to help within the timeframe. Not sure how quickly I can turn around reading a thesis, though. Happy to read your summary and comment from what I know of Berliner et al, though.

    In a couple of weeks soon enough?
