Popular works cited in the talk:
Several people have suggested podcasts to follow, too. Some especially great (and relevant) episodes are listed here:
Ideas (CBC), Vestigial Tale. Part 1 describes the evolutionary advantage of developing storytelling – for both fiction and nonfiction. It is nicely summarized by one statement within: “Stories make the world make sense.” Part 2 is surprisingly relevant, especially the parts about what constitutes a good story (model?). The idea of Proppian story generators is offered (this link will automatically construct a story to your liking!), but then criticized on the grounds that automated stories miss the point of what makes a great story great.
Planet Money, of course, covers many of the topics in my talk. But this episode, in particular, weaves together the power of storytelling in the decision-making process, the perception of risk, and hydrology!
You Are Not So Smart is a fantastic podcast suggested to me by Elizabeth Lewis. Wonky, but very interesting. Episode 14 examines the power of narrative for setting context and changing our ideas. In episode 81, they interview Per Espen Stoknes, who draws on many of the same themes that I do (nudge, storytelling …), but from an economics and science-advocacy perspective. Episode 90 presents an interview with Don Hoffman (see Spark, below). Again, these ideas shake the very foundation of what we are doing as scientists – if our view of reality is not based on reality, then what is the purpose of proposing alternative theories to test? On the other hand, they could be seen as placing the correct emphasis on understanding WHY people need to improve their understanding of the world, rather than relying on an objective, ‘higher’ scientific purpose.
Spark is another interesting Canadian podcast. Episode 330, which aired on Oct 7, had a fascinating story about Don Hoffman, a UCI cognitive scientist. Through evolutionary modeling, they showed that an organism that sees reality clearly (has perfect sensors) is never able to out-compete one that is attuned to the rules of fitness (knows the utility functions). I would like to follow up to see if this has implications for science-facilitated decision making! Specifically, he notes that evolution develops tricks and hacks that work in a given niche – but, when that niche is disturbed, these tricks will fail. This has implications for the fallacy of building one model to represent a system. Furthermore, the mind-body problem (the hard problem of consciousness) relates to the idea that our perceptions are known to be imperfect, leading to model-based interpretations of the world even at the organism level! Another episode (#333, Oct 30, 2016) included an interview with technology historian Patrick McCray, who wrote an essay in Aeon called ‘It’s Not All Lightbulbs’. He mentions two ideas that could be useful for building multiple models. The first is to do a ‘pre-mortem’ on a model: once you have the basic model defined, get together and imagine, from the vantage point of a year from now, how it will have failed. The other is to designate a devil’s advocate on any project.
Darcy Lecture files and links:
If you missed my talk or you want to revisit it, download this file. It includes English and Spanish notes in the PPT file – but they are out of sync with some of the slides! (The translation was provided by Montgomery & Associates, Chile, for a previous version of the talk.)
Papers cited in the talk:
Report by the USGS, Wisconsin office, describing an investigation of a proposed mine near a Native community.
A personally influential paper on using data/models for decisions by James and Gorelick.
A paper describing the value of collecting data, a nice example of Keith Halford’s style.
A must-read by John Doherty on data, models, and decision making.
Colin Kikuchi’s paper defining discrimination-inference as part of DIRECT. It also provides an example of choosing multiple new observations simultaneously.
A classic paper on the model conceptualization problem – surprise!
Cliff Voss has several papers (1 2 3) that advocate for simpler models. I’m a believer – I don’t think that we can afford to explore model conceptual and structural errors adequately unless we agree to accept less complex (and more defensible) model representations.
This is a classic paper by Brown about Darcy. It discusses work throughout his life and is the specific source for my discussion of Darcy’s consideration of wells as the source of water for Dijon.
My Dean recommended this New Yorker article. I think that it should be required reading for every scientist. Really, it is that good … and that short.
Multiple models in action!
Paul Hsieh pointed me to this. It documents how the US government’s expert witness used Monte Carlo analyses (successfully!) in the penalty phase of the BP oil spill case! The analysis is more of a sensitivity study, but it shows that a more convincing case can be made by assessing multiple explanations of the data. (The first 8 pages are a good summary.)
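A minimal sketch of the kind of Monte Carlo sensitivity analysis described above. To be clear, the toy model, parameter names, and ranges below are all hypothetical illustrations; none of them come from the actual court filing.

```python
import random

random.seed(1)

def discharged_volume(flow_rate, duration, capture_fraction):
    """Hypothetical toy model: net discharged volume (barrels)."""
    return flow_rate * duration * (1.0 - capture_fraction)

# Sample each uncertain input from an assumed plausible range, run the
# model for every sample, and examine the spread of outcomes.
results = []
for _ in range(10_000):
    q = random.uniform(25_000, 60_000)   # barrels/day (assumed range)
    t = random.uniform(80, 90)           # days (assumed range)
    c = random.uniform(0.10, 0.25)       # fraction captured (assumed range)
    results.append(discharged_volume(q, t, c))

# Report the central 90% of simulated outcomes rather than one "answer".
results.sort()
low, high = results[500], results[9500]
print(f"90% of simulated outcomes fall between {low:,.0f} and {high:,.0f} barrels")
```

The point, as in the BP analysis, is that reporting a range supported by many parameter combinations is more defensible than defending a single deterministic run.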
Colleagues from Waterloo published a fantastic paper in 2013 that discusses ways to think about dealing with capture zone uncertainty. Why aren’t we funding more research like this!?
Several people have mentioned work by RAND on Robust Decision Making (RDM). This is clearly related to DIRECT. My read is that DIRECT speaks to how we, as hydrologists, can provide useful uncertainty estimates that can inform RDM and that RDM can help to inform important areas for scientific investigation through DIRECT. This book chapter describes the use of RDM by the State of California for water resources; this one gives some more detail about the RDM process.
This is an interesting technical memo by the USGS. It states, as the first point in describing the recommended approach to modeling, that the purpose of the model should be defined. Many feel that the objective of modeling, especially for the USGS, should be to build general-purpose models for many uses. This contradicts that idea!
Al Freeze’s four-paper series is really the foundation of all of the more recent work in decision support using models. I especially liked paper 1, a foundational piece that describes all elements of the design process under uncertainty; it also includes the clearest and most succinct description of classical vs. Bayesian approaches. A must-read. My talk is a marriage of this work and Poeter’s Darcy Lecture from 2006. The series also describes the use of unconstrained models (no data, just exploring the range of possibilities) to define the initial prior for Bayesian analysis, and it similarly discusses the use of geophysics or other soft data to inform that prior. I also liked paper 4, which treats the concept of data worth. It is limited to decision-tree approaches and largely focuses on revising the probabilities of critical failure paths, but it clearly demonstrates how physical and decision models can be combined to identify data that minimize regret!
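The unconstrained-model idea above can be illustrated with a likelihood-weighting sketch: sample a parameter broadly with no data (the prior), then weight each sample by its fit to an observation. The forward model, parameter range, and “observation” here are invented for illustration; this is my gloss on the approach, not code from the papers.

```python
import math
import random

random.seed(0)

def head_at_well(conductivity):
    """Hypothetical forward model: predicted head (m) for a given K (m/d)."""
    return 100.0 - 2.0 * math.log10(conductivity)

# 1. Unconstrained stage: sample K broadly with no data at all, just to
#    explore the range of possibilities. This ensemble is the initial prior.
prior_samples = [10 ** random.uniform(-2, 2) for _ in range(5000)]

# 2. Constrain with data: weight each sample by how well it matches one
#    (hypothetical) observed head of 98.5 m with 0.5 m measurement noise.
obs, sigma = 98.5, 0.5
weights = [math.exp(-0.5 * ((head_at_well(k) - obs) / sigma) ** 2)
           for k in prior_samples]

# 3. The weighted ensemble approximates the posterior; e.g. a posterior mean:
post_mean = sum(w * k for w, k in zip(weights, prior_samples)) / sum(weights)
print(f"posterior mean K ~ {post_mean:.2f} m/d")
```

Soft data (e.g. geophysics) would enter the same way: as an extra weighting step applied to the unconstrained ensemble before the hard data are used.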
At the risk of self-promotion … this is an MSc thesis by a student of mine, Stephen Hundt. I think it is an excellent example of how multiple models can be used not only to define uncertainty, but also to plan under uncertainty.
I haven’t addressed the idea of severe testing of models. But, in some ways, I think that it is very consistent with the idea of developing advocate models for stakeholders’ concerns. This is a great short article about the concept.
Recently, Hugo Delottier and colleagues published a very accessible paper that I would recommend for anyone who is planning to have a discussion with clients about why they should consider uncertainty. They make a case for relying on linear uncertainty estimates as good proxies for full uncertainty analysis. I think that this is a really good way to begin a conversation that then leads to the need to consider model conceptual uncertainty and the opportunity to use models to identify discriminatory data. Very nice!
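The linear-proxy idea can be illustrated with a tiny first-order (FOSM-style) error propagation sketch: the prediction variance is approximated as J C Jᵀ, where J holds the prediction's sensitivities to the parameters and C is the parameter covariance. All numbers below are made up for illustration and are not from the paper.

```python
import math

# Sensitivities of a predicted head to two parameters (e.g. dh/dK, dh/dR).
# Illustrative values only.
J = [-3.0, 1.5]

# Parameter covariance matrix (assumed uncorrelated here):
# variances of the two parameters on the diagonal.
C = [[0.04, 0.00],
     [0.00, 0.25]]

# First-order prediction variance: var(pred) = sum_i sum_j J[i]*C[i][j]*J[j]
var_pred = sum(J[i] * C[i][j] * J[j]
               for i in range(2) for j in range(2))
print(f"linear prediction std. dev. ~ {math.sqrt(var_pred):.2f} m")
```

Because it needs only sensitivities and a covariance, this estimate is cheap, which is exactly why it works as an entry point to the conversation before moving on to full (and conceptual) uncertainty analysis.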
Kyle Stephenson, in Kingston, ON, pointed me to this fantastic resource for communicating scientific uncertainty.
People to read: