Hari,
RE: Once in a while we get something remarkably successful like
Nate Silver's election forecasts, which aggregate and weight various polls. They are so successful that we are lulled into thinking there is a rock-solid science of predicting how a population will vote in an election. This perception lasts until an election like 2016 comes along. Looking at the comments section of Silver's website on Nov 9, 2016, you could feel the anger, and the anger turned on the pollsters and statisticians: how could everyone get it wrong?
RESP: There are many pretenders out there. No one should assume that qualified statisticians were involved in these polls and forecasts. Better research tends to state qualifications; that is part of the information. These polls seldom mention qualifications.
We know how to build rock-solid polls when needed. What we do not know how to do is convince every decision maker to spend the money to hire the right people and do it right. I doubt that many of these polls would survive statistical scrutiny; they are not designed for that. That costs money, and economically these polls are just for propaganda (creating a bandwagon effect) and entertainment. I have seen obvious manipulation of the results. Furthermore, you can read about all the statistical mistakes made by nonstatisticians in early polls. You should blame the decision makers when they do not follow Best Statistical Practice.
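For readers curious what "aggregate and weight" means in practice, here is a minimal sketch of one textbook approach: inverse-variance weighting, where each poll's estimate is weighted by the precision implied by its sample size. This is my own illustration with made-up numbers, not Silver's actual model, which also adjusts for house effects, recency, and pollster ratings.

```python
import math

def aggregate_polls(polls):
    """polls: list of (share_for_candidate, sample_size) tuples.
    Returns the precision-weighted average share and its standard error."""
    weights = []
    for share, n in polls:
        var = share * (1 - share) / n       # binomial sampling variance of one poll
        weights.append(1.0 / var)           # weight = inverse variance (precision)
    total_w = sum(weights)
    avg = sum(w * share for w, (share, _n) in zip(weights, polls)) / total_w
    se = math.sqrt(1.0 / total_w)           # standard error of the combined estimate
    return avg, se

# Hypothetical polls: (candidate share, sample size)
polls = [(0.52, 900), (0.49, 1200), (0.51, 600)]
avg, se = aggregate_polls(polls)
print(round(avg, 3), round(se, 4))
```

Note that the standard error above reflects only sampling variance; it says nothing about the nonresponse bias, frame errors, and herding that a qualified statistician would also have to address.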
The ASA introduced the PSTAT (Accredited Professional Statistician) credential so that knowledgeable consumers can better discern qualifications. I wrote three chapters addressing these problems in the field; there is a lot to say.
Today, my articles/blogs have titles like, 'The Coming Flood Of Statistical Malfeasance.'
------------------------------
Randy Bartlett
Big Data Scientist
Blue Sigma Analytics
Phoenixville PA
------------------------------
Original Message:
Sent: 02-14-2017 15:15
From: Hari Balasubramanian
Subject: All models are wrong, some are useful -- an essay
Dear INFORMS members:
I wanted to share a recent essay I wrote for a science/humanities website called 3 Quarks Daily (readers of this website are typically not aware of operations research).
http://www.3quarksdaily.com/3quarksdaily/2017/02/all-models-are-wrong-some-are-useful.html [link]
The piece tries to distinguish between mathematical models as they apply to the physical and social sciences. Most of what I write here is obvious to an INFORMS member; who hasn't heard of George Box's famous quote that all models are wrong? My point, however, is that despite knowing this, we can easily become overconfident in our models (perhaps we are drawn to their mathematical elegance even when that elegance may not have much to do with reality) and in assuming that we know more than we actually do. This is particularly true since most of what we do applies to social systems, where the quirks of human behavior are not easily accounted for.
Comments are welcome!
Hari Balasubramanian
people.umass.edu/hbalasub
------------------------------
Hari Balasubramanian
Associate Professor
Univ of Massachusetts- Amherst
Amherst MA
------------------------------