Science

"The Nastiest Feud in Science"

Especially important for those who think social science is the runaway winner for bitter disagreements.

What's kinda surprising is the intense name-calling and ad hominem attacks. But I have a theory about that; it's probably not original, but I don't remember where I saw it. Suppose someone says something to you that is obviously wrong. Say someone tells you "2 + 2 = 19". Do you get angry? Do you want to verbally abuse the person? I think not. I think you explain, calmly and simply, why that's wrong. If the person repeats the statement, maybe you try again, calmly, to explain in a different way why that's wrong. If the person persists in claiming 2 + 2 = 19, you just shrug and walk away. The person is either trolling you or crazy, but you don't get angry.

Now suppose someone makes a statement that you think is clearly wrong but not obviously so. And you think that convincingly demonstrating the statement is wrong would take more time, energy, and care than you're willing to devote. (And you may suspect that even if you took the time and care, you would not be able to prove your case 100%.) That's what prompts the anger, even the furious name-calling.

And I think that's also what accounts for a lot of the fury in our current politics. 


"Statistical and Machine Learning forecasting methods: Concerns and ways forward"

From the abstract:

After comparing the post-sample accuracy of popular ML [machine learning] methods with that of eight traditional statistical ones, we found that the former are dominated across both accuracy measures used and for all forecasting horizons examined. Moreover, we observed that their computational requirements are considerably greater than those of statistical methods.

From the body:

A major innovation that has distinguished forecasting from other fields has been the good number of empirical studies aimed at both the academic community as well as the practitioners interested in utilizing the most accurate methods for their various applications and reducing cost or maximizing benefits by doing so. These studies contributed to establishing two major changes in the attitudes towards forecasting: First, it was established that methods or models, that best fitted available data, did not necessarily result in more accurate post sample predictions (a common belief until then). Second, the post-sample predictions of simple statistical methods were found to be at least as accurate as the sophisticated ones. This finding was furiously objected to by theoretical statisticians [76], who claimed that a simple method being a special case of e.g. ARIMA models, could not be more accurate than the ARIMA one, refusing to accept the empirical evidence proving the opposite.

A co-author of the paper is Spyros Makridakis, who currently has nearly 17K Google Scholar citations.
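The point about in-sample fit versus post-sample accuracy is easy to see for yourself. Here's a minimal sketch, not from the paper, using made-up simulated data: a flexible model that fits the history best can still lose badly out of sample to a simple drift forecast (roughly the kind of simple method the paper favors).

    import numpy as np

    rng = np.random.default_rng(0)

    # Simulated series: linear trend plus noise, 120 points.
    n = 120
    x = np.linspace(0.0, 1.0, n)  # scaled time, for numerical stability
    y = 60.0 * x + rng.normal(scale=5.0, size=n)

    train = 100
    x_tr, y_tr = x[:train], y[:train]
    x_te, y_te = x[train:], y[train:]

    def mse(pred, actual):
        return float(np.mean((pred - actual) ** 2))

    # "Sophisticated" model: degree-10 polynomial fit by least squares.
    # It hugs the training data more closely than the true trend warrants.
    coeffs = np.polyfit(x_tr, y_tr, deg=10)
    poly_in = np.polyval(coeffs, x_tr)
    poly_out = np.polyval(coeffs, x_te)

    # Simple method: random walk with drift -- last observation plus the
    # average historical step. (First in-sample "forecast" is just y_tr[0].)
    step = (y_tr[-1] - y_tr[0]) / (train - 1)
    simple_in = np.concatenate(([y_tr[0]], y_tr[:-1] + step))
    simple_out = y_tr[-1] + step * np.arange(1, n - train + 1)

    print("in-sample MSE:   poly %7.1f  drift %7.1f"
          % (mse(poly_in, y_tr), mse(simple_in, y_tr)))
    print("post-sample MSE: poly %7.1f  drift %7.1f"
          % (mse(poly_out, y_te), mse(simple_out, y_te)))
    # Typical result: the polynomial wins in-sample but loses, often badly,
    # on the 20 held-out points; the drift forecast barely degrades.

The names and numbers here (the degree-10 polynomial, the 100/20 split) are my own choices for illustration, but the mechanism is the one the paper describes: the model that best fits the available data is not necessarily the one that predicts best.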


"All Ye Need to Know"

If you ignore the ill-founded, gratuitous slap at economics, this is an interesting and rather disturbing piece about the "demarcation problem" aka "What should define science?"

(Fine, don't label economics a "science". It doesn't lose anything. And when you find a better discipline to understand how individuals and groups behave, call me. But I won't hold my breath.)