Friday, October 28, 2011

The problem with explanatory models

This critique of economic models may help explain why I am so strongly skeptical of TE(p)NSBMGDaGF as well as the various dating methodologies. As I have mentioned on several occasions before, scientists in many fields simply don't realize that the conceptual tools they utilize on a regular basis are significantly less reliable than they assume. The foundations upon which their "evidence" rests are, for the most part, much shakier than they understand. This is understandable, since very few biologists, geologists, or astronomers have any actual training in history, logic, statistics, probability, higher mathematics, or even spreadsheet use, but it doesn't excuse their stubborn and willful ignorance when errors, or even the likelihood of errors, stemming from these factors are pointed out to them.
Carter had initially used arbitrary parameters in his perfect model to generate perfect data, but now, in order to assess his model in a realistic way, he threw those parameters out and used standard calibration techniques to match his perfect model to his perfect data. It was supposed to be a formality--he assumed, reasonably, that the process would simply produce the same parameters that had been used to produce the data in the first place. But it didn't. It turned out that there were many different sets of parameters that seemed to fit the historical data. And that made sense, he realized--given a mathematical expression with many terms and parameters in it, and thus many different ways to add up to the same single result, you'd expect there to be different ways to tweak the parameters so that they can produce similar sets of data over some limited time period.

The problem, of course, is that while these different versions of the model might all match the historical data, they would in general generate different predictions going forward--and sure enough, his calibrated model produced terrible predictions compared to the "reality" originally generated by the perfect model. Calibration--a standard procedure used by all modelers in all fields, including finance--had rendered a perfect model seriously flawed. Though taken aback, he continued his study, and found that having even tiny flaws in the model or the historical data made the situation far worse. "As far as I can tell, you'd have exactly the same situation with any model that has to be calibrated," says Carter.
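Carter's finding is easy to reproduce in miniature. The sketch below is purely illustrative: the model, data, and parameter values are my own invention, not Carter's. It builds "perfect" data from a simple true process, then shows that an over-parameterized model admits multiple calibrations that fit the historical window equally well yet diverge the moment they are asked to predict beyond it:

```python
import math

# "Perfect" data generated from the true process y = 2*t,
# sampled only at integer times -- a limited historical window.
history = [(t, 2.0 * t) for t in range(5)]  # t = 0..4

# An over-parameterized candidate model: y = a*t + b*sin(pi*t).
# At every integer t, sin(pi*t) = 0, so the historical data
# places no constraint whatsoever on the parameter b.
def model(t, a, b):
    return a * t + b * math.sin(math.pi * t)

# Two calibrations that both match the history (essentially) exactly.
calibrations = [(2.0, 0.0), (2.0, 5.0)]

for a, b in calibrations:
    fit_error = sum(abs(model(t, a, b) - y) for t, y in history)
    forecast = model(4.5, a, b)  # predict half a step past the data
    print(f"a={a}, b={b}: fit error = {fit_error:.1e}, forecast = {forecast:.2f}")
# The true value at t = 4.5 is 9.0; the second calibration,
# indistinguishable on the history, forecasts 14.0.
```

The degeneracy here is deliberately crude, but the mechanism is the one Carter describes: many parameter sets add up to the same historical fit while encoding very different futures.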
I first realized the nature of the problem when a perfectly straightforward question about the average speed of the evolutionary process was not so much mocked as greeted with complete confusion. And yet, if a process has taken place more than once over time, logic requires that there must be both various measurable speeds and an average speed. It doesn't matter if the process measured is from mutated state to mutated state or from species to species; if the process is occurring, there must be an answer. It wasn't the lack of an answer that was the red flag, but rather the inability to understand that there absolutely had to be an answer even if the answer was unknown at the present time.
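To make the point concrete: if a process has occurred repeatedly, an average rate is mathematically defined even when the individual rates vary and even when nobody has measured them yet. The durations in this sketch are purely hypothetical placeholders, not real speciation data:

```python
# Hypothetical durations for three completed runs of some process
# (e.g. species-to-species transitions), in years. Placeholder values.
durations_years = [100_000, 150_000, 300_000]

# Each completed run has a measurable speed (events per year)...
rates = [1.0 / d for d in durations_years]

# ...and therefore an average speed exists by construction.
average_rate = sum(rates) / len(rates)
average_duration = 1.0 / average_rate  # harmonic mean of the durations

print(f"average rate: {average_rate:.2e} events/year")
print(f"average duration: {average_duration:,.0f} years")
```

Whether the inputs are known is a separate question from whether the average exists; the calculation goes through for any set of positive durations.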

The calibration problem that Carter is pointing out is tangentially related to the "backdating" problem I have previously noted. Economists and finance guys are keenly aware of the precarious nature of their models because they are forced to see them tested rigorously in real time. For example, the administration economists who estimated a 1.6 multiplier effect in 2008 already know they were wrong. (They may not find it politically feasible to openly admit this, but they definitely know it, which is why they're not proposing another stimulus package on the same basis.) And investment models blow up all the time, sometimes in a spectacular, system-threatening manner.

But that same sort of performance pressure simply doesn't exist in many of the various sciences that concern past events. This is why we can be confident, if not entirely certain, that in the absence of successful predictive models, those sciences have gotten it so substantially wrong that their core concepts will not survive the eventual corrections when they finally arrive.

To give another example, if evolution were a real science, biologists would be able to predict what the next species to evolve would be, as well as which population groups within a species were more evolved than the norm. They would be able to discern the connection between race and evolutionary development in humans. In fact, given the pressure that human activity is putting on various environments, we should be seeing more and more species evolving ever more rapidly in comparison with the more sedate pace of natural environmental change. But that does not appear to be the case.

And appeals to time don't wash either. Homo sapiens sapiens is supposed to have evolved to full modernity 50,000 years ago, a process that is said to have taken 150,000 years. But since there are 59,811 species of vertebrates, even if we assume that the complex human evolution is the norm, we should be seeing a new vertebrate species evolve once every 2.5 years (and a new mammal species every 27.3 years), even without the increased selection pressure of human habitat modification.
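The back-of-envelope arithmetic behind those figures can be checked in a few lines. The vertebrate count is the one given above; the mammal count of 5,494 is my assumption, back-solved from the 27.3-year figure rather than taken from the original text:

```python
# Figures as given in the post.
SPECIATION_TIME_YEARS = 150_000   # claimed duration of human speciation
VERTEBRATE_SPECIES = 59_811
MAMMAL_SPECIES = 5_494            # assumed; back-solved from the 27.3-year figure

# If every extant species took the same time to evolve, and those
# processes are spread uniformly through time, one should complete every:
years_per_new_vertebrate = SPECIATION_TIME_YEARS / VERTEBRATE_SPECIES
years_per_new_mammal = SPECIATION_TIME_YEARS / MAMMAL_SPECIES

print(round(years_per_new_vertebrate, 1))  # 2.5
print(round(years_per_new_mammal, 1))      # 27.3
```

Note the load-bearing assumption: the division only yields an expected observation rate if speciation events are staggered uniformly rather than clustered.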

Now, there have been a number of new mammalian species, mostly lemurs and monkeys, discovered since 2000. So perhaps there is some evidence for this process, if any of those species can be determined to be newly evolved rather than merely previously undiscovered. But the observable fact remains that evolutionary biologists, and scientists in many other fields, simply don't think in a systematic manner that would allow them to perceive the logical holes in their fundamental models.


