ALL BLOG POSTS AND COMMENTS COPYRIGHT (C) 2003-2016 VOX DAY. ALL RIGHTS RESERVED. REPRODUCTION WITHOUT WRITTEN PERMISSION IS EXPRESSLY PROHIBITED.

Tuesday, July 05, 2016

The bonfire of science

In which it is once more demonstrated that scientific evidence is VASTLY less reliable than other types of evidence, because, in most cases, no one ever bothers to actually check the results:
A whole pile of “this is how your brain looks like” MRI-based science has been invalidated because someone finally got around to checking the data.

The problem is simple: to get from a high-resolution magnetic resonance imaging scan of the brain to a scientific conclusion, the brain is divided into tiny “voxels”. Software, rather than humans, then scans the voxels looking for clusters.

When you see a claim that “scientists know when you're about to move an arm: these images prove it”, they're interpreting what they're told by the statistical software.

Now, boffins from Sweden and the UK have cast doubt on the quality of the science, because of problems with the statistical software: it produces way too many false positives.

In this paper at PNAS, they write: “the most common software packages for fMRI analysis (SPM, FSL, AFNI) can result in false-positive rates of up to 70%. These results question the validity of some 40,000 fMRI studies and may have a large impact on the interpretation of neuroimaging results.”

For example, a bug that's been sitting in a package called 3dClustSim for 15 years, fixed in May 2015, produced bad results (3dClustSim is part of the AFNI suite; the others are SPM and FSL).

That's not a gentle nudge that some results might be overstated: it's more like making a bonfire of thousands of scientific papers.
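The arithmetic behind those false-positive rates is easy to reproduce. Here is a toy simulation (my own sketch, not the PNAS team's method or the actual 3dClustSim code): run a batch of independent null-hypothesis tests at p < 0.05 with no correction and count how often at least one comes up "significant".

```python
import random

random.seed(42)

def familywise_fp_rate(n_tests, alpha=0.05, n_trials=2000):
    """Fraction of simulated 'studies' that report at least one
    (false) positive when n_tests independent null hypotheses are
    each tested at threshold alpha with no correction."""
    hits = 0
    for _ in range(n_trials):
        # Under the null hypothesis, p-values are uniform on [0, 1].
        if any(random.random() < alpha for _ in range(n_tests)):
            hits += 1
    return hits / n_trials

# One test: about 5% of null studies claim a finding.
print(familywise_fp_rate(1))    # ≈ 0.05
# Twenty-five uncorrected comparisons: most null studies do.
print(familywise_fp_rate(25))   # ≈ 0.72
```

With 25 uncorrected comparisons the familywise false-positive rate is already about 1 - 0.95^25 ≈ 72%, the same ballpark as the figure the boffins report.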
It is not even remotely reasonable to take scientific evidence at face value anymore, much less the pseudoscience so often being substituted for the results produced by genuine, if often flawed, scientody.

It is not even remotely surprising that the flaw the scientists failed to pick up in this situation was statistical, for as I've previously observed, most scientists have very little training in math or statistics, and despite their habit of regularly citing statistics, most of them are more or less statistically illiterate.

Never forget that while there are certainly some brilliant scientists, most of them are literal midwits as there are relatively few credentialed scientists with IQs over 132. A study of all the U.S. PhD recipients in 1958 reported an average IQ of 123; the Flynn effect notwithstanding, it is highly unlikely that the average IQ of today's increasingly diverse and vibrant PhD recipients has risen since then.


102 Comments:

Blogger Shimshon July 05, 2016 4:59 AM  

The problem is their use of voxels. Had they used Voxlets (mini replicas of the Supreme Dark Lord himself) the effect would have been so profound that no one would ever have discovered the problem.

Blogger Revelation Means Hope July 05, 2016 5:12 AM  

A pet peeve of mine is how they routinely discount all findings where the p doesn't meet that magical cutoff of <0.05, as if a 0.08 isn't a strong indication that something is going on that you can be 92% certain of.... as opposed to the 95% certainty that is somehow a MUCH better cutoff point.

But then, I've seen and demonstrated myself how simple it is to manipulate the stats if you want to artificially create a certain result, provided your raw numbers give you a little room to play around.
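How little room is needed can be sketched in a few lines of Python: test enough arbitrary "subgroups" of pure noise and a significant result eventually appears. This is a toy z-test on made-up data, not any particular study's method.

```python
import math
import random

random.seed(1)

def z_test_p(a, b):
    """Two-sided p-value (normal approximation) for the difference
    in means of two equal-sized samples of unit-variance noise."""
    n = len(a)
    diff = sum(a) / n - sum(b) / n
    se = math.sqrt(2.0 / n)               # std. error of the difference
    return math.erfc(abs(diff / se) / math.sqrt(2))

noise = lambda n: [random.gauss(0, 1) for _ in range(n)]

# Keep slicing pure noise into fresh 'subgroups' until one
# comparison drifts under the magic 0.05 line.
result = None
for attempt in range(1, 201):
    p = z_test_p(noise(30), noise(30))
    if p < 0.05:
        result = (attempt, round(p, 4))
        break

print(result)  # a publishable 'effect' conjured from nothing
```

Since each null comparison crosses the line about one time in twenty, a determined analyst rarely has to wait long.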

Blogger JACIII July 05, 2016 5:21 AM  

The lower IQ just means it is easier to catch them screwing shit up or -gasp!- scamming.

Blogger szopen July 05, 2016 5:58 AM  

Average IQ for PhD recipients differs by field. I have data for the estimated IQ of PhD students (in the USA): on average 115 for sociology, as low as 106 for a "public administration" PhD (what? you can get a PhD in "public administration"?), and 130 for physics. Computer science is 128.5 (which is why I estimate my own IQ to be in the range 125-140).

Biology is 121.5, according to this data.

The ranges for this estimation are confirmed by independent small-scale studies, which could be also easily found on google.

Blogger Lovekraft July 05, 2016 6:03 AM  

In this age of identity- and agenda-driven politics, it is natural that science would also suffer. Until this abomination is reduced to tatters, we will see more and more mind-numbing declarations.

Those here on the fringe are not invested in perpetuating these lies, so it will remain a source of truth, for those interested in it.

Blogger szopen July 05, 2016 6:20 AM  

Ah, one more thing: the IQ of science journalists is IMO lower than that of scientists, and journalists are also more biased. While googling for data on PhD students' IQ, I ran into an ideal example:

The conclusion of the study:

Genes within sexual expression profiles may underlie important functional differences between the sexes, with possible importance during primate evolution.

How it was reported:

Lead author Björn Reinius notes that the study does not determine whether these differences in gene expression are in any way functionally significant. Such questions remain to be answered by future studies.

Another thing is a lot of people tend to read only the abstracts, and abstracts sometimes are designed to sell the paper, so they are more ambiguous/bold than actual results (with recent perfect example of paper posted as "atheists don't trust other atheists proof", where abstract ambiguously stated "effect is repeated also in more secular and liberal setting", while after reading the paper it's clear that actually distrust of atheists was driven by religious beliefs and nothing was actually said about whether atheists trust or distrust other atheists).

Lesson: don't trust study results unless you read it yourself.

Blogger Alexandros July 05, 2016 6:29 AM  

At least they bothered to actually run the experiment. When I was in a biology lab, interns would make data up out of thin air so they didn't have to do the work; classic millennial action when confronted with work: lie and cheat.

I'd be surprised if a lot of these experiments were even performed.

Blogger Caladan July 05, 2016 6:41 AM  

And here was I thinking that scientists were some of the smartest people around.

Another day, another illusion shattered.

Blogger Nate July 05, 2016 7:05 AM  

Anyone who has used a gamma knife or a Brainlab knew that all of these claims were BS.

Anonymous Moonbear July 05, 2016 7:16 AM  

@8 Caladan
That probably was the case in the not so distant past.

Blogger weka July 05, 2016 7:21 AM  

Killed a grant application today after spending most of the day and last night looking at the data myself and realizing the basic models we had used were not robust enough for the planned step and we needed more testing.

You have to get your hands dirty in the data. Including when supervising graduates. Because it is very easy to use the wrong statistical methods, and publish the wrong results.

Blogger weka July 05, 2016 7:24 AM  

Another thing is a lot of people tend to read only the abstracts, and abstracts sometimes are designed to sell the paper, so they are more ambiguous/bold than actual results (with recent perfect example of paper posted as "atheists don't trust other atheists proof", where abstract ambiguously stated "effect is repeated also in more secular and liberal setting", while after reading the paper it's clear that actually distrust of atheists was driven by religious beliefs and nothing was actually said about whether atheists trust or distrust other atheists).

Lesson: don't trust study results unless you read it yourself.


You or your institution have to pay up to $40 US to get at that text. You will not be able to get at the data -- Researchgate and trials registries are making this better, as have some lawsuits.

The academic publishing model assumes that you pay for the journal that is written and edited by academics.

Anonymous Dave Gerrold's 6184th Cabana Boy July 05, 2016 7:26 AM  

I always suspected that the fMRI studies were bogus. The actual signal-to-noise ratios in normal MRI are very low to start with. They don't get any better with fMRI sequences. The voxel sizes aren't as small as people think either.

The whole idea of determining higher-level functions in the brain through measuring blood flow/oxygen is akin to diagnosing a running car motor by looking at its infrared signature. There are definite limits to what you can see, particularly when your spatial resolution is as limited as an fMRI scan's.

Blogger VD July 05, 2016 7:30 AM  

(with recent perfect example of paper posted as "atheists don't trust other atheists proof", where abstract ambiguously stated "effect is repeated also in more secular and liberal setting", while after reading the paper it's clear that actually distrust of atheists was driven by religious beliefs and nothing was actually said about whether atheists trust or distrust other atheists).

You're wrong and you're just flat-out lying again. "It wasn’t just the highly religious participants who expressed a distrust of atheists. People identifying themselves as having no religious affiliation held similar opinions."

There wasn't just one paper, there were SIX to which the 2012 Scientific American article, "In Atheists We Distrust" was referring. And neither the article nor the six studies were the basis of my statement, which was based on polls cited in my 2008 book, The Irrational Atheist.

Constantly shading the truth in this manner doesn't make atheists look any more trustworthy, Szopen. Quite the opposite.

Anonymous Shut up rabbit July 05, 2016 7:33 AM  

I see government-funded research as a way of keeping potentially intelligent people from ever doing anything useful with their lives and removing them from questioning the way society is run, by bribing them with an easy life. Consequently it also attracts those who aren't even midwits.

If all those inquisitive people spent their time wondering why the world was being deliberately messed up rather than spending 30+ years examining conformational changes of one specific protein of several hundred involved in a metabolic pathway that might be involved in cancer in an environment which is basically school for adults, think of the real improvements that would be made.

There are no clever people in science experimenting to make the world a better place, there are brown-nosers like any other business who kiss the boss' arse and do whatever the grant awarders demand until the boss retires and they take over.

[Disclaimer: former academic w. PhD who met some of the dumbest, most obsequious and unimaginative people on the planet while doing research]

Blogger VD July 05, 2016 7:37 AM  

For crying out loud, Szopen, this is hardly a new concept either.

“I want my lawyer, my tailor, my servants, even my wife to believe in God, because it means that I shall be cheated and robbed and cuckolded less often."
- Voltaire

Blogger szopen July 05, 2016 7:38 AM  

@12 weka
Well, in my case: my uni pays for access, which is enough for 80% of the papers I ever need to read (including in fields far, far removed from mine). For half of the remaining 20% it's easy to find copies via Google: drafts or full copies on authors' sites. For the last 10% you can contact the authors, and usually they will provide the paper (in my experience, I can remember only two cases where I was either refused or got no answer to my polite request for a paper - out of a few dozen).

Note, it's about the paper itself. Full agreement about the data, it's much harder to get it. Sometimes, because it's not freely available (sometimes I got answer "we cannot release the data, because we received it from a commercial company").

Anonymous Be Not Afraid July 05, 2016 7:41 AM  

That fMRI news is gonna sting a bit. In general, scientists are not well trained in statistics. They trust a package or whatever for the analysis, and if the package is buggy or if it isn't actually suitable, the researcher can't tell. And this assumes the folks are well-meaning. Climate "scientists" are well known for playing games with statistics. Ignorance+arrogance+bias= bad news.

Anonymous Faceless July 05, 2016 7:51 AM  

@4

These fields that look like they should not have PhDs - the dissertations are often history papers or good demonstrations.

Examples:

I know a guy who spent 8 years writing one 20,000 line function. He documented the function. He got a PhD. Nobody looked at the code to see if what was in the documentation was actually implemented.

Math Ed vs. Pure Math: Most Math Ed programs will offer a PhD (if they are not lying, they'll actually confer an Ed D, but they will present it as a PhD) for a history paper. I remember somebody I was in a master's program with - one year ahead, she decided she wanted to be called "Dr", so, after touring Europe and smoking a lot of pot, she switched to Math Ed and wrote some paper on the history of mathematicians using drugs.

Blogger Artisanal Toad July 05, 2016 7:53 AM  

@6 Lesson: don't trust study results unless you read it yourself.

Depending on the study, it isn't enough to read the report, you have to take a hard look at the study protocol and understand the methodology. I can think of more than a few study protocols that were written to ensure certain results did not appear because previous testing had indicated such results were possible. This can sometimes be impossible to spot without specific information available only to the people who wrote the protocol.


Follow the money is usually the best advice when evaluating the results of any particular study. Find out who paid for it and how the results of the study will impact their bottom line and view the protocol with a very jaundiced eye. Studies done with grant money observe this rule quite openly, in that a researcher can accidentally produce the wrong (politically incorrect) results and find themselves blacklisted and then later discredited when further studies are commissioned to specifically disprove their findings.

Blogger exfarmkid July 05, 2016 8:04 AM  

I was a post-doc when fMRI first arrived in the research world. In general it seemed that we hardware nut-jobs were pretty skeptical while the medical and bio-chem users were all "oooohhhh - pretty".

Think about it: You start with a method (MRI) that has intrinsically horrible SNR. I've been out of the MRI world for over 20 years, but I recall that one then takes two 3D maps of blood flow under different stimuli, looking for (possibly) fractional-percentage signal differences (which are quite likely smaller than the noise figure).

Not necessarily BS, and there have been many hardware improvements since the early '90s, but every mother-loving axiom and assumption should be on the table when a publication comes out.

Blogger Shimshon July 05, 2016 8:07 AM  

I worked in an astrophysics lab a long time ago developing software for the test harness. That was some pretty cool stuff. Since physics also uses a lot of this type of software (think of the ginormous quantities of data produced by the LHC as just one example), I wonder how susceptible the field is to this phenomenon too?

Blogger sykes.1 July 05, 2016 8:08 AM  

Raymond Tallis' "Aping Mankind: Neuromania, Darwinitis and the Misrepresentation of Humanity" is an excellent and philosophically sophisticated critique of the whole business:

https://www.amazon.com/Aping-Mankind-Neuromania-Darwinitis-Misrepresentation/dp/1844652734

He is, himself, a neuroscientist.

Blogger szopen July 05, 2016 8:10 AM  

@14 VD
This will be long, because "you are lying" is one particular accusation which I really take to heart.

I am _not_ lying. I am _not_ saying and I _was not_ saying there was no proof that "atheists do not trust the atheists". You may go back to the comment thread and you will see that I have written (in my poor English, errors not corrected): " Maybe you had something other paper in mind." and later "Either VD had in mind other study, or he read just the abstract (which is so ambiguous that it may indeed suggest that), or he had not paid enough attention during reading. Or he based his assertion on his personal experiences with atheists."

All I was saying, and am saying, is that _the particular paper_ which got linked in the comment section (a SINGLE paper which contained SIX studies) did not provide proof that atheists distrust other atheists. Or disprove it. All six studies from this single paper provided no clue about atheists' behaviour, i.e. were not relevant to the particular statement.

I am referring to this paper (because this paper was linked in the comments section):

http://www.ncbi.nlm.nih.gov/pubmed/22059841

The results of this paper were described here:

http://www.scientificamerican.com/article/in-atheists-we-distrust/

This particular site also referred to several other papers, but they also do not seem relevant to the question:

"Like a camera in the sky? Thinking about God increases public self-awareness and socially desirable responding" is not relevant to the question at all

"Atheists As “Other”: Moral Boundaries and Cultural Membership in American Society" also, in my understanding, discusses attitudes to atheists in a general society, and I see there no data which would discuss attitudes of atheists to atheists.

Now, as for the particular claim:

"People identifying themselves as having no religious affiliation held similar opinions" were NOT atheists.

In the paper they were listed as a separate category from atheists and agnostics, and "atheist distrust was significantly positively associated with the degree to which these participants rated God as important in their lives". Note that the mean for "God as important in my life" in that group was 4.34 on a 1-10 scale (with an SD of 3+).

To repeat my analogy, it's like running a poll on a diverse group of readers, of whom 10% are alt-righters, 20% conservatives, 50% liberals and the rest SJWs. The study would, hypothetically, find that alt-righters were the most distrusted group, that the more "progressive" the view one held the greater the distrust, and that even amongst conservatives alt-righters would be the most distrusted (but again, the more "progressive" the conservative, the higher his distrust of alt-righters).

Could one use that study to prove that alt-righters distrust alt-righters, because, after all, conservatives are somehow similar to alt-righters?

Blogger Blunt Force July 05, 2016 8:21 AM  

low IQ + low integrity + cash incentives = guaranteed results

http://dailycaller.com/2016/06/23/federal-lab-forced-to-close-after-disturbing-data-manipulation/


Anonymous FrankNorman July 05, 2016 8:26 AM  

Let's not forget that Climate "Scientist" who when asked to provide the raw data, responded:
"Why should I let you have the data when you just want to find fault with it?"

Blogger VD July 05, 2016 8:30 AM  

"People identifying themselves as having no religious affiliation held similar opinions" were NOT atheists.

Szopen, do you really think we are not familiar with what you're doing here? I called it "the Atheist dance" back in 2008.

Atheists call people with no religious affiliation "atheists" when it suits them, such as when they're trying to make the case for the inevitable end of Christianity. Then they turn around and claim they are not "atheists" when it doesn't suit them, such as when they are trying to claim atheists are smarter, better-educated, less criminal, or more trustworthy.

Every poll I have ever seen taken on the subject reliably shows that atheists are the least trusted, least popular group. This has been a well-known concept for hundreds of years; there is a reason that there is a term "the village atheist".

It is not just religious people who don't trust atheists. Even atheists don't trust them, and rightly so.

Could one use that study to prove that alt-righters distrust alt-righters, because, after all, conservatives are somehow similar to alt-righters?

To conclusively prove it? No. Is it evidence? Yes. Which, if you recall, was the full extent of my original statement to the guy who claimed there was no evidence at all.

Blogger Human Animal July 05, 2016 8:33 AM  

Scientists* don't want to know if they're wrong, or right, because it doesn't help them and most people don't care, certainly not enough to check, especially Scientists*. And most people don't have the time.

Especially Scientists*.

Clearly we're going to need a financial support program for Science-Americans so they can breathe easier and double-check their work. Start with a (semi)Universal Basic Income for Creativity Scientists, Post-Paleo Dieticians, Arboreal Ambassadors and the Society for the Preservation of Fictional Languages.

Okay. I'm kidding. We'll give two grand a month to everyone who didn't major in English or Economics.

Blogger VD July 05, 2016 8:34 AM  

I am _not_ lying. I am _not_ saying and I _was not_ saying there was no proof that "atheists do not trust the atheists".

All right, I'll take your word for it. But keep in mind that you were responding to a discussion in which an atheist was claiming that there was absolutely no evidence that atheists trust atheists less than other people, and moreover, that my refusal to provide him with any citations was proof that I had completely invented the idea myself.

Merely the title of the Scientific American article alone was sufficient to blow apart his false claims and his faulty logic.

Blogger Human Animal July 05, 2016 8:38 AM  

Let's not forget that Climate "Scientist" who when asked to provide the raw data, responded:
"Why should I let you have the data when you just want to find fault with it?"


If climate change isn't real, how do you explain ISIS?

(Bonus points: Don't say Current Year.)

Blogger szopen July 05, 2016 8:56 AM  

@27 VD
Atheists call people with no religious affiliation "atheists" when it suits them, such as when they're trying to make the case for the inevitable end of Christianity

But I am not "other people" and I do not think Christianity will end. In fact, quite the opposite - I am of the opinion that (without genetic engineering, which I find quite unlikely for a lot of reasons) the future will very likely _eventually_ be much more religious and conservative than it is now.

As for polls, I've seen them too; I do not dispute that atheists are distrusted in _a general population_.

However, I have not seen the polls taken amongst the atheists which would show atheists being distrusted by themselves (compared to other group) - I do not say such polls do not exist, simply I could not find them when I was googling for them.

I assume you agree that "Nones" in the study ("religiously unaffiliated") are not the same as atheists (at least, for the purposes of this discussion).

And I disagree on treating the study finding that "the less likely is that a person holds a view X, the likely he is to distrust people holding X" as an evidence that "people holding X distrust people holding X", but see no point in discussing it further (unless you want to).

Blogger Mr.MantraMan July 05, 2016 9:03 AM  

No religious affiliation here; in my estimation atheists are the lowest. Atheism is just virtue signaling for white people. In due time the glorious people of color will kill them as bad juju - think Morlocks.

Blogger darkdoc July 05, 2016 9:05 AM  

One of the lies of any kind of medical or human research is the three simple words "Studies show that...".

At best you could only say, "studies suggest that...".

There is no such thing as a true double-blind study in human research. You cannot control for many variables - genetics, diet, locale, employment, age, often gender, race, background radiation, and many others. To suggest that the only variable in a study is the one that is the subject of your interest is a pipe dream.

And the venerable p-value? Even the inventor of that statistical test has written that he thinks it is misused and given too much credit for proving anything.

Anonymous coyote July 05, 2016 9:05 AM  

perhaps this "expose" of faulty and often deliberately made-up research helps me feel better about my "atheistic" views towards modern physics, and the dogma of everything from big bangs, dark matter, strings and things. the search for a "god particle" being the ultimate example of the materialists seeking salvation through some techno-magic wand.

Anonymous MendoScot July 05, 2016 9:08 AM  

Statistical problems with fMRI signal processing were identified and published as "double dipping" years ago, 2009 IIRC. Kriegskorte, I think. Affected about a third of the studies he looked at.

As predicted when it was first proposed and rejected "publish or perish" has hopelessly corrupted the literature (and peer review). I outright disbelieve about a third of the studies I read, but I'm experienced enough to make a judgement call on them. I pity the youngsters.

False measures ... now what was the punishment?

Anonymous Conservative Buddhist July 05, 2016 9:11 AM  

@2 Science based on obeisance to the p-value is not really inquiry. Please see a cogent discussion of the failings of statistics at http://wmbriggs.com/post/9338/

Also a rich site for conservative philosophy.

Anonymous MendoScot July 05, 2016 9:12 AM  

And as a scientist, this is the worst aspect of it for me:

The researchers used published fMRI results, and along the way they swipe the fMRI community for their “lamentable archiving and data-sharing practices” that prevent most of the discipline's body of work being re-analysed.

ClimateGate now appears to be the norm, not the exception.

Blogger Dean Esmay July 05, 2016 9:24 AM  

We now have studies in multiple fields, including medicine, showing that half of published peer-reviewed papers are not replicable or even based on sound science. The other 50% may or may not be good, but 50% is known garbage out the door. That means, statistically, you can pretty much assume in multiple fields, without even looking at a paper, that you should bet it's wrong, since even if it's in the 50% with sound, replicable methodology it still may be mistaken. We haven't seen proof of it being this bad yet in the other sciences like physics or astronomy, but why wouldn't it be?

We made a Church to Science. Science is a poor god and becomes deranged when you worship it.

Anonymous George of the Jungle July 05, 2016 9:29 AM  

So many lies, so little time.

Blogger VD July 05, 2016 9:31 AM  

I assume you agree that "Nones" in the study ("religiously unaffiliated") are not the same as atheists (at least, for the purposes of this discussion).

I consider them potential Low Church atheists, distinct from the self-identified High Church atheists. We simply don't know. All we know is that they are not conventional religious believers.

Blogger dc.sunsets July 05, 2016 9:37 AM  

I used to think most people were close to my level of intelligence.

It's become obvious I was wrong. At approximately 143, I do not consider myself a genius; to me, true genius is a nearly transcendent condition, brilliance that vaults above my capabilities.

It's sobering to realize just how much of "what we think we know" is the product of people I consider dull-witted.

Anonymous Rhetoric Man July 05, 2016 9:37 AM  

"A study of all the U.S. PhD recipients in 1958 reported an average IQ of 123; the Flynn effect notwithstanding, it is highly unlikely that the average IQ of today's increasingly diverse and vibrant PhD recipients has risen since then."

Really, no one knows. One can simply speculate, but that's like throwing a dart blindfolded and claiming one hit the bullseye.

There have been two recorded studies involving this phenomenon. The results from Gibson and Light (1967) are more reliable than Roe (1953) since the sample size was larger and the same test was administered to each member.

http://www.religjournal.com/pdf/ijrr10001.pdf

Anonymous BGKB July 05, 2016 9:42 AM  

How much civilization enhancing science have we lost because the results are unPC or undesirable?

$40 US to get at that text. You will not be able to get at the data

I wish Putnam's data was available to see if, as an area becomes more non-Asian minority, everyone does less charity, or if the remaining whites do the same but non-Asian minorities do like non-Asian minorities.

"Dr", so, after touring Europe and smoking a lot of pot, she switched to Math Ed and wrote some paper on the history of mathematicians using drugs.

Just how much taxpayer money was thrown at Dr Feelgood's education?

If climate change isn't real, how do you explain ISIS?(Bonus points: Don't say Current Year.)

I hear it's the fault of STR8 White Church Going Christian NRA Men & the tranny bathroom law. That makes more sense than HillIAry, the CIA and mossad arming them

Blogger VD July 05, 2016 9:49 AM  

There have been two recorded studies involving this phenomenon.

No, obviously not, as the one I quoted referred to the 1961 Harmon study, not Roe or Gibson and Light. You failed to note the reference to two studies was only to "interdisciplinary" comparisons, not PhD intelligence across the board. The article even compared the much larger Harmon study to the Gibson and Light one you cited.

"We can see that these are approximately comparable to the IQ scores of Gibson and Light’s (1967)".

Blogger B.J. July 05, 2016 9:57 AM  

From Heinlein:

"There are but two ways of forming an opinion in science. One is the scientific method; the other, the scholastic. One can judge from experiment, or one can blindly accept authority. To the scientific mind, experimental proof is all important and theory is merely a convenience in description, to be junked when it no longer fits. To the academic mind, authority is everything and facts are junked when they do not fit theory laid down by authority."

Nothing new under the sun.

Anonymous EH July 05, 2016 10:19 AM  

Reminds me of the study of dead salmon fMRI response to social stimuli:
"Neural Correlates of Interspecies Perspective Taking in the Post-Mortem Atlantic Salmon: An Argument For Proper Multiple Comparisons Correction" Journal of Serendipitous and Unexpected Results, Bennett et al. 2010.

Which won an Ig Nobel. Here's a section of that linked Scientific American blog:
"It all started when Bennett et al were setting up their...um...real experiment. They were going to look at humans and their responses to social stimuli. But in order to do this, they have to first test the machine. Apparently, when you usually test an fMRI machine, you put a big balloon in there filled with mineral oil. Just to test it and look for contrast, etc. The authors of this study wanted something different, they wanted something with more contrast and different types of texture.

So first they bought a pumpkin.

They got some good signals, but not very good contrast. They next tried a Cornish Game Hen (dead, defeathered, from the store). That also produced good visuals but wasn't quite what they needed. The authors needed something with good contrast, but also with several clearly defined and distinguishable types of tissue: fat, bone, muscle, etc.

Enter the salmon.

The lead author, Dr. Craig Bennett, wanted to get something fresh, so he headed into the grocery store first thing in the morning. At the fish counter, he spoke the words that will echo down the centuries as a testimony to the dedication and drive of neuroscientists throughout the ages:

"I need a full length Atlantic Salmon. For science."

I am still shocked that the guys at the fish counter didn't give him a discount. Can't you get a discount for science?!

Having procured the specimen, the authors placed the salmon in the fMRI and ran all the usual anatomical scans, and then ran the experimental set of the study as well. In this study, the salmon was shown images of people in social situations, either socially inclusive situations or socially exclusive situations. The salmon was asked to respond, saying how the person in the situation must be feeling. The salmon, as far as I can tell from the paper, did not comply with instructions. Naughty salmon.

The results were set aside and not looked at for a good while, until one of the other authors of the study was running a seminar on how to properly analyze fMRI data. They wanted to do some improper analysis on something improbable, and remembered that they had the salmon data on the computer. And a study was born."

[Sorry for the tone of the blog writer. The point of the study was that you get spurious readings of neural activity if you don't correct for multiple comparisons: if you test lots of hypotheses, about 5% of them will come out significant at the p < 0.05 level by chance alone. That particular problem was fixed, but statistics is hard, testing statistical software is very hard, and proving the correct practical use of statistical software is all but impossible.]
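The bracketed point is easy to demonstrate with a short simulation. A minimal sketch (the voxel counts and thresholds here are illustrative, not taken from the PNAS paper): every "scan" below is pure noise, yet without any correction nearly every scan shows at least one "active" voxel at p < 0.05.

```python
import random

random.seed(42)

N_VOXELS = 100    # independent voxel tests per "scan" (illustrative)
N_SCANS = 10_000  # simulated null experiments
Z_CRIT = 1.96     # two-sided p < 0.05 threshold for a single test
Z_BONF = 3.48     # approx. two-sided threshold for p < 0.05/100 (Bonferroni)

def scan_has_hit(threshold):
    """True if any voxel in a pure-noise scan crosses the threshold."""
    return any(abs(random.gauss(0, 1)) > threshold for _ in range(N_VOXELS))

uncorrected = sum(scan_has_hit(Z_CRIT) for _ in range(N_SCANS)) / N_SCANS
corrected = sum(scan_has_hit(Z_BONF) for _ in range(N_SCANS)) / N_SCANS

print(f"scans with a false 'activation', uncorrected: {uncorrected:.0%}")
print(f"scans with a false 'activation', Bonferroni:  {corrected:.0%}")
```

With 100 independent tests the uncorrected family-wise error rate is 1 - 0.95^100, i.e. nearly certain; the salmon paper's point is that real fMRI analyses involve tens of thousands of voxels, not 100.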

Blogger Rusty Fife July 05, 2016 10:22 AM  

Alexandros wrote:At least they bothered to actually run the experiment. When I was in a biology lab, interns would make data up out of thin air so they didn't have to do the work; classic millennial action when confronted with work: lie and cheat.

I'd be surprised if a lot of these experiments were even performed.


I can provide an alternative explanation. Virtually every PhD I've met in engineering is a foreigner or second generation immigrant.

Anonymous BGKB July 05, 2016 10:26 AM  

Global warming is now global cooling https://www.lewrockwell.com/2016/07/no_author/junk-scientists-flipflop-global-cooling/

Anonymous SaltHarvest July 05, 2016 10:27 AM  

Revelation Means Hope wrote:A pet peeve of mine is how they routinely discount all findings where the p doesn't meet that magical cutoff of <0.05, as if a 0.08 isn't a strong indication that something is going on that you can be 92% certain of.... as opposed to the 95% certainty that is somehow a MUCH better cutoff point.

But then, I've seen and demonstrated myself how simple it is to manipulate the stats if you want to artificially create a certain result, provided your raw numbers give you a little room to play around.


There is no reason we would want to make the hole 1.6 times wider (or worse) for researchers and SJW bs artists to push flawed conclusions through based on spurious statistics. That it might in theory get a few more good papers noticed isn't worth the additional garbage it will generate in practice.
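The "1.6 times wider" figure is just the ratio of the two cutoffs, and a quick null simulation confirms it (the sample size here is arbitrary): under the null hypothesis p-values are uniformly distributed, so loosening the cutoff from 0.05 to 0.08 lets through 1.6x as many spurious results.

```python
import random

random.seed(0)

N = 100_000  # simulated true-null studies (arbitrary)
pvals = [random.random() for _ in range(N)]  # null p-values are uniform on [0, 1]

fp_05 = sum(p < 0.05 for p in pvals)  # false positives at the usual cutoff
fp_08 = sum(p < 0.08 for p in pvals)  # false positives at the relaxed cutoff
ratio = fp_08 / fp_05

print(f"null results passing at p < 0.08 vs p < 0.05: {ratio:.2f}x")
```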

Blogger buzzardist July 05, 2016 10:30 AM  

When I was a graduate student, I edited dozens of papers, articles, dissertations, and book drafts for graduate students and even faculty. Although I wasn't trained in statistics, I sometimes had a better grasp of statistics than did the people whose scholarship relied on statistical analysis.

I remember one occasion in particular when my asking a couple of questions chopped the legs out from underneath an unfortunate social science graduate's thesis. She'd set up her data collection and statistical analysis in such a manner that the results spit out utter gibberish, which she was trying to pass off as statistically significant data. When I suggested that her conclusions required collecting and analyzing the data in a different way, she replied, "Well, then I wouldn't have a paper and wouldn't be able to graduate. I'd lose all of my data and have to collect all the data over again, but I've already used up my research grant, so I can't." Regardless of her plight, honest science demanded that she start over again. I'm certain she went on to graduate without any roadblocks or objections from her committee.

Or there was the other grad student who was working on an article for publication. When I challenged her use of a particular statistical analysis because the written explanation she gave of it was riddled with gaps and errors, she admitted to me, "I really don't understand this statistical analysis. I'm using this equation because my advisor told me to." I imagine that she also has a satisfying career in academia now.

Especially in the social sciences and medicine, people like these were the rule, not the exceptions. The statistical models that they use are developed and understood fully only by a handful of statisticians who spend their entire careers working on these models. The researchers who then deploy these models rarely have a strong grasp of their complex details and limitations. At best, it's 50-50 that they make serious mistakes with the statistical analysis that render any results useless.

Anonymous EH July 05, 2016 10:37 AM  

WTF? My last comment posted and then disappeared, that's the second time it's happened.

The MRI statistics problem reminds me of the study that looked at the fMRI neural response of a dead salmon to social stimuli:
"Neural Correlates of Interspecies Perspective Taking in the Post-Mortem Atlantic Salmon: An Argument For Proper Multiple Comparisons Correction" Journal of Serendipitous and Unexpected Results, 2010.

Discussed humorously here.

Anonymous Dave Gerrold's 6184th Cabana Boy July 05, 2016 11:02 AM  

FWIW: the raw signal to noise for a garden variety 1.5 Tesla MRI scanner is on the order of 6 parts per million.

Blogger John Wright July 05, 2016 11:08 AM  

I hope someone notes the irony:

We are to mistrust scientific studies because of the lack of rigor, halfwittedness and downright lazy dishonesty of the interns and scientists involved, but when it comes to IQ tests, those are not phrenology, not junk science, but firm and reliable measurements of objective reality.

Blogger The Other Robot July 05, 2016 11:17 AM  

The FBI coverup ... is complete.

Anonymous Rhetoric Man July 05, 2016 11:25 AM  

“No, obviously not, as the one I quoted referred to the 1961 Harmon study, not Roe or Gibson and Light.”


In your post, you simply noted the 1958 study without stating the source was from the Harmon study from 1961.

“You failed to note the reference to two studies was only to "interdisciplinary" comparisons, not PhD intelligence across the board.”

Yes.

Blogger Human Animal July 05, 2016 11:29 AM  

IQ tests, those are not phrenology, not junk science, but firm and reliable measurements of objective reality.

That's because IQ tests are a lot more like a simple chemistry experiment... performed by an unbiased adult social scientist and a child whose parents couldn't care less about extra attention and resources for high or low results.

The FBI coverup ... is complete.

They're just covering their own ass from all directions. If they can't do that do you really trust them to catch serial killers and libertarian white hat hackers?

Blogger clk July 05, 2016 11:29 AM  

"...For example, a bug that's been sitting in a package called 3dClustSim for 15 years, fixed in May 2015, produced bad results (3dClustSim is part of the AFNI suite; the others are SPM and FSL)."

This is why I love it here..

That's a small part of the story but quite interesting to me --- we often claim that the implementation of a theory into an applied use is the only true way to prove the science -- if it can't be engineered then it's not real .. well here is an example of a designed tool, an engineered product, in use for 15 years that ended up having an error...

When I see things like this I am always reminded of the advice of Gandalf regarding Saruman and the palantir "Perilous to us all are the devices of an art deeper than we possess ourselves"...

Today much of what we use is based on other people's work, and much technology is only slightly less mysterious than magic -- (how many people here could build a cell phone -- or even a simpler ham radio .. or build your own computer, your own operating system, or even the power supply of said computer) -- so much of what we use every day is unknown to us. And it's not a criticism of the intelligence of men or the education system, but rather an observation on the complexity of modern technology.

Much of our science and engineering depends on tools that are far more complex than the users themselves ... I can't tell you how many FEA results I have seen that are beautiful looking but wrong, or simulations that simply don't make sense... -- you have to understand the math behind these things, and extensively test your results .. and never assume extrapolated accuracy. These people were using a tool beyond its capability and probably beyond the application its designers intended.

What I have never gotten a fair answer to is: if science and its methods are so bad.. what are we suggesting as an alternative - faith based study ... Science can be and often is ugly... but it's the best approach we've got --- the failures that we like to point to are actually caused by factors outside pure science -- the requirement to publish or perish, to create successful drugs to recoup R&D budgets and make CEOs and stockholders rich, a political bent -- these are failures due to the human involvement, and they exist in different forms in all human endeavors......

Anonymous Bob July 05, 2016 11:29 AM  

Whenever I hear stories like this, I remember the backlash against (at the time, non-credentialed) Denise Minger. She dug into the data behind the China Study (a landmark study on diet) to discover that the official summary and conclusion was downright wrong.

Link: https://rawfoodsos.com/2010/07/07/the-china-study-fact-or-fallac/

It caused me to start reading data instead of abstracts. I discovered that most rat studies on diet are worthless, because the "high fat" diet usually contains more sugar than the "baseline" diet. Even then, the conclusions usually have glaring errors in the statistical analysis that damage the result even further.

So, when people come to me with reports that scientists are practicing poor scientody, I am never surprised.

Blogger Escoffier July 05, 2016 11:37 AM  

Moonbear: as to the 'not so distant past' scientists not being midwits?

I kinda seriously doubt it. I'm doing some research for a book relating to junk science and let me just say, oh my God, there's a lot of awful science and interpretation of data out there.

Blogger szopen July 05, 2016 11:53 AM  

@52 John Wright
In this case (a) a lot of evidence in favour of IQ/g came from people trying actively to _disprove_ the validity of "g", (b) there is a lot, a lot of converging evidence amassed over several decades (c) there is no problem with replicating the results, in fact they are reliably replicated by dozens of different researchers in widely different environments.

Compare this to "stereotype threat" BS, where the evidence is inconclusive, there are problems with replication, there are contradictory results AND even the existing effects can be explained by at least one other, well known theory (which was proposed earlier than stereotype threat and explains much more than "stereotype threat").

(to be fair, evidence in favour of "g" can also be explained by one other theory of intelligence, but this other theory is virtually unknown and also much harder to understand)

Blogger James Dixon July 05, 2016 12:08 PM  

> "Well, then I wouldn't have a paper and wouldn't be able to graduate. I'd lose all of my data and have to collect all the data over again, but I've already used up my research grant, so I can't."

And a proper review of her work by the committee would have allowed her to publish her work with the result that her methodology had turned out to be flawed and no useful conclusions could be drawn, then allowed her to graduate. That's the way real science is supposed to work. Not every study is going to be successful or find anything useful.

Anyone think that would have actually happened?

> Science can and often is ugly... but its the best approach we got ---

True. But it has limitations. And most of us here recognize those limitations. The "We f***ing love science" crowd, not so much so.

> ...the failures that we like to point to are actually caused by factors outside pure science -- the requirement to publish or perish, to create successful drugs to recoup R&D budgets and make CEOs and stockholders rich, a political bent -- these are failures due to the human involvement, and they exist in different forms in all human endeavors......

Well, duh. Humans are flawed. The results of our flawed efforts are going to be flawed as well. That's the point. Proper use of the scientific method works around these flaws by ensuring that results can be reproduced and studied further. We've fallen a long way from that ideal.

Blogger Jim July 05, 2016 12:26 PM  

I thought PET scans were used to study brain function/activity. When did fMRI enter the picture?

Anonymous Noah Nehm July 05, 2016 12:32 PM  

Thanks VD, for pointing out something that should be obvious, but for many isn't, namely, that there is a limited supply of truly smart scientists.

I've argued for years that giant government boondoggles, like ITER, are not merely a waste of money, but worse: They lock up plenty of smart physicists and engineers in unproductive work, and as a result they're unavailable for other projects that need their expertise.

You'd be surprised, though, how many respond as if there is an unlimited supply of smart experienced scientists, or that, even if there were a shortage, more can always be trained by making the profession attractive to those who otherwise wouldn't be interested.

Blogger VD July 05, 2016 12:39 PM  

Thanks VD, for pointing out something that should be obvious, but for many isn't, namely, that there is a limited supply of truly smart scientists.

Sure. You'd think it would be fairly obvious that if someone was genuinely smart, he would be more interested in a) being a rock star or b) getting rich than sitting in a lab writing up grant applications.

Anonymous SciVo July 05, 2016 12:40 PM  

From the article linked in the 2010 post:

The consensus seems to be that we simply can't rely on the researchers to do it. As Shaffer noted, experimentalists who produce the raw data want it to generate results, and the statisticians do what they can to help them find them. The problems with this are well recognized within the statistics community, but they're loath to engage in the sort of self-criticism that could make a difference. (The attitude, as Young described it, is "We're both living in glass houses, we both have bricks.")

I have a degree in math, and I trust statistics about as much as voodoo. I found it very difficult to wrap my head around, since my bias is toward theory -- I actually focused on proof-heavy subjects, where my verbal IQ helped the most -- and statistical theory appears to be almost entirely unrelated to statistical practice, except as a vocabulary and style of presentation.

As I understand it, as soon as you start using the data to form a hypothesis instead of to test one, a lot of the theory underpinning validity goes out the window -- yet scientists do that kind of after-the-fact data mining all the time. Again, I want to be clear that statistics was my worst math subject by far; but AFAIK, that's a very easy-to-hide way to follow the fashion without the substance.

Anonymous TJK July 05, 2016 1:00 PM  

An amusing side-note is that the bulk of Sam Harris' neuroscience work on the differences between atheist and theist brains uses fMRI.

Anonymous MendoScot July 05, 2016 1:11 PM  

Jim wrote:I thought PET scans were used to study brain function/activity. When did fMRI enter the picture?
In the 90s, IIRC. By early 00s there were already over 10K fMRI studies in the literature. PET is used for studies, but is a lot harder and gives conceptually different results.

Blogger Yvonne Lorenzo July 05, 2016 1:32 PM  

As F. William Engdahl posted today:


"Nobel Science Prize Ain’t No Proof Ya Got a Brain"



According to a report in the Washington Post, precisely 107 of the living Nobel Science Prize awardees have done just that. They foolishly signed a letter urging Greenpeace to stop opposing genetically modified organisms (GMOs). The letter specifically asks Greenpeace to cease its efforts to block introduction of so-called “Golden Rice,” a genetically manipulated rice variety that allegedly “could reduce” vitamin A deficiencies in infants in the developing world. This demonstrates either that those 107 Nobel laureates are not truly intelligent or that they are yet another group of scientist prostitutes willing to whore their reputation for a few shekels from Monsanto & Co. Or both…

http://journal-neo.org/2016/07/05/nobel-science-prize-aint-no-proof-ya-got-a-brain/

Blogger Patter Gritch July 05, 2016 1:40 PM  

The implications of this are enormous! For example, how much of the transgender research that's been done used this analysis software?

Blogger SirHamster July 05, 2016 1:48 PM  

clk wrote:What I have never gotten a fair answer to is if science and its methods are so bad.. what are we suggesting as an alternative - faith based study ...

When you distinguish science from faith based study ... do you think science does not involve faith, or that it does not involve study?

Anonymous DT July 05, 2016 2:25 PM  

@56 - What I have never gotten a fair answer to is if science and its methods are so bad..

The scientific method is a very useful tool for learning about the physical universe.

The problem is not with the method. The problem is with all the people who claim to have used the method when they have not used it or used it correctly.

The scientific community seems to have grown oblivious to the fact that their method requires actual replication. Not "peer review." Not "publishing." And certainly not "consensus." A theory is not worth the paper it's printed on unless and until it has predicted observed events repeatedly and for more than one person.

And even that is not a guarantee of correctness. As we clearly see in this case replication cannot eliminate all errors.

Perhaps the most important trait one would hope to find in a scientist, more important than even a high IQ, is a ruthlessly critical approach to every idea and every theory. A scientist needs to be able to critique his own theory, and needs to be willing to throw his entire body of work in the trash if he discovers a fatal flaw. He also needs to be ready and willing to do the same with the work of his peers regardless of any social pressure.

And his peers should thank him for that.

What we find today is the exact opposite. One look at climatology is enough to tell you that science in the western world is broken. Instead of welcoming critique the "scientists" of the field actively punish it. Some of the most prominent proponents of a single theory are actually trying to enlist government help in silencing any critique. A theory which, I might add, has completely failed in its predictions.

No, the problem is not with the scientific method. It's with the people who claim to use it.

Anonymous SciVo July 05, 2016 2:41 PM  

To elaborate, when I studied statistics a couple of decades ago, it seemed to me that there were three ways to do statistical probability:

1. Determine a likelihood about random subsets of the universe by walking in with a hypothesis formed by reason, and testing it against the dataset;

2. Determine a likelihood about random subsets of the universe by using one dataset to form a hypothesis, and testing it against a completely independently collected dataset; or

3. Determine a likelihood about random subsets of a dataset by using it to form a hypothesis, and then stopping there. (Even using disjoint subsets in the style of #2 would have the fundamental problem of their superset being the dataset, not the universe.)

But it seems like everyone does #3 and then pretends that they've determined something about the universe, so either A. I still don't get it or B. everyone else does it wrong. (I think it's B.)
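SciVo's contrast between #2 and #3 can be sketched numerically. In this toy example (the sizes and the `pearson` helper are mine, purely illustrative), the outcome is pure noise, yet picking the best-looking of 50 candidate predictors in-sample (style #3) yields an impressive correlation that evaporates when the same "hypothesis" is tested against independently collected data (style #2):

```python
import random
from statistics import mean

random.seed(1)

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def dataset(n=200, k=50):
    """Pure noise: k candidate predictors, an outcome unrelated to all of them."""
    y = [random.gauss(0, 1) for _ in range(n)]
    X = [[random.gauss(0, 1) for _ in range(n)] for _ in range(k)]
    return X, y

X, y = dataset()
# Style #3: let the data itself pick the "hypothesis" (the best-looking predictor)...
best = max(range(len(X)), key=lambda j: abs(pearson(X[j], y)))
r_insample = pearson(X[best], y)

# ...style #2: test that same predictor against independently collected data.
X2, y2 = dataset()
r_holdout = pearson(X2[best], y2)

print(f"best of 50 noise predictors, in-sample: r = {r_insample:+.2f}")
print(f"same predictor on fresh data:           r = {r_holdout:+.2f}")
```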

Anonymous Richardthughes July 05, 2016 2:56 PM  

Meh. Science corrects. Religion blunders on...

Anonymous Eric the Red July 05, 2016 3:21 PM  

OT (a little), why is Google pushing a self-driving car?

https://www.google.com/selfdrivingcar/

Any argument that it's simply a way to branch out into a promising business opportunity is suspect. Since google is a thoroughly converged organization, there has to be some hidden political agenda behind it.

Anonymous Eric the Red July 05, 2016 3:33 PM  

@73 Richardthughes...

Double Meh. Science pretends it's not a faith-based philosophy and has no politicized agenda, whereas Christianity is completely honest about itself and the truth.

BTW, in spite of this type of crushing critique described by the article, your science has yet to correct itself. None of the original research has been recanted by its authors, nor has the larger issue itself been addressed by the entrenched scientific establishment. They will therefore continue to do exactly the same insanity in the future, aided and abetted by the government, the media, and the rest of the leftist political hacks.

Anonymous Man of the Atom July 05, 2016 3:41 PM  

MendoScot wrote:Jim wrote:I thought PET scans were used to study brain function/activity. When did fMRI enter the picture?

In the 90s, IIRC. By early 00s there were already over 10K fMRI studies in the literature. PET is used for studies, but is a lot harder and gives conceptually different results.


Add to that fMRI is a non-ionizing modality. While people don't like being in the bore of an MR for hours, you can be. With high B fields (3 T or better), the fMRI can yield high-resolution images and you can gather a fair bit of data.

PET (usually CT/PET) is an ionizing radiation modality of Nuclear Medicine (or more recently Molecular Imaging) that produces functional/physiologic data. Great for cancer diagnoses if using a sugar-based pharmaceutical tagged to fluorine-18 (especially for brain imaging), but exposure to annihilation radiation (0.511 MeV) is not something you want to give a subject repeatedly for research purposes.

Add to that Nuclear Medicine (including PET) is still the "photon poor" modality compared to CT, MR, and even radiographic imaging. Resolution is improving, but still lags CT and MR in most systems.

Is research done in PET? Sure. But you will have a much higher hill to climb in convincing an institutional review board that you'll get better results with PET versus fMRI. One of the reasons for fMRI's popularity in brain function research.

Blogger Scott C July 05, 2016 3:56 PM  

A study of all the U.S. PhD recipients in 1958 reported an average IQ of 123

Yeah, but how many of those PhDs were in the humanities? That may have reduced the population mean by 10 pts.

Blogger tublecane July 05, 2016 4:03 PM  

Journalistic incompetence can be explained by IQ, but not their mendacity. You never know if they're confused or propagandizing (or propagandizing confusedly).

Blogger Aeoli Pera July 05, 2016 4:06 PM  

1. See Ritchie's book on IQ. Particularly,

"This is seen whenever a decent-sized sample of people sits a variety of tests like the ones above - the test scores will all correlate positively together. Charles Spearman, whom we encountered in the previous chapter, called this finding 'the positive manifold', and a century of research has confirmed he was correct. A comprehensive review by the psychometrician John Carroll (1993) showed that the positive manifold was found in every one of over 460 sets of intelligence testing data. Some researchers have even deliberately created cognitive tests tapping different skills that they expected not to correlate together, but they always did. The positive manifold of seemingly unrelated tests is one of the most well-replicated findings in psychological science."

-Intelligence: All That Matters (location 303)

2. Phrenology is legit, but it is just a pseudoscience like biology.

Anonymous Richardthughes July 05, 2016 4:07 PM  

Eric, the news article is less than a week old.

You say "Science pretends it's not a faith-based philosophy and has no politicized agenda" - what faith is it based on? What is its "politicized agenda"?

Blogger Aeoli Pera July 05, 2016 4:12 PM  

That is, phrenology is pseudoscience unless the strong hypothesis turns out to be true (causation rather than correlation). This may be true in a couple of highly specific traits, but I suspect most traits will be simple correlations with bone features used by forensic anthropologists to identify the race of human remains.

So, figure you have a set of bones that you can tell is from a woman. That woman probably had such and such suite of traits. Figure correlations of such weak-medium strength in most cases.

Blogger Aeoli Pera July 05, 2016 4:16 PM  

Richardthughes wrote:Eric, the news article is less than a week old.

You say "Science pretends it's not a faith-based philosophy and has no politicized agenda" - what faith is it based on? What is its "politicized agenda"?


Its faith is atheist materialism, and its politicized agenda is the promotion of a worldwide scientific socialism.

The latter is primarily accomplished by a selection effect, where money disproportionately funds research that is friendly to The Cause.

Blogger Aeoli Pera July 05, 2016 4:26 PM  

SciVo wrote:I have a degree in math, and I trust statistics about as much as voodoo. I found it very difficult to wrap my head around, since my bias is toward theory -- I actually focused on proof-heavy subjects, where my verbal IQ helped the most -- and statistical theory appears to be almost entirely unrelated to statistical practice, except as a vocabulary and style of presentation.

As I understand it, as soon as you start using the data to form a hypothesis instead of to test one, a lot of the theory underpinning validity goes out the window -- yet scientists do that kind of after-the-fact data mining all the time. Again, I want to be clear that statistics was my worst math subject by far; but AFAIK, that's a very easy-to-hide way to follow the fashion without the substance.


That's the correct interpretation from your perspective. Handling statistics relies much more heavily on the integrity and good sense of the practitioner, therefore it's more of an art than a craft.

Treat statistics like culture and vet them before allowing them into your home.

Blogger tublecane July 05, 2016 4:29 PM  

@34-Don't take "God particle" seriously. I don't know where the term came from, and for all I know it could've started as a joke intended to discredit it, like "big bang theory." Really, they use it as an advertising slogan.

You oughtta not believe in modern physics. It's been stuck in a rut since the 1930s, at least fundamentals-wise. String Theory isn't science, particle physics is wacky, super-symmetry is super-wacky, and I'm halfway convinced the universe isn't expanding, dark matter doesn't exist, and every supposed particle beyond the neutrino is hokum.

Blogger Kyle Smith July 05, 2016 4:33 PM  

I am admittedly a midwit.

Most of the advances in science and the world are made by midwits (this is almost tautological for the statistically minded).

In grad school a friend of mine discovered that an assay used for maybe 5-15 years was not behaving properly when he tried to take it in a new direction. This resulted in his publishing the results and a radical change in his field.

In general I agree that we should be skeptical of scientific results that do not generate real world applications. My own field is filled with false claims. However, science is generally self correcting. If fMRI is a sham with false positives, then the first person who tries to use it for anything useful will probably discover it is unreliable. The same method that has produced fMRI has produced the iPhone. Random "scientific studies" disconnected from their discipline, carried forth by media for political ends, are certainly crap. But applied science and predictive sciences weed out garbage over time nicely; it's the other stuff which remains gibberish for generations.

Anonymous Eric the Red July 05, 2016 4:35 PM  

@80 Richardthughes...

The faith of reductionist materialism, that everything can and should be explained only by a material frame of reference, and that nothing else exists outside of it. Meanwhile it arrogantly steps outside that limitation to hypocritically declare that no other type of explanation is permissible by anyone else.

The age of this news release is irrelevant (BTW the paper itself was received February 12, 2016). Over the years this blog has repeatedly displayed similar examples in science, which keeps rolling along without correcting itself or offering any apology whatsoever.

Obviously you are vastly uninformed, although I'm sure you're well marinated in leftist Sacred Memes.

Blogger Aeoli Pera July 05, 2016 4:37 PM  

Kyle Smith wrote:Most the advances in science and the world are made by midwits (this is almost tautological for the statistically minded).

This is a relatively recent phenomenon. Historically, the "eminent" types would publish at rates we'd consider insane.

Blogger Aeoli Pera July 05, 2016 4:39 PM  

Eric the Red wrote:@80 ...Sacred Memes.

Oho, I like this here turn of phrase :-).

Blogger VD July 05, 2016 5:10 PM  

However, science is generally self correcting.

No, it is not. You can make a better case for saying that accounting is self-correcting. Audits take place far more often than attempts at replication.

And you're appealing to engineering to defend science. Bad logic.

Blogger VD July 05, 2016 5:13 PM  

Meh. Science corrects. Religion blunders on...

No, it obviously doesn't. What happens is that science smacks into the real world, which then forces corrections on science if it is to remain relevant.

Science virtually never corrects itself. It usually falls to engineers to tell the scientists that they are wrong.

Blogger VD July 05, 2016 5:15 PM  

What I have never gotten a fair answer to is if science and its methods are so bad.. what are we suggesting as an alternative - faith based study ... Science can and often is ugly... but its the best approach we got

Best approach to what? Because it is not the best approach to everything. That's OBVIOUSLY false.

Anonymous Daniel H July 05, 2016 5:32 PM  

All science Ph.Ds should require a concomitant M.S. in Statistics. If you can't satisfy the M.S. in Statistics you don't get your Ph.D.

Blogger clk July 05, 2016 6:44 PM  

Ask a silly question, get a silly answer.... Obviously science is the best way of doing science :)... as a method of understanding the universe, an approach to understanding something new, investigating the unknown... while trying to be funny, you are actually right that we have a poor definition ...

Science and the scientific method are the latest approach, following imperfect attempts to understand the world around us via mythology, philosophy, religion, mysticism.... name your poison...

Blogger clk July 05, 2016 6:52 PM  

This comment has been removed by the author.

Blogger newanubis July 05, 2016 6:53 PM  

Or if they did discover it, dare not make mention.

I see a Vox Day Facts in the embryonic stages. (A la Chuck Norris)

Anonymous krymneth July 05, 2016 8:01 PM  

Another difference between IQ research and this fMRI stuff is that IQ research largely confirms what people with open eyes observe about the world. There are smart people. They seem to be generally smart about many things at once. There are stupid people, who seem to be generally stupid about many things at once. The challenging thing about IQ is believing that there is no such concept. It turns out to be so challenging that even some very, very dedicated attempts to deny it with the full force of every bit of numerical manipulation that can be brought to bear have not had great success. To bury the concept required an outright application of political force.

While science has very often discovered surprising things about the world, the overpromotion of such discoveries fools people into thinking that's the norm, rather than the exception. In general it's a bad sign for a study to radically overturn observations that can be made by anybody with eyes... instead, we should usually look to science to refine such observations. IQ theories generally conform with our experience of the world, and then refine them.

I don't believe IQ is necessarily determinative; for one thing, there's a rather large gap between IQ and wisdom. If IQ can help guide one to wisdom, it can also greatly assist one in dodging it when a putatively stupider person would long since have gotten the lesson. I think a lot of the hostility to the idea stems from misinterpretations, because the concept itself isn't all that much more interesting than "some people are smarter than others".

fMRI research is reading twitches in the tea leaves, by contrast, and we have few prior beliefs with which to analyze the results. What prior beliefs we did have, we often read, were violated, as in some dubious studies on the nature of "free will", because what else gets published? (I say they are dubious not because they challenge my personal beliefs so much as because they're just plain dubious in general; even if you accept the studies at face value, they don't necessarily show what they were claimed to show.) It isn't that shocking that the whole foundation would be shown to be shoddy, especially in light of papers like this (PDF), "Could a neuroscientist understand a microprocessor?". There's a good argument to be made that the neuroscience toolkit is fundamentally incapable of proving very much. Grand pronouncements out of the field of neurophysiology should have been treated more skeptically even before the discovery of the fMRI issues.

Anonymous Dave Gerrold's 6184th Cabana Boy July 05, 2016 8:16 PM  

"When did fMRI enter the picture?"

Early '90s. It was available as an option on many 1.5T and higher scanners by the late '90s.

It is a form of perfusion imaging that relies upon deoxygenated blood having different magnetic characteristics from oxygenated blood.

Specifically, deoxyhemoglobin has a much larger magnetic moment than its oxygenated self, and it causes a reduction in the total transverse relaxation time of the tissue around it. In English: deoxygenated blood and the tissue around it have less signal and show up darker on an MRI image than fresh oxygenated blood.

The problem is that the signal difference, on top of the already low MRI signal, is even smaller for fMRI: pixel differences between the two types of blood are less than 5%. Think: trying to observe inch-high ripples on 3-foot surf from 100 yards away through binoculars.

The measurements are often oversampled (read: repeated) several hundred or even a thousand times to ensure that the voxel variations are real and not artifact. This limits the effective spatial resolution of the scan even on 3T+ scanners. In MRI, resolution and scan time are trade-offs; however, the time constraints on the blood-flow changes fMRI is looking for are a bottleneck.
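Why the oversampling helps can be shown with a toy simulation (hypothetical numbers, not real BOLD data): averaging N repeated measurements shrinks the noise by roughly sqrt(N), which is what lets a few-percent signal rise above measurement noise.

```python
import random
import statistics

def detect_signal(true_diff=0.03, noise_sd=0.5, n_repeats=1,
                  n_sims=3000, seed=7):
    """Fraction of simulated runs in which the averaged measurement of a
    tiny signal (a few percent, like the BOLD contrast) comes out positive.
    Averaging n_repeats acquisitions shrinks the noise by ~sqrt(n_repeats)."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(n_sims):
        avg = statistics.fmean(rng.gauss(true_diff, noise_sd)
                               for _ in range(n_repeats))
        correct += avg > 0
    return correct / n_sims

print(detect_signal(n_repeats=1))    # barely better than a coin flip
print(detect_signal(n_repeats=500))  # averaging makes the signal reliable
```

The numbers are made up for illustration; the point is only the square-root scaling, which is also why the oversampling eats into scan time and spatial resolution.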

3 Tesla and up scanners have made fMRI more reliable and repeatable, but it is still fraught with error. A host of corrections for things like patient movement, experimental paradigm, and time correlation have to be made just to get started.

I have always looked at fMRI results with a jaundiced eye... this revelation doesn't surprise me one bit.

Blogger dvdivx July 05, 2016 8:48 PM  

It's not an IQ problem per se; it's agenda-driven "science" from people with no common sense who are Marxists.

Anonymous andon July 05, 2016 11:52 PM  

I'm surprised anyone in the UK or Sweden is still doing any science.

OpenID dudequest July 06, 2016 4:12 AM  

@85- One thing that is absolutely staggeringly consistent is how often 'Science' and engineering are confused by the ignorant.


(The following is not about biological sciences, where the very definition of 'scientist' is radically altered)

Science, at its best, exists only to explain, justify, or condemn engineering. At its worst, it exists to thwart engineering, explain why something that clearly works cannot, and "move the goalposts" to the point where engineers have to spend lifetimes trying to meet impossible or ill-defined goals.

Scientists are absolutely necessary, just as necessary as philosophers, musicians, and visual artists, but they do not "create" progress, and they are no more acquainted with or qualified to judge cold hard fact than fantasy authors, musicians, or any other professional dreamers.

Engineers address cold hard fact. They are the ones that create, and build, and work out new feats of creation for scientists to try to explain and pick apart. Those 'midwits' that you speak of may be lower in IQ points than your average sciency fantasist, but that is a failure of the measurement metrics.

In practical applicability they would score vastly higher, but in general IQ best measures what they deem "learning ability" (rote memorization and pattern recognition).

I hope someday someone comes up with a measure for determining 'practical intelligence' that does not rely exclusively on pattern recognition or rote memorization, but I am not holding my breath.

Anonymous Richardthughes July 07, 2016 5:41 AM  

VD: "No, it obviously doesn't. What happens is that science smacks into the real world, which then forces corrections on science if it is to remain relevant.

Science virtually never corrects itself. It usually falls to engineers to tell the scientists that they are wrong."

This correction came via a peer-reviewed scientific paper, "Cluster failure: Why fMRI inferences for spatial extent have inflated false-positive rates"
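The "inflated false-positive rates" in that paper's title are, at bottom, a multiple-comparisons problem. The paper itself concerns cluster-extent inference; a much simpler voxelwise sketch (hypothetical numbers, not the paper's method) shows how running many uncorrected tests on pure noise all but guarantees a spurious finding:

```python
import random

def familywise_fp_rate(n_voxels, alpha=0.05, n_trials=2000, seed=1):
    """Estimate the chance that at least one voxel 'lights up' by chance
    alone when n_voxels independent tests on pure null data are each run
    at threshold alpha, with no multiple-comparisons correction."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_trials):
        if any(rng.random() < alpha for _ in range(n_voxels)):
            hits += 1
    return hits / n_trials

print(familywise_fp_rate(1))    # a single test behaves as advertised (~alpha)
print(familywise_fp_rate(100))  # across 100 voxels a false positive is near-certain
```

Real fMRI analyses use cluster-level corrections precisely to avoid this; the paper's finding was that the corrections as implemented in the common packages did not deliver the error rates they promised.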
