### Statistical misleadings

I've been pointing out for years now that scientists simply don't have the statistical mastery required to back up the statistics-based results that they've been pointing out:


The results were “plain as day”, recalls Motyl, a psychology PhD student at the University of Virginia in Charlottesville. Data from a study of nearly 2,000 people seemed to show that political moderates saw shades of grey more accurately than did either left-wing or right-wing extremists. “The hypothesis was sexy,” he says, “and the data provided clear support.” The P value, a common index for the strength of evidence, was 0.01 — usually interpreted as 'very significant'. Publication in a high-impact journal seemed within Motyl's grasp.

Of course, if one considers the nonsensical hypotheses many of these scientists are attempting to statistically test, it is abundantly clear that they also lack a sufficient mastery of basic logic.

But then reality intervened. Sensitive to controversies over reproducibility, Motyl and his adviser, Brian Nosek, decided to replicate the study. With extra data, the P value came out as 0.59 — not even close to the conventional level of significance, 0.05. The effect had disappeared, and with it, Motyl's dreams of youthful fame.

It turned out that the problem was not in the data or in Motyl's analyses. It lay in the surprisingly slippery nature of the P value, which is neither as reliable nor as objective as most scientists assume. “P values are not doing their job, because they can't,” says Stephen Ziliak, an economist at Roosevelt University in Chicago, Illinois, and a frequent critic of the way statistics are used.

For many scientists, this is especially worrying in light of the reproducibility concerns. In 2005, epidemiologist John Ioannidis of Stanford University in California suggested that most published findings are false; since then, a string of high-profile replication problems has forced scientists to rethink how they evaluate results.
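The instability Motyl ran into is easy to reproduce in simulation: when there is no real effect, p-values are uniformly distributed, so roughly one null comparison in twenty still clears 0.05 by luck. A minimal sketch (a normal-approximation two-sample test; the sample sizes and trial count are illustrative, not taken from the article):

```python
import math
import random

def two_sample_p(a, b):
    """Two-sided p-value for a two-sample test, using the normal
    approximation to the t statistic (fine for n around 30)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    z = (ma - mb) / math.sqrt(va / na + vb / nb)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

random.seed(42)
trials = 2000
false_positives = 0
for _ in range(trials):
    # Both groups drawn from the SAME distribution: there is no real effect.
    a = [random.gauss(0, 1) for _ in range(30)]
    b = [random.gauss(0, 1) for _ in range(30)]
    if two_sample_p(a, b) < 0.05:
        false_positives += 1

rate = false_positives / trials
print(f"null studies reaching p < 0.05: {rate:.3f}")
```

Run enough small studies and a p = 0.01 on nothing at all is not a surprise; the replication with more data is what exposed it.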

And notice that it is an economist who is a critic of the unreliable methods being used by the scientists. That's not a coincidence. Economists are some of the biggest skeptics in the academic world, mostly because they see their models failing almost as soon as they are constructed. The fact is that the scientists quite literally have no idea what they're talking about:

For all the P value's apparent precision, Fisher intended it to be just one part of a fluid, non-numerical process that blended data and background knowledge to lead to scientific conclusions. But it soon got swept into a movement to make evidence-based decision-making as rigorous and objective as possible. This movement was spearheaded in the late 1920s by Fisher's bitter rivals, Polish mathematician Jerzy Neyman and UK statistician Egon Pearson, who introduced an alternative framework for data analysis that included statistical power, false positives, false negatives and many other concepts now familiar from introductory statistics classes. They pointedly left out the P value.

But while the rivals feuded — Neyman called some of Fisher's work mathematically “worse than useless”; Fisher called Neyman's approach “childish” and “horrifying [for] intellectual freedom in the west” — other researchers lost patience and began to write statistics manuals for working scientists. And because many of the authors were non-statisticians without a thorough understanding of either approach, they created a hybrid system that crammed Fisher's easy-to-calculate P value into Neyman and Pearson's reassuringly rigorous rule-based system. This is when a P value of 0.05 became enshrined as 'statistically significant', for example. “The P value was never meant to be used the way it's used today,” says Goodman.
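Statistical power, the centerpiece of the Neyman and Pearson framework mentioned above, is simple to compute in the textbook case. A sketch for a one-sided z-test at alpha = 0.05 (the effect sizes and sample sizes below are illustrative):

```python
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def power_one_sided_z(effect_sd, n, z_crit=1.6448536269514722):
    """Power of a one-sided z-test at alpha = 0.05 for a true mean of
    `effect_sd` standard deviations and sample size n."""
    return 1 - phi(z_crit - effect_sd * math.sqrt(n))

# Power for a 0.5-SD effect grows quickly with sample size:
for n in (10, 30, 100):
    print(n, round(power_one_sided_z(0.5, n), 3))
```

The point of the framework is visible here: you decide in advance what effect matters and how often you can tolerate missing it, rather than fishing for a p-value afterwards.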

Labels: science

## 56 Comments:

Taleb's contribution to that "which idea needs to be discarded" piece at Edge was standard deviation.

Are we actually trying to infer visual acuity from political viewpoint? Or are they talking shades of grey in the presentation of information? Having had several Psych courses in my youth, I fear the former.

This scientist deserves a Nobel Prize for statistics. Appoint him head of the Ministry of Plenty.

I doubt this is of any concern to the modern Marxist. They only care that people say things they want to hear, they don't care why they say it.

Hi there!

Even though I don't share your general scepticism about science, I fear that you are right that a lot of so-called scientists don't know what they are doing with their data.

I am a PhD student in mathematics, and although stachastics and statistics isn't my specialty, I know the basics.

And I think the problem is even worse than starting from a false model. I think a lot of them don't know at all what they are doing with their data. I witnessed medicine professors who gave their students statistical questions that made no sense whatsoever. Also, researchers in biology who make up their graphs like this:

They have data, in the form of points in a graph - and then they connect it. That's it.

The problem is that the students/PhDs/... often have some sort of program or algorithm they can put their data into, and don't know at all what is happening, how to interpret the results correctly, or what to do with inconsistencies, etc.

"For all the P value's apparent precision, Fisher intended it to be just one part of a fluid, non-numerical process that blended data and background knowledge to lead to scientific conclusions."

Yes! I see it all the time: the process of learning, whether doing R&D or testing or whatnot, becomes a one-study-will-do-it-all sort of thing. Drives me crazy. It's an iterative process in which the initial, simple design of a study, usually a screening design, should yield a p-value that gives you a clue of whether there's anything worth pursuing further. No disappointment if not; it's just an initial hint. And, if so, one should proceed cautiously with different study designs, ones that incorporate additional factors and variables, to do further investigation. Don't just keep piling data on top of the screening design. Fisher was a genius who invented much of modern statistics, and he is undoubtedly rolling in his grave when he looks at how it is often applied.

"Sensitive to controversies over reproducibility, Motyl and his adviser, Brian Nosek, decided to replicate the study."

Liberal: "It's self correcting!"

Common Sense: "But only because he was 'sensitive to controversies'"

Liberal: "What are you talking about. There are no controversies in XYZ"

Common Sense: [walks out the door]

I shall demonstrate my erudition by quoting Mark Twain on this subject, as no one has ever done before.

You may admire me now.

"For all the P value's apparent precision, Fisher intended it to be just one part of a fluid, non-numerical process that blended data and background knowledge to lead to scientific conclusions."

This happens all the time. A professional will come up with a tool that has a narrow utility... then others find out about it... and start applying it far beyond that narrow utility.

BMI is a classic example of this.

"I shall demonstrate my erudition by quoting Mark Twain on this subject, as no one has ever done before. You may admire me now."

I huzzah in your general direction.

There is a 0.02% chance that scientists will pay any attention to this article.

As an example, I recall in physics we experimented with multiple ways of calculating the force of gravity. One of them, measuring speed of a weighted cart attached to a weight on a pulley, had amazingly consistent results, p-value <.01, but the calculated value was way off. Other methods had a higher p-value, but the results were much closer to reality.

It's incredible how they waste money.

I consider myself a classic conservative but from the mainstream viewpoint I would be described as a reactionary or ultraconservative because they have gone too far with their "progress".

I am a normal conservative like 50 years ago or more, but the perception of what is extreme or moderate changes over time.

Other than the relativity of the concepts of "moderate" and "extremist", I am very curious to see how my political ideas affect my physical senses.

I was a "moderate" leftist in the past and I didn't notice any difference in how I see colours after my religious conversion and shift to the right.

B-but the results were “plain as day”

@The Original Hermit: Oh hai, did I stole your name?

I didn't know this place was a hermitage.

This webcomic seems highly appropriate:

What works in Science

I work in process improvement, where there are two main fields (among others): Six Sigma and Lean. It is amazing, when we are studying the Six Sigma material, to watch student black belts and green belts doing their best to analyze the data to get some P values, because this is seen by many as the ultimate test of your data. Hours and hours are spent trying to get data to cooperate to generate some P values.

Usually entire measuring systems are set up to study a process just so the data will deliver a good P value, whether the process being studied really needs that type of examination or not. As a mentor to these students, I have to question them over and over to get them to question their own assumptions about why they have to get a P value, as it often goes against common sense.

0.05 and 0.01 are the enemy of much progress. Many times while reading JAMA and other medical journals, I've seen results that would be of great benefit essentially written off because they didn't reach that magical number of statistical significance. They'll throw in a quick reference to it, and say something along the lines of "this interesting trend should be explored further"... but everyone knows that there won't be any $$$$ coming to study something that won't benefit a major drug company or other big money interest.

What's the statistical P value in the ongoing banker suicides?

"The problem is that the students/phds/... often have sort of a program/ or an algorithm they can put their data into, and don't know at all what is happening, or how to interpret the results correctly, what to do with inconsistencies, etc."

Yes, and there's so much more to be said. All of the above, and more, is what I have been saying ever since I worked as an undergrad lab tech. Studies relying on statistics are almost useless, not only because of p, not only because hardly anyone understands the proper use of the various statistical tests, but for other reasons, too. A large amount of data in "scientific" research is collected and written down by undergrad "lab assistants" who fudge the numbers because they've got a hot date that night and measuring things accurately just takes too much time...

VD: "I've been pointing out for years now that scientists simply don't have the statistical mastery required to back up the statistics-based results that they've been pointing out ...a psychology PhD student ..."

Now I understand the source of my frustration with VD ... I misunderstood what he means when he says "scientist". For me, I generally rank the "sciences" by the amount of math required, with the hard sciences being math/physics/engineering and, way at the other, shallow "soft" end, subjects like biology, zoology, etc. At the soft end of the spectrum they almost always need math experts to do their data crunching, so the risk is that the people doing the math don't understand the science, and the "soft scientist" doesn't understand the math.

I would never consider a psychologist/sociologist/doctor as a scientist -- hell I barely think that biologist are scientists .... all are notoriously bad at math.

The problem here is computers -- when you had to do these calculations by hand, using integrals, you had to know what you were doing. Now any Tom, Dick or Harry can install a statistics software package, put in some data, push some buttons and out comes the answer. Don't like the answer? Push some different numbers until you get the answer you like.

"Oh hai, did I stole your name? "

It's ok. I'm only an occasional commenter, so you'd have no way of knowing unless you happened to see my rare comment every few weeks or so.

In grad school I put together a statistical model predicting track-specific speed ratings for horses. The model was an exceptionally good fit, r-sq over 80%, and all the independent variables had off-the-chart t-ratios. My stats prof passed my paper around to his colleagues, and at one point I'm pretty sure they were singing my praises as a mathematical super hero.

Funny thing is... the model was really good at predicting that the odds-on favorite was the odds-on favorite, and the long shots were the long shots. But it wasn't worth a damn at telling you whether a long shot was likely to win, or that an odds-on favorite was a bad bet. I recognized this immediately, but I don't know how long it took my stats prof.

In my view these sorts of models are typically quite good at telling you something which is otherwise obvious, but not very good at telling you anything unobvious that you'd really like to know.

I rest easy knowing that just about every aspect of 18-24 year olds has been tested, however poorly, at the expense of the rest of the population.

I have a B.S.E. in applied physics and a M.S. in EE, and this issue makes my skin crawl. I've tried to talk to scientists about their crap methods, but most are just not inclined to accurate analysis and real understanding. They follow the statistical cookbooks they are given because that's the path of least resistance to getting published.

Ask yourself a simple question: why do engineers use bayesian methods almost exclusively?

Answer: because they are correct and consistent, unlike the random collection of ad hoc statistical methods used in many fields

See, engineers actually have to make things that work. Scientists don't suffer the consequences of poor analysis, because there's no direct feedback. Only decades later when a noticeable number of people are dying from the recommended medical treatments, does anyone notice that the original studies were flawed.

Progressives pushing preconceived publish/perish paradigms produce problematic p-value promotion.

I think most of the commenters are missing the point of the article. Statistics is marginally given as an example of the frauds being perpetrated by Scientistic Marxists, but actually the Marxist element behind this phenomenon is the real story. Through the "Long March through the Institutions", the Marxists have hijacked science in much the same way they have hijacked Government, Media, and unfortunately Academia in order to make them mouthpieces of Marxism disguised as science. This is the Grand Agenda of Marxism, which is intended to produce a "False Consensus" among the unwitting populace by reinforcing Lunatic False Marxist Hokum, dispensing it through many or all "Official and Licensed" sources to bilk people into believing the false narratives of Marxism which visibly conflict with observable reality.

"I would never consider a psychologist/sociologist/doctor as a scientist -- hell I barely think that biologist are scientists .... all are notoriously bad at math."

Yeah, the definition of scientist used here is an issue. Like here

"Scientists don't suffer the consequences of poor analysis, because there's no direct feedback."

Oh, and there is this:

http://xkcd.com/882/
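For anyone who hasn't seen the comic: it shows twenty true-null hypotheses being tested until one clears p < 0.05. The arithmetic behind the joke, 1 - 0.95^20 ≈ 0.64, is easy to check by simulation (a sketch with illustrative parameters, using a normal-approximation test):

```python
import math
import random

def one_sample_p(xs):
    """Two-sided p-value for a one-sample z-test of mean 0
    (normal approximation)."""
    n = len(xs)
    m = sum(xs) / n
    s2 = sum((x - m) ** 2 for x in xs) / (n - 1)
    z = m / math.sqrt(s2 / n)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

random.seed(0)
runs = 1000
spurious = 0
for _ in range(runs):
    # 20 independent true-null hypotheses, one per jelly-bean colour.
    ps = [one_sample_p([random.gauss(0, 1) for _ in range(25)]) for _ in range(20)]
    if min(ps) < 0.05:
        spurious += 1  # at least one "significant" result by luck alone

rate = spurious / runs
print(f"runs with at least one spurious p < 0.05: {rate:.2f}")
```

About two runs in three produce a headline-ready "discovery" from pure noise, which is exactly the push-buttons-until-you-like-the-answer failure mode described above.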

@Hoots

After learning Bayesian methods, I got quite confused as to why anyone would stick with frequentist methods. Bayesian methods have far better interpretability, because everything is based on the "amount of knowledge" ideas of probability, rather than the "if we tried this experiment 100 times, 80 times we would see the mean fall within this confidence interval" junk.

The first chapter of Sivia's Bayesian Data Analysis book is enough, in my view, to demonstrate that the Bayesian way is more flexible and superior.
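For readers wondering what the Bayesian machinery actually looks like, here is about the smallest possible example: the conjugate Beta-Binomial update. The prior and data below are purely illustrative:

```python
from fractions import Fraction

def beta_update(a, b, successes, failures):
    """Conjugate update: a Beta(a, b) prior on a success probability plus
    binomial data gives a Beta(a + successes, b + failures) posterior."""
    return a + successes, b + failures

def posterior_mean(a, b):
    """Mean of a Beta(a, b) distribution, kept exact with Fraction."""
    return Fraction(a, a + b)

a, b = 1, 1                      # Beta(1, 1): a flat "know nothing" prior
a, b = beta_update(a, b, 7, 3)   # observe 7 successes and 3 failures
print(posterior_mean(a, b))      # 8/12 = 2/3
```

The "amount of knowledge" reading is direct: the prior is your state of knowledge before the data, and the posterior is that state updated by exactly what was observed.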

@DrTorch

I think that's actually a partial test of whether one is doing science (specifically scientistry per VD's useful nomenclature) or engineering. Now, any applied science requires some engineering, and a lot of engineering requires quality science, but if your end product is nothing but a paper, you certainly aren't doing engineering and there's a good chance you're not doing any useful science either.

@indpndnt

There was an outspoken physicist (you may have heard of him) by the name of E.T. Jaynes who wrote extensively about the errors of the frequentists. He gave concrete examples of cases where basic statistical methods in common use weren't just misleading, but gave results that were simply wrong. He didn't put a lot of effort into hiding his contempt for certain individuals. Makes for fun reading.

"I think that's actually a partial test of whether one is doing science (specifically scientistry per VD's useful nomenclature) or engineering. Now, any applied science requires some engineering, and a lot of engineering requires quality science, but if your end product is nothing but a paper, you certainly aren't doing engineering and there's a good chance you're not doing any useful science either."

A lot of scientific data we use today to produce useful results started out as completely useless facts... how do you know what knowledge will be useful 5, 10, 50 or 100 years down the road?

This is very entertaining. What you, and others, have been driving at for some time is finally being noticed. Don't expect a 'thank you', or acknowledgement. You, and I, are just doubters and will be for the rest of our lives. No matter that doubting is better science than simply believing. Well, actually, I think that is the point.

"I would never consider a psychologist/sociologist/doctor as a scientist -- hell I barely think that biologist are scientists .... all are notoriously bad at math."

Given that a doctor's education in science is significantly greater than a biologist's... I think you have this kinda backwards.

"how do you know what knowledge will be useful 5, 10, 50 or 100 years down the road?"

You don't, and collecting data is useful as long as the conditions are well documented. It is a necessary, but not sufficient, part of scientistry. A data collector is not a scientist any more than a seed collector is a farmer. I mean, my thermostat collects data too.

The point of science is to increase our collective understanding of natural phenomena. Like all human activities, this can be done well or poorly. But, again like every other human activity, each individual scientific endeavor has to be judged individually, according to its individual merits or lack thereof.

One of the major problems with statistics is that it is just too new. A lot of the major advances did not happen until the 20th century. There is so much in the field that NO ONE KNOWS. It is expanding rapidly and is very, very hard. It is like calculus developing in the 1600s. Who knew back then all the techniques you would eventually have in differential equations and analysis? They developed slowly over time.

And yet they are trying to do everything with statistics, like it is as simple and settled as 2 + 2 = 4.

"although stachastics and statistics isn't my specialty, I know the basics."

Stochastics?

"Also, researchers in biology who make up their graphs like this: They have data, in the form of points in a graph - and then they connect it. That's it."

They do that in elementary school. I guess they missed the best-fit line part. It is not hide and go seek the dots.

"As an example, I recall in physics we experimented with multiple ways of calculating the force of gravity. One of them, measuring speed of a weighted cart attached to a weight on a pulley, had amazingly consistent results, p-value <.01, but the calculated value was way off. Other methods had a higher p-value, but the results were much closer to reality."

It is like type I and type II errors, or false positive and false negative. They are inversely related... but still just a GUESS. And can even be manipulated by increasing the sample size n.
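The sample-size point is worth making concrete: holding a tiny effect fixed and growing n drives the p-value as low as you like. An idealized sketch (it treats the observed mean as exactly equal to the true effect, an assumption made purely to isolate the role of n):

```python
import math

def one_sample_p_idealized(effect, sd, n):
    """Two-sided z-test p-value, pretending the observed mean equals the
    true effect exactly (an idealization to isolate the effect of n)."""
    z = effect / (sd / math.sqrt(n))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# A practically meaningless effect of 0.01 standard deviations:
for n in (100, 10_000, 1_000_000):
    print(f"n = {n:>9}: p = {one_sample_p_idealized(0.01, 1.0, n):.3g}")
```

With a million samples even a 0.01-SD effect is wildly "significant", which is why a p-value alone says nothing about whether an effect is big enough to matter.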

"What's the statistical P value in the ongoing banker suicides?"

Well, that depends. Are the x(1), x(2), ..., x(n) random variables dependent or independent, for some random x(i) banker?

"0.05 and 0.01 are the enemy of much progress. Many times while reading JAMA and other medical journals, I've seen results that would be of great benefit essentially written off because they didn't reach that magical number of statistical significance."

Not to mention the 0.01 and 0.05 are just pulled out of someone's ass. They merely correspond to 99/98% and 95/90% confidence intervals. Again, another GUESS.

"After learning Bayesian methods, I got quite confused as to why anyone would stick with frequentist methods. Bayesian methods have far better interpretability, because everything is based on the "amount of knowledge" ideas of probability, rather than the "if we tried this experiment 100 times, 80 times we would see the mean fall within this confidence interval" junk."

Probably because selecting proper priors in Bayesian analysis is labor-intensive, delicate, still an active area of research and large portions were not developed until the 1980s.

Who's to say a third statistical school might not come about eventually...

"There was an outspoken physicist (you may have heard of him) by the name of E.T. Jaynes who wrote extensively about the errors of the frequentists. He gave concrete examples of cases where basic statistical methods in common use weren't just misleading, but gave results that were simply wrong. He didn't put a lot of effort into hiding his contempt for certain individuals. Makes for fun reading."

He was also an ass clown with a hideous flat-top haircut like H. R. Haldeman, Nixon's White House Chief of Staff. And:

"Richard T. Cox showed that Bayesian updating follows from several axioms, including two functional equations and a controversial hypothesis of differentiability. It is known that Cox's 1961 development (mainly copied by Jaynes) is non-rigorous, and in fact a counterexample has been found by Halpern."

Naughty!

Thanks Hoots and Doc. Idle:

I googled up Bayesian methods and it did not take me long to realize why, though I had significant interest in science, I was never able to handle the math. Gods. You kidded not, Idle, when you said stats were a developing science. I usually manage to stand in awe of real, honest-to-gosh, competent mathematicians. Not that I can understand much of what they say after about the 4th. derivative, and the little squiggle that looks like 90 degree out of phase sine wave on the equations.

But, these are the sort of discussions that really make this blog a must visit. Daily.

You will know the value of P when the rest area you expect is closed. You won't void no matter how much you wish to. Or regret it if you do.

Or as Barbie said when you pulled her string, "Math class is hard".

It might take masculine fortitude to ascend the mountains of pi.

"Not that I can understand much of what they say after about the 4th. derivative"

Just apply the differential operator four times...

Or go up to d^n y/dx^n = (d/dx)^(n-1) * (dy/dx) for the nth derivative.

"and the little squiggle that looks"

A tilde? ~

"like 90 degree out of phase sine wave on the equations."

So a cosine wave?

"Probably because selecting proper priors in Bayesian analysis is labor-intensive, delicate, still an active area of research and large portions were not developed until the 1980s."

I should rephrase, because I agree with that. Look at me updating my posterior! (There's a plastic surgery joke in there somewhere.) Anyway.

In my view, it's the structure and meaning of the Bayesian way that should be highly motivating to push through those issues. I also don't mind the whole prior thing. I figure if your assumptions are clearly stated, and you can back up the prior with some reasonable knowledge/hypotheses, then the rest flows from there and we can talk about the results.

That's all based on my perspective as an engineer, though, where I'll take ease of interpretation over most other things.

To the Bayes fans on here: I use Bayesian methods regularly in my work. I also use frequentist methods. My graduate adviser summed it up this way: Bayesians answer the right question the wrong way; frequentists answer the wrong question the right way. All you have to do is take the red (blue?) pill (i.e., go Bayesian) and all your problems go away. All. It's the perfect solution to all statistical issues. Now cough up that red (blue?) pill and return to reality, because the friggin' "subject matter experts" who provided your prior were completely off base. Furthermore, your model is also wrong. Bad prior + bad model = bad results.

Sorry Idle: I was talking straight out my butt on that. Before I die I would love to understand enough calculus to actually solve problems like that. A simple one anyway. But it's really fascinating to read about. And, I agree with the general consensus here that good Engineers are a big part of what makes the world work.

A Bayesian is forced to state his prior. This is a good thing. Not everyone has the same prior, and that's fine. It's far more useful than a concept like "null hypothesis." Have you ever looked at the priors implied by popular statistical tests? I have. They are ugly as sin, and couldn't possibly be justified under any but the most manufactured conditions. But nobody knows this because statisticians don't share their priors.

No need to go past the 4th derivative. Unless you still believe in interdimensional coexistence or something, or if you think physics is actually onto something (I don't). Even then, I think trig or geometry would better suit your fiction. Further, as poorly as math comes out without massive resources thrown at it in time and effort, I doubt if, even with need, that the right solutions past the fourth would be readily obtainable.

I'm just suggesting that being in awe of a mathematician is like being in awe of a typical guy with too much time on his hands, and, really, who ends up using trial and error a lot more than actual brain power to get the formulas to mean anything in realtime. Talk to a sniper if you want to meet a practical mathematician whom you should appreciate.

Doom -- I've known a few snipers and I was surprised at how well they mix what you probably call instinct or guts with mathematics and physics. My second-hand experience is that it is an oddball mix of sacred myth and hard-nosed calculation. The spotter I knew was particularly good at mixing them into a seamless soup that ended up with a direct hit more often than statistically predicted.

" . . .a movement to make evidence-based decision-making as rigorous and objective as possible."

Well hey, I've got an idea. Why don't we do an experiment.

I know it sounds crazy, but it just might work.

My only exposure to the scientific community is through StackExchange, so I admittedly don't have solid grounds for saying so, but it seems to me the modern scientific community has devolved into a giant circle jerk. You can't say anything unless you can quote ten other guys who said the same thing. New ideas are shot down because you can't cite any references to people who agree with you. I'm not talking about citing data, I'm talking about citing conclusions.

For instance, I made the simple comment that Infinity isn't a number and therefore shouldn't be treated as such. It seemed a pretty noncontroversial thing to say, but because Infinity is regarded as a number by certain branches of mathematics*, I was somehow speaking Martian.

It's like I'm not allowed to air any new ideas at all. It has to be part of the consensus or you get a downvote. And you only need three of those to get your post or answer deleted.

Again, it's not the most rigorous platform ever, but still. I don't think it's much better anywhere else.

*Infinity is regarded as the last number in the set {1,2,3...}, which is a logical contradiction because the moment you make it a number, it becomes finite.

The series is unbounded at the high end. Infinity is not regarded as the last number of the set, because the set is defined as having no last number. Hence no contradiction.

Where mathematics treats infinity as if it were a number, it is actually performing operations on the set. A number is an entity, a set is an entity and may be operated on as such, but being an entity does not imply being a number.

You are correct. Infinity is not a number, it is an abstract concept about sets of numbers. But hey, I'm just an erstwhile physicist, so what do I know?

Nate,

Because of the situation with my unborn child and wife (her water broke way too early), I have had some rather frank and interesting talks with my bride's OBGYN. The first week she pulled me aside and said "90% of the women who have PPROM lose the baby within 48 hours". OK, easy stat. But when I asked "Why did the membranes tear for most of those cases? Is that the case here?" she didn't know.

She is a great doc, and has done something most won't, which is push for us to preserve the life of our baby (the majority would just look at the stats and push for an abortion). But looking at the stats doesn't answer the question "Why?" in this case. The question is not even being asked, just the numbers.

dh,

I also know a few long range shooters. I play at it some, but while I can do the math, I don't have the "feel". There is an old guy at the range who just has it. I have seen him do things with a rifle that are just amazing, and while he knows his ballistics very well, he has the gift of being able to read the wind like no other person I have met.

And that includes some who make long range shots in interesting places for a living.

Here's some abuse of statistics

http://market-ticker.org/akcs-www?post=228432

The series is unbounded at the high end. Infinity is not regarded as the last number of the set, because the set is defined as having no last number. Hence no contradiction.

Where mathematics treats infinity as if it were a number it is actually performing operations on the set. A number is an entity, a set is an entity and may be operated on as such, but being an entity does not imply being a number.

You are correct. Infinity is not a number, it is an abstract concept about sets of numbers. But hey, I'm just an erstwhile physicist, so what do I know?

I wasn't claiming to be the ultimate authority, you know. I was simply saying that I found no objections to what I said save for incredulity over my lack of citations. The definition of infinity also isn't mine. I would have said that it is the number of members in the set or something. Thanks for the lesson though. It's appreciated. It's also what I was expecting from a forum dedicated to such things.

VD , please check out wmbriggs.com for wonderful discussions on philosophy and statistics.

" . . .my lack of citations."

The more difficult concept is fully bounded infinities. Say the set of real numbers from 1 to 2, inclusive. Resolving this, more or less, gave us the calculus.

Thus the citations are to be found in the first chapter of any decent freshman calculus textbook. It is the foundation of the Theory of Limits.

Which gave us the resolution to various of Xeno's Paradoxes.

"In my view, it's the structure and meaning of the Bayesian way that should be highly motivating to push through those issues. I also don't mind the whole prior thing. I figure if your assumptions are clearly stated, and you can back up the prior with some reasonable knowledge/hypotheses, then the rest flows from there and we can talk about the results."

Unless you want something brand-brand new. Sometimes you just have to whip your dick out and spin the fucking wheel.

"And, I agree with the general consensus here that good Engineers are a big part of what makes the world work."

Yes, but you have to be careful here. The criticisms that engineers are lobbing at scientists are the same ones that get lobbed at engineers by technicians, machinists, and craftsmen.

"METAL DOES NOT BEND THAT WAY! I don't care what your engineering plans say."

"Well, that design is fucking intuitive..."

"No need to go past the 4th derivative."

Have you heard of a Taylor Series? Taylor has. And so has every freshman engineering student.

"Unless you still believe in interdimensional coexistence or something, or if you think physics is actually onto something (I don't). Even then, I think trig or geometry would better suit your fiction."

This is assuming the derivatives are only taken with respect to spatial dimensions, like x, y, z. They can easily be taken with respect to time t, like simple movement:

1st - Velocity dr/dt

2nd - Acceleration d^2r/dt^2

3rd - Jerk d^3r/dt^3

4th - Jounce d^4r/dt^4

5th - Snap d^5r/dt^5

6th - Crackle d^6r/dt^6

7th - Pop d^7r/dt^7

These are used all the time, hence the actual names. Unless, of course, you think cars changing acceleration on the road is fiction.

"Further, as poorly as math comes out without massive resources thrown at it in time and effort, I doubt if, even with need, that the right solutions past the fourth would be readily obtainable."

You can literally do this stuff by hand without a calculator if the function is simple.
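Since the named derivatives above are just repeated applications of d/dt, they can be checked mechanically. A toy sketch that differentiates a polynomial position function term by term (the example function is mine, not from the thread):

```python
def differentiate(coeffs):
    """Differentiate a polynomial given as coefficients
    [c0, c1, c2, ...] meaning c0 + c1*t + c2*t**2 + ..."""
    return [i * c for i, c in enumerate(coeffs)][1:]

# Position r(t) = t**4, so every derivative in the chain is exact:
r = [0, 0, 0, 0, 1]
d = r
for name in ("velocity", "acceleration", "jerk", "jounce/snap"):
    d = differentiate(d)
    print(name, d)
```

Each pass drops the degree by one and multiplies by the old exponent, which is exactly the "do it by hand" power rule.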

"I'm just suggesting that being in awe of a mathematician is like being in awe of a typical guy with too much time on his hands, and, really, who ends up using trial and error a lot more than actual brain power to get the formulas to mean anything in realtime."

I think you are mistaking a mathematician for someone who counts on his hands.

"Talk to a sniper if you want to meet a practical mathematician whom you should appreciate."

You mean like all the mathematicians that used to calculate where the cannon ball should land? Projectile motion and gravity, bitch.

Or shockwaves. Or munitions. Or the Manhattan Project.

"I would have said that it is the number of members in the set or something."

That concept is called cardinality. It is related to ordinality.

"The more difficult concept is fully bounded infinities. Say the set of real numbers from 1 to 2, inclusive. Resolving this, more or less, gave us the calculus."

There are also countable and uncountable infinities. Yes, there are different sizes of infinity.

"Which gave us the resolution to various of Xeno's Paradoxes."

Zeno of Elea.

"5th - Snap
6th - Crackle
7th - Pop"

I'm still annoyed we didn't name quarks chocolate, vanilla and strawberry. The whole "flavors" thing doesn't make sense without that.

"Zeno of Elea."

You've never heard of the great philosopher, Xeno of New Zelea?

My typing fingers are going to be taken out and shot, then beaten until their morale improves.
