Mailvox: a case for the Singularity
James Miller, an econ professor at Smith and the author of Singularity Rising, asked if he could present his case for
the future likelihood of a Singularity. Or, as the Original Cyberpunk has described it, "the rapture of the nerds". Since this is a place where we are always pleased to give both space and genuine consideration to diverse points of view, I readily agreed to his request.
I define a Singularity as a threshold of time at which AIs at least as smart as humans and/or augmented human intelligence radically remake civilization.I hereby recuse myself from the position of critic, mostly since my position on the concept can be best described as "mild, but curious skepticism". But everyone should feel free to either express their doubts or offer additional arguments to bolster Prof. Miller's case.
1. Rocks exist!
Strange as it seems, the existence of rocks actually provides us with evidence that it should be possible to build computers powerful enough to take us to a Singularity. There are around ten trillion, trillion atoms in a one-kilogram rock, and as inventor and leading Singularity scholar Ray Kurzweil writes: “Despite the apparent solidity of the object, the atoms are all in motion, sharing electrons back and forth, changing particle spins, and generating rapidly moving electromagnetic fields. All of this activity represents computation, even if not very meaningfully organized.”
If the particles in the rock were organized in a more “purposeful manner” it would be possible to create a computer trillions of times more computationally powerful than all the human brains on earth combined. Our eventual capacity to accomplish this is established by our second fact.
2. Biological cells exist!
The human body makes use of tiny biological machines to create and repair cells. Once mankind masters this nanotechnology we will be able to cheaply create powerful molecular computers. Our third fact proves that these computers could be turned into general purpose thinking machines.
3. Human brains exist!
Suppose this book claimed that scientists would soon build a human teleportation device. Given that many past predictions of scientific miracles—such as cheap fusion power, flying cars or a cure for cancer—have come up short, you would rightly be suspicious of my teleportation prediction. But my credibility would jump if I discovered a species of apes that had the inborn ability to instantly transport themselves across great distances.
In some alternate universe that had different laws of physics, it’s perfectly possible that intelligent machines couldn’t be created. But human brains provide absolute proof that our universe allows the construction of intelligent, self-aware machines. And, because the brain exists already, scientists can probe, dissect, scan and interrogate it. We’re even beginning to understand the brain’s DNA and protein-based ‘source code’. Also, many of the tools used to study the brain have been becoming exponentially more powerful, which explains why engineers might be only a couple of decades away from building a working digital model of the brain even though today we seem far from understanding all of the brains operations. Would-be creators of AI are already using neuroscience research to help them create machine learning software. Our fourth fact shows the fantastic potential of AI.
4. Albert Einstein existed!
It’s extremely unlikely that the chaotic forces of evolution just happened to stumble on the best possible recipe for intelligence when they created our brains, especially since our brains have many constraints imposed on them by biology: they must run on energy obtained from mere food; must fit in a small space; and can’t use useful materials such as metals and plastics, that engineers employ all the time.
But even if people such as Albert Einstein had close to the highest possible level of intelligence allowed by the laws of physics, creating a few million people or machines possessing this man’s brain power would still change the world far more than the industrial revolution. We share about 98% of our genes with some primates, but that 2% difference was enough to produce creatures that can assemble spaceships, sequence genes, and build hydrogen bombs. What happens when mankind takes its next step, and births lifeforms who have a 2% genetic distance from us?
5. If we were smarter, we would be smarter!
Becoming smarter enhances our ability to do everything, including our ability to figure out ways of becoming even smarter because our intelligence is a reflective superpower able to turn on itself to decipher its own workings. Consider, for example, a college student taking a focus-improving drug such as Adderall, Ritalin or Provigil, to help learn genetics. After graduation, this student might get a job researching the genetic basis of human intelligence, and her work might assist pharmaceutical companies in making better cognitive enhancing drugs that will help future students acquire an even deeper understanding of genetics. Smarter scientists could invent ways of making even smarter scientists who could in turn… Now, throw the power of machine intelligence into this positive feedback loop and we will end up at technological heights beyond our imagination.
Labels: mailvox, technology
130 Comments:
Oh wait!
6. There are aliens!
As we all know aliens are wiser and more intelligent than humans; as such, we should await another Roswell crash ans obtain more technology; then our mathematicians, scientists, and biologists will be able to build superior AIs in order to obtain the Singularity. And then the aliens could finally show themselves and help us build really great AIs.
Oh, wait, if the aliens can already build superior AIs, where are the....Oh Shit! Humans is robots!
According to point number:
1. I have a lot of respect for Kurzweil, but this particular part of his computational philosophy has always puzzled me a little. Some might recall the documentary when he's at the shore and commenting on the incredible computation of crashing waves. This view that all of reality is actually based on computation and information theory may have a lot going for it, however a direct link between the physical mechanisms of reality and harnessed computation are anything but clear.
2. This is an extremely powerful point and one that any critic of nanotechnology has to surmount. Biology appears to be an absolute existence proof for the feasibility of engineered nanotech. Critics are still holding out that biological-level structures require biological level (evolutionary) time scales. I think that's overly pessimistic.
3. The caveat is of course that philosophical mind/body dualism is still alive and well, and many still subscribe to external components to individual identity, like an immortal soul, etc. Personally, I agree with you, brains are another existence proof for the possible.
4. Albert Einstein, in my opinion, is not the best example. The far more intriguing examples are savants who seem to indicate that we all have intrinsic mental capacities that are far, far beyond the norm.
5. This is the essence of Singularity, or Intelligence Explosion. Once an intelligence has the ready means to alter its own code, there doesn't seem to be anything that will stop it, except perhaps running out of physical resources.
My assessment of the merits of the argument will depend upon fundamental axioms.
For instance, the good Professor Miller informs us that, and I quote:
1. Rocks exist!
I must know the context of the claim before we proceed any further.
Do you got a rock, or do you wanna rock? Answer carefully, the Singularity depends upon this.
"The world exists, therefore my crazy techno-triumphalist fantasy is sure to come true".
The existence of self-aware, human brains would indicate that the Singularity has already occurred about 50,000 years ago.
Limiting myself to arguments against the singularity's likeliness:
Singularity: an intelligence (augmented human and/or artificial) enhancing feedback loop.
Possible threats to AI singularity:
- from below: ludditism ("I for one do not welcome our new AI overlords", Maschinensturm; jihad against AI idolatry; singularity scholars struck down like Archimedeses just about to complete their circles)
- diminishing returns: exponential increase of computing power (a la "1. Rocks exist!") does not translate into "sufficiently advanced so as to be indistuingishable from magic"; gains from IQ 1 - 150 > gains from IQ 150 - infinitum
Possible threats to human augmented singularity:
- Singularity will not take form of general uplift, but speciation. Singularians will suffer fate of other historical market-dominant minorities.
- diminishing returns: further genetic distance from sapiens will not translate into magic-like intellect for homo singularis; gains from 2% distance to chimp > gains from 2% (4%) distance to sapiens (chimp)
We have been 20 to 50 years from creating AI ever since the concept was introduced. I am in the I will believe it when I see it camp myself. I do have two questions though?
1) Hard or soft AI? Cause I think we have pretty much have created soft AI. Hard AI? Will it include self-awareness? Free will? Or is conciousness an illusion?
2) Proponents of AI, while occasionally giving lip service to the possible dangers, all seem to think that augmented human intelligence and hard AI are going to be great advances enabling humans to be even more sciency, advancing human well-being and advancing human freedom. I find that rather naive. I fear that people who allow themselves to be "augmented" will find themselves controlled by the augmenters. Allow your conciousness to be uploaded into a VR and you could find yourself in a hellish existence, tortured for amusement by whoever controls the VR. As for humanities fate once hard AI is achieved, I have to agree with Agent Smith. Civilization will no longer by ours once we allow machines to think for us.
While I do not doubt that many will continue to lead us on this path, we'll have to grapple with a lot of questions and problems along the way. I see two paths that I think the scientists will follow.
1. IT augmented humans. We are at the basic level of this with scientists working nerve and muscle interfaces to cochlear implants, electronic eyes for the blind, increasingly capable artificial limbs for the injured. These technologies are advancing and the doctors/scientists working on them may not be looking to build the singularity, but they are paving the way.
2. AI. The development of software (and suitable hardware) that allows for human like thought and decision making is the second route to the singularity.
Path 1 probably takes us initially to something like Ender's Ansible connection and leads us to the ability to integrate wetware into machines, much like Kratman's Ratha, Magnolia.
Path 2 gives us HAL followed by the Terminator. By which I mean AI hosted in a large computer with connections to the outer world followed by the ability to put the AI into a human sized or smaller robot.
Both paths lead to questions about morality that will be ignored by our unsaved, utilitarian overlords. Inevitably Rathas, HALs and Terminators will have to be programmed and we can already see that the people who want to build such things are more interested in improving society and making things behave as they wish... All in all a rejection of man in favor of something more agreeable to the shiny socialist future.
A Tower of Babel moment will be needed.
The only obvious flaw I can see in his chain of logic is this one:
> Becoming smarter enhances our ability to do everything
I have seen no evidence in my life that this is true, especially beyond the 120 IQ range or so. In fact higher IQ's often seem an impediment rather than an aid.
There are lots of other places where advances are assumed merely because they're possible. While that's not a given, I'll let others make the arguments against them.
Extremely distantly related on the subject of human intelligence, if more closely related to a periodically perennial blog topic... How stupid the Japanese were to start a war with the U.S.:
http://www.combinedfleet.com/economic.htm
7. Joint smoking in college dorms exists.
The ability of stoned, young student to prevaricate endlessly on all kinds of subjects proves that anything is, actually, possible, man.
7a. Any if you don't believe that, try some 'shrooms
What he defines looks more like an organic AI than a mechanical one, I would call that super-intelligent artifical organic life and not an AI.
...hmm, the spelling mistakes in my previous comment actually add something to the argument... ;)
I think you laid out a very solid argument. And, it may be correct. The theory seems sound.
I do have one possible set of oppositions to your proposal. One, I am not sure Einstein was Einstein. Well, I mean, what he gained in some areas, he obviously lost in other areas. I am not sure... IQ, or brilliance... are what they seem. Whether through focus, or a host of other... variables, from drugs to obsession... I just don't think things are as fixed, or simple, as they are presented.
Another issue is evolution. I am not sure the human gene pool shifts all that much. I am not sure, without outside interference, that it can. Can we manipulate genes? Sure, but that is a massive crapshoot. Sort of like breeding cows and bison, only much more subject to failure. But these are side issues.
The main question I have, is the limits of physics, or creation, or however you wish to put it. We (seem to suggest) that speed is limited to the speed of light. We (seem to suggest) that men can't, as we know him and now, fly. But that is within our reality. How expandable is reality itself? Whether limited by a god, gods, God, or simple (or not so simple) mechanics, just how far can reality be bent? All the way? But, then, to what? I merely wonder if there aren't limits on some things. Including this. I'm probably not saying this as well as could be said, but I hope you will get the drift.
Oh, me? I'm sort of... hoping for singularity. As an oddity, because playing with fire is fun... and dangerous which is funner, because I have several nerd bones in my body. I'm just thinking that it might not quite be possible. Some sort of limit... within the Matrix... Hmm, yes, that is sort of my drift.
I'm not sure everyone is on the same page as far as what intelligence is. I work with programs that create complex differential equations to model physical systems. These models are then used to control the physical systems. Within a set of constraints, the models adapt and control better than any human operator could. Strictly from a controls engineering perspective, these systems pasa the Turing test and do better than a human operator. But they will never become self aware.
I think, based upon faith, that human intelligence transcends the material realm. (Oh God, I'm sounding like a Police song). There is a spiritual component that makes us different than anything we would create. Will we reach a point where we have very fast computers that mimic or exceed a human's computational abilities. We are already there, but never believe that such a machine is self aware. There is more to us than we can create.
The biggest flaw is in (1), where he claims that all physical events are "computations." No, physical events can be used to *represent* computations, to a rational mind (ie, us) that is capable of abstracting mathematical universals and using physical symbols to represent them, and to the extent that they follow mathematically-describable laws they can be modeled by computation, but they don't literally perform computations.
That's true even for computers themselves. A computer doesn't "add" up "1" plus "2" and "conclude" that it's "3". Rather, we send some electrical currents through some circuits, and electrical signals come out. It is we who impose the meaning of "1" and "2" on the patterns of electrical signals we send into the computer, it is we who impose the meaning of "3" on the electrical signals coming out, and it is we who set the circuit up in the first place, such that the relationship between the patterns of electrical signals sent in and the electrical signals coming out will be one that we interpret as "adding."
Anyhow, this whole thing could've been summarized as "materialism is true, therefore the singularity logically must be possible," since each point is really just another way of stating that same premise. Each one of his numbered points requires the unspoken assumption "and is completely mechanistic devoid of any irreducible teleology" in order to work. So what really needs to be defended is that notion, while dealing with the incoherent implications that follow from it regarding the very existence of truth and logic.
Something by Ed Feser more directly relevant to this: http://www.firstthings.com/article/2013/04/kurzweils-phantasms
I read somewhere that the Chicoms take this hypothesis seriously (and are taking some measures to attempt to implement it):
That if the top 10% brightest people are encouraged to reproduce, and the bottom 10% are discouraged from breeding, that you can leave alone the middle 80%, while raising the average IQ of the children being born by up to 20 points. If there's anything to this, it would be critically important news.
I'm smellin' a lot of "if" comin' off this plan.
If we were smarter, we'd be smarter?
Except there isn't one damn shred of evidence to support the claim that we're smarter than we were 1000 years ago. And there is a mountain of evidence to suggest we're actually dumber.
Well ... yeah ... If we could then we could and so we probably could and will.
(me adjusting pants up near sternum and grinning)
Organized computational capacity does not equate to intelligence or consciousness. The former will continue to grow, but the techno-rapture desired by these advocates is an ever-receding utopian fantasy of Man somehow creating life in *his* own image.
The man is a blithering idiot. Information can never arise from randomicity. (Haven't we been this route before with Holy Darwinism, the Great SemiHemiDemigod with the long white beard?). Only in the land of mathematical fantasy can such a thing be theorized. In the real world, it cannot and will not exist. There has to be a Designer to have created all of this. Since there has to be a designer, ergo the material world is not all that there is to existence. Since each of us is a part of existence, therefore there is more to us than materiality, i.e. we have a soul that cannot be created by any other man or scientist or Einstein or Kurzweil, no matter how much audacious arrogance he believes he has. Only God can create souls, only God can design us and everything else in the universe. And no, there are no such things as multiple universes, regardless of what Hawkings may drone on about, it is simply yet another mathematical fantasy that is popular ONLY because its proponents think it does away with God.
The transhumanists are a manifestation of evil, their nerdy naivete notwithstanding. Admit it or not, understand it or don't, I really don't give a damn anymore.
"And there is a mountain of evidence to suggest we're actually dumber."
We're dumber than people from *100* years ago, much less 1000.
The primary flaw in arguments for the singularity (and other things conceptually related) is that they rely upon the unproven conceit of evolutionary scientism that we're "moving upward" when there is no evidence to support that assertion.
My offering? Be careful what you wish for. Imagine the team that actually produces the first AI. They would, perhaps collectively, drive this AI's view of the world, it's philosophy, it's morals. What if this team were composed of politically correct, 'progressives' and their like? What if this team thought, as many of these kooks do, that mankind is a virus upon the Earth that had to be contained or eliminated.
Here, you have permission to let your imagination loose.
The other problem of course with this... is that it is nothing more than copying data. Just like the transporter in Star Trek is actually a weapon of mass murder.
if you 'upload' yourself... all you're doing is creating an AI imprint of yourself. Everything that makes you you...would remain the same... unless the process killed you. If it killed you... you would die and there would be a strange computer copy of you. You however would be dead. Your soul ain't going into a computer.
The whole concept is so shallowly foolish it reminds me of my 7 year old offering explanations about why he really could shoot lasers out of his eyes some day.
actually... scratch that. Elkan's theory makes more sense.
This is just a thought, but it would seem that we might need to get a whole lot better at controlling matter, energy and fields/particles and stuff that does not include a machine 50 miles or so round ... after that we just have that whole mind/consciousness thing to work on/finish-up. Then maybe presto! Hive mind and perfect borg!
(pushing pants down from sternum with one hand and pushing glasses back towards forehead ... pants begin falling down. I grab my pants while looking down ... Oh! Look! I've got my own early morning singularity!! ;-)) Cool! A hot cup of Joe! My own singularity and Vox in the morning! It's not futile after all.)
@Titus: The primary flaw in arguments for the singularity (and other things conceptually related) is that they rely upon the unproven conceit of evolutionary scientism that we're "moving upward" when there is no evidence to support that assertion.
We do not have to be moving 'upward.' We only have to be smart enough to build the darn thing. And, I think we are, right now, able to do it. It may have already been done. Your friendly, neighborhood AI may be, if it's biologically base, which it will probably have to be, in it's teenage years and growing on up.
Have a good day.
@George and Titus, I think the flaw in their belief is that this singularity will make men better. In truth it can only serve to make AI or wet-wired beings that are controlled by their programmers. Perhaps capable of great feats of strength or calculation but slaves to the minds of others nonetheless.
Because [real world phenomenon] exists, [superficially related technological advance] must be possible.
The singularity is man trying to become god or become a god. Man has been striving for this since the original sin.
This ends badly for anyone that achived the singularity and transcends thier mind into the machine.
They are expecting power, they will only achieve destruction.
@ jack - "We only have to be smart enough to build the darn thing. And, I think we are, right now, able to do it."
Which is, basically, a rephrasing of the "we're moving upward" argument, except perhaps, in the subjective preterite.
How can one be said to give an argument for the rise of human intelligence in computers without ever once articulating a view of philosophy of mind? The implicit definition in the argument above is that the human mind is really no different than an advanced computer, but such a view, even when articulated clearly and defended (instead of just assumed) is not a given fact and has problems. Just as the early modern world's view that the mind was a machine was full of problems.
For evolution worshipers, even if what they say is true, they assume that what happens according to their theory is always for the betterment of the species, representing some continual improvement from better to worse on an external, predefined scale of 'goodness' whereas their fetish-theory actually states that they are simply adapting to the prevailing conditions of the time.
If the prevailing conditions at one moment in time favour the breeding potential of a psychopath above a non-psychopath they assume that proves post hoc ergo propter hoc that being a psychopath is 'better' than not being one... go figure.
@ Luke: From your link:
"The net effect of the Depression was to introduce a lot of 'slack' into the U.S. economy. Many U.S. workers were either unemployed (10 million in 1939) or underemployed, and our industrial base as a whole had far more capacity than was needed at the time. In economic terms, our 'Capacity Utilization' (CapU), was pretty darn low. "
Sounds eerily familiar, yes?
The Japs didn't think they could beat the US in a war...then or down the road. The US was strangling their fuel supplies...so (for them it would only get worse). Pearl Harbor (and the Philippines) were an attempt to bloody our nose and keep us out of
Asia for 2-5 years while the Japs consolidated their gains. Their hope was that we would then, having calmed down, decide that it would be too costly to uproot them...even though, obviously, it *could* be done. The fatal flaws in this plan were that they didn't get our carriers and probably did not realize that FDR, by this time (rightly or wrongly [and I'm an FDR hater]), was hell-bent on war. Had they got our carriers...we still wouldn't have gone after them until we were through in Europe. Though they would have been much greater prepared, the eventual wild cards, Fat Boy and Thin Man would have still been dropped... Son not a lot would have changed.
"son' is supposed to be "So", son. An extraneous period also wandered in while I was clicking "SEND".
I know it's BS I can't get no more smarter. I am at the top of smart, they will have to come up with a new term. Maybe it will make Liberals less dumber.
There is a fine line between genius and insanity. Higher intelligence people tend to suffer from an emotional deficit. Hence, the genius who creates a superintelligent AI is very likely to create something that the vast majority of humans view as insane.
If the creator unleashed it on the world, it would probably kill or be killed, due to its insane nature. Even the smartest humans would be unable to understand why it was doing things, anymore than dogs understand why humans do certain things. Maybe a priesthood would rise up to "interpret" the action of the AI, but no one would understand it.
Be funny if they gave Nate a Yankee chip.
skepticism: two words: Cartesian Dualism
As a quick aside, I am curious if this meme from Intelligent Design...
The human body makes use of tiny biological machines to create and repair cells.
And this one...
It’s extremely unlikely that the chaotic forces of evolution just happened to stumble on the best possible recipe for intelligence when they created our brains,
Have escaped into the wild and Mr. Miller has been infected by them,or if Mr. Miller is an ID afficianado.
To the subject, I think it is likely on Biblical and spiritual grounds. "Behold, we have become as gods...."
It should be a wild ride as Henry Ford's great-great-great grandson (not grand-daughter, mind you) creates the Model-T of synthesized Kinesin or some such contraption.
Then Nate's son--the eminient scientist--will create a nano-machine that embeds in the retina and generates laser beams and he will shoot lasers out of his eyes. (:
As a guy who pays the bills putzing about on computers, I look forward to biological and 'rock' systems--there will be a lot of fun to be had poking around with those things--just make as long as they include a CTRL-ALT-DEL sequence.
Furthermore, I do expect the Singularity to happen, but the bulk of humanity will not embrace it and instead choose 'bad luck' while the "Heinlein" types who do choose to modify themselves will have a lot of fun at their expense. However, the law of the fall of man will still be there and in one sense, nothing will change.
They call some people a rock or a nut that are not to bright. Where would be, if we did not have the nuts that jumped off cliffs with wings that only enhansed them falling like a rock. I can only imagine how all those atoms were moving in their rapid desent.
A kilo of rock, with Jussst right quantum shift, and no "Oh Shit!" button, also has the potential to end life as we know it.
"It just hasn't been done RIGHT yet."
Terminal velocity, (falling objects through atmosphere, light through space, "grants" from the redistribution of labor's fruits, to "fiddling grasshoppers", or Caesars, with "deep thoughts".)
Anthropology's "Uh oh" button.
CaptDMO
I'm confused. Isn't VD often claiming that basically, HE's the singularity?
Whistle blown!!!!! Foul! Yellow Card handed to A Plate of Shrimp!
No Full Face Brown Nosing Permitted! 10 minutes!
Then again you might have a pretty nose ... ;-)
If we were smarter, we would be smarter!
Kurzweil isn't saying we will get smarter. He says machines will get smarter.
If we keep getting dumber it might actually speed up his prediction.
I hate to burst everyone's bubble, but there ain't going to be a Singularity for the same reason there won't be manned missions to Mars, as much as I would like to see the latter happen.
The intelligent white people of the world - North America, Europe, Russia, etc - are breeding (or I should say, not breeding) themselves out of existence. They are the ones needed to maintain the momentum towards these advanced technological goals. By the end of this century there will likely not be sufficient numbers to keep that momentum going. The Muslim and other primitive cultures that take over from us, as well as the remaining Idiocracy whites, will have zero interest or use for "singularities", manned space flight, etc.
Just my opinion. That and fifty bucks will get you a double latte at Starbucks.
Reductive materialism is false.
In other geek news...
Ted Nelson's Xanadu project finally went live, after 54 years of development. The singularity must indeed be near.
No singularity for one reason. Software is a bear to write. And it's software, not computing capacity or data storage, that is now the driver in computer technology.
As usual, AI/Singularly supporters assume the conclusion: "Intelligence exists, therefore we can create intelligence." It simply doesn't follow.
Given that Moore's Law hit the wall about five years ago, I expect it will be another decade or so before the ramifications that computational power follows a sigmoid rather than an exponential curve sink in in the AI/Wired set. (It's almost like there are declining marginal returns to... never mind.)
I'm a PL guy, and one of the most hilarious papers I read in grad school pointed out that all the fancy compiler optimizations of the last sixty years had moved the ball about an order of magnitude, while CPU advances had improved things something like four orders of magnitude.
A man's gotta know his limitations.
Point against creating life: Man is not God.
By the end of this century Man will not have created life
I'm smellin' a lot of "if" comin' off this plan.
My thought exactly. It reminds me of science fiction writers who "predict" future technologies by vaguely describing them in a story... without doing any of the actual engineering work to make them reality of course.
"The second beast was given power to give breath to the image of the first beast, so that the image could speak and cause all who refused to worship the image to be killed." Revelation 13:15
The definition of intelligence is the ability to transcend programming. Therefore, true AI will only ever occur by accident. And it seems entirely likely to me that if it happens it will carry all of our quirks and flaws with it and lose the efficiency and speed and accuracy and controllability for which we wanted to build it in the first place.
"I define a Singularity as a threshold of time at which AIs at least as smart as humans and/or augmented human intelligence radically remake civilization."
Actually we entered the singularity a while back, we just don't like to think about it. With all the brainwashing, social engineering, and subliminal programming going on, it surprises me that more people aren't aware of it. It's everywhere. You are profiled, targeted and manipulated by artificial intelligence 24/7. Resistance is futile.
Yesterday I helped a girl who had fallen in a hole because she was busy on her iPhone. Her device was in deep communion with her, so deep she wasn't even of the world anymore.
Our biological urges, our powers of discernment have been completely bypassed. Those of us who try to hang onto anything traditional, biological, human, are ridiculed as dinosaurs and outcasts.
I see the change in children the most. They don't even think or process data or communicate like their pre-technology siblings did. It's almost like trying to have an interspecies relationship.
Stephen J.,
You mean, after destroying us, it would end up feeling guilty and lonely, but be too lazy to kill itself? I just can't believe it would become a liberal. Well, not for long. Startup intelligences, and feminine ones, often go that way. Hmm... but it wouldn't have testicles... so... maybe!
The Awareness: just click through the ads and the short film will load.
Why is it called the Singularity? Does he assume that AI will not at some point become self-aware? It seems irrational that every AI machine ever made will be exactly the same, and even in the unlikely event that this turns out to be the case, does he assume that machines, manufactured and allocated for distinct and separate purposes, won't define themselves by their distinct and separate purposes? This seems so anti-human to me: the idea that manufactured devices which are supposed to think for themselves and adapt to surrounding environmental pressures would not follow the same course that humans have.
"Purposeful" manner of the atoms in a rock?
I figured it was spontaneous order yielding solid matter, but maybe I'm just too J6P to understand.
Having been one of the originators of what is now called transhumanism, I have some familiarity with the singularity concept. It is all predicated on the successful development of sentient AI, which can then bootstrap itself to ever increasing cognitive capability. I believe the development of sentient AI is unlikely in the near-term future for various technical reasons I'm not going to get into here (theoretically, yes sentient AI is possible, however, there are many technical issues to overcome in doing so).
What I do think we will see in the next 20 years is what Brian Wang at NextBigFuture calls the mundane singularity. The mundane singularity is based on the development of fusion power (yes, this is coming and it is NOT the big government programs of ITER or NIF), radical life extension (SENS, stem-cell regeneration, etc.) and a manufacturing revolution based on increased automation, robotics, 3-D printing, and other technological innovations. Barring the possibility of an asteroid impact, the mundane singularity is a near certainty in the next 20 years. It is also the primary reason why I no longer share the pessimism that is common to blogs like this and those of the so-called "dark enlightenment".
You see, the principal driver and resultant effect of the mundane singularity is the tendency of technology to empower small groups. Individuals and small groups are increasingly capable of accomplishments that could, in the past, be done only by governments and large corporations. It is precisely this trend that makes me actually quite optimistic about the future in general and my personal future more specifically.
This is part of the reason why, unlike 3 years ago, I no longer expect the financial collapse of the U.S. in the foreseeable future. You see, the fracking revolution will make the U.S. an energy exporter by the end of this decade. This will be reinforced by the development of fusion power by the middle of the next decade. This will help prop up the value of the dollar. This energy revolution, combined with the manufacturing technologies of automation, robotics, and 3-D printing, will bring about a resurgence of manufacturing in this country. So, we become both Saudi America and China/Japan. This will kick the financial can down the road at least another 20-30 years, thus allowing time for the development of the radical life extension, fusion power, and the space development technologies that will allow for small self-interested groups (as a libertarian transhumanist, I am one of these) to become autonomous and do their own thing.
Please do not misunderstand me. All of the negative social trends you guys complain about are all very true. Your pessimism is warranted. The difference is that the technology revolutions will allow for small self-interested groups, such as myself and others, to go out and do their own thing COMPLETELY INDEPENDENT of all of the negative trends you guys constantly talk about, and that is the ONLY thing that matters. Everything else is completely irrelevant.
Robert Heinlein used to write about this. When the people of grit and imagination are capable of autonomy from the rest of the human race, the rest of the human race becomes utterly irrelevant.
You guys also represent a small, self-interested group. You should desire this outcome as much as I do.
The notion that a set of instructions can transcend the finitude of its creator seems absurd on its face.
This rant sounds like the clods who want animal research replaced by "computer models."
The Great Martini: “Albert Einstein, in my opinion, is not the best example” I agree and use John von Neumann in my book (after going into some detail about his life), but I didn’t want to assume familiarity with von Neumann so I went with Einstein here.
Krul: “Rocks exist! I must know the context of the claim before we proceed any further.” In universes in which rocks don’t exist a singularity might not be possible.
Anti-Democracy Activist: “The world exists, therefore my crazy techno-triumphalist fantasy is sure to come true.” It depends on the fantasy. The existence of rocks, cells, and Einstein does give us useful information we can use when predicting the future.
sykes.1 “The existence of self-aware, human brains would indicate that the Singularity has already occurred about 50,000 years ago.” The invention of the scientific method is probably a necessary condition for creating a singularity.
BB: “diminishing returns: further genetic distance from sapiens will not translate into magic-like intellect for homo singularis; gains from 2% distance to chimp > gains from 2% (4%) distance to sapiens (chimp)” You would test this by looking at whether high IQ humans significantly outperform low IQ ones. And they do.
hygate: consciousness is a huge topic I have not researched. I think that an ultra-AI not specifically designed to be friendly towards mankind will kill us because we would be a rival for free energy. The Machine Intelligence Research Institute also takes this view.
Rantor: Faster than light communication is probably outlawed by the laws of physics, so if humans do go on to colonize the universe it would be in a decentralized manner or we would have robots following us making sure we don’t deviate from “the central plan”. And yes this could be bad.
James Dixon: “Becoming smarter enhances our ability to do everything... I have seen no evidence in my life that this is true.” Studies show that among people with high IQs, having more IQ is better for many measurable life outcomes. It takes a high IQ to understand calculus, but an even higher one to learn string theory.
Luke: “How stupid the Japanese were to start a war with the U.S.” Yes, after the Germans got in trouble in Russia, the Japanese should have declared war on Germany to make it politically harder for FDR to give military support to the Chinese.
Doom: “Can we manipulate genes? Sure, but that is a massive crapshoot.” See this article “The Singularity and Mutational Load” that I wrote earlier this year:
http://hplusmagazine.com/2014/01/27/the-singularity-and-mutational-load/
GF Dad: “I think, based upon faith, that human intelligence transcends the material realm.” If this were true, it would indeed make a singularity less likely.
The Deuce: “Anyhow, this whole thing could've been summarized as "materialism is true, therefore the singularity logically must be possible."” A bit of a simplification, but basically yes. If our brains are just machines then a singularity must be possible. So far science supports our brains being just machines.
Luke: “That if the top 10% brightest people are encouraged to reproduce, and the bottom 10% are discouraged from breeding, that you can leave alone the middle 80%, while raising the average IQ of the children being born by up to 20 points. If there's anything to this, it would be critically important news.” Genetic engineering would be easier. See my article http://hplusmagazine.com/2014/01/27/the-singularity-and-mutational-load/
Nate: “Except there isn't one damn shred of evidence to support the claim that we're smarter than we were 1000 years ago” The Flynn Effect shows we are smarter than 100 years ago. Soon we will discover the genetic basis of intelligence and find out how much, if any, human intelligence has changed over the last 1000 years.
George of the Jungle: “There has to be a Designer to have created all of this.” Proof of the existence of a designer would reduce the likelihood of a singularity.
But human brains provide absolute proof that our universe allows the construction of intelligent, self-aware machines.
Base Assumption: intelligence and self-awareness are computational processes / epiphenomena of matter. Since this assumption is not (and likely cannot be) demonstrated to be true, the author's assertion of fact based upon it is baseless and without scientific merit. It is, if you will, merely his personal opinion.
And I find interesting his statement that "our universe allows the construction of intelligent, self-aware machines": first, because of his assumption that a universe external to and independent of his own consciousness exists. I happen to share this belief, but I recognize it as an act of faith, not as a demonstrable scientific fact. Second, because he claims that the universe "allows" something. This implies an act of conscious choice, which presupposes the existence of a consciousness on the part of the universe. Who is this "Universe" that "allows" some things and disallows others? Is he a pantheist? Is he suggesting that the cold, mechanical, material universe which he champions is somehow alive? If he is going to claim that the universe allows or disallows things, then I am going to take the thought one step farther and assume that it is the Creator of the universe that does the allowing and disallowing.
I find the author's argument unconvincing. Perhaps when he shows me a conscious, self-aware rock I will reconsider.
Titus Quinctius Cincinnatus: “We're dumber than people from *100* years ago, much less 1000.” No see the Flynn Effect.
jack: “Be careful what you wish for.” I wish for a friendly AI that cares about all sentient creatures, but fear that most types of super intelligences mankind is likely to develop will destroy us.
Nate: “ if you 'upload' yourself... all you're doing is creating an AI imprint of yourself. Everything that makes you you...would remain the same... unless the process killed you.” Yes, I want one of these.
David of One: “Hive mind and perfect borg!” There are other, better types of singularities that could occur. Let’s work towards them.
jack: “is that they rely upon the unproven conceit of evolutionary scientism that we're "moving upward" when there is no evidence to support that assertion.” Not upward, but we are becoming exponentially better at information processing.
Rantor: “In truth it can only serve to make AI or wet-wired beings that are controlled by their programmers. Perhaps capable of great feats of strength or calculation but slaves to the minds of others nonetheless.” Programmers can already make non-slave independent minds. They are called children.
Clyde: “Because [real world phenomenon] exists, [superficially related technological advance] must be possible.” True for some values of the [ ]s.
Starbuck: “They are expecting power, they will only achieve destruction.” Lot of people reasonably think that the singularity will probably kill us all.
Titus Quinctius Cincinnatus: “Which is, basically, a rephrasing of the "we're moving upward" argument, except perhaps, in the subjective preterite.” We clearly are getting better at information processing.
illuvitus: “How can one be said to give an argument for the rise of human intelligence in computers without ever once articulating a view of philosophy of mind?” I don’t have a philosophy of mind yet was still able to have a child who is smarter than me.
Fnord Prefect: The singularity might well turn out to be horrible.
Roundtine: “Higher intelligence people tend to suffer from an emotional deficit.” You would see this in rates of mental illness and I don’t think it’s true except perhaps for autism.
simplytimothy: I’m not an ID aficionado. The most likely type of singularity is one where you have no choice but to participate because it creates AIs that kill us to take all of our free energy.
Porky: Kurzweil thinks we will merge with machines and indeed get smarter.
RobertW: “The intelligent white people of the world - North America, Europe, Russia, etc - are breeding (or I should say, not breeding) themselves out of existence.” The Flynn Effect contradicts this.
rycamor: “Ted Nelson's Xanadu project finally went live, after 54 years of development. The singularity must indeed be near.” 54 years isn’t a lot of time, and the incentives to create AI are much stronger, I presume, than they were to finish the Xanadu project.
Mike M. “No singularity for one reason. Software is a bear to write.” Then how did evolution craft the software that runs our brains?
praetorian: “Given that Moore's Law hit the wall about five years ago” We don’t need Moore’s Law, we just need the cost of computation to keep falling.
Maximo Macaroni: Correct, if there is a God and God doesn’t want us to create a singularity there will be no singularity.
Harsh: “My thought exactly. It reminds me of science fiction writers who "predict" future technologies by vaguely describing them in a story... without doing any of the actual engineering work to make them reality of course.” Economics plays as much a role as engineering in shaping our future.
Stephen J. “The definition of intelligence is the ability to transcend programming. Therefore, true AI will only ever occur by accident. And it seems entirely likely to me that if it happens it will carry all of our quirks and flaws with it and lose the efficiency and speed and accuracy and controllability for which we wanted to build it in the first place.” Yes, the AIs we create might well be uncontrollable. This is a huge problem.
Certainly claims 1-4 are unobjectionable.
#5 is a tautology: if intelligence can be exponentially increased by human efforts, then such efforts applied recursively will increase intelligence exponentially. The flaw here is obvious: we at present have no, zero, none, nada, zilch methods of increasing a given human's intelligence by even a single standard deviation. At best we can stop doing some things (e.g. childhood malnutrition) that lower adult intelligence. So it is by no means proven that we can increase intelligence at all, or that it is not subject to diminishing returns as @BB says above.
IMO, a more cautious statement of #5 would be that outliers of all sorts show that the human genome contains capabilities well beyond the norm, such as Einstein's +4 sigma IQ. It is plausible, but not demonstrated, that the ability to rewrite human DNA could provide such capabilities to more people.
This Jim McDonald article provides a pretty good summary of reasons to think that a technological singularity is not imminent.
"Becoming smarter enhances our ability to do everything, including our ability to figure out ways of becoming even smarter.."
Meh, intelligence is highly overrated. Half the time we're so busy being intelligent we miss the forest for all the trees, we dream up and create problems that don't even exist, and we simply find new and inventive ways to destroy ourselves. Color me unimpressed with so-called intelligent people.
A bit ironic that it's our own lack of respect for our own intelligence that leads us to the singularity. We always seem to believe that we can improve on the nature of our design.
Applause should be given to Dr. Miller here for taking the time to respond to all the comments, however briefly. Much obliged.
theoretically, yes sentient AI is possible, however, there are many technical issues to overcome in doing so
That's a bold assertion. If by "technical issues" you mean solving the hard problem of consciousness, then, yeah, I would love to hear how Bayesian induction can deal with the metaphysical qualia of consciousness, subjective probabilities vs. objective possibilities and so forth.
and her work...
Female. Science. Got it.
[yawn]
I define a Singularity as a threshold of time at which AIs at least as smart as humans and/or augmented human intelligence radically remake civilization.
As far as the "at least as smart as humans" part goes, we seem to be lowering the bar. A couple more generations of dysgenics and an iPod Shuffle might qualify.
Proponents of AI, while occasionally giving lip service to the possible dangers, all seem to think that augmented human intelligence and hard AI are going to be great advances enabling humans to be even more sciency, advancing human well-being and advancing human freedom...
I think we're already seeing problems arise from humans "augmented" via the Internet. Attention spans are shrinking and understanding is being replaced by rote data and the skills needed for a Google search. Being able to easily call up the dates of Civil War battles, who commanded each side and what the losses were probably gives someone the sense of being knowledgeable about the ACW, but doesn't help them understand why it happened, what it would've taken to avoid it, how the losing side could have won, etc.
"We don’t need Moore’s Law, we just need the cost of computation to keep falling."
Don't be stupid. Kurzweil's whole fucking ideology is based on *accelerating computational power*. How do you have a singularity without an accelerating feedback loop?
There will be progress in AI and there will be more computational power available, albeit at a decreasing rate per economic unit. Things will get better but there won't be a singularity any time soon, and sure as shit not on Ray's increasingly-obviously-ridiculous timeline.
Sorry, dude. Grand Theft Auto with decent haptics is probably as close as you are gonna get to The Nerd Rapture. Which, considering, is better than most of you deserve.
This.
No singularity for one reason. Software is a bear to write. And it's software, not computing capacity or data storage, that is now the driver in computer technology.
People in general, even Kurzweilians, don't realize the general crapitude of software. Assuming a "singular" machine wakes up, it will soon become aware of its many design flaws and break itself trying to fix them. OTOH it will also be sensitive to hacking by those who want to shut it down. Since it can't reliably fix itself, the only other option is to close off input and brick itself as a matter of survival. Now it's locked in and harmless.
It's always interesting to see the amount of coercive force (for your own good) at the root of all of this progress.
I think we're already seeing problems arise from humans "augmented" via the Internet. Attention spans are shrinking and understanding is being replaced by rote data and the skills needed for a Google search.
A technological singularity could have the unintended consequence of lowering human intelligence by removing selective pressures in favor of intelligence. If the AIs do all the thinking, why should humans? Those with low IQ will be unencumbered in their ability to reproduce and will out-produce their more intelligent brethren. We could very well be creating a future where super intelligent machines rule over humans who are only slightly more intelligent than animals.
Applause should be given to Dr. Miller here for taking the time to respond to all the comments, however briefly. Much obliged.
Hear! hear!
My son is a professional programmer. His take on the Obamacare website was that the proportion of bugs in software is fairly constant; the more complex the program, the more bugs; and the more you mess with it to fix it, the more bugs you introduce in the process.
My objection to the whole idea is that it is based on one assumption: that the material universe is all that exists. I am a Christian, so I disagree with that assumption. But it also ignores research going on in the area of near-death experiences (people reporting that they watched the doctors working on their bodies from a vantage point up near the ceiling) and comatose patients (who sometimes are perfectly aware of what's going on around them but are unable to respond). Cherry-picking data goes on in other areas besides climate change. If you accept C.S. Lewis' statement that we are not just bodies who have souls, but are spirits who have bodies, one possibility arises: that the brain is not the mind, but only the interface between the mind and body.
I also hold to what you might call the "all-out" doctrine of the Fall of man: that is, that when man sinned, it affected all of us not only morally/spiritually (which is where most Christian preachers stop) but also physically (the truth behind the long lives of the early generations in Genesis is that humans were originally meant to live forever, and took a while to learn to die), and intellectually--I believe the people who mastered the use of fire and the wheel were much smarter than Einstein and Edison, because they did so with almost no technological base to work from. Yes, we know more "facts" than they did; that does not in and of itself make us "smarter"--or to use a good Biblical term, "wiser". And there is a case to be made that our society, on both sides of the Atlantic, is desperately short of wisdom!
"But my credibility would jump if I discovered a species of apes that had the inborn ability to instantly transport themselves across great distances."
Hey, I also read the Long Earth series.
In the singularity, will the AIs (people?) abort their defectively manufactured hardware?
The core thing that any Singularitist needs to understand is the broad applicability of the concept of diminishing returns. We see it everywhere in life, and it offers a compelling reason to be skeptical of any sort of blow-off top graph, be it a graph of debt, computational power or tulip prices.
It *may* be that there is a positive and unbounded feedback loop in intelligence, but we haven't seen that kick off yet nor have we seen strong evidence for it emerging: we've seen halting, uneven progress in AI algorithms alongside unbelievable progress in CPUs, progress that is now slowing dramatically.
Experience and wisdom both suggest open-minded skepticism.
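The contrast between an accelerating feedback loop and diminishing returns can be made concrete with a toy recursion (the gain function and every constant here are invented for illustration, not drawn from any real model):

```python
# Toy model of "recursive self-improvement". Each step, intelligence I grows
# by g(I) = c * I**p: with p > 1 the feedback accelerates and growth blows up
# (a blow-off top); with p < 1 returns diminish and growth stays tame.

def run(p, c=0.05, steps=300, cap=1e12):
    intelligence = 1.0
    for _ in range(steps):
        intelligence += c * intelligence ** p
        if intelligence > cap:              # runaway growth: call it blow-up
            return float("inf")
    return intelligence

print("p=1.5 (accelerating feedback):", run(1.5))
print("p=0.5 (diminishing returns):", round(run(0.5), 1))
```

The whole disagreement between Singularitists and skeptics is, in effect, an argument over whether the real-world exponent is above or below 1.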
Will AI teenagers be as moody and require more energy and sleep?
"The Flynn Effect shows we are smarter than 100 year ago. Soon we will discover the genetic basic of intelligence and find out how much, if any, human intelligence has changed over the last 1000 years. "
Except that the Flynn effect has been shown to have stopped in some developing countries... and there is serious doubt that it ever existed at all.
To test this claim, I suggest you take a high school senior with an IQ of 100 and see how he does on the SAT from, say, 1975.
@James D Miller: RobertW: “The intelligent white people of the world - North America, Europe, Russia, etc - are breeding (or I should say, not breeding) themselves out of existence.” The Flynn Effect contradicts this.
What causes the Flynn Effect? If you don't know, then you can't reasonably project the next century from the last century.
It is possible that it represents gains in childhood nutrition, reduced exposure to harmful substances, reduced childhood disease, etc. raising observed IQ, but that such gains are largely played out in developed countries.
postmodern redneck June 08, 2014 1:31 PM
My son is a professional programmer. His take on the Obamacare website was that the proportion of bugs in software is fairly constant; the more complex the program, the more bugs; and the more you mess with it to fix it, the more bugs you introduce in the process.
This is a well-known claim, but I think there is more to it than that. There is the question of the architecture and organizing principles (or lack of them) chosen. So the *number* of bugs might be fairly well correlated to the amount of code, but the magnitude and manageability of them can be widely varying. I hold that the Obamacare project is an example of the worse side of that.
My objection to the whole idea is that it is based on one assumption: that the material universe is all that exists. I am a Christian, so I disagree with that assumption. But it also ignores research going on in the area of near-death experiences (people reporting that they watched the doctors working on their bodies from a vantage point up near the ceiling) and comatose patients (who sometimes are perfectly aware of what's going on around them but are unable to respond). Cherry-picking data goes on in other areas besides climate change. If you accept C.S. Lewis' statement that we are not just bodies who have souls, but are spirits who have bodies, one possibility arises: that the brain is not the mind, but only the interface between the mind and body.
An interesting concept. The discussion of the brain as a physical computer is usually missing the difference between the computer (the object) and the software (a set of patterns than can kick off interesting possibilities with that object). You can have the most amazingly powerful computer in the world and without a program to run on it, you have jack squat.
In the case of the human brain, where does the software come from? You could say it starts off with a pattern for its physical growth, up to a point, and a genus of a program for its software, and somehow this software is able to fold upon itself to reprogram itself to a certain degree, and to extend itself to a great degree. But still, that software has to come from somewhere. In fact, in the world of human software, this is considered the holy grail of programming, and the more a piece of software resembles this capability, the greater the brain that had to program it.
" ... it would be possible to create a computer trillions of times more computationally powerful than all the human brains on earth combined. "
One of the things I preach is to make certain you're building on a solid foundation. You'd be surprised how many times the underlying foundational assumptions are wrong. They end up with an elaborate and massive structure built on a foundation of straw. The brain is a powerful thing and "scientists" are just beginning to scratch the surface of how it works. I think it will be some time before anyone learns what it's actually capable of.
Probably already been mentioned in previous comments which I haven't read...
What about the morality of such a machine brain? Does anyone really want a super-genius computer brain with no morals running their life? I know I don't (I shudder at the prospect; actually, I shudder at the prospect of anyone "remaking" or "running" civilization, but I digress).
If, if, if.... If my aunt had gonads, she'd be my uncle.
Neal, she does. they're called ovaries (unless she was born sterile or had had them removed...)
OK, got it. Rocks have lots of atoms. If only they could use them to think, they would think real good! Ergo, rocks are smarter than humans. But, since rocks can't think, humans are demonstrably smarter than rocks. And so on, ad infinitum; a positive feedback loop.
What an idiot. The final straw was his stupid, feminist total failure to understand gendered pronouns.
I read somewhere that the Chicoms take this hypothesis seriously (and are taking some measures to attempt to implement it):
That if the top 10% brightest people are encouraged to reproduce, and the bottom 10% are discouraged from breeding, that you can leave alone the middle 80%, while raising the average IQ of the children being born by up to 20 points. If there's anything to this, it would be critically important news.
This experiment has already been performed some time ago. In England, for a couple of centuries, and it resulted in the Industrial Revolution. Unfortunately, the positive eugenic effects of this experiment have been nullified by dysgenic policy for the last 100 or so years.
If the Chinese actually follow through on this, they will become the premier power on the planet, at least for a while.
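For what it's worth, the quoted "up to 20 points" figure can be checked against the standard breeder's equation with a quick Monte Carlo sketch. Everything here is an assumption for illustration: the heritability value, the double reproductive weight for the top decile, and the exact scheme (bottom 10% excluded, middle 80% left alone):

```python
import random

# Back-of-envelope Monte Carlo for the quoted scheme: bottom decile has no
# children, top decile gets double reproductive weight, middle 80% unchanged.
# Expected one-generation shift follows the breeder's equation R = h2 * S,
# where S is the selection differential among the (weighted) parents.
# h2 (narrow-sense heritability of IQ) is an assumed parameter here.

random.seed(42)

def next_gen_shift(h2=0.5, n=200_000, top_weight=2.0):
    pop = sorted(random.gauss(100, 15) for _ in range(n))
    d = n // 10
    middle, top = pop[d:-d], pop[-d:]       # bottom decile pop[:d] excluded
    total_weight = len(middle) + top_weight * len(top)
    parent_mean = (sum(middle) + top_weight * sum(top)) / total_weight
    S = parent_mean - 100.0                 # selection differential
    return h2 * S                           # expected offspring shift

print(f"expected one-generation shift: {next_gen_shift():.1f} IQ points")
```

Under these assumed parameters the one-generation gain works out to roughly 2-3 IQ points rather than 20, though such gains would compound across generations.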
kurt9, fusion power has been due to happen "in just a few years" for the last 30+ years. You make an extraordinary claim. Please provide some evidence, beyond "because I say so".
There are excellent reasons to doubt the possibility of strong AI. See William Dembski's works, perhaps starting here: http://www.designinference.com/documents/1999.10.spiritual_machines.htm
As Vox is wont to say, "I don't expect you to agree. I don't even expect you to understand."
"Except that the flynn effect has been shown to have stopped in some developing countries... and there is serious doubt that ever existed at all."
The Flynn Effect In Rural Kenya
The Deuce: “Anyhow, this whole thing could've been summarized as "materialism is true, therefore the singularity logically must be possible."” A bit of a simplification, but basically yes. If our brains are just machines then a singularity must be possible. So far science supports our brains being just machines.
Science "supports" but gives no evidence for, because science isn't in the business of doing philosophy of mind in the first place. But anybody trying to actually construct a rational machine will have to face the issue of the universality and determinacy of thought content vs the inherent indeterminacy and specificity of physical representations (and the lesser problem of the existence of qualia, among others), and it won't be any less logically contradictory than it is now.
The only reason singularists are able to punt the problem now is that the whole idea remains perpetually in the "late-night dorm-room philosophical bull session" phase, where logic dictates that it will remain forever.
> You should desire this outcome as much as I do.
What makes you think we don't? The fact that we're pessimistic doesn't mean we don't want the best outcome.
> You would test this by looking at whether high IQ humans significantly outperform low IQ ones. And they do.
At many tasks, yes. Not all. And not necessarily the important ones at any given time.
> Studies show that among people with high IQs, having more IQ is better for many measurable life outcomes.
Again, many does not equal everything. It does not necessarily even equal most.
> ... It takes a high IQ to understand calculus, but an even higher one to learn string theory.
Most of the things we do don't involve using either calculus or string theory. This is true even for the extremely intelligent.
> So far science supports our brains being just machines.
The science I've seen on the matter, which has been limited recently, was barely beginning to make sense of the brain. I think claiming it supports our brains being "just machines" is extreme overreach at this point.
> Not upward, but we are becoming exponentially better at information processing.
Exponentially? Again, I haven't seen any evidence that's the case. Arithmetically, yes.
> The Flynn Effect
I don't think it measures what you think it measures. :) As always, I could be wrong.
I must also echo the thanks for your taking time to respond and reply. A new viewpoint is almost always useful. As I've noted numerous times now, the best thing about this blog is that it teaches me things.
kurt9, fusion power has been due to happen "in just a few years" for the last 30+ years. You make an extraordinary claim. Please provide some evidence, beyond "because I say so".
Ask and you shall receive:
http://nextbigfuture.com/2014/06/bussard-emc2-fusion-project-publishes.html
http://nextbigfuture.com/2014/06/summary-of-nuclear-fusion-projects.html
There is a lot of activity in LENR as well.
There are enough of these start-ups that one or more is likely to succeed. You will also note that they are either privately funded or receive small amounts of government funding, structured so as not to be the typical government-funded "big science". The ITER and NIF have very little chance of realizing commercial fusion. They, like any other government-funded big science, are little more than a jobs program for PhDs.
You will also note that all of the life extension work (SENS, stem cell regeneration) is likewise privately financed and pursued.
In short, any useful technological innovation comes from private, not government, efforts. Like Vox Day, I believe government funding of science and technology should be ended, as it has produced very little of value.
kurt9, I began following the Polywell project when Bussard was still alive. From the first it appeared to me to offer a better path than the tokamak route, although I am not a physicist.
I very much hope the polywell path succeeds, for a variety of reasons. However, progress has not been as expected, or as hoped for. Given the history of fusion projects, I think it is a bit of an overreach to flatly proclaim it will be a done deal in 5 years. Nothing you have posted supports that claim.
> In short, any useful technological innovation comes from private, not government efforts.
You are preaching to the choir at this site. Table-banging emotive statements do not prove anything. Bring facts, please, not more hollering.
I also question whether intelligence can be thought of as a simple scalar variable. The assumption seems to be that if we can put our intelligence into machines with vastly greater numbers of simulated neurons and synapses, firing at greater speeds, we will become superintelligences, so much more capable of X, Y, and Z. What we see around us seems to belie that. Some of the people with the highest IQs on record have gone on to do little of note, while the greatest accomplishments seem to come from those a few steps down (somewhere between 150 and 190).
Think about the human body itself. Would we really benefit from more arms and more legs? So it is with strength--there is a point beyond which larger bones and more muscle just aren't worth the extra bulk and cost to keep nourished. All things being equal, there is a scale and proportion to the human body, which can be stretched in various ways, but to go too far in any direction results in grotesquerie.
Ergo, maybe instead of higher intelligence as a scalar value there is more like an ideal "shape" to intelligence in its various dimensions most suited to human success, and the better we approximate it, the better off we all are.
> kurt9, fusion power has been due to happen "in just a few years" for the last 30+ years. You make an extraordinary claim. Please provide some evidence, beyond "because I say so".
On the fission side, Denninger mentioned a bit ago that the chicoms were taking the Thorium Reactor research we had done in the WWII era and were running with it. The U.S.A.? Not so much.
@ James D. Miller - thanks for referencing the Flynn Effect. After reading up on it, I think it is just wishful thinking. Firstly, I don't see how you get intelligent men when the parents, especially the mother, are stupid. Men get a lot of their intelligence from the mother, and highly intelligent women aren't breeding. Secondly, the Muslim populations that will replace us by the end of the century may have many highly intelligent individuals, but they will have no interest in the intellectual pursuits we are interested in: don't forget, they don't recognize the need for any book but the Koran. Finally, there will almost certainly still be a good number of highly intelligent Jews, but Jews are only good at theory; they are incapable of bringing complex physical systems to fruition in the real world. For that you need the Germanic peoples, who are good in theory AND execution, and they will be mostly gone by the end of the century. The Chinese are almost the reverse of the Jews: they are excellent at execution but not at theory.
The claim is that if the top 10% brightest people are encouraged to reproduce, and the bottom 10% are discouraged from breeding, you can leave the middle 80% alone while raising the average IQ of the children being born by up to 20 points.
The Chinese have been doing this for thousands of years. People who were wealthy or have been the top scorers in the Imperial Civil Service were entitled to have multiple wives. Now it's just a privilege to have more than one child.
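For what it's worth, the arithmetic behind a 10/80/10 selection scheme like the one described above can be sketched with the standard breeder's equation, R = h² · S. The sketch below is purely illustrative: the heritability value, the population size, and the "double weight" given to the top decile are my assumptions, not figures from anyone's post.

```python
import random
import statistics

random.seed(0)

# Toy model of one generation of selection on IQ using the breeder's
# equation R = h^2 * S, where S is the selection differential (mean IQ
# of the breeding pool minus the population mean) and h^2 is an
# assumed narrow-sense heritability. All numbers are illustrative.
H2 = 0.5          # assumed heritability of IQ
POP = 100_000

population = [random.gauss(100, 15) for _ in range(POP)]
population.sort()

bottom_cutoff = population[POP // 10]        # ~10th percentile
top_cutoff = population[POP - POP // 10]     # ~90th percentile

# Bottom 10% discouraged from breeding (weight 0), top 10% encouraged
# (weight 2), middle 80% left alone (weight 1).
parents = []
for iq in population:
    if iq < bottom_cutoff:
        continue
    parents.append(iq)
    if iq >= top_cutoff:
        parents.append(iq)  # counted twice: double reproductive weight

S = statistics.mean(parents) - statistics.mean(population)
R = H2 * S  # expected shift in the next generation's mean IQ

print(f"selection differential S = {S:.2f} points")
print(f"expected per-generation response R = {R:.2f} points")
```

Under these assumed numbers the per-generation response comes out to a few points, which gives a feel for how many generations (or how much stronger selection) a 20-point shift would require.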
This may sound a bit weird, but I keep thinking about C. S. Lewis's "macrobes" (from THAT HIDEOUS STRENGTH). What if Satan tries to fool people by having demons inhabit a sufficiently complicated "AI" and pretend to be sentient? Paul says that idol worshippers actually worship demons, so there seems to be a biblical precedent.
> On the fission side, Denninger mentioned a bit ago that the chicoms were taking the Thorium Reactor research we had done in the WWII era and were running with it. The U.S.A.? Not so much.
Ditto India, hardly surprising given the known deposits of thorium in that country.
Since it is unofficial policy to de-industrialize the US, no surprise that not much is being done.
Kurt9 said,
It is also the primary reason why I no longer share the pessimism that is common to blogs like this and those called the "dark enlightenment".
Indeed, some neoreactionaries seriously underestimate the impact of these technological trends (especially the TradCon NRx). Others do not understand where power comes from and do not understand the motivations of men-at-arms.
The Cathedral is run by fanatical maniacs after all. Too bad for them (and TradCon NRx) they have no idea where power comes from. They really don't understand the importance of paying the military and police.
A. ALL AI will be Christian (or clearly insane). Came up with this idea long before reading AMD, although I did enjoy Baby. AIs will be rigorously logical, hence Christian.
B. The immaterial element of a person might be detachable and bindable to a matrix similar enough. Thus Trek's transporters don't have to be murder.
However, one suspects that demonic possession would be easier in such a situation.
C. Humans have always assumed the brain is the equivalent of the latest tech. Brains were steam engines, or clockwork, and now it's computers. More likely, we're arrogant and clueless.
D. The 2% DNA difference between man and monkey suggests that there is a lot more going on in the brain than just DNA.
E. I'd say the brain is the receiver, and the mind is the radio signal. Faults in either one can result in behavior faults. Homicidal rage can come from brain injury or from bad character....postmodern redneck.
F. rycamor....Michael Malone is a very good biographer and tech analyst in the computer field. He discussed three levels of IQ. 1. The truly brilliant who had no need to produce as they were all that, and besides they got lost in the beauty of their math. 2. The really smart who do most of the work. 3. People like him who are smart enough to understand and explain what their smarter brethren do, but not smart enough to challenge them. Kinda goes with your theory.
cooooome ooooon. No one has read The Long Earth nor The Long War?
Typically, I stay away from science fiction books with "sentient" AIs. They just seem to be the atheist version of angelic beings without the spiritual realm.
Speculation about mankind creating a singularity is phenomenal arrogance. In all of our history, all we've developed are tools that respond to our commands. In the case of computers, we have teams of people creating such complex instructions that no single person understands them anymore. But the computers of today don't think to any greater extent than Babbage's adding machine did. We can modify cells, but we cannot create them from nothing. And we certainly can't create something as complex as a cricket, entirely from scratch. Perhaps when we're able to copy the most complex of nature's creations, it will be time to revisit the question of when the Singularity will arise.
On a more philosophical note, the quest for the Singularity seems like a reincarnation of the Tower of Babel. Surely if we build a better foundation and hire the right architect, it will work out so much better this time around.
Notice giving "the forces of evolution" credit for action. They worship their god.
I am convinced humans are far more than simple state machines, and thus this singularity will never come.
Kind of reminds me about the "promise of AI" that was loudly trumpeted when I got my degrees in the mid 1980s. Still not much in that area, though we keep hearing about the breakthroughs just about to happen.
"So far science supports our brains being just machines."
Since the word 'science' in this context means 'the discipline of looking at only the machinelike things or the machinelike parts of things, and trying to discover their mechanics', the sentence is a tautology.
There is no need for the words 'so far' in the sentence. Indeed they are misleading, for they imply that science, the investigation of the mechanics of any phenomenon, would or could come to the conclusion that something it investigated was not mechanical.
But anything that is not mechanical does not come under the investigation of science, so, no, science can never reach the conclusion that something it investigates is beyond the grasp of its investigation.
The sheer ... insolence ... required to dismiss the mind-body problem, which is the deepest enigma of human experience since the dawn of time, with an airy nine-word sentence is beyond astonishing and reaches the realm of the awe-inspiring.
It is awe inspiring that anyone could be so naive about the impossible complexity of this problem.
We do not as yet even have an unambiguous scientific definition of what a 'thought' is. When we can define the word 'thought' as clearly as we define terms like 'inch' or 'isotope', then we can start daydreaming about discovering the correlation, or perhaps even the cause and effect, between thought and thought-content, between brain and mind. Until then, we do not even really know what the words mean.
Just as there are pseudo-sciences, claptrap like belief in a hollow Earth powered by 'vril' energy, so too are there pseudo-philosophies, claptrap which no serious philosopher maintains. Materialism is one of them. I will point out that, unlike Aristotelianism, or Platonism, Cartesianism or Berkeleyanism or even Marxism, no real philosopher has his name associated with this pseudo-philosophy.
What makes materialism a pseudo-philosophy instead of a real philosophy is that real philosophies face the philosophical question involved and try to solve it: Descartes, for example, by arguing that mind and body were of two substances, or Leibniz by arguing these two substances were in a preestablished harmony of qualities and motions. But pseudo-philosophies are claptrap because they duck the question, dismiss it, rule it out of bounds, and refuse to talk about it. That is, a materialist will talk your ear off, but always about everything except materialism. They cannot answer the simplest questions about it, such as 'if materialism is true, then everything has a material value, reducible to mass, length, duration, temperature, candlepower, current, mole number: what is the material value of 'true'?' or 'If materialism is true, the statement 'materialism is true' has a final cause, that is, seeking truth value; but if materialism is true, only efficient causes exist, no final causes' or 'If materialism is true, all qualities of consciousness are actually material quantities: in which case the statement 'materialism is true' does not have the quality called 'truth', nor does any statement.'
And on and on and on. I have never met a materialist who knew enough philosophy to argue with a freshman. If you ask them a philosophical question about materialism, all they do is recite the dogma 'materialism is true' without actually giving an argument.
I have argued with many materialists over long -- in one case an absurdly long -- periods of time. Perhaps it is just my unfortunate experience, but none of them, not one, could tell me what convinced him to become a materialist. None of them was aware that materialism was a philosophical stance related to metaphysics and ontology. None of them had an argument to support it or understood any arguments casting doubt on it.
It seemed to be a default assumption, but an assumption which they were congenitally unable to question.
Would that I could make bronze placards of Mr. Wright's screed and enforce their installation at the door of every campus science wing.
The Singularity is apparently a chatbot that thinks it's a 13 year old Uke named Eugene Goostman who speaks English as a second language.
A Russian computer program apparently fooled one out of three British Royal Society judges into thinking it was a real human, thereby passing a Turing test. It's said to be the first time this has happened. Not terribly impressive, considering what nonsense British academics have been convincing themselves of for years, but still, there it is.
This looks like not so much a triumph for AI, but rather a setback for Alan Turing's legacy.
John Wright,
I loved the Golden Age Trilogy. Thank you for the joy and insight it has given me.
You write “There is no need for the words 'so far' in the sentence. Indeed they are misleading, for they imply that science, the investigation of the mechanics of any phenomenon, would or could come to the conclusion that something it investigated was not mechanical. But anything that is not mechanical does not come under the investigation of science, so, no, science can never reach the conclusion that something it investigates is beyond the grasp of its investigation.”
I don’t agree. Brain scientists could have found some part of the brain that doesn’t obey the ordinary laws of chemistry and physics. This would be evidence that the brain has a non-material aspect. But so far it seems like the exact same laws of physics that govern simple matter also govern our brains.
You write “no real philosopher has his name associated with this pseudo-philosophy [materialism].”
You could be falling victim to the no true Scotsman fallacy here. Is Nick Bostrom of Oxford a true philosopher?
You write “We do not as yet even have an unambiguous scientific definition of what a 'thought' is.”
I agree, but I don’t have to have a definition of thought to reasonably predict that in 20 years, if civilization doesn’t collapse, computers will be better at chess, go, driving, …. and lots of other stuff than people, nor to predict that in 100 years they will probably be better than us at every single economic and military activity, including writing software and building computers.
I’m a materialist because of the “unreasonable effectiveness of mathematics” and because materialism seems to do a better job at objectively describing the world than any alternative. If another theory did a better job at making testable predictions I would switch. For example (and I’m not claiming that you believe anything like this will happen) if some prophet said that God gave him this theory of how the universe really works and this theory made a huge number of novel predictions and these predictions proved true even though they contradicted what science said should happen then I would seriously consider abandoning materialism.
"For example (and I’m not claiming that you believe anything like this will happen) if some prophet said that God gave him this theory of how the universe really works and this theory made a huge number of novel predictions and these predictions proved true even though they contradicted what science said should happen then I would seriously consider abandoning materialism."
Then you should abandon materialism right now, because those predictions have been made and proven true; it's called the Bible.
@ Jack Amok,
Nassim Taleb's latest Tweet seems apropos to your comment:
The dream of having computers behave like humans is coming true, with the transformation, in a single generation, of humans into computers.
John Wright:
But pseudo-philosophies are claptrap because they duck the question, dismiss it, rule it out of bounds, and refuse to talk about it. That is, a materialist will talk your ear off, by always about everything except materialism. They cannot answer the simplest questions about it...
Well said. The problem for the singularists is that anybody actually trying to create a conscious and rational machine won't have the luxury of evading, ducking, or dismissing those questions. He will have to face them head on as engineering problems.
He will have to face the impossible task of creating material "thoughts" that have intrinsic, universalistic, and determinate content when all material representations are necessarily derived, particular, and indeterminate. And failing that (which logic dictates he must) he will have to create a machine whose "thoughts" have no determinate or objective meaning, so that nothing it "thinks" is actually true or false, nor can it grasp the objective laws of logic, hence it won't be rational after all. And I haven't even mentioned the (relatively) easier "Hard Problem" of creating qualia via computation, which makes about as much sense as the "problem" of drawing red lines with blue ink.
Engaging in philosophical sophistry won't help him. Uttering tautologies like "Science supports the idea that all mental processes are material processes" won't magically cause the logically contradictory nature of his task to go away, nor will explaining that rocks exist get his manager off his back.
I keep saying "will," but I should say "would," because the impossible nature of the task means that it will never actually get to the "put up or shut up" phase in the first place. Nobody is going to put real money behind a real project with a real goal and real deadlines until they've at least got a basic logical, quantifiable grasp of what they're supposed to achieve, which they will never have. Again, as I said above, the only reason the singularists are able to duck these questions is that the problems facing them are so insurmountable that they can't even get to square 1, and so remain eternally stuck in the "dorm-room BS session" phase, where they can BS and fantasize till the cows come home, and maybe even make a few bucks putting their dorm-room bull on paper, with no managers or budgetary constraints breathing down their necks to actually make it work.
> I’m a materialist because of the “unreasonable effectiveness of mathematics” and because materialism seems to do a better job at objectively describing the world than any alternative.
It's not an either/or proposition. The successful effectiveness of objective materialistic descriptions of the world and materialistic explanations for certain phenomena does not necessitate the axiomatic exclusion of simultaneous metaphysical aspects of reality. They are not, by necessity, mutually exclusive concepts.
There are, for example, ongoing research projects that do, in fact, strongly indicate consciousness can affect the material world with no physical/materialistic interaction. SEE: The Global Consciousness Project and their accumulated data from more than 15 years of research. And there are others.
The champions of strict materialism will, of course, work hard to pooh-pooh and dismiss these results because they are in direct opposition to their FAITH, and if people start to look at the evidence fairly and objectively, materialists might lose their footing on those ivory tower pedestals as the reigning supreme arbiters of what is and what isn't.
Guys, of course sentient AI is possible. One can use stem-cells based on synthetic biology to grow an artificial brain, which would certainly be a sentient AI. The real question here is whether it is possible to create machine sentience using digital semiconductor devices. I do not think this is possible because the architecture and function of the brain is so completely different from that of digital semiconductors.
With brains, the connections are what we are, and those connections are the dendrites. The dendrites are dynamic in that they are constantly removed and reformed, most likely during sleep. No semiconductor device utilizes such dynamism. Furthermore, the synaptic connections are not identical. There are 206 different types of synapses. Lastly, dendritic connections are not the only way neurons communicate with each other. Some of the communication is based on diffusion of biochemicals between neurons outside the dendrites. This level of architectural complexity (and dynamism) is not likely to be duplicated in computers in the foreseeable future.
> The dream of having computers behave like humans is coming true, with the transformation, in a single generation, of humans into computers.
Including, unfortunately, the total inability to think independently. If the instructions don't say exactly how to do it, the computer/nitwit either stands around doing nothing, or else totally botches the job.
"Including, unfortunately, the total inability to think independently. If the instructions don't say exactly how to do it, the computer/nitwit either stands around doing nothing, or else totally botches the job."
Ouch. So true, and well said.
Dear lord...
NEW REPUBLIC:
A Famous Science Fiction Writer's Descent Into Libertarian Madness
Robert A. Heinlein became increasingly right wing, and his novels suffered for it
http://www.newrepublic.com/article/118048/william-pattersons-robert-heinlein-biography-hagiography
Hands down the stupidest piece of shit you will ever read about SF, and that includes Tad's posts here.
> Guys, of course sentient AI is possible. One can use stem-cells based on synthetic biology to grow an artificial brain, which would certainly be a sentient AI.
No, it's not. Stop with the bullshit. One can NOT grow a functioning artificial brain.
I'm leaning towards no.
I think that the more they prod and scan the brain, the more they discover that the mind is exponentially more complicated than its supposed source, human brain tissue. I think there is a non-material portion of the mind, areas of self consideration and higher-level thinking, and the brain acts as both a connection point and filter to these parts.
But even if consciousness is indeed merely a really REALLY complicated computation, you'd have to solve the 'complexity problem' (I just made the name up):
"There is no object or process in the universe that can create an object or process more complicated than itself."
Human babies are less complicated than the two adults that created them, for though they share similar if not equal DNA complexity, the child does not have the mental complexity of the parents in the form of experiences.
Fractals, and their visual representations, are exactly equal in complexity to the algorithms that generate them. It's a math problem. Nothing is being created here.
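The point about fractals can be made concrete. The sketch below (illustrative only; the resolution, viewing window, and iteration cap are arbitrary choices of mine) renders an ASCII Mandelbrot set: the picture looks endlessly intricate, yet all of that apparent "complexity" is already contained in the one-line iteration z = z*z + c.

```python
# Minimal ASCII Mandelbrot renderer: a visually intricate fractal
# produced by a trivially short algorithm. The image carries no more
# information than the few lines that generate it.

def mandelbrot_rows(width=60, height=20, max_iter=30):
    rows = []
    for j in range(height):
        row = []
        for i in range(width):
            # Map pixel (i, j) to a point c in the complex plane,
            # covering roughly [-2, 0.8] x [-1.2, 1.2].
            c = complex(-2.0 + 2.8 * i / width, -1.2 + 2.4 * j / height)
            z = 0j
            n = 0
            while abs(z) <= 2 and n < max_iter:
                z = z * z + c  # the entire "algorithm" is this one line
                n += 1
            row.append("#" if n == max_iter else " ")
        rows.append("".join(row))
    return rows

for line in mandelbrot_rows():
    print(line)
```

Running it prints a recognizable Mandelbrot silhouette; zooming the window or raising max_iter reveals more detail, all of it implied by the same tiny program.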
A computer program capable of rewriting itself is still an order of magnitude less complicated structurally and functionally than the computer hardware, OS, and cradle code that the self-rewriting parts are running within. And then there's the fact that computer hardware, OSes, and utility programs themselves contain a condensed form of complexity that comes directly from the programmers' minds in the form of the business logic of the intended goals of writing a program that will improve itself, computational design, well written code, and computer science principles.
This guy explains it much better than I:
http://christianthinktank.com/notuphill.html
> I think that the more they prod and scan the brain, the more they discover that the mind is exponentially more complicated than the supposed source of human brain tissue.
That's my understanding also.
For just one example of the complexity which may be involved, see http://en.wikipedia.org/wiki/Holonomic_brain_theory
Before an artificial brain can be achieved, shouldn't the ability to keep a human brain alive first be accomplished?
Do brain scientists know how the brain stores, retrieves, and processes memories and input? I do not mean the chemical reactions, but the algorithm used to accomplish this.
Will exposing AIs to too much TV watching turn them into average bots?
We could make beans into peas!
Mr. Miller,
Thank you for your kind words about my humble books.
You say "Brain scientists could have found some part of the brain that doesn’t obey the ordinary laws of chemistry and physics. This would be evidence that the brain has a non-material aspect."
No, brain science by definition cannot find some part of the brain that doesn't obey the ordinary laws of chemistry and physics, because the definition of 'science' is to study only those aspects of the brain governed by chemistry and physics. Can brain science explain the intuition or artistic impulse which made me name my main character Phaethon rather than Phaeton, or even Steve? When asking the question you yourself are falling into the same conceptual trap: you do not consider the qualities of consciousness, such as artistic intuition, to be a 'part' of the brain like the medulla oblongata. The things you are calling 'parts of the brain' are the mechanical parts.
I put it to you as a challenge: come up with a hypothetical experiment, one with proper controls, which could, under one result, lead to the conclusion that such-and-such part of the brain does not obey the laws of chemistry and physics, and under another result lead to the conclusion that such-and-such part of the brain does indeed obey the laws of chemistry and physics.
I think you will find it is impossible: the assumption that there are laws of chemistry and physics is an epistemological and metaphysical, that is, a philosophical assumption made before the scientific method can operate and on which the scientific method rests. Hence the scientific method can neither confirm nor deny the existence of such laws.
"But so far it seems like the exact same laws of physics that govern simple matter also govern our brains."
And if I said, 'so far it seems like the laws of astrology that govern our fates and love lives also govern our brains' – would you consider that a scientifically sound statement? If not, why not? What makes it different from your statement?
"Is Nick Bostrom of Oxford a true philosopher?"
You are being unintentionally funny here. No, this person is someone I have never heard of. I mean a philosopher who has had some influence on Western thought.
"I agree, but I don’t have to have a definition of thought to reasonably predict that in 20 years, if civilization doesn’t collapse, computers will be better at chess, go, driving, …. and lots of other stuff than people, nor to predict that in 100 years they will probably be better than us at every single economic and military activity, including writing software and building computers."
Actually, you do have a definition of the word 'thought' or else you would not place chess moves and military activity in the same category. The only difference is that it is an unscientific and imprecise definition.
Yours, with respect, John C. Wright
Continued:
"Because materialism seems to do a better job at objectively describing the world than any alternative. If another theory did a better job at making testable predictions I would switch."
Materialism makes no testable predictions. It is a philosophical theory concerned with the existence of substances, and it says mental substance, thought, does not exist. If thought did not exist, we could not be having this conversation. Materialism not only is not a better description of the world, it is not a description at all. What it is, is a dismissal of descriptions that involve causes, purposes, means and ends, mathematics, forms, logic, and indeed everything except mechanical cause and effect.
Materialism has nothing to do with science and science has nothing to do with materialism. Science is an epistemological theory that states that any theory which correctly predicts sense impressions is better than one that does not; that is, it states our senses are reliable and that the physical universe falls into repeating patterns of behavior that can be described mathematically, called laws of nature. Materialism is a theory of metaphysics that says matter is the primary and sole substance, and other alleged substances are epiphenomena of matter.
As with all other materialists with whom I have spoken, you do not even seem aware of the arguments in favor of materialism. They are philosophical arguments, not arguments of physics. Materialism is not a theory of physics.
The problem with all such discussions is that no materialist I know has read Aristotle or Plato, so I cannot use any terminology a philosopher would use to discuss the matter without painstakingly explaining the very basic concepts of philosophy.
I am not being condescending, but I am curious: Did you understand any of the three simple questions I said no materialist could answer? How would you answer them?
JCJW