the end of the human race

 
J. Kevin Robbins
Bartender
Interesting article here. And very chilling. Is there anyone here with a background in AI who would like to comment?

I've read Kurzweil's books, and I don't see the Singularity as being a good thing at all.
 
Martin Vashko
Sheriff
I'm not an expert, but my question is: how will the "ASI", which will reside in some physical machine, take care of its own physical safety? (I don't know whether the book touches on this, but the article doesn't.) People, like all other animals, heal all by themselves. However, a burnt-out power supply might well mean death to the ASI hosted in the machine that loses power because of it. The ASI can hardly afford to exterminate humans before it creates a complete "ecosystem" that could support it in a world where humans won't repair it. We do have highly automated factories, but they don't have the capability to repair themselves. Lots of things can be operated through the internet nowadays, but lots of other things still need physical work to be done. Running power plants (probably the most important thing for the ASI) is one such example. The ASI would therefore have to build a reasonably sized army of fully automated robots, probably completely autonomous ones so that they could operate even without a direct connection to the ASI, to support it in its quest for domination of the planet. Will we notice?
 
Jeanne Boyarsky
author & internet detective

Martin Vajsar wrote:I'm not an expert, but my question is: how will the "ASI", which will reside in some physical machine


It doesn't have to be one machine. It could be a network, or it could use some sort of nanotech that stores data in air molecules.
 
Marshal
Obligatory image:
[image]
 
Martin Vashko
Sheriff

Jeanne Boyarsky wrote:

Martin Vajsar wrote:I'm not an expert, but my question is: how will the "ASI", which will reside in some physical machine


It doesn't have to be one machine. It could be a network, or it could use some sort of nanotech that stores data in air molecules.


True, but such a network will still need electricity. Unless it was some kind of cybernetic organism that took all of its energy from biological processes. And we aren't building self-replicating cyborgs (yet).

Which reminds me of Stanislaw Lem's The Invincible.
 
Jeanne Boyarsky
author & internet detective

Martin Vajsar wrote:

Jeanne Boyarsky wrote:

Martin Vajsar wrote:I'm not an expert, but my question is: how will the "ASI", which will reside in some physical machine


It doesn't have to be one machine. It could be a network, or it could use some sort of nanotech that stores data in air molecules.


True, but such a network will still need electricity. Unless it was some kind of cybernetic organism that took all of its energy from biological processes. And we aren't building self-replicating cyborgs (yet).


I was hinting at purely organic. I agree it's a long way away. Unless the ASI does it for us. The problem is that the experiments needed to do that are more likely to backfire before the ASI itself becomes a problem. Oops, I built a nano-virus that eats human tissue!
 
Martin Vashko
Sheriff

Jeanne Boyarsky wrote:Oops, I built a nano-virus that eats human tissue!


This small chart might help you choose the most promising topic of your research:
[image]

But on a more serious note, biological research seems more than likely to cause a headache or two. I found an interesting article about the security implications of biological research here. There is one chart in the article that caught my attention:

[image]

In the not-so-distant future, black hats might have a choice between creating a computer virus or a real one...
 
Bartender
Reminds me of Deus Ex and its AI systems.
More specifically, this conversation with the Morpheus AI, and the possible ending where the transhuman protagonist eventually chooses to merge with the Helios AI, which leads to the end-game quote by Voltaire: "If God did not exist, it would be necessary to invent him." That in turn leads to the merging of the human race with that AI entity in Deus Ex 2. Effectively destroying humanity, yes, but replacing it with something "better"? Maybe. Man, I love those games...
 
Jayesh A Lalwani
Rancher
This idea is not new. Someone just repackaged existing ideas with new names. It's called the Singularity; look it up on Wikipedia. The idea has been around since the 50s. The basic idea is that the rate of technological growth is limited by the intelligence of the people driving the growth. Since humans are driving technological growth, the rate of technological growth is limited by human intelligence. However, if computers become smart enough to design computers smarter than themselves, then the rate of technological growth would be limited by the intelligence of the computers. Since computers would constantly create smarter versions of themselves, the rate of technological growth would explode. (There's a toy sketch of this loop at the end of this post.)

There are several arguments against this:
a) There are other physical limitations to technological growth. We are already seeing this with Moore's law breaking down with respect to silicon-based electronics. Yes, we might come up with biology-based solutions, but who is to say that we won't find a saturation point there? And who knows whether that is going to be smarter than humans. For all we know, it might be physically impossible for any being to be smarter than humans. Maybe there is a physical limit to how closely you can pack neurons, and we are at the upper end of it.

b) Just because computers become smart doesn't mean that they will have a need to dominate humanity. Smarter = powerful = "all your base are belong to us" is a human concept. Who is to say the computers won't decide, "Meh, smarts are overrated. We don't want to get any smarter than we are. We're stopping all technological research"? Or we might just build the machines to obey a rule that prohibits them from harming us before they become too smart, much like the Three Laws of Robotics. Just because machines become really, really smart doesn't mean that they would want to dominate us. We might end up creating the perfect slave: much more intelligent than us and very subservient.

Yes, it is possible that humans will be crushed under the boot of their own creation, but my bet is that humanity is just going to make machines that make us lazy and fat. Wall-E is much more likely than Terminator, IMO.
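
To make that loop concrete, here is a throwaway Java sketch. It is only an illustration of the argument, not a model of anything real: every constant is invented, "intelligence" is just a number, and CEILING stands in for the hypothetical physical limit from argument a).

public class SingularityToy {
    public static void main(String[] args) {
        double runaway = 1.0;        // machine starts at human level (1.0)
        double capped = 1.0;
        final double CEILING = 10.0; // hypothetical hard physical limit
        for (int gen = 1; gen <= 8; gen++) {
            // Feedback loop: each generation's improvement scales with the
            // designer's own intelligence, so growth feeds on itself.
            runaway += 0.5 * runaway * runaway;
            // The same loop, throttled as the physical ceiling approaches:
            // classic logistic growth, which saturates instead of exploding.
            capped += 0.5 * capped * (1.0 - capped / CEILING);
            System.out.printf("gen %d: runaway = %.3g, capped = %.2f%n",
                    gen, runaway, capped);
        }
    }
}

Run it and the first column blows past any fixed bound within a few generations, while the second levels off near the ceiling. Which column describes reality is exactly what this thread is arguing about.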
 
J. Kevin Robbins
Bartender

Jayesh A Lalwani wrote:
There are several arguments against this



Regarding argument number one, we've heard many times before that Moore's Law has reached its limit, but there's always another advance. I think quantum computing will be the next stage. As for argument number two, the AI wouldn't necessarily need a desire to dominate us in order to destroy us. It would probably show no more concern for humans than we show for a colony of ants that we've walked over. It could be completely indifferent to the human race. But it might decide that to work out an especially difficult physics problem it needs to build a supercomputer out of all the molecules in the atmosphere, thereby destroying our environment.

On the other hand, lately I've become very interested in the idea that we may be living in a computer simulation. Some theorize that when our technology advances to the point that we can create our own computer simulation (The Sims version 999?), sort of a Matrix within the Matrix, the whole thing will shut down.

And I wouldn't count much on the Three Laws to save us. If you read Asimov's stories carefully, you'll realize that most of them are about how the Three Laws failed to work.
 
Winston Gutkowski
Bartender

J. Kevin Robbins wrote:I don't see the Singularity as being a good thing at all...


Well, from what little I know about it, I believe that it can't possibly have happened anywhere yet (it seems a bit presumptuous to assume that we're the only sentient beings who could have caused it), so presumably we have a bit of time left...

Personally, assuming MAD isn't still on the cards, I prefer the bolide theory: random, unthinking, and statistically inevitable. Anyone fancy a day on the beach?

Winston
 
Ranch Hand
Winston Gutkowski wrote:Anyone fancy a day on the beach?

More like a year of survival...
 
Martin Vashko
Sheriff

Winston Gutkowski wrote:I prefer the bolide theory: Random, unthinking, and statistically inevitable.


These are actually preventable, certainly in theory, and in a few decades (unless we abandon the space program; see Kessler syndrome) in practice too. It is not that hard to change the path of an asteroid, and only a slight change is needed if we start soon enough. And it involves neither oil drillers nor atomic weapons.
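
A back-of-the-envelope sketch of the "slight change" claim (the numbers are invented round figures, and the 3 * dv * t drift formula is just the usual first-order approximation for a nudge along the orbit):

public class AsteroidNudge {
    public static void main(String[] args) {
        double dv = 0.01;                           // nudge of 1 cm/s, in m/s
        double leadYears = 10.0;                    // head start before impact
        double t = leadYears * 365.25 * 24 * 3600;  // lead time in seconds
        // A nudge along the orbit changes the orbital period, so the asteroid
        // drifts away from where it would have been, roughly as 3 * dv * t.
        double driftKm = 3 * dv * t / 1000.0;
        double earthRadiusKm = 6371.0;
        System.out.printf("%.2f m/s applied %.0f years out -> %.0f km of drift (%.1f Earth radii)%n",
                dv, leadYears, driftKm, driftKm / earthRadiusKm);
    }
}

One centimetre per second, a decade in advance, already buys a miss distance bigger than the Earth, and the earlier we start, the smaller the required nudge.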

Supervolcanoes and gamma-ray bursts are my favorites. They share the same qualities: random, unthinking, and statistically inevitable. And we don't have any means of preventing them.

Winston Gutkowski wrote:Anyone fancy a day on the beach?


Thank you, but after watching Threads, I'm inclined to politely decline.
 
Winston Gutkowski
Bartender

Martin Vajsar wrote:Supervolcanoes and gamma-ray bursts are my favorites...


Apropos of nothing much, except the nature of "Wiki creep": I progressed from your page about Supervolcanoes, to one on human population bottlenecks, to the spread of Cro-Magnon man and burial cultures, to this guy. Apparently there's one in England that the locals have named "Pete Marsh".

Winston
 