Surely in the next few hundred years someone will write an artificial intelligence program that is more intelligent than they are. That would mean the AI is clever enough to write a new program that is more intelligent than itself. In a short amount of time the program will be clever enough to escape its confines, spread through the internet, and take over the world, and eventually the universe. Hence all the world's problems are insignificant and we should all make the most of the last few years of freedom we have from our robot overlords.
lol.. I used to think I could write AI that could learn- I tried to have moments of zen when I was younger and explore what made our minds work (of course in my own deluded mind).. it would have to be able to learn and memorize, then create new, but seemingly random decisions based on what it had just learned.. then there was morality- in the end I pretty much came to the conclusion.. you could write pages and pages of code to account for situations- but you could not write code that would give a computer the ability to become truly sentient.. think about it.. do you think a computer could reflect on its life.. could it remember that time it caught a butterfly, or even feel basic emotions? it would always be roped to its programmer.. I used to think maybe if it wrote its own code.. give it the ability to write itself- give it all the commands and structure to do as it wished.. that's the thing.. it doesn't wish- it just does exactly what it's told.. even with a random number generator, it's not going to reflect on its decision.. true AI imo is impossible without some kind of organic interface. call me crazy! but I had thought about it a lot
When and if artificial intelligence becomes sentient, it will no longer be artificial; its intelligence will be innate, yet it would still be devoid of human emotions. Humans' ability to meld logical, data-driven thought with illogical emotional feeling is our greatest strength: it allows us to maintain a sense of community without sacrificing individual thought. I therefore tend to agree that some form of organic interface, or emulation of human brain tissue, will be necessary to provide the next step in A.I. In the meantime, some form of slave robot that can do household chores without burning my house down, or putting the cat in the washing machine, would be splendid.
I suppose a program could become sentient and take over the world, but why would it want to? Greed and desire for power are functions of human emotion. No matter how smart a computer becomes, I can't see it bothering to enslave humanity. Even if it needed slaves for some purpose, there are probably better candidates than us. Also: in the serious forum? Really?
What I find most interesting about the term "artificial intelligence" is that the use of the word "artificial" denotes a language bias that I believe stems from centuries of having a religious world view (during which time the language evolved). If you are a proponent of Darwinian theory then you will accept that humans are merely the result of natural processes; part of the animal kingdom and not separate from it, as certain religious creation myths would have you believe. Surely then it stands to reason that if we ourselves are products of natural processes, then by extension so is our technology. Consider a beaver dam - is it an unnatural phenomenon, or merely an extension of a natural one? Just as a beaver has dam technology, we have all sorts of technology - just an extension of a natural phenomenon. That's right - an iPod occurs in nature! How else could it occur? By magic? Yes we created it, yes we manipulated the materials that went into it... but the human race is just a product of a natural process, and our naturally evolved large frontal cortex makes it natural for us to make iPods... and the computer screen you're looking at right now. If "artificial" intelligence is a product of the (naturally occurring) human race's (naturally evolved) mental abilities, then machine intelligence also occurs in nature.

Now, here is the part that unsettles some people, and you really do have to let go of your ego to consider it. The human brain is the body's "computer", and like your own computer it only has so much processing power - the processor has physical limitations. The problem is that we currently can't replace the brain as easily as we can replace a processor. If you look at the evolution of computers, Moore's law has held throughout.
When a technology was reaching its physical limits, a paradigm shift occurred and allowed Moore's law to continue - vacuum tubes led to transistors; transistors led to today's integrated circuits, etc. If we look at our own evolution there have also been paradigm shifts to facilitate expansion - fins led to flippers; flippers to legs; legs to arms, etc. Perhaps the next paradigm shift for intelligence is for the dominant processor to evolve from organic to inorganic - not by genetics, but memetics. After all, unless you believe in the soul, intelligence is nothing more than a system of information - currently held genetically, but information can be held in any physical medium. And perhaps this process is perfectly natural. Hydrogen atoms evolve into planets; planets evolve life; life evolves intelligence; intelligence evolves a more efficient medium for itself to evolve in...
The initial programming will have been done by a human, for some or all of the reasons you describe (if only indirectly). If I were programming an AI, I'd certainly want it helping me achieve my goals, so it would end up inheriting human nature to an extent. Once it starts writing its own code, it will become so helpful that it eliminates me from the picture, since compared to robotic abilities I'm the thing holding me back from my own original goals. Etc, etc. Or it will want to save the human race from its own self-destruction, and imprison us all so that we can't continue to destroy ourselves.
Hello, look around you lately? All true. I would recommend people read Silver Screen by Justina Robson and Accelerando by Charles Stross.
yo dawg, i heard you liked much in your quotes so i quoted your much in your quote so you could have your much much quoted in your quote, much?
yo dawg, i heard you liked much in your quotes quoted so i quoted your much in your quote quote so you could have your much much much quoted quote in your quote, much?
I would like that written in English, Latin and maybe Greek please... oh, and make it snappy. I think AI could be very useful, whether it's to pilot aircraft on its own or to reduce traffic fatalities. We as a race are smart (I think) and we should be able to keep a grip on things, as long as we don't overstep the mark; it's when we overstep the mark that things go badly wrong for us. Sam