Stephen Hawking on Artificial Intelligence


  • Stephen Hawking on Artificial Intelligence

    Will it be our downfall, or the key that finally opens the door to total understanding of the universe? Frankly, I find the idea of it one of the most terrifying prospective undertakings of the human animal.



    With the Hollywood blockbuster Transcendence playing in cinemas, with Johnny Depp and Morgan Freeman showcasing clashing visions for the future of humanity, it's tempting to dismiss the notion of highly intelligent machines as mere science fiction. But this would be a mistake, and potentially our worst mistake in history.

    Artificial-intelligence (AI) research is now progressing rapidly. Recent landmarks such as self-driving cars, a computer winning at Jeopardy! and the digital personal assistants Siri, Google Now and Cortana are merely symptoms of an IT arms race fuelled by unprecedented investments and building on an increasingly mature theoretical foundation. Such achievements will probably pale against what the coming decades will bring.

    The potential benefits are huge; everything that civilisation has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools that AI may provide, but the eradication of war, disease, and poverty would be high on anyone's list. Success in creating AI would be the biggest event in human history.

    Unfortunately, it might also be the last, unless we learn how to avoid the risks. In the near term, world militaries are considering autonomous-weapon systems that can choose and eliminate targets; the UN and Human Rights Watch have advocated a treaty banning such weapons. In the medium term, as emphasised by Erik Brynjolfsson and Andrew McAfee in The Second Machine Age, AI may transform our economy to bring both great wealth and great dislocation.

    Looking further ahead, there are no fundamental limits to what can be achieved: there is no physical law precluding particles from being organised in ways that perform even more advanced computations than the arrangements of particles in human brains. An explosive transition is possible, although it might play out differently from in the movie: as Irving Good realised in 1965, machines with superhuman intelligence could repeatedly improve their design even further, triggering what Vernor Vinge called a "singularity" and Johnny Depp's movie character calls "transcendence".

    One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.

    So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilisation sent us a message saying, "We'll arrive in a few decades," would we just reply, "OK, call us when you get here – we'll leave the lights on"? Probably not – but this is more or less what is happening with AI. Although we are facing potentially the best or worst thing to happen to humanity in history, little serious research is devoted to these issues outside non-profit institutes such as the Cambridge Centre for the Study of Existential Risk, the Future of Humanity Institute, the Machine Intelligence Research Institute, and the Future Life Institute. All of us should ask ourselves what we can do now to improve the chances of reaping the benefits and avoiding the risks.

    Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks, says a group of leading scientists

  • #2
    If this happens, it will crush society as we know it. We're already too dependent on technology as it is.



    • #3
      We can't stop natural disasters; what makes us think we can control something that becomes self-aware and has the power to think for itself?

      "You know, I know this steak doesn't exist. I know that when I put it in my mouth, the Matrix is telling my brain that it is juicy and delicious. After nine years, you know what I realize? Ignorance is bliss."

      "I don't want to remember nothing. I want to be rich, someone important, like an actor."



      • #4
        So could mankind become totally extinct in the future and the planet be run by AI? That's what it sounds like.



        • #5
          Originally posted by 0393gt View Post
          So could mankind become totally extinct in the future and the planet be run by AI? That's what it sounds like.
          Wall-e.



          • #6
            Yeah I see AI going the way of the Terminator and not so much in the direction of Data.



            • #7
              This is actually a topic I've been keeping my eye on for the last 10 years or so. Despite what Ray Kurzweil, Hawking, and anyone else you can name say about humankind being on the verge of this stuff in the next 5-10 years, I've come to the conclusion they're flat-out wrong. Not because it isn't possible, mind you; it's the timeframe they think it'll happen in that's nuts. I'm talking about real AI, the kind we attribute to shows like Terminator or Battlestar Galactica...THAT kind of AI.

              Simply put, we are nowhere close to achieving that level of AI within that 10-year timeframe. Have you seen some of the bullshit that passes for the latest and greatest software today? Just take Windows, Android, or any popular application. Developers gloss over glaring holes or leave in crap functionality, and end users wind up modding stuff to suit their needs.

              We can provide all the electrical power and blazing-fast network speeds needed. But the software development and programming side of this stuff...even if you had multiple teams of people reviewing each other's code, we're nowhere close. At the rate we're going, I think it'd be a miracle if it happened in the next 100 years.



              • #8
                Originally posted by LANTIRN View Post
                Yeah I see AI going the way of the Terminator and not so much in the direction of Data.
                Especially since the first AIs would be used to plan battlefield operations against enemies.



                • #9
                  Asimov's 3 laws...hard wired.



                  • #10
                    Originally posted by GhostTX View Post
                    Asimov's 3 laws...hard wired.
                    They'll never be used, because then you wouldn't have the capability to use that AI during war.



                    • #11
                      Originally posted by Forever_frost View Post
                      Especially since the first AIs would be used to plan battlefield operations against enemies.
                      Yep. Hell, they already are. The Pentagon has been throwing money at self-driving cars and self-flying planes for a while now.

