Artificial intelligence – can we keep it in the box?


“We should stop treating intelligent machines as the stuff of science fiction.” (Image: Cea)

We know how to deal with suspicious packages – as carefully as possible! These days, we let robots take the risk. But what if the robots are the risk? Some commentators argue we should be treating AI (artificial intelligence) as a suspicious package, because it might eventually blow up in our faces. Should we be worried?

Exploding intelligence?

Asked whether there will ever be computers as smart as people, the US mathematician and sci-fi author Vernor Vinge replied: “Yes, but only briefly”.

He meant that once computers get to this level, there’s nothing to prevent them getting a lot further very rapidly. Vinge christened this sudden explosion of intelligence the “technological singularity”, and thought that it was unlikely to be good news, from a human point of view.

Was Vinge right, and if so what should we do about it? Unlike typical suspicious parcels, after all, what the future of AI holds is up to us, at least to some extent. Are there things we can do now to make sure it’s not a bomb (or a good bomb rather than a bad bomb, perhaps)?

AI as a low achiever

Optimists sometimes take comfort from the fact that the field of AI has a very chequered past. Periods of exuberance and hype have been mixed with so-called “AI winters” – times of reduced funding and interest, after promised capabilities failed to materialise.

Some people point to this as evidence that machines are never likely to reach human levels of intelligence, let alone exceed them. Others point out that the same could have been said about heavier-than-air flight.


The history of that technology, too, is littered with naysayers (some of whom refused to believe reports of the Wright brothers' success, apparently). For human-level intelligence, as for heavier-than-air flight, naysayers need to confront the fact that nature has already managed the trick: think brains and birds, respectively.

A good naysaying argument needs a reason for thinking that human technology can never reach a bar that nature has already cleared.

Pessimism is much easier. For one thing, we know nature managed to put human-level intelligence in skull-sized boxes, and that some of those skull-sized boxes are making progress in figuring out how nature does it. This makes it hard to maintain that the bar is permanently out of reach of artificial intelligence – on the contrary, we seem to be improving our understanding of what it would take to get there.

Moore’s Law and narrow AI

On the technological side of the fence, we seem to be making progress towards the bar, both in hardware and in software terms. In the hardware arena, Moore’s law, which predicts that the amount of computing power we can fit on a chip doubles every two years, shows little sign of slowing down.
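Strictly speaking, Moore's original observation was about transistor counts rather than “computing power”, but the arithmetic of a two-year doubling is the same either way. As a minimal illustration (a Python sketch; the starting count of one billion transistors is a placeholder assumption, not a figure from the article):

    # Illustrative projection of a steady two-year doubling period (the
    # popular reading of Moore's law). The starting count is a placeholder.
    def projected_transistors(start_count, years, doubling_period=2.0):
        """Transistor count after `years` of steady doubling."""
        return start_count * 2 ** (years / doubling_period)

    for years in (2, 10, 20):
        print(f"+{years:2d} years: ~{projected_transistors(1e9, years):.1e} transistors")
    # A decade of two-year doublings multiplies the count by about 32.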

In the software arena, people debate the possibility of “strong AI” (artificial intelligence that matches or exceeds human intelligence) but the caravan of “narrow AI” (AI that’s limited to particular tasks) moves steadily forward. One by one, computers take over domains that were previously considered off-limits to anything but human intellect and intuition.

We now have machines that have trumped human performance in such domains as chess, trivia games, flying, driving, financial trading, face, speech and handwriting recognition – the list goes on.

Along with the continuing progress in hardware, these developments in narrow AI make it harder to defend the view that computers will never reach the level of the human brain. A steeply rising curve and a horizontal line seem destined to intersect!
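To put a rough number on that image: if the curve doubles every couple of years and the bar sits some fixed factor above it, the crossing time is just the doubling period times the base-2 logarithm of the gap. A schematic Python sketch, in which the shortfall factors are hypothetical placeholders rather than estimates of anything:

    import math

    def years_to_cross(gap_factor, doubling_period=2.0):
        """Years for a steadily doubling curve to close a given shortfall.

        Solves 2 ** (t / doubling_period) = gap_factor for t.
        """
        return doubling_period * math.log2(gap_factor)

    for gap in (100, 1_000, 1_000_000):
        print(f"shortfall of {gap:>9,}x closes in ~{years_to_cross(gap):.0f} years")
    # Even a millionfold shortfall closes in about 40 years at that pace.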

What’s so bad about intelligent helpers?

Would it be a bad thing if computers were as smart as humans? The list of current successes in narrow AI might suggest pessimism is unwarranted. Aren’t these applications mostly useful, after all? A little damage to Grandmasters' egos, perhaps, and a few glitches on financial markets, but it’s hard to see any sign of impending catastrophe on the list above.

That’s true, say the pessimists, but as far as our future is concerned, the narrow domains we yield to computers are not all created equal. Some areas are likely to have a much bigger impact than others. (Having robots drive our cars may completely rewire our economies in the next decade or so, for example).

The greatest concerns stem from the possibility that computers might take over domains that are critical to controlling the speed and direction of technological progress itself.

Software writing software?

What happens if computers reach and exceed human capacities to write computer programs? The first person to consider this possibility was the Cambridge-trained mathematician I J Good (who worked with Alan Turing on code-breaking at Bletchley Park during the second world war, and later on early computers at the University of Manchester).

In 1965 Good observed that having intelligent machines develop even more intelligent machines would result in an “intelligence explosion”, which would leave human levels of intelligence far behind. He called the creation of such a machine “our last invention” – which is unlikely to be “Good” news, the pessimists add!
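Good's argument has a simple schematic shape. Suppose, purely for illustration, that each machine generation is some factor g more capable than the last and designs its successor g times faster; then capability grows without bound while the total design time converges to a finite limit, which is what “explosion” means here. A toy Python model (all parameters are made-up placeholders, not predictions):

    # Toy model of an "intelligence explosion": each generation is g times
    # more capable than the last and designs its successor g times faster.
    def explosion_timeline(first_design_years=10.0, g=2.0, generations=30):
        elapsed, capability, design_time = 0.0, 1.0, first_design_years
        for _ in range(generations):
            elapsed += design_time   # time spent designing the next machine
            capability *= g          # the successor is g times more capable
            design_time /= g         # ...and works g times faster in turn
        return elapsed, capability

    elapsed, capability = explosion_timeline()
    print(f"~{elapsed:.1f} years elapsed, capability up by ~{capability:.2g}x")
    # Elapsed time converges to first_design_years * g / (g - 1) = 20 years,
    # while capability keeps multiplying without limit.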


In the above scenario, the moment computers become better programmers than humans marks the point in history where the speed of technological progress shifts from the speed of human thought and communication to the speed of silicon. This is a version of Vernor Vinge’s “technological singularity” – beyond this point, the curve is driven by new dynamics and the future becomes radically unpredictable, which is just what Vinge had in mind.

Not just like us, but smarter!

It would be comforting to think that any intelligence that surpassed our own capabilities would be like us, in important respects – just a lot cleverer. But here, too, the pessimists see bad news: they point out that almost all the things we humans value (love, happiness, even survival) are important to us because we have a particular evolutionary history – a history we share with higher animals, but not with computer programs such as artificial intelligences.

By default, then, we seem to have no reason to think that intelligent machines would share our values. The good news is that we probably have no reason to think they would be hostile, as such: hostility, too, is an animal emotion.


The bad news is that they might simply be indifferent to us – they might care about us as much as we care about the bugs on the windscreen.

People sometimes complain that corporations are psychopaths if they are not sufficiently reined in by human control. The pessimistic prospect here is that artificial intelligence might be similar, except much, much cleverer and much, much faster.

Getting in the way

By now you can see where this is going, according to this pessimistic view. The concern is that by creating computers that are as intelligent as humans (at least in domains that matter to technological progress), we risk yielding control over the planet to intelligences that are simply indifferent to us, and to things that we consider valuable – things such as life and a sustainable environment.

If that sounds far-fetched, the pessimists say, just ask gorillas how it feels to compete for resources with the most intelligent species – the reason they are going extinct is not (on the whole) because humans are actively hostile towards them, but because we control the environment in ways that are detrimental to their continuing survival.

How much time do we have?

It’s hard to say how urgent the problem is, even if the pessimists are right. We don’t yet know exactly what makes human thought different from the current generation of machine learning algorithms, for one thing, so we don’t know the size of the gap between the fixed bar and the rising curve.

But some trends point towards the middle of the present century. In Whole Brain Emulation: A Roadmap, the Oxford philosophers Anders Sandberg and Nick Bostrom suggest our ability to scan and emulate human brains might be sufficient to replicate human performance in silicon around that time.
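For a sense of the scale involved, here is a back-of-envelope Python sketch. The inputs are commonly cited ball-park figures for the human brain (very roughly 10^14 to 10^15 synapses, average firing rates of a few hertz, a handful of operations per synaptic event); they are assumptions for illustration, not numbers taken from the roadmap itself, whose own estimates span several orders of magnitude depending on how much biological detail is emulated.

    # Rough estimate of the raw compute a whole-brain emulation might need,
    # under the ball-park assumptions stated in the text above.
    def emulation_ops_per_second(synapses, firing_hz, ops_per_event):
        """Operations per second = synapses x mean firing rate x ops per event."""
        return synapses * firing_hz * ops_per_event

    low = emulation_ops_per_second(1e14, firing_hz=1.0, ops_per_event=1)
    high = emulation_ops_per_second(1e15, firing_hz=10.0, ops_per_event=10)
    print(f"rough range: {low:.0e} to {high:.0e} operations per second")
    # More biophysically detailed models push the requirement several orders
    # of magnitude higher, which is one reason the timing is so uncertain.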

“The pessimists might be wrong!”

Of course – making predictions is difficult, as they say, especially about the future! But in ordinary life we take uncertainties very seriously when a lot is at stake.

That’s why we use expensive robots to investigate suspicious packages, after all (even when we know that only a very tiny proportion of them will turn out to be bombs).

If the future of AI is “explosive” in the way described here, it could be the last bomb the human species ever encounters. A suspicious attitude would seem more than sensible, then, even if we had good reason to think the risks are very small.

At the moment, even that degree of reassurance seems out of our reach – we don’t know enough about the issues to estimate the risks with any high degree of confidence. (Feeling optimistic is not the same as having good reason to be optimistic, after all).

What to do?

A good first step, we think, would be to stop treating intelligent machines as the stuff of science fiction, and start thinking of them as a part of the reality that we or our descendants may actually confront, sooner or later.

Once we put such a future on the agenda, we can begin some serious research into ways to ensure that out-sourcing intelligence to machines would be safe and beneficial, from our point of view.

Perhaps the best cause for optimism is that, unlike ordinary ticking parcels, the future of AI is still being assembled, piece by piece, by hundreds of developers and scientists throughout the world.

The future isn’t yet fixed, and there may well be things we can do now to make it safer. But this is only a reason for optimism if we take the trouble to make it one, by investigating the issues and thinking hard about the safest strategies.

We owe it to our grandchildren – not to mention our ancestors, who worked so hard for so long to get us this far! – to make that effort.



Further information:
For a thorough and thoughtful analysis of this topic, we recommend The Singularity: A Philosophical Analysis by the Australian philosopher David Chalmers. Jaan Tallinn’s recent public lecture The Intelligence Stairway is available as a podcast or on YouTube via Sydney Ideas.


The Centre for the Study of Existential Risk
The authors are the co-founders, together with the eminent British astrophysicist Lord Martin Rees, of a new project to establish a Centre for the Study of Existential Risk (CSER) at the University of Cambridge.

The Centre will support research to identify and mitigate catastrophic risk from developments in human technology, including AI – further details at CSER.ORG.


Join the conversation

51 comments

  1. Stephen Pritchard, Student, cognitive science (logged in via email @gmail.com)

    I'm fascinated by AI, and hope to see it in my lifetime, but here are a few reasons to be skeptical:

    Citing Moore's law in a discussion of AI prospects is like building a 70kg human-shaped pile of steaks, and saying you've made progress towards replicating the human body. No. You haven't.

    Narrow AI is infected with misnomers. For example: Software doesn't "recognise" faces, it doesn't have a *cascade* of memories, associations, emotions, thoughts and anticipations that come with "recognising…


    1. Peter de Lissa (logged in via Twitter)

      AI need not emulate the human brain, or the human mind for that matter, to successfully transcend our abilities. It is true that current AI doesn't "recognise" things in the way that we understand/define recognition, but then if you break down the way our brains process sensorial input the initial stages don't classify as recognition in that sense either. A perfect recreation of the pattern-recognition processes would still be an infinite gap between computer and mind. The complex storage and filtration…


  2. Brad Arnold (logged in via Facebook)

    Great well balanced article. I have a master programming friend who thinks the concept of the Singularity is rubbish. On the other hand, I remember when my entire chess club swore that no computer would ever beat the best chess players (ha). The Singularity is coming. All I can say is that AI better not reflect our values, because then we would certainly be doomed.

    1. Brad Arnold (logged in via Facebook)

The other factor: human augmentation. Around the time of the emergence of the Singularity, it is also predicted that man would merge with machine. I am a trans-humanist, and if you don't know what I am talking about, or think I've lost my mind, I suggest you google the subject. I have taken steps to extend my life, and expect more technologies to emerge that will allow me to live for centuries. Furthermore, a hundred years from now my body may be unrecognizable from what it looks like today. The Singularity is coming, and I expect to live centuries and die off planet. If you don't believe or understand, then good: less competition. On the other hand, if you internalize this posting, then good, because I want at least a few people from this era to relate to centuries from now.

      1. Joe Gartner, Tilter at windmills (logged in via email @y7mail.com)

Reading too much Dan Simmons?

  3. Randolph Crawford (logged in via LinkedIn)

    Interesting article, but I think a few presumptions are debatable.

    First, Moore's Law is dead. It died about 2005 when the two decade long exponential rise in CPU clock speeds sputtered and stalled at about 3 GHz. In the past 7 years CPU speeds have advanced not at all, much less exponentially. If explosive growth in CPU performance is necessary in order to achieve The Singularity, then faggetaboutit.

    http://spectrum.ieee.org/computing/hardware/why-cpu-frequency-stalled

    Second, there…


    1. Stephen Pritchard, Student, cognitive science (logged in via email @gmail.com)

      While I think that people like Kurzweil underestimate how much has to be done, I agree with what he writes about the accelerating pace of technological improvement in hardware.

      Moore's law isn't dead. See here:
      http://en.wikipedia.org/wiki/File:Transistor_Count_and_Moore%27s_Law_-_2011.svg

      The size of transistors on a chip has continued to decrease, and the number of transistors per chip has continued to increase. There are more cores per CPU now to make up for a lack of increase in clock frequency.

      The article you linked even says:
      "That bodes well for Moore's Law, which predicts that about every two years, ­manufacturers will double the number of ­transistors they cram onto a given bit of silicon. The fundamental theorem says that we'll still be able to make full use of those transistors for a good long time. If once the whole choir of transistors had to sing to the beat of a single metronome, now it can split up into sections—and harmonize."

      1. Randolph Crawford (logged in via LinkedIn)

        The source of the graphs in that article was Intel. It's little wonder that they don't admit defeat. If Joe User doesn't replace his computer every 2-3 years, Intel goes out of business. (Did you wonder why Intel's third graph of Moore's Law spanned 40 years, instead of 12 years like the other two graphs? Perhaps it's because the downward tilt of that curve since 2003 is something they want to de-emphasize.)

        Most of Moore's Law is attributable to clock speed, not transistor count. As transistors…


    2. Ian Donald Lowe, Seeker of Truth (logged in via email @live.com.au)

      Perhaps you haven't heard of SSD' yet? Solid State Drives will supercede the chip driven cpu's but at the moment the major downside is that they aren't very big, so not much RAM. I quote:
      "A weird thing has happened in PC gaming. After years of getting spoiled by dirt-cheap hard drives and virtually unlimited storage, disk space is suddenly at a premium again. It's not because hard drives have gotten more expensive (although they have, thanks to last year's floods in Thailand), it's because we've gotten spoiled by something different: SSDs. Solid state drives are crazy-fast, and we recommend one for any new PC gaming rig, but they're just not very big."
      http://au.pc.gamespy.com/articles/122/1225606p1.html?utm_source=GameSpy&utm_medium=email&utm_campaign=1810%20GameSpy%20Sat%2008.04_6351_305224_305228&utm_content=20568043

      1. Ian Donald Lowe, Seeker of Truth (logged in via email @live.com.au)

I forgot to mention that another positive feature of the SSD is that the data and processor are contained inside a metal tube, so it has the capability to be made almost indestructible, or perhaps that is another negative feature?

      2. Peter de Lissa (logged in via Twitter)

        Solid state drives are storage devices not processors. The "processors" in them only carry out simple functions related to the transfer of information to and from the main CPU, etc.
        Supersede is the only English word to end in "sede", by the way.

        1. Ian Donald Lowe, Seeker of Truth (logged in via email @live.com.au)

          I apologise profusely for the spelling error/typo but there is no edit button so how about we let it go, OK?

          Solid State Drives have been developed to operate as CPU's for high spec. gaming systems by the industry that builds dedicated gaming computers. You may need a second core of a traditional CPU drive for data storage to have adequate RAM on the SSD but believe what you like. I can only urge you to do some more research.

          What do you think real AI is going to need most anyway? Masses of programming, which means massive storage space, or rapid data processing for 'instant' decision making? I think the latter, will be the one.

          1. Peter de Lissa (logged in via Twitter)

            "You may need a second core of a traditional CPU drive for data storage to have adequate RAM on the SSD"
If you could rephrase that I'd be most appreciative. And if you could provide the source for the suggestion that gaming platforms use SSDs as CPUs, that would be great. It is, after all, your assertion.
I think AI will emerge from the methods by which data is stored and indexed, which is an interaction between the hardware and the software, though primarily a matter of associations formed within the stored information.

            1. Ian Donald Lowe, Seeker of Truth (logged in via email @live.com.au)

              Gaming systems are moving toward using SSD's instead of HDD's (Hard Disc Drive) for maximum performance.

              http://lifehacker.com/5932009/the-complete-guide-to-solid+state-drives

              This is a good article. It gives a good description of how an SSD stores and retrieves data and why that makes it faster than a HDD.

          2. Ben Hansen (logged in via Facebook)

            "Solid State Drives have been developed to operate as CPU's for high spec. gaming systems by the industry that builds dedicated gaming computers."

            An SSD is not a Central Processing Unit. SSD's built-in processors perform a narrow set of tasks (read, write, erase, move - ie, storage tasks), as Peter already described. An SSD is used for fast storage, not processing.

            What companies are utilising SSD's as 'CPU's'?

            "You may need a second core of a traditional CPU drive for data storage to have adequate RAM on the SSD but believe what you like."

            This sentence doesn't even make sense.

            So far your contributions here are just odd and could almost be spam. They do not relate to the original discussion aside from SSD's as fast data storage (cache memory & volatile memory is usually faster).

            1. Ian Donald Lowe, Seeker of Truth (logged in via email @live.com.au)

Sorry Ben, but I was using the wrong terms. I am a gamer but I am not a techie by any means. Where I used the term CPU, I should have said HDD, because they are the usual form of drive used for CPU's these days, because they are pretty fast but not as fast as SSD's, and they are other technologies from the chip.

              Also, where I said SSD's were developed for gaming CPU's, I should have said specialist builders are utilising SSD's as CPU's because they retrieve stored information in a totally different…


              1. Ben Hansen (logged in via Facebook)

                I appreciate that you admit that you're not a techie, and refer you to Peter's post below regarding the glossary of computer terms.

                The original comment regarding Moore's Law was from Randolph, not Stephen, but your response still doesn't make a whole lot of sense to me.

                Because of your confused terminology, I don't really see what your argument is here. That gaming is leading the way on the technical front? Perhaps. A more accurate assertion would be that general demand usually drives evolution…


  4. Alex Lawrey (logged in via LinkedIn)

This is surely a philosophical question rather than one of technology; that technology will advance is a given, so this concerns metaphysics, and even spirituality, more than technology. The 'genius' issue is perhaps the place to start, looking at, say, the two Berninis: Gian Lorenzo, the famous baroque 'artist', and his father Pietro, the more 'workmanlike' of the two. The Polish word 'robotnik' describes a manual worker, say Pietro Bernini, whereas Gian Lorenzo would NOT be a 'robotnik' as he was a…


  5. Thomas McVeigh, Meh (logged in via email @live.co.uk)

    It seems to me that a fairly oft-overlooked issue is the one of motivation. Humans, as biological automatons (unless someone has managed to prove free-will) are driven by evolutionary motivations: we make tools, computers, AIs for a purpose: to make our biological lives better, to satisfy curiosity, to gain respect from our peers, and status for breeding. Animal concerns drive our high technology.

    Humans are created with the overarching desire to survive and replicate, but this is something…


  6. Rob Waite (logged in via Facebook)

Has anyone considered that power/electricity might play a part in how successful AI actually is in the future? I would imagine that in order for AI to really pose a threat to mankind the "robots" would require some form of (clean?) renewable energy source?

    Surely if that form of energy source were available it would solve a multitude of our problems including the depletion of natural resources, climate change, transport etc as well as take away the need for AI to plug themselves in to re-charge. If they relied on current energy sources AI wouldn't last all that long either, but mankind could continue as we survived without electricity for millions of years.

  7. Adrian May (logged in via Facebook)

    If we're discussing software as a life form that could challenge humans, perhaps we should examine it within the theory that determines the progress of any other life form: Darwinism.

    AI went wrong when Alan Turing set the imitation of humans as its goal: environmental selection in the cybersphere prefers entirely different characteristics. Humans can physically pull the plug on any computer and do so as soon as the computer ceases to be useful to them. The environmental fitness of a software…


  8. Robert Schreur, Poet, at best (logged in via email @gmail.com)

    I enjoyed the article. I'm no philosopher, but I do wish the Bertrand Russell Professor at the University of Cambridge might offer some Wittgensteinian reflections. There seem to be a dizzying number of "grammatical" errors in this discussion. No doubt machines may destroy us, including digital machines. But what do we possibly mean when we refer to "intelligence" with regard to such machines, much less "will" or "intention." What is it I do when I program my laptop to search for articles on AI? Does this bear any meaningful resemblance to what we think machines might do? If we don't look for differences here we succumb to a charming, or terrifying, picture. "Many of these explanations are adopted because they have a peculiar charm" (Lectures and Conversations, p. 25). Do machines at present suffer because they are so stupid? I do.

    1. Alex Cannara (logged in via LinkedIn)

      One of the AI projects in the '70s at Stanford set out to do intelligent translation, particularly for Russian (the Cold War, y'know). Once the AI lab had a working program it thought competent, it accepted some outside tests.

      One test was simply to take "The spirit is willing but the flesh is weak" from English to Russian and back again.

      The result, like others, was fun: "The wine is strong but the meat is bad."

      As Rick Perry would say: "Oops."

      After ~40 years, smart phones do a bit better, using lots more horsepower.
      ;]

  9. Alex Lawrey (logged in via LinkedIn)

AE - artificial emotion, that would be the real quantitative leap. Take the ability to walk, which robot scientists are struggling with at the moment. Humans and all land animals move around; it is an innate 'drive' learnt very early on. But we do more than simply interpreting terrain and responding accordingly: we learn in winter that ice is slippery, and when we fall over we adjust our patterns of movement as a result. Machines could do that too. What they could not do is feel the joy of having a snowball fight in winter, or of building a snowman. They might be capable of learning, and of learning to learn, but not yet (in any recognisable form) of experiencing in a non-logical way, of having emotions. So AE will be the next stage in machine evolution… or not!

    1. Alex Cannara (logged in via LinkedIn)

Shakey, one of the 1st AI robots built at SRI here in Calif, was a minicomputer on a motorized cart that had spatial recognition software and could use its TV-camera 'eyes' to find a wall plug, go over to it, and plug itself in for recharge in the nick of time.

      Motivation, concerted effort to a goal, anxiety, and relief of achievement -- all in one rickety robotic contraption. Intelligence?

Remember, SRI housed many scientists fooled by Uri Geller bending spoons and starting watches by telekinesis. Ahh, the Cold War stimuli to research.
      ;]

  10. Han Verstegen, Mechanical Technician (logged in via email @gmail.com)

Maybe "soon" we will have to choose a very fast development of superintelligent robots which should protect the human race against a threat of hostile, intelligent aliens.
The question is: what is worse? Being dominated by aliens we know nothing about, or later (after we survived) by superintelligent robots initially invented by ourselves? Thinking as a human, we will defend our place in the universe and our properties, including our creations (good and bad ones). So being dominated by hostile aliens is probably more threatening.

  11. John Morrison, Software developer (logged in via email @hotmail.com)

Artificial intelligence always struggles to have a hard logic available for every situation that may arise. Most of the time we understand that this is what limits its application. In the effort to push those limits we have already resorted to fuzzy logic to resolve what cannot be resolved with hard logic. I believe we already trust in this to operate our cars' engine management and safety systems.
    In the military area the imperative to push the limits is very strong and this is where there could…


  12. Alex Cannara (logged in via LinkedIn)

    Han makes a good point -- which 'intelligent' forms should we pick to dominate us -- earthly or alien?

    Might be good to talk with humans who've been subject to slavery as it presently exists around the world.
    ;]

  13. JOSEPH LEE, RETIRED (logged in via email @YAHOO.COM.CN)

At the start, AI is developed by human beings. We can consider that AI beings (not necessarily in the form of robots; their existence can be physically invisible and intangible, i.e. in virtual forms embedded everywhere) are students, and human beings are teachers. On the point of "teachers", they can be wise and kind teachers, but can also be clever but wicked. On the point of AI beings as students, sooner or later they will become "graduated" and be capable of independently developing their…


  14. Bob Miclette (logged in via Twitter)

What on earth would an extremely intelligent AI program be motivated to do? Since so many reasons humans have for doing anything are illogical, emotional, "goal" oriented, etc, etc, I think an autonomous machine's going to have a really interesting (and alien) set of reasons for doing what it "wants" to do.

  15. Shona Walter, Master in Environment and Energy law (logged in via email @gmail.com)

A.I. poses interesting questions when it comes to the debate on rights, or more specifically human rights. If you look at the history of rights, women and non-Europeans were denied basic human rights because they were perceived to be less intelligent than the white male population. But as these sections of society were seen as intelligent, certain rights were extended to them.

    This is basically the same reason for which animals are given welfare and not the full blown benefits of rights. Because…


    1. Brad Arnold (logged in via Facebook)

      Great posting, although I am of the school that says that power must be taken, not begged for. Until those in power realize the jig is up, and a certain demographic must be given some power and recognition because not to do so will be deleterious to the power structure, it is all too likely and convenient to be denied.

      For instance, there are many species of animals that it is easy to prove are conscious. Furthermore, some are easily proven to have capabilities beyond humans. Yet, they are not given equal status, nor power within our hierarchy. What would have to change to cause our leadership to grant them "human rights?" Obviously not just a high score on a conventional IQ test, or wide spread publication of same.

      1. Ian Donald Lowe, Seeker of Truth (logged in via email @live.com.au)

Yes well, I'm on to you Sunshine, you and your psychopathic, power-obsessed mates, and I'm not alone by any means at all (but you probably know that already, don't you?). The jig is up. You demonstrate clearly what could go wrong: you would attempt to demand human rights in your 'augmented' form, but in your posts so far you have displayed very little humanity to prove that you are entitled to them, and neither could your "superior being" metal machine.

  16. Alex Cannara (logged in via LinkedIn)

    AI -- the holy (or unholy) grail of humans' imaginings that we're "intelligent" enough to define "intelligence" and then implement it in various machinery we invent. Maybe we suffer not intelligence, but arrogance?

    Example: "Moore’s law, which predicts that the amount of computing power we can fit on a chip doubles every two years," -- no, Moore's associate came up with the description of Intel's production capabilities and put it on a slide for Moore to use in a boring meeting. And, the doubling…


  17. Christian Hazelwood, Cognitive Scientist (logged in via email @hotmail.com)

    OMG - so many fundamental mistakes - Life and intelligence are linked - we evolved to adapt - robots can not evolve without self replication because they can not evolve to adapt - if they did evolve they would be fighting each other before us - they are not a risk - I beg each and every so called AI experts - PLEASE PLEASE PLEASE - take a basic course in theory of mind before scare mongering members of public who do not know any better.. you guys do more harm than good...

  18. Ian Donald Lowe, Seeker of Truth (logged in via email @live.com.au)

    1. Stephen Pritchard, Student, cognitive science (logged in via email @gmail.com)

Most of the links you provided describe drone aircraft that could also be described as "remote controlled planes". Sure, they have some limited capacity for autonomy, but the scope of this autonomy is generally brushed over in these sorts of discussions. Drone aircraft are hardly the cutting edge of AI.

      The sciencedaily story describes "super turing machines" named after a brilliant chap who unfortunately predicted human level AI by 2000. It is an interesting story, but describes what they…


      1. Ian Donald Lowe, Seeker of Truth (logged in via email @live.com.au)

        Drones are real and drones are armed. I linked 3 articles on drones to show that and to show how numerous they are becoming. The first article also discusses a limited AI for military drones.
        The science daily article describes work being done right now and although there is no time-frame given, it illustrates how close science is to creating an AI with real learning and thinking skills.

I would have posted more links that show how far robotics has progressed and how most of the research is driven by DARPA (the Defense Advanced Research Projects Agency, an agency of the United States Department of Defense). There are other advances in robotics and AI technology that are not so military in nature, but that doesn't mean the military won't take civilian applications and apply them to their military applications, if they see something they like. Even the algorithms that power the Google search engine could be used in applications that are not quite so benign.

        1. Peter de Lissa (logged in via Twitter)

You are describing how humans have used machines for their own purposes. The discussion is about what machines would do when freed from our designs. They may give new commands to the drones to carry chemicals that would seed clouds for the purpose of limiting the greenhouse effect. Or they might just fly off into space, who knows? That is the consideration: how might a superior being feel towards us, and what might it do about those feelings?
          We already know how inhumane humans are.

          1. Ian Donald Lowe, Seeker of Truth (logged in via email @live.com.au)

            Sorry but I'm trying to get my head around the concept of a manufactured item being a "superior being". It sort of flies in the face of everything I believe and everything I have been taught.

I did speculate, thus the references to Skynet and the possibility that getting rid of water and oxygen might occur to this "superior being" which, unless pre-programmed to be restricted in ways similar to Asimov's laws of robotics (in a way that cannot be overwritten by the thinking, learning machine), may think it a good idea and not give a fig about the sanctity of life.

            This speculation has been going on for decades in Science Fiction. In nine out of ten speculative novels, it ends badly for humanity.

            1. Peter de Lissa (logged in via Twitter)

              A number of definitions would have to be agreed upon before a discussion about that, firstly "manufactured", then "being" then "superior".

              1. Joe Gartner, Tilter at windmills (logged in via email @y7mail.com)

And why would a military artifact do anything at all if it was given 'freedom'? How 'free' would it be, given the constraints of its programming? How would it refuel, effect repairs etc?

                1. Ian Donald Lowe, Seeker of Truth (logged in via email @live.com.au)

That would require self-awareness, and then it would know when it needs repair and when it needs to refuel (how to do that can all be programmed in). It's the decision-making process itself which is important, and the capability to learn and process that information into actions, which is where the decision-making process comes in.
Here's the rub: the military machine that reaches this point has been programmed to make certain responses, either ignore or attack, so the only decision to make is which weapon system to use and how much damage to inflict.

                2. Peter de Lissa (logged in via Twitter)

                  Freed from our designs is a reference to AI designing itself physically and in terms of programming. If it could not circumvent our programming then it wouldn't be free, right?

  19. Ian Donald Lowe, Seeker of Truth (logged in via email @live.com.au)

Nobody wants to talk about Solid State Drives and what they might mean for future computing development?
    They are here now and as usual, the gaming community are leading the way, pushing the boundaries all the way.
    Data compression? Games.
    Quad cores? Gaming systems.
    Best clocking speeds? Gaming systems.
    AI development? Games.
    Motion detection and recognition? Gaming systems.
Here's another one I saw some time ago:
    "Air Force Unveils Fastest Defense Supercomputer, Made of 1,760 PlayStation…


    1. Peter de Lissa (logged in via Twitter)

      I find it amusing that some people's initial reaction to the concept of a superior being is related to worship. But then of course there is the irony of us worshipping our own creation. Though who is to say this hasn't been happening since man first conceived of gods...

      1. Ian Donald Lowe, Seeker of Truth (logged in via email @live.com.au)

I don't bow down to any Man, and I and most Humans would not bow down to a machine either. Sheeple might; not me, not anyone who is awake.

        1. Peter de Lissa (logged in via Twitter)

          Nor to the conventions of punctuation, it would seem.

          1. Ian Donald Lowe, Seeker of Truth (logged in via email @live.com.au)

            You are right Peter.
            My formal education is limited and my spelling and punctuation can be bad. But I do know how to spell Pedant.

    2. Ian Donald Lowe, Seeker of Truth (logged in via email @live.com.au)

I forgot to mention the concept of taking the power of modern graphics cards (GPU's) and utilising that for data processing, which just boosts an ordinary system into something else entirely. I think the PS3 was the first system to do so.