Can we replace politicians with robots?

 A robot for an MP – who’d vote for that? Shutterstock/Mombo

Jonathan Roberts, Queensland University of Technology and Frank Mols, The University of Queensland

If you had the opportunity to vote for a politician you totally trusted, who you were sure had no hidden agendas and who would truly represent the electorate’s views, you would, right?

What if that politician was a robot? Not a human with a robotic personality but a real artificially intelligent robot.

Futures like this have been the stuff of science fiction for decades. But can it be done? And, if so, should we pursue this?

Lost trust

Recent opinion polls show that trust in politicians has declined rapidly in Western societies and voters increasingly use elections to cast a protest vote.

This is not to say that people have lost interest in politics and policy-making. On the contrary, there is evidence of growing engagement in non-traditional politics, suggesting people remain politically engaged but have lost faith in traditional party politics.

More specifically, voters increasingly feel the established political parties are too similar and that politicians are preoccupied with point-scoring and politicking. Disgruntled voters typically feel the big parties are beholden to powerful vested interests, are in cahoots with big business or trade unions, and hence their vote will not make any difference.

Another symptom of changing political engagement (rather than disengagement) is the rise of populist parties with a radical anti-establishment agenda, and growing interest in conspiracy theories that confirm people’s hunch that the system is rigged.

The idea of self-serving politicians and civil servants is not new. This cynical view has been popularised by television series such as the BBC’s Yes Minister and the more recent US series House of Cards (and the original BBC series).

We may have lost faith in traditional politics but what alternatives do we have? Can we replace politicians with something better?

Machine thinking

One alternative is to design policy-making systems in such a way that policy-makers are sheltered from undue outside influence. In so doing, so the argument goes, a space will be created within which objective scientific evidence, rather than vested interests, can inform policy-making.

At first glance this seems worth aspiring to. But what of the many policy issues over which political opinion remains deeply divided, such as climate change, same-sex marriage or asylum policy?

Policy-making is and will remain inherently political and policies are at best evidence-informed rather than evidence-based. But can some issues be depoliticised and should we consider deploying robots to perform this task?

Those focusing on technological advances may be inclined to answer “yes”. After all, complex calculations that would have taken years to complete by hand can now be solved in seconds using the latest advances in information technology.

Such innovations have proven extremely valuable in certain policy areas. For example, urban planners examining the feasibility of new infrastructure projects now use powerful traffic modelling software to predict future traffic flows.

Those focusing on social and ethical aspects, on the other hand, will have reservations. Technological advances are of limited use in policy issues involving competing beliefs and value judgements.

A fitting example would be euthanasia legislation, which is inherently bound up with religious beliefs and questions about self-determination. We may be inclined to dismiss the issue as exceptional, but this would be to overlook that most policy issues involve competing beliefs and value judgements, and from that perspective robot politicians are of little use.

Moral codes

A supercomputer may be able to make accurate predictions of numbers of road users on a proposed ring road. But what would this supercomputer do when faced with a moral dilemma?

Most people will agree that it is our ability to make value judgements that sets us apart from machines and makes us superior. But what if we could program agreed ethical standards into computers and have them take decisions on the basis of predefined normative guidelines and the consequences arising from these choices?

If that were possible, and some believe it is, could we replace our fallible politicians with infallible artificially intelligent robots after all?

The idea may sound far-fetched, but is it?

Robots may well become part of everyday life sooner than we think. For example, robots may soon be used to perform routine tasks in aged-care facilities and to keep elderly or disabled people company, and some have suggested robots could be used in prostitution. Whatever opinion we may have about robot politicians, the groundwork for this is already being laid.

A recent paper showcased a system that automatically writes political speeches. Some of these speeches are believable and it would be hard for most of us to tell if a human or machine had written them.

Politicians already use human speech writers so it may only be a small step for them to start using a robot speech writer instead.
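
Under the hood, such speech-writing systems are statistical: they learn from a corpus of real speeches which words tend to follow which, then sample new text in the same style. As a toy illustration only (not the actual method of the paper mentioned above, and with an invented corpus), a bigram Markov chain does this in a few lines of Python:

```python
# A toy bigram Markov chain, illustrating the general idea behind
# statistical speech generation. The corpus here is invented; the
# actual system in the paper is far more sophisticated.
import random

corpus = (
    "my fellow citizens we must invest in our future "
    "we must invest in education and we must act now"
).split()

# Learn which words were observed to follow each word.
follows = {}
for current, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(current, []).append(nxt)

# Generate a short "speech" by walking the chain from a seed word.
random.seed(1)
word, speech = "we", ["we"]
for _ in range(12):
    word = random.choice(follows.get(word, corpus))
    speech.append(word)
print(" ".join(speech))
```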

The same applies to policy-makers responsible for, say, urban planning or flood mitigation, who make use of sophisticated modelling software. We may soon be able to take humans out of the loop altogether and replace them with robots that have the modelling software built in.

We could think up many more scenarios, but the underlying issue will remain the same: the robot would need to be programmed with an agreed set of ethical standards allowing it to make judgements on the basis of agreed morals.

The human input

So even if we had a parliament full of robots, we would still need an agency staffed by humans charged with defining the ethical standards to be programmed into the robots.

And who gets to decide on those ethical standards? Well, we’d probably have to put that to a vote between various interested and competing parties.

This brings us full circle, back to the problem of how to prevent undue influence.

Advocates of deliberative democracy, who believe democracy should be more than the occasional stroll to a polling booth, will shudder at the prospect of robot politicians.

But free market advocates, who are more interested in lean government, austerity measures and cutting red tape, may be more inclined to give it a go.

The latter appear to have gained the upper hand, so the next time you hear a commentator refer to a politician as being robotic, remember that maybe one day some of them really will be robots!

Frank Mols, Lecturer in Political Science, The University of Queensland and Jonathan Roberts, Professor in Robotics, Queensland University of Technology

This article was originally published on The Conversation. Read the original article.

My robot Valentine: could you fall in love with a robot?

Can a robot really feel and express emotions such as love? Shutterstock/Charles Taylor

Kate Letheren, Queensland University of Technology and Jonathan Roberts, Queensland University of Technology

Imagine it’s Valentine’s Day and you’re sitting in a restaurant across the table from your significant other, about to start a romantic dinner.

As you gaze into each other’s eyes, you wonder how it can possibly be true that as well as not eating, your sweetheart does not – cannot – love you. Impossible, you think, as you squeeze its synthetic hand.

Could this be the future of Valentine’s Day for some? Recent opinion indicates that yes, we might just fall in love with our robot companions one day.

Already, robots are entering our homes at increasing rates, with many households now owning a robot vacuum cleaner.

Robotic toys are becoming more affordable and are interacting with our children. Some robots are even helping rehabilitate special needs children or teach refugee children the language of their new home.

Robot romance

Will these appliances and toys continue to develop into something more sophisticated and more human-like, to the point where we might start to see them as possible romantic partners?

While some may compare this to objectophilia (falling in love with objects), we must ask whether this can truly be the case when the object is a robot that appears and acts like a human.

It is already the norm to love and welcome our pets as family members. This shows us that some varieties of love need not be a purely human phenomenon, nor even a sexual one. There is even evidence that some pets, such as dogs, experience emotions very similar to humans’, including grief when their owner dies.

Surveys in Japan over the past few years have shown a decline in young people either in a relationship or even wanting to enter one. In 2015, for instance, it was reported that 74% of Japanese in their 20s were not in a relationship, and 40% of this age group were not looking for one. Academics in Japan suggest that young people are turning to digital substitutes for relationships, for example falling in love with Anime and Manga characters.

What is love?

If we are to develop robots that can mirror our feelings and express their digital love for us, we will first need to define love.

Pointing to a set of common markers that define love is difficult, whether it be human-to-human or human-to-technology. The answer to “what is love?” is something humans have been seeking for centuries, but as a start it seems related to strong attachment, kindness and common understanding.

We already have the immensely popular Pepper, a robot designed to read and respond to emotions and described as a “social companion for humans”.

How close are we to feeling for a robot what we might feel for a human? Recent studies show that we feel a similar amount of empathy for robot pain as we do human pain.

We also prefer robots that are relatable, showing their “imperfect” side through boredom or over-excitement.

According to researchers in the US, when we anthropomorphise something – that is, see it as having human characteristics – we start to think of it as worthy of moral care and consideration. We also see it as more responsible for its actions – a freethinking and feeling entity.

There are certainly benefits for those who anthropomorphise the world around them. The same US researchers found that those who are lonely may use anthropomorphism as a way to seek social connection.

Robots are already being programmed to learn our patterns and preferences, hence making them more agreeable to us. So perhaps it will not be long before we are gazing into the eyes of a robot Valentine.

Society’s acceptance

Human-robot relationships could be challenging for society to accept, and there may be repercussions. It would not be the first time in history that people have fallen in love in a way that society at the time deemed “inappropriate”.

The advent of robot Valentines may also have a harmful effect on human relationships. Initially, there is likely to be a heavy stigma attached to robot relationships, perhaps leading to discrimination, or even exclusion from some aspects of society (in some cases, the isolation may even be self-imposed).

Friends and family may react negatively, to say nothing of husbands or wives who discover their partner is cheating on them with a robot.

Robot love in return

One question that needs to be answered is whether robots should be programmed to have consciousness and real emotions so they can truly love us back.

When love is returned by a robot.
Shutterstock/KEG

Experts such as the British theoretical physicist Stephen Hawking have warned against such complete artificial intelligence, noting that robots may evolve autonomously and supersede humanity.

Even if evolution were not an issue, allowing robots to experience pain or emotions raises moral questions for the well-being of robots as well as humans.

So if “real” emotions are out of the question, is it moral to program robots with simulated emotional intelligence? This might have either positive or negative consequences for the mental health of the human partner. Would the simulated social support compensate for knowing that none of the experience was real or requited?

Importantly, digital love may be the catalyst for the granting of human rights to robots. Such rights would fundamentally alter the world we live in – for better or for worse.

But would any of this really matter to you and your robot Valentine, or would love indeed conquer all?


Kate Letheren, Postdoctoral research fellow, Queensland University of Technology and Jonathan Roberts, Professor in Robotics, Queensland University of Technology

This article was originally published on The Conversation. Read the original article.

Robots in health care could lead to a doctorless hospital

Would you trust your child’s health to a robot surgeon? Shutterstock/magicinfoto

Anjali Jaiprakash, Queensland University of Technology; Jonathan Roberts, Queensland University of Technology, and Ross Crawford, Queensland University of Technology

Imagine your child requires a life-saving operation. You enter the hospital and are confronted with a stark choice.

Do you take the traditional path with human medical staff, including doctors and nurses, where long-term trials have shown a 90% chance that they will save your child’s life?

Or do you choose the robotic track, in the factory-like wing of the hospital, tended to by technical specialists and an array of robots, but where similar long-term trials have shown that your child has a 95% chance of survival?

Most rational people would opt for the course of action that is more likely to save their child. But are we really ready to let machines take over from a human in delivering patient care?

Of course, machines will not always get it right. But like autopilots in aircraft, and the driverless cars that are just around the corner, medical robots do not need to be perfect; they just have to be better than humans.

So how long before robots are shown to perform better than humans at surgery and other patient care? It may be sooner, or it may be later, but it will happen one day.

But what does this mean for our hospitals? Are the new hospitals being built now ready for a robotic future? Are we planning for large-scale role changes for the humans in our future robotic factory-like hospitals?

Our future hospitals

Hospitals globally have been slow to adopt robotics and artificial intelligence into patient care, although both have been widely used and tested in other industries.

Medicine has traditionally been slow to change, as safety is at its core. Financial pressures will inevitably force industry and governments to recognise that when robots can do something better and for the same price as humans, the robot way will be the only way.

What some hospitals have done in the past 10 years is recognise the potential to be more factory-like, and hence more efficient. The term “focused factories” has been used to describe some of these new hospitals that specialise in a few key procedures and that organise the workflow in a more streamlined and industrial way.

They have even tried “lean processing” methods borrowed from the car manufacturing industry. One idea is to free up the humans in hospitals so that they can carry out more complex cases.

Some people are nervous about turning hospitals into factories. There are fears that “lean” means cutting money and hence employment. But if the motivation for going lean is to do more with the same, then it is likely that employment will change rather than reduce.

Medicine has long been segmented into many specialised fields but the doctor has been expected to travel with the patient through the full treatment pathway.

A surgeon, for example, is expected to be compassionate and good at many tasks: diagnosing, interpreting tests such as X-rays and MRIs, performing procedures and managing post-operative care.

As in numerous other industries, new technology will be one of the drivers that will change this traditional method of delivery. We can see that one day, each of the stages of care through the hospital could be largely achieved by a computer, machine or robot.

Some senior doctors are already seeing a change and are worried about the dehumanising of medicine, but this is a change for the better.

Safety first but some AI already here

Our future robot-factory hospital example is the end game, but many of its components already exist. We are simply waiting for them to be tested enough to satisfy us all that they can be used safely.

There are programs to make diagnoses based on a series of questions, and algorithms inform many treatments used now by doctors.

Surgeons are already using robots in the operating theatre to assist with surgery. Currently, the surgeon remains in control with the machine being more of a slave than a master. As the machines improve, it will be possible for a trained technician to oversee the surgery and ultimately for the robot to be fully in charge.

Hospitals will be very different places in 20 years. Beds will be able to move autonomously, transporting patients from the emergency room to the operating theatre, via X-ray if needed.

Triage will be done with the assistance of an AI device. Many decisions on treatment will be made with the assistance of, or by, intelligent machines.

Your medical information, including medications, will be read from a chip under your skin or in your phone. No more waiting for medical records or chasing information when an unconscious patient presents to the emergency room.

Robots will be able to dispense medication safely and rehabilitation will be robotically assisted. Only our imaginations can limit how health care will be delivered.

Who is responsible when things go wrong?

The hospital of the future may not require many doctors, but the numbers employed are unlikely to change at first.

Doctors in the near future will need very different skills from those of today. An understanding of technology will be imperative. They will need to learn programming and computer skills well before starting medical school. Programming will become the fourth literacy, along with reading, writing (which may vanish) and arithmetic.

But who will people sue if something goes wrong? This is, sadly, one of the first questions many people ask.

Robots will be performing tasks and many of the diagnoses will be made by a machine, but at least in the near future there will be a human involved in the decision-making process.

Insurance costs and litigation will hopefully reduce as machines perform procedures more precisely and with fewer complications. But who do you sue if your medical treatment goes tragically wrong and no human has touched you? That’s a question that still needs to be answered.

So too is the question of whether people will really trust a machine to make a diagnosis, give out tablets or do an operation.

Perhaps we have to accept that humans are far from perfect and mistakes are inevitable in health care, just as they are when we put humans behind the wheel of a car. So if driverless cars are going to reduce traffic accidents and congestion then maybe doctorless hospitals will one day save more lives and reduce the cost of health care?


Anjali Jaiprakash, Post-Doctoral Research Fellow, Medical Robotics, Queensland University of Technology; Jonathan Roberts, Professor in Robotics, Queensland University of Technology, and Ross Crawford, Professor of Orthopaedic Research, Queensland University of Technology

This article was originally published on The Conversation. Read the original article.

Digital diagnosis: intelligent machines do a better job than humans

It takes time for a human to become good at diagnosing ailments, but that learning is lost when they retire. Shutterstock/Poproskiy Alexey

Ross Crawford, Queensland University of Technology; Anjali Jaiprakash, Queensland University of Technology, and Jonathan Roberts, Queensland University of Technology

Until now, medicine has been a prestigious and often extremely lucrative career choice. But in the near future, will we need as many doctors as we have now? Are we going to see significant medical unemployment in the coming decade?

Dr Saxon Smith, president of the Australian Medical Association NSW branch, said in a report late last year that the most common concerns he hears from doctors-in-training and medical students are, “what is the future of medicine?” and “will I have a job?”. The answers, he said, continue to elude him.

As Australian, British and American universities continue to graduate increasing numbers of medical students, the obvious question is: where will these new doctors work in the future?

Will there be an expanded role for medical professionals due to our ageing populations? Or is pressure to reduce costs while improving outcomes likely to force the adoption of new technology, which will then likely erode the number of roles currently performed by doctors?

Driving down the costs

All governments, patients and doctors around the world know that healthcare costs will need to fall if we are to treat more people. Some propose making patients pay more but, however we pay for it, it’s clear that driving the cost down is what needs to happen.

The use of medical robots to assist human surgeons is becoming more widespread but, so far, they are being used to try to improve patient outcomes, not to reduce the cost of surgery. Cost savings may come later when this robotic technology matures.

It is in the area of medical diagnostics that many people see the potential for significant cost reductions, while improving accuracy, by using technology instead of human doctors.

It is already common for blood tests and genetic testing (genomics) to be carried out automatically and very cost-effectively by machines. They analyse the blood specimen and automatically produce a report.

The tests can be as simple as a haemoglobin level (blood count) through to tests for diabetes, such as insulin or glucose levels. They can also be used for far more complicated tests, such as looking at a person’s genetic makeup.

A good example is Thyrocare Technologies Ltd in Mumbai, India, where more than 100,000 diagnostic tests from around the country are done every evening, and the reports delivered within 24 hours of blood being taken from a patient.

Machines vs humans

If machines can read blood tests, what else can they do? Though many doctors will not like this thought, any test that requires pattern recognition will ultimately be done better by a machine than a human.

Many diseases need a pathological diagnosis, where a doctor looks at a sample of blood or tissue to establish the exact disease: a blood test to diagnose an infection, a skin biopsy to determine whether a lesion is cancerous, or a tissue sample taken by a surgeon looking to make a diagnosis.

All of these examples, and in fact all pathological diagnoses, are made by a doctor using pattern recognition to determine the diagnosis.

Artificial intelligence techniques using deep neural networks, which are a type of machine learning, can be used to train these diagnostic machines. Machines learn fast and we are not talking about a single machine, but a network of machines linked globally via the internet, using their pooled data to continue to improve.

It will not happen overnight – it will take some time to learn – but once trained, the machine will only continue to get better. With time, an appropriately trained machine will be better at pattern recognition than any human could ever be.
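
To make the idea concrete, here is a minimal sketch of pattern-recognition diagnosis in Python. The dataset is a stand-in bundled with scikit-learn, and the model choice and parameters are illustrative assumptions, not those of any deployed diagnostic system:

```python
# A minimal sketch of machine-learning diagnosis, using scikit-learn's
# bundled breast-cancer dataset as a stand-in for real pathology data.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A small multi-layer neural network learns the benign/malignant pattern.
clf = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0),
)
clf.fit(X_train, y_train)
print(f"Held-out accuracy: {clf.score(X_test, y_test):.2f}")
```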

Pathology is now a matter of multi-million dollar laboratories relying on economies of scale. It takes around 15 years from leaving high school to train a pathologist to function independently. It probably takes another 15 years for the pathologist to be as good as they will ever be.

Some years after that, they will retire and all that knowledge and experience is lost. Surely, it would be better if that knowledge could be captured and used by future generations? A robotic pathologist would be able to do just that.

Radiology, X-rays and beyond

Radiological tests account for over AUS$2 billion of the annual Medicare spend. In a 2013 report, it was estimated that in the 2014-15 period, 33,600,000 radiological investigations would be performed in Australia. A radiologist would have to study every one of these and write a report.

Radiologists are already reading, on average, more than seven times as many studies per day as they were five years ago. These reports, like those written by pathologists, are based on pattern recognition.

Currently, many radiological tests performed in Australia are being read by radiologists in other countries, such as the UK. Rather than having an expert in Australia get out of bed at 3am to read a brain scan of an injured patient, the image can be digitally sent to a doctor in any appropriate time zone and be reported on almost instantly.

What if machines were taught to read X-rays, working at first with, and ultimately instead of, human radiologists? Would we still need human radiologists? Probably. Improved imaging, such as MRI and CT scans, will allow radiologists to perform some procedures that surgeons now undertake.

The field of interventional radiology is rapidly expanding. In this field, radiologists are able to diagnose and treat conditions such as bleeding blood vessels. This is done using minimally invasive techniques, passing wires through larger vessels to reach the point of bleeding.

So radiologists may end up doing procedures that are currently done by vascular and cardiac surgeons. The increased use of robot-assisted surgery will make this more likely than not.

There is a lot more to diagnosing a skin lesion, rash or growth than simply looking at it. But much of the diagnosis is based on the dermatologist recognising the lesion (again, pattern recognition).

If the diagnosis remains unclear then some tissue (a biopsy) is sent to the laboratory for a pathological diagnosis. We have already established that a machine can read the latter. The same principle applies to the recognition of the skin lesion.

Once a lesion has been recognised and learnt, it will be able to be recognised again. Mobile phones with high-quality cameras will be able to link to a global database that will, like any other database with learning capability, continue to improve.

It’s not if, but when

These changes will not happen overnight, but they are inevitable. Though many doctors will see these changes as a threat, the chance for global good is unprecedented.

An X-ray taken in equatorial Africa could be read with the same reliability as one taken in an Australian centre of excellence. An infectious rash could be uploaded to a phone and the diagnosis given instantly. Many lives will be saved, and the cost of health care to the world’s poor can be minimal and, in many cases, zero.

For this to become a reality, it will take experts to work with machines and help them learn. Initially, the machines may be asked to do more straightforward tests but gradually they will be taught, just as humans learn most things in life.

The medical profession should grasp these opportunities for change, and our future young doctors should think carefully where the medical jobs of the future will lie. It is almost certain that the medical employment landscape in 15 years will not look like the one we see today.


Ross Crawford, Professor of Orthopaedic Research, Queensland University of Technology; Anjali Jaiprakash, Post-Doctoral Research Fellow, Medical Robotics, Queensland University of Technology, and Jonathan Roberts, Professor in Robotics, Queensland University of Technology

This article was originally published on The Conversation. Read the original article.

How do robots ‘see’ the world?

Disney’s WALL-E needed to see all the rubbish on Earth so it could clean it up. AAP Image/Tracey Nearmy

Jonathan Roberts, Queensland University of Technology

The world has gone mad for robots, with articles appearing almost every day about the coming of the robot revolution. But is all the hype, excitement and sometimes fear justified? Is the robot revolution really coming?

The answer is probably that in some areas of our lives we will see more robots soon. But realistically, we are not going to see dozens of robots out and about in our streets or wandering around our offices in the very near future.

One of the main reasons is simply that robots do not yet have the ability to really see the world. But before talking about how robots of the future might see, first we should consider what we actually mean by seeing.

I see you

Most of us have two eyes and we use those eyes to collect light that reflects off the objects around us. Our eyes convert that light into electrical signals that are sent down our optic nerves, which are immediately processed by our brain.

Our brain somehow works out what is around us from all of those electrical impulses and from our experiences. It builds up a representation of the world and we use that to navigate, to help us pick things up, to enable us to see one another’s faces, and to do a million other things we take for granted.

That whole activity, from collecting the light in our eyes, to having an understanding of the world around us, is what is meant by seeing.

Researchers have estimated that up to 50% of our brain is involved in the process of seeing. Nearly all of the world’s animals have eyes and can see in some way. Most of these animals, insects in particular, have far simpler brains than humans and they function well.

This shows that some forms of seeing can be achieved without the massive computing power of our mammal brains. Evolution has clearly found seeing to be quite useful.

Robot vision

It is therefore unsurprising that many robotics researchers predict that if robots can truly see, we are likely to see a boom in robotics, and robots may finally become the helpers of humans that so many people have desired.

Early days: A vacuum cleaner that can ‘see’ where it needs to clean.

How then do we get a robot to see? The first part is straightforward. We use a video camera, just like the one in your smartphone, to collect a constant stream of images. Camera technology for robots is a large research field in itself but for now just think of a standard video camera. We pass those images to a computer and then we have options.

Since the 1970s, robot vision engineers have thought about features in images. These might be lines, or interesting points like corners or certain textures. The engineers write algorithms to find these features and track them from image frame to image frame in the video stream.

This step is essentially reducing the amount of data from the millions of pixels in an image to a few hundred or thousand features.
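
As a rough sketch of that reduction step, here is what feature detection looks like in Python with the OpenCV library; the file name is a placeholder for one image from the robot’s video stream:

```python
# A minimal sketch of the feature-extraction step, using OpenCV
# (pip install opencv-python). "frame.png" is a placeholder image.
import cv2

frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)

# ORB detects corner-like features and describes the texture around
# each one, so they can be matched from frame to frame.
orb = cv2.ORB_create(nfeatures=500)
keypoints, descriptors = orb.detectAndCompute(frame, None)

print(f"{frame.size} pixels reduced to {len(keypoints)} features")
```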

In the recent past when computing power was limited, this was an essential step in the process. The engineers then think about what the robot is likely to see and what it will need to do. They write software that will recognise patterns in the world to help the robot understand what is around it.

The local environment

The software may create a very basic map of the environment as the robot operates or it may try to match the features that it finds with a library of features that the software is looking for.

In essence the robots are being programmed by a human to see things that a human thinks the robot is going to need to see. There have been many successful examples of this type of robot vision system, but practically no robot that you find today is capable of navigating in the world using vision alone.

Such systems are not yet reliable enough to keep a robot from bumping into things or falling over for long enough to be of practical use. The driverless cars that are talked about in the media use either lasers or radar to supplement their vision systems.

In the past five to ten years a new robot vision research community has started to take shape. These researchers have demonstrated systems that are not programmed as such but instead learn how to see.

They have developed robot vision systems whose structure is inspired by how scientists think animals see. That is, they use the concept of layers of neurons, just like in an animal brain. The engineers program the structure of the system but they do not develop the algorithm that runs on that system. That is left to the robot to work out for itself.

This technique is known as machine learning and because we now have easy access to significant computer power at a reasonable cost, these techniques are beginning to work! Investment in these technologies is accelerating fast.
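
To give a flavour of what those layers look like in code, here is a minimal, illustrative sketch in PyTorch. The engineer fixes the structure; the numbers inside each layer (the weights) are learned from data. The layer sizes and the ten output classes are arbitrary assumptions:

```python
# A minimal sketch of a layered vision network in PyTorch.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # layer 1: edges, textures
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # layer 2: parts of objects
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 10),                  # output: scores for 10 classes
)

image = torch.randn(1, 3, 64, 64)  # a dummy 64x64 colour image
scores = model(image)              # the network's guess for each class
print(scores.shape)                # torch.Size([1, 10])
```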

The hive mind

The significance of having robots learn is that they can easily share their learning. One robot will not have to learn from scratch like a newborn animal. A new robot can be given the experiences of other robots and can build upon those.

One robot may learn what a cat looks like and transfer that knowledge to thousands of other robots. More significantly, one robot may solve a complex task such as navigating its way around a part of a city and instantly share that with all the other robots.

Equally important is that robots which share experiences may learn together. For example, one thousand robots may each observe a different cat, share that data with one another via the internet and together learn to classify all cats. This is an example of distributed learning.
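
Here is a toy sketch of that idea in Python, with an invented linear model standing in for the cat classifier: each robot learns from its own small set of observations, and the shared model is simply the average of what they all learned:

```python
# A toy sketch of distributed learning: 1,000 "robots" each fit a model
# on their own few observations, then pool what they learned by
# averaging parameters. The linear model and data are invented.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])  # the pattern every robot is trying to learn

def local_fit(n_samples):
    """One robot fits a linear model to its own noisy local data."""
    X = rng.normal(size=(n_samples, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n_samples)
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

local_models = [local_fit(20) for _ in range(1000)]
shared_model = np.mean(local_models, axis=0)  # the pooled, shared model
print(shared_model)  # close to [2.0, -1.0]
```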

The fact that robots of the future will be capable of shared and distributed learning has profound implications and is scaring some, while exciting others.

It is quite possible that your credit card transactions are being checked for fraud right now by a data centre self-learning machine. These systems can spot possible fraud that no human could ever detect. A hive mind being used for good.

The real robot revolution

There are numerous applications for robots that can see. It’s hard to think of a part of our lives where such a robot could not help.

The first uses of robots that can see are likely to be in industries that either have labour shortage issues, such as agriculture, or are inherently unattractive and perhaps hazardous to humans.

Examples include searching through rubble after disasters, evacuating people from dangerous situations or working in confined and difficult to access spaces.

Applications that require very long periods of attention, something humans find hard, will also be ripe to be done by a robot that can see. Our future home-based robot companions will be far more useful if they can see us.

And in an operating theatre near you, it is soon likely that a seeing robot will be assisting surgeons. The robot’s superior vision and super precise and steady arms and hands will allow surgeons to focus on what they are best at – deciding what to do.

Even that decision-making ability may be superseded by a hive mind of robot doctors. The robots will have it all stitched up!


Jonathan Roberts, Professor in Robotics, Queensland University of Technology

This article was originally published on The Conversation. Read the original article.

Star Wars: these could be the droids we’re looking for in real life

BB-8 (left) is a new droid addition to the Star Wars universe. Disney

Jonathan Roberts, Queensland University of Technology

The latest episode of Star Wars is now upon us and has unleashed a new era of science fantasy robots, or “droids” as they are known.

One of the heroes of the new movie The Force Awakens is BB-8, a cute but capable spherical droid that is at the centre of the story (sorry, no spoilers).

But droids have been at the heart of the epic science fantasy saga since the original Star Wars movie back in 1977, when C-3PO uttered the immortal words:

I am C-3PO, human-cyborg relations. And this is my counterpart R2-D2.

Star Wars has always been a droid story, just as much as a story about the Skywalker family.

The old and the new: C-3PO (left), BB-8 (centre) and R2-D2 (right) from the Star Wars universe.
Reuters/Carlo Allegri

Even though we all know that Star Wars happened a long time ago, in a galaxy far, far away, just how good has it been at predicting the usefulness and development of robots on our own planet today?

Is that you R2?

For those non-Star Wars experts reading this, R2-D2 is an R-series astromech droid. Such droids work on spaceships and are particularly good at working outside in the vacuum of space. They are the mechanics of space travel and are packed with tools and know-how to fix things. They also seem to be fully waterproof, can fly short distances using deployable thrusters and somehow possess a cheeky character.

But did you know that working in orbit around Earth right now is NASA’s Robonaut 2, also known as R2? It is one of the International Space Station’s test bed droids, having a humanoid shape and proportions so that it can undertake maintenance tasks originally designed for human astronauts.

Robonaut2 – or R2 for short – from NASA and General Motors, is a robot designed to work side-by-side with people in difficult or dangerous places on Earth and in space.
NASA

Perhaps in the future, when all spaceship maintenance will be performed by droids, this real R2 unit will replace the humanoid form.

The diplomatic droid

The golden humanoid C-3PO is a protocol droid fluent in more than six million forms of communication. A protocol droid’s primary purpose in Star Wars is to help non-droids, creatures of all kinds, communicate with one another and generally avoid potentially dangerous misunderstandings.

If there were protocol droids in Mos Eisley’s Cantina then maybe no-one would have shot first! But as the bartender said of R2-D2 and C-3PO: “We don’t serve their kind here.”

We have human diplomats in our world to negotiate and attempt to head off conflict, and there seems no need for a mechanical interface such as a protocol droid.

But we are seeing translation apps on our phones and their accuracy is improving to the point where it is conceivable that live language translation between two people speaking to one another may not be too far away. Until we find non-human sentient equals, there will be few diplomatic jobs for C-3PO-like droids here on Earth.

One place we are likely to see humanoid robots like C-3PO is as artificial companions and carers. The advantage of a humanoid robot is that it should be able to cope in our homes or care facilities, as these have all been designed for humans.

This is one of the great advantages of the humanoid robot form, although there is the so-called “uncanny valley” to deal with and the feeling by some that we should always ensure people have a human touch.

Best Star Wars Droids

A way of thinking about the dozens of droids of Star Wars is to classify them by how they are used. We have seen them being used in applications as diverse as farming, medicine, war, torture and space exploration.

Farming robots

When R2-D2 and C-3PO escape Darth Vader and land in their escape pod on the sand planet of Tatooine, they are picked up by the Jawas, scavenging for droids to sell to local moisture farmers. The lack of labour on Tatooine results in droids being critical for the functioning of the farms.

Note to non-Star Wars experts: Darth Vader himself, or at least a young Anakin Skywalker, built C-3PO on Tatooine from spare parts.

In the past year alone, very capable agricultural robots have been demonstrated by Queensland University of Technology, The University of Sydney and by Swarm Farm Robotics.

Robots down on the farm.

Many other research organisations and companies are developing agricultural robotics as a way of overcoming labour availability issues, reducing the cost of inputs such as diesel and herbicide, and enabling the use of smaller machines that compact the soil less than the large tractors we see commonly used today.

Medical robots

In the Star Wars movies, medical droids appear at critical moments. The medical droids 2-1B and FX-7 twice patched up Luke in The Empire Strikes Back: once when he survived the Wampa attack on Hoth, and again at the end when they grafted a robotic hand onto Luke after his father sliced his off.

Similar Imperial DD-13 medical droids created the droid-like Darth Vader from his battered body following his light sabre duel with Obi-Wan on the volcanic planet Mustafar in Revenge of the Sith.

An EW-3 midwife droid even helped Padmé give birth to the twins Luke and Leia just prior to her tragic death.

Here on Earth, Google has been talking about its plans for new medical robots. It’s teaming up with medical device companies to develop new robotic assistants for minimally invasive surgery.

Medical robotic assistants have already become a common sight in well-equipped modern hospitals and are being used to help surgeons during urology procedures and more recently for knee replacements. New research is also showing how novel tentacle-like robot arms may be used to get to difficult to reach places.

The hope is that medical robotics will enable shorter training times for surgeons, lengthen a surgeon’s career and improve outcomes for patients. All these benefits could drive the cost of these procedures down, giving access to more people around the world.

Killer robots

Unsurprisingly, there are many droids in the Star Wars universe dedicated to killing. In Episodes I-III, the Trade Federation used droid starfighters. These were spaceships that were droids themselves and the droid command ships housed thousands of them.

The Trade Federation were also fans of deploying thousands of humanoid shaped B1 Battle Droids. Although they were relatively well equipped, they seemed stupid and were even worse shots than Stormtroopers. The far more capable Destroyer Droids had deflector shields and rapid fire laser cannons.

Killer robots and their development are a hot topic right now on Earth. A campaign has been started with the aim of developing arms controls, and some killer robots have already been deployed.

In the Middle East, drones are routinely used to deliver missiles. These are human controlled and are not autonomous but they are changing the face of conflict.

In the DMZ between the two Koreas you will find fully autonomous robots equipped with heavy-duty, long-range machine guns. If they spot movement in the DMZ they are capable of firing. There is no need for a human in the command chain. They are real Destroyer Droids.

Are these the utility droids you’re looking for?
Flickr/donsolo, CC BY-NC-SA

What is missing?

Even though we can see many examples of how the droids of Star Wars may have inspired the design of the robots of today, there is one major missing piece of technology that means our robots are nothing like a Star Wars droid. And that is the almost complete lack of reliable and capable artificial intelligence in our robots.

Nearly all of the human created robots that I have mentioned rely entirely on a human expert to either control them remotely or program them to do a small range of very specific tasks. The robots that we have today are not very autonomous.

Most of them cannot see, and even if they could, engineers have yet to develop artificial intelligence to the point where a robot by itself could solve a meaningful problem it may encounter in the world.

Really smart robots are coming and many people are working hard to tackle the challenges but we are not likely to see general-purpose droids in the near future. We have a long time to go, and are far, far away from welcoming cute robot companions such as R2-D2 and BB-8 into our homes and workplaces. Until then, let’s just all enjoy Star Wars.


Jonathan Roberts, Professor in Robotics, Queensland University of Technology

This article was originally published on The Conversation. Read the original article.

Your questions answered on artificial intelligence

Have questions about robots and artificial intelligence? Shutterstock

Toby Walsh, Data61; David Dowe, Monash University; Gary Lea, Australian National University; Jai Galliott, UNSW Australia; Jonathan Roberts, Queensland University of Technology; Katina Michael, University of Wollongong; Kevin Korb, Monash University; Robert Sparrow, Monash University, and Sean Welsh, University of Canterbury

Artificial intelligence and robotics have enjoyed a resurgence of interest, and there is renewed optimism about their place in our future. But what do they mean for us?

You submitted your questions about artificial intelligence and robotics, and we put them – and some of our own – to The Conversation’s experts.

Here are your questions answered:

  1. How plausible is human-like artificial intelligence, such as the kind often seen in films and TV?
  2. Automation is already replacing many jobs, from bank tellers today to taxi drivers in the near future. Is it time to think about making laws to protect some of these industries?
  3. Where will AI be in five-to-ten years?
  4. Should we be concerned about military and other armed robots?
  5. How plausible is super-intelligent artificial intelligence?
  6. Given what little we know about our own minds, can we expect to intentionally create artificial consciousness?
  7. How do cyborgs differ (technically or conceptually) from A.I.?
  8. Are you generally optimistic or pessimistic about the long term future of artificial intelligence and its benefits for humanity?

 

Q1. How plausible is human-like artificial intelligence?

A. Toby Walsh, Professor of AI:

It is 100% plausible that we’ll have human-like artificial intelligence.

I say this even though the human brain is the most complex system in the universe that we know of. There’s nothing approaching the complexity of the brain’s billions of neurons and trillions of connections. But there are also no physical laws we know of that would prevent us reproducing or exceeding its capabilities.

A. Kevin Korb, Reader in Computer Science

Popular AI from Isaac Asimov to Steven Spielberg is plausible. What the question doesn’t address is: when will it be plausible?

Most AI researchers (including me) see little or no evidence of it coming anytime soon. Progress on the major AI challenges is slow, if real.

What I find less plausible than the AI in fiction is the emotional and moral lives of robots. They seem to be either unrealistically empty, such as the emotion-less Data in Star Trek, or unrealistically human-identical or superior, such as the AI in Spike Jonze’s Her.

All three – emotion, ethics and intelligence – travel together, and none is genuinely possible in any real form without the others, but fiction writers tend to treat them as separate. Plato’s Socrates made a similar mistake.

A. Gary Lea, Researcher in Artificial Intelligence Regulation

AI is not impossible, but the real issue is: “how like is like?” The answer probably lies in applied tests: the Turing test was already (arguably) passed in 2014 but there is also the coffee test (can an embodied AI walk into an unfamiliar house and make a cup of coffee?), the college degree test and the job test.

If AI systems could progressively pass all of those tests (plus whatever else the psychologists might think of), then we would be getting very close. Perhaps the ultimate challenge would be whether a suitably embodied AI could live among us as J. Average and go undetected for five years or so before declaring itself.


Q2. Automation is already replacing many jobs. Is it time to make laws to protect some of these industries?

A. Jonathan Roberts, Professor of Robotics

Researchers at the University of Oxford published a now well-cited paper in 2013 that ranked jobs in order of how feasible it was to computerise or automate them. They found that nearly half of jobs in the USA could be at risk from computerisation within 20 years.

This research was followed in 2014 by the viral video hit, Humans Need Not Apply, which argued that many jobs will be replaced by robots or automated systems and that employment would be a major issue for humans in the future.

Of course, it is difficult to predict what will happen, as the reasons for replacing people with machines are not simply based around available technology. The major factor is actually the business case and the social attitudes and behaviour of people in particular markets.

A. Rob Sparrow, Professor of Philosophy

Advances in computing and robotic technologies are undoubtedly going to lead to the replacement of many jobs currently done by humans. I’m not convinced that we should be making laws to protect particular industries though. Rather, I think we should be doing two things.

First, we should be making sure that people are assured of a good standard of living and an opportunity to pursue meaningful projects even in a world in which many more jobs are being done by machines. After all, the idea that, in the future, machines would work so that human beings didn’t have to toil used to be a common theme in utopian thought.

When we accept that machines putting people out of work is bad, what we are really accepting is the idea that whether ordinary people have an income and access to activities that can give their lives meaning should be up to the wealthy, who may choose to employ them or not. Instead, we should be looking to redistribute the wealth generated by machines in order to reduce the need for people to work without thereby reducing the opportunities available to them to be doing things that they care about and gain value from.

Second, we should be protecting vulnerable people in our society from being treated worse by machines than they would be treated by human beings. With my mother, Linda Sparrow, I have argued that introducing robots into the aged care setting will most likely result in older people receiving a worse standard of treatment than they already do in the aged care sector. Prisoners and children are also groups who are vulnerable to suffering at the hands of robots introduced without their consent.

A. Toby Walsh, Professor of AI:

There are some big changes about to happen. The #1 job in the US today is truck driver. In 30 years’ time, most trucks will be autonomous.

How we cope with this change is a question not for technologists like myself but for society as a whole. History would suggest that protectionism is unlikely to work. We would, for instance, need every country in the world to sign up.

But there are other ways we can adjust to this brave new world. My vote would be to ensure we have an educated workforce that can adapt to the new jobs that technology creates.

We need people to enter the workforce with skills for jobs that will exist in a couple of decades time when the technologies for these jobs have been invented.

We need to ensure that everyone benefits from the rising tide of technology, not just the owners of the robots. Perhaps we can all work less and share the economic benefits of automation? This is likely to require fundamental changes to our taxation and welfare system informed by the ideas of people like the economist Thomas Piketty.

A. Kevin Korb, Reader in Computer Science

Industrial protection and restriction are the wrong way to go. I’d rather we develop our technology so as to help solve some of our very real problems. That’s bound to bring with it economic dislocation, so a caring society will accommodate those who lose out because of it.

But there’s no reason we can’t address that with improving technology as long as we keep the oligarchs under control. And if we educate people for flexibility rather than to fit into a particular job, intelligent people will be able to cope with the dislocation.

A. Jai Galliot, Defence Analyst

The standard argument is that workers displaced by automation go on to find more meaningful work. However, this does not hold in all cases.

Think about someone who signed up with the Air Force to fly jets. These pilots may have spent their whole social, physical and psychological lives preparing or maintaining readiness to defend their nation and its people.

For service personnel, there are few higher-value jobs than serving one’s nation through rendering active military service on the battlefield, so this assurance of finding alternative and meaningful work in a more passive role is likely to be of little consolation to a displaced soldier.

Thinking beyond the military, we need to be concerned that the Foundation for Young Australians indicates that as many as 60% of today’s young people are being trained for jobs that will soon be transformed due to automation.

The sad fact of the matter is that one robot can replace many workers. The future of developed economies therefore depends on youth adapting to globalised and/or shared jobs that are increasingly complemented by automation within what will inevitably be an innovation and knowledge economy.


Q3. Where will AI be in five-to-ten years?

A. Toby Walsh, Professor of AI:

AI will become the operating system of all our connected devices. Apps like Siri and Cortana will morph into the way we interact with the connected world.

AI will be the way we interact with our smartphones, cars, fridges, central heating systems and front doors. We will be living in an always-on world.

A. Jonathan Roberts, Professor of Robotics

It is likely that in the next five to ten years we will see machine learning systems interact with us in the form of robots. The next large technology hurdle that must be overcome in robotics is to give them the power of sight.

This is a grand challenge and one that has filled the research careers of many thousands of robotics researchers over the past four or five decades. There is a growing feeling in the robotics community that machine learning using large datasets will finally crack some of the problems in enabling a robot to actually see.

Four universities in Australia have recently teamed up in an ARC-funded Centre of Excellence in Robotic Vision. Their mission is to solve many of the problems that prevent robots seeing.


Q4. Should we be concerned about military and other armed robots?

A. Rob Sparrow, Professor of Philosophy

The last thing humanity needs now is for many of its most talented engineers and roboticists to be working on machines for killing people.

Robotic weapons will greatly lower the threshold of conflict. They will make it easier for governments to start wars because they will hold out the illusion of being able to fight without taking any casualties. They will increase the risk of accidental war because militaries will deploy unmanned systems in high threat environments, where it would be too risky to place a human being, such as just outside a potential enemy’s airspace or deep sea ports.

In these circumstances, robots may even start wars without any human being having the chance to veto the decision. The use of autonomous robots to kill people threatens to further erode respect for human life.

It was for these reasons that, with several colleagues overseas, I co-founded the International Committee for Robot Arms Control, which has in turn supported the Campaign to Stop Killer Robots.

A. Toby Walsh, Professor of AI:

“Killer robots” are the next revolution in warfare, after gunpowder and nuclear bombs. If we act now, we can perhaps get a ban in place and prevent an arms race to develop better and better killer robots.

A ban won’t uninvent the technology. It’s much the same technology that will go, for instance, into our autonomous cars. And autonomous cars will prevent the 1,000 or so deaths on the roads of Australia each year.

But a ban will associate enough stigma with the technology that arms companies won’t sell killer robots, and won’t develop them to be better and better at killing humans. This has worked with a number of other weapon types in the past, such as blinding lasers. If we don’t put a ban in place, you can be sure that terrorists and rogue nations will use killer robots against us.

For those who argue that killer robots are already covered by existing humanitarian law, I profoundly disagree. We cannot correctly engineer them today not to cause excessive collateral damage. And in the future, when we can, there is little stopping them being hacked and made to behave unethically. Even used lawfully, they will be weapons of terror.

You can learn more about these issues by watching my TEDx talk on this topic.

A. Sean Welsh, Researcher in Robot Ethics

We should be concerned about military robots. However, we should not be under the illusion that there is no existing legislation that regulates weaponised robots.

There is no specific law that bans murdering with piano wire. There is simply a general law against murder. We do not need to ban piano wire to stop murders. Similarly, existing laws already forbid the use of any weapons to commit murder in peacetime and to cause unlawful deaths in wartime.

There is no need to ban autonomous weapons as a result of fears that they may be used unlawfully any more than there is a need to ban autonomous cars for fear they might be used illegally (as car bombs). The use of any weapon that is indiscriminate, disproportionate and causes unnecessary suffering is already unlawful under international humanitarian law.

Some advocate that autonomous weapons should be put in the same category as biological and chemical weapons. However, the main reason for bans on chemical and biological weapons is that they are inherently indiscriminate (cannot tell friend from foe from civilian) and cause unnecessary suffering (slow painful deaths). They have no humanitarian positives.

By contrast, there is no suggestion that “killer robots” (even in the examples given by opponents) will necessarily be indiscriminate or cause painful deaths. The increased precision and accuracy of robotic weapons systems compared to human operated ones is a key point in their favour.

If correctly engineered, they would be less likely to cause collateral damage to innocents than human operated weapons. Indeed robot weapons might be engineered so as to be more likely to capture rather than kill. Autonomous weapons do have potential humanitarian positives.


Q5. How plausible is super-intelligent AI?

A. David Dowe, Associate Professor in Machine Learning and Artificial Intelligence

We can look at the progress made on various tasks once said to be impossible for machines, and see them being achieved one by one. For example: beating the human world chess champion (1997); winning at Jeopardy! (2011); driverless vehicles, which are now commonplace on mining sites; automated translation; and so on.

And, insofar as intelligence test problems are a measure of intelligence, I’ve recently looked at how computers are performing on these tests.

A. Rob Sparrow, Professor of Philosophy

If there can be artificial intelligence then there can be super-intelligent artificial intelligences. There doesn’t seem to be any reason why entities other than human beings could not be intelligent. Nor does there seem to be any reason to think that the highest human IQ represents the upper limit on intelligence.

If there is any danger of human beings creating such machines in the near future, we should be very scared. Think about how human beings treat rats. Why should machines that are as many times more intelligent than us as we are more intelligent than rats treat us any better?


Q6. Given what little we know about our own minds, can we expect to intentionally create artificial consciousness?

A. Kevin Korb, Reader in Computer Science

As a believer in functionalism, I believe it is possible to create artificial consciousness. It doesn’t follow that we can “expect” to do it, but only that we might.

John Searle’s arguments against the possibility of artificial consciousness seem to confuse functional realisability with computational realisability. That is, it may well be (logically) impossible to “compute” consciousness, but that doesn’t mean that an embedded, functional computer cannot be conscious.

A. Rob Sparrow, Professor of Philosophy

A number of engineers, computer scientists, and science fiction authors argue that we are on the verge of creating artificial consciousness. They usually proceed by estimating the number of neurons in the human brain and pointing out that we will soon be able to build computers with a similar number of logic gates.
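
The arithmetic behind that argument is easy to reproduce. Here is a rough sketch, using widely quoted order-of-magnitude estimates rather than settled measurements:

    # Back-of-envelope version of the "count the neurons" argument.
    # All figures are rough, order-of-magnitude estimates.
    neurons = 86e9               # ~86 billion neurons in an adult human brain
    synapses_per_neuron = 1e4    # ~10,000 connections per neuron
    synapses = neurons * synapses_per_neuron

    transistors_per_chip = 50e9  # a large modern processor or GPU die
    chips = synapses / transistors_per_chip
    print(f"~{synapses:.0e} synapses; ~{chips:.0f} chips at one transistor per synapse")

Even granting the figures, the weak step is equating a transistor with a synapse, which is precisely where sceptics push back.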

If you ask a psychologist or a psychiatrist, whose job it is to actually “fix” minds, I think you will likely get a very different answer. After all, the state-of-the-art treatment for severe depression still consists in shocking the brain with electricity, which looks remarkably like trying to fix a stalled car by pouring petrol over the top of the engine. So I’m sceptical that we understand enough about the mind to design one.


Q7. How do cyborgs differ (technically or conceptually) from AI?

A. Katina Michael, Associate Professor in Information Systems

A cyborg is a human-machine combination. By definition, a cyborg is any human who adds parts, or enhances his or her abilities by using technology. As we have advanced our technological capabilities, we have discovered that we can merge technology onto and into the human body for prosthesis and/or amplification. Thus, technology is no longer an extension of us, but “becomes” a part of us if we opt into that design.

In contrast, artificial intelligence is the capability of a computer system to learn from its experiences and simulate human intelligence in decision-making. A cyborg usually begins as a human and may undergo a transformational process, whereas artificial intelligence is imbued into a computer system itself predominantly in the form of software.
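
That phrase “learn from its experiences” can be made concrete with a toy decision-making loop. The sketch below is purely illustrative, with invented payoff numbers: an agent that discovers by trial and error which of two actions pays off more often.

    # A toy illustration of "learning from experience": an epsilon-greedy
    # agent that learns, by trial and error, which of two actions pays off.
    import random

    payoff = {"A": 0.3, "B": 0.7}   # true payout rates, hidden from the agent
    value = {"A": 0.0, "B": 0.0}    # the agent's running estimates
    counts = {"A": 0, "B": 0}

    for trial in range(1000):
        if random.random() < 0.1:               # explore occasionally
            action = random.choice(["A", "B"])
        else:                                   # otherwise exploit the best estimate
            action = max(value, key=value.get)
        reward = 1 if random.random() < payoff[action] else 0
        counts[action] += 1
        value[action] += (reward - value[action]) / counts[action]  # update running mean

    print(value)  # the estimates converge towards the true payout rates

Nothing here resembles a body or a brain; the “experience” is just a stream of numbers, which is exactly the contrast with a cyborg drawn above.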

Some researchers have claimed that a cyborg can also begin as a humanoid robot and incorporate the living tissue of a human or other organism. Regardless of whether it is a human-to-machine or a machine-to-organism coalescence, when AI is applied via silicon microchips or nanotechnology embedded into prosthetic forms, such as an artificial limb, a vital organ or a replacement or additional sensory input, the human or piece of machinery is said to be a cyborg.

There are already early experiments with such cybernetics. In 1998, Professor Kevin Warwick named his first experiment Cyborg 1.0, surgically implanting a silicon chip transponder into his forearm. In 2002, in project Cyborg 2.0, Warwick had a one-hundred-electrode array surgically implanted into the median nerve fibres of his left arm.

Ultimately we need to be extremely careful that any artificial intelligence we invite into our bodies does not submerge the human consciousness and, in doing so, rule over it.

Cybernetics is already with us. Shutterstock


 

Q8. Are you generally optimistic or pessimistic about the future of artificial intelligence and its benefits for humanity?

A. Toby Walsh, Professor of AI

I am both optimistic and pessimistic. AI is one of humankind’s truly revolutionary endeavours. It will transform our economies, our society and our position at the centre of this world. If we get this right, the world will be a much better place. We’ll all be healthier, wealthier and happier.

Of course, as with any technology, there are also bad paths we might end up following instead of the good ones. And unfortunately, humankind has a track record of late of following the bad paths.

We know global warming is coming, but we seem unable to leave this path. We know that terrorism is fracturing the world, but we seem unable to prevent it. AI will also challenge our society in deep and fundamental ways. It will, for instance, completely change the nature of work. Science fiction will soon be science fact.

A. Rob Sparrow, Professor of Philosophy

I am generally pessimistic about the long term impact of artificial intelligence research on humanity.

I don’t want to deny that artificial intelligence has many benefits to offer, especially in supporting human beings to make better decisions and to pursue scientific goals that are currently beyond our reach. Investigating how brains work by trying to build machines that can do what they do is an interesting and worthwhile project in its own right.

However, there is a real danger that the systems that AI researchers come up with will mainly be used to further enrich the wealthy and to entrench the power of the powerful.

I also think there is a risk that the prospect of AI will allow people to delude themselves that we don’t need to do something about climate change now. It may also distract them from the fact that we already know what to do, but we lack the political will to do it.

Finally, even though I don’t think we’ve currently got much of a clue of how this might happen, if engineers do eventually succeed in creating genuine AIs that are smarter than we are, this might well be a species-level extinction threat.

A. Jonathan Roberts, Professor of Robotics

I am generally optimistic about the long-term future of AI for humanity. I think that AI has the potential to radically change humanity and hence, if you don’t like change, you are not going to like the future.

I think that AI will revolutionise health care, especially diagnosis, and will enable the customisation of medicine to the individual. It is very possible that AI GPs and robot doctors will share their knowledge as they acquire it, creating a super doctor with access to all the world’s medical data.

I am also optimistic because humans tend to recognise when technology is having major negative consequences, and we eventually deal with it. Humans are in control and will naturally try to use technology to make a better world.

A. Kevin Korb, Reader in Computer Science

I’m pessimistic about the medium-term future of humanity. I think climate change and its attendant dislocations, wars and so on may well massively disrupt science and technology. In that case, progress on AI may stop.

If that doesn’t happen, then I think progress will continue and we’ll achieve AI in the long term. Along the way, AI research will produce spin-offs that help the economy and society, so for as long as the field exists, AI technology will be important.

A. Gary Lea, Researcher in Artificial Intelligence Regulation

I suspect the long-term future for AI will turn out to be the usual mixed bag: some good, some bad. If scientists and engineers think sensibly about safety and public welfare when making their research, design and build choices (and provided there are suitable regulatory frameworks in place as a backstop), I think we should be okay.

So, on balance, I am cautiously optimistic on this front – but there are many other long-term existential risks for humanity.


Toby Walsh, Professor of AI, Research Group Leader, Optimisation Research Group, Data61; David Dowe, Associate Professor, Clayton School of Information Technology, Monash University; Gary Lea, Visiting Researcher in Artificial Intelligence Regulation, Australian National University; Jai Galliott, Research Fellow in Indo-Pacific Defence, UNSW Australia; Jonathan Roberts, Professor in Robotics, Queensland University of Technology; Katina Michael, Associate Professor, School of Information Systems and Technology, University of Wollongong; Kevin Korb, Reader in Computer Science, Monash University; Robert Sparrow, Professor, Department of Philosophy; Adjunct Professor, Centre for Human Bioethics, Monash University; and Sean Welsh, Doctoral Candidate in Robot Ethics, University of Canterbury

This article was originally published on The Conversation. Read the original article.

Rise of the humans: intelligence amplification will make us as smart as the machines

Augmented reality technology could soon boost our intelligence. COM SALUD Agencia de comunicación/Flickr, CC BY

Alvin DMello, Queensland University of Technology

In January this year, Microsoft announced the HoloLens, a technology based on augmented reality (AR) and virtual reality (VR).

HoloLens supplements what you see with overlaid 3D images. It also uses artificial intelligence (AI) to generate relevant information depending on the situation the wearer is in. That information is then overlaid on your normal vision using VR.


Microsoft’s HoloLens in action.

It left a lot of us imagining its potential, from video games to medical sciences. But HoloLens might also give us insight into an idea that goes beyond conventional artificial intelligence: that technology could complement our intelligence, rather than replace it, as is often imagined when people talk about AI.

From AI to IA

Around the same time that AI was first defined, another concept emerged: intelligence amplification (IA), also variously known as cognitive augmentation or machine-augmented intelligence.

In contrast to AI, which is a standalone system capable of processing information as well as or better than a human, IA is actually designed to complement and amplify human intelligence. IA has one big edge over AI: it builds on human intelligence that has evolved over millions of years, while AI attempts to build intelligence from scratch.

IA has been around, at least in a very broad sense, since humans first began to communicate. Writing was among the first technologies that might be considered IA, and it enabled us to enhance our creativity, understanding, efficiency and, ultimately, intelligence.

For instance, our ancestors built tools and structures using trial-and-error methods, assisted by knowledge passed on verbally and through demonstration by their forebears. But there is only so much information that any one individual can retain in their mind without external assistance.

Today we build complex structures with the help of hi-tech survey tools and highly accurate software. Our knowledge has also much improved thanks to the recorded experiences of countless others who have come before us. More knowledge than any one person could remember is now readily accessible through external devices at the push of a button.

Although IA has been around in principle for many years, it has not been widely recognised as a subject in its own right. But with systems such as HoloLens, IA can now be developed explicitly, and faster than was possible in the past.

From AR to IA

Augmented reality is just the latest technology to enable IA, supplementing our intelligence and improving it.

The leap that Microsoft has taken with HoloLens is using AI to boost IA. Although this has been done before in various disparate systems, Microsoft has managed to bring all the smaller components together and present them on a large scale with a rich experience.

Augmented reality experience on HoloLens. Microsoft

For example, law enforcement agencies could use HoloLens to access information on demand. It could rapidly access a suspect’s record to determine whether they’re likely to be dangerous. It could anticipate the routes the suspect is likely to take in a pursuit. This would effectively make the officer more “intelligent” in the field.

Surgeons are already making use of 3D printing technology to pre-model surgical procedures, enabling them to conduct some very intricate surgeries that were never before possible. Similar simulations could be done by projecting the model through an AR device like HoloLens.

Blurred lines

Lately there has been some major speculation about the threat posed by superintelligent AI. Philosophers such as Nick Bostrom have explored many issues in this realm.

AI today is far behind the intelligence possessed by any individual human. However, that might change. Yet the fear of superintelligent AI is predicated on there being a clear distinction between the AI and us. With IA, that distinction is blurred, and so too is the possibility of there being a conflict between us and AI.

Intelligence amplification is an old concept, but it is coming to the fore with the development of new augmented reality devices. It may not be long before your own thinking is enhanced to superhuman levels thanks to a seamless interface with technology.

Alvin DMello, PhD Candidate, Queensland University of Technology

This article was originally published on The Conversation. Read the original article.

What human emotions do we really want of artificial intelligence?

The challenge in making AI machines appear more human. Flickr/Rene Passet, CC BY-NC-ND

David Lovell, Queensland University of Technology

Forget the Turing and Lovelace tests on artificial intelligence: I want to see a robot pass the Frampton Test.

Let me explain why rock legend Peter Frampton enters the debate on AI.

For many centuries, much thought was given to what distinguishes humans from animals. These days thoughts turn to what distinguishes humans from machines.

The British codebreaker and computing pioneer Alan Turing proposed “the imitation game” (also known as the Turing Test) as a way to evaluate whether a machine can do something we humans love to do: have a good conversation.

If a human judge cannot consistently distinguish a machine from another human by conversation alone, the machine is deemed to have passed the Turing Test.

Initially, Turing proposed to consider whether machines can think, but realised that, thoughtful as we may be, humans don’t really have a clear definition of what thinking is.

Tricking the Turing Test

Maybe it says something about another human quality – deviousness – that the Turing Test came to encourage computer programmers to devise machines to trick the human judges, rather than to embody sufficient intelligence to hold a realistic conversation.

This trickery climaxed on June 7, 2014, when Eugene Goostman convinced about a third of the judges in the Turing Test competition at the Royal Society that “he” was a 13-year-old Ukrainian schoolboy.

Eugene was a chatbot: a computer program designed to chat with humans. Or to chat with other chatbots, to somewhat surreal effect (see the video below).


And critics were quick to point out the artificial setting in which this deception occurred.
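
It is worth seeing how little machinery a chatbot can get away with. The toy sketch below, in the spirit of Joseph Weizenbaum’s 1966 ELIZA, is purely illustrative: a handful of pattern-matching rules and a stock deflection, with no understanding anywhere.

    # A toy, ELIZA-style chatbot: pattern-matching rules, no understanding.
    import re

    RULES = [
        (r"\bI am (.+)", "Why do you say you are {0}?"),
        (r"\bI feel (.+)", "Do you often feel {0}?"),
        (r"\bbecause (.+)", "Is that the real reason?"),
    ]

    def reply(utterance: str) -> str:
        for pattern, template in RULES:
            match = re.search(pattern, utterance, re.IGNORECASE)
            if match:
                return template.format(*match.groups())
        return "Tell me more."  # a stock deflection keeps the conversation going

    print(reply("I am worried about robots"))
    # -> Why do you say you are worried about robots?

Eugene’s creators used far more elaborate tricks, including the 13-year-old persona that excused broken English and gaps in knowledge, but the underlying strategy of redirecting and deflecting to keep the human talking is much the same.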

The creative mind

Chatbots like Eugene led researchers to throw down a more challenging gauntlet to machines: be creative!

In 2001, researchers Selmer Bringsjord, Paul Bello and David Ferrucci proposed the Lovelace Test – named after the 19th-century mathematician and programmer Ada, Countess of Lovelace – which asks a computer to create something, such as a story or poem.

Computer-generated poems and stories have been around for a while, but to pass the Lovelace Test, the person who designed the program must not be able to account for how it produces its creative works.

Mark Riedl, from the School of Interactive Computing at Georgia Tech, has since proposed an upgrade (Lovelace 2.0) that scores a computer in a series of progressively more demanding creative challenges.

This is how he describes being creative:

In my test, we have a human judge sitting at a computer. They know they’re interacting with an AI, and they give it a task with two components. First, they ask for a creative artifact such as a story, poem, or picture. And secondly, they provide a criterion. For example: “Tell me a story about a cat that saves the day,” or “Draw me a picture of a man holding a penguin.”

But what’s so great about creativity?

Challenging as Lovelace 2.0 may be, it’s argued that we should not place creativity above other human qualities.

This (very creative) insight from Dr Jared Donovan arose in a panel discussion with roboticist Associate Professor Michael Milford and choreographer Professor Kim Vincs at Robotronica 2015 earlier this month.

Amid all the recent warnings that AI could one day lead to the end of humankind, the panel’s aim was to discuss the current state of creativity and robots. Discussion led to questions about the sort of emotions we would want intelligent machines to express.

Empathy – the ability to understand and share feelings of another – was top of the list of desirable human qualities that day, perhaps because it goes beyond mere recognition (“I see you are angry”) and demands a response that demonstrates an appreciation of emotional impact.

Hence, I propose the Frampton Test, after the critical question posed by rock legend Peter Frampton in the 1973 song “Do You Feel Like We Do?”

True, this is slightly tongue in cheek, but I imagine that to pass the Frampton Test an artificial system would have to give a convincing and emotionally appropriate response to a situation that would arouse feelings in most humans. I say most because our species has a spread of emotional intelligence levels.

I second that emotion

Noting that others have explored this territory and that the field of “affective computing” strives to imbue machines with the ability to simulate empathy, it is still fascinating to contemplate the implications of emotional machines.
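
To see where simulated empathy stops short, consider the shape of a typical affective-computing pipeline: recognise an emotion, then select a response. The sketch below is a toy stand-in, with a keyword lexicon doing the work that trained models over voice, face and text do in real systems.

    # A toy "affective computing" pipeline: detect the emotional tone of a
    # message with a keyword lexicon, then pick a canned response template.
    LEXICON = {
        "sad": ("sad", "down", "miserable", "grieving"),
        "angry": ("angry", "furious", "annoyed"),
        "happy": ("happy", "delighted", "thrilled"),
    }

    RESPONSES = {
        "sad": "I'm sorry to hear that. That sounds hard.",
        "angry": "I can see why that would be frustrating.",
        "happy": "That's wonderful news!",
        None: "I see. How does that make you feel?",
    }

    def empathise(message: str) -> str:
        words = message.lower().split()
        detected = next((emotion for emotion, cues in LEXICON.items()
                         if any(cue in words for cue in cues)), None)
        return RESPONSES[detected]

    print(empathise("I have been feeling sad all week"))
    # -> I'm sorry to hear that. That sounds hard.

Everything here is recognition followed by template selection; nothing feels anything, which is exactly the gap between “I see you are angry” and the response the Frampton Test asks for.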

This July, AI and robotics researchers released an open letter on the peril of autonomous weapons. If machines could have even a shred of empathy, would we fear these developments in the same way?

This reminds us, too, that human emotions are not all positive: hate, anger, resentment and so on. Perhaps we should be more grateful that the machines in our lives don’t display these feelings. (Can you imagine a grumpy Siri?)

Still, there are contexts where our nobler emotions would be welcome: sympathy and understanding in health care for instance.

As with all questions worthy of serious consideration, the Robotronica panellists did not resolve whether robots could one day be creative, or whether indeed we would want that to come to pass.

As for machine emotion, I think the Frampton Test will be even longer in the passing. At the moment the strongest emotions I see around robots are those of their creators.



Acknowledgement: This article was inspired by discussion and debate at the Robotronica 2015 panel session The Lovelace Test: Can Robots Be Creative? and I gratefully acknowledge the creative insights of panellists Dr Jared Donovan (QUT), Associate Professor Michael Milford (QUT) and Professor Kim Vincs (Deakin).

David Lovell, Head of the School of Electrical Engineering and Computer Science, Queensland University of Technology

This article was originally published on The Conversation. Read the original article.