Can we replace politicians with robots?

 A robot for an MP – who’d vote for that? Shutterstock/Mombo

Jonathan Roberts, Queensland University of Technology and Frank Mols, The University of Queensland

If you had the opportunity to vote for a politician you totally trusted, who you were sure had no hidden agendas and who would truly represent the electorate’s views, you would, right?

What if that politician was a robot? Not a human with a robotic personality but a real artificially intelligent robot.

Futures like this have been the stuff of science fiction for decades. But can it be done? And, if so, should we pursue this?

Lost trust

Recent opinion polls show that trust in politicians has declined rapidly in Western societies and voters increasingly use elections to cast a protest vote.

This is not to say that people have lost interest in politics and policy-making. On the contrary, there is evidence of growing engagement in non-traditional politics, suggesting people remain politically engaged but have lost faith in traditional party politics.

More specifically, voters increasingly feel the established political parties are too similar and that politicians are preoccupied with point-scoring and politicking. Disgruntled voters typically feel the big parties are beholden to powerful vested interests, are in cahoots with big business or trade unions, and hence their vote will not make any difference.

Another symptom of changing political engagement (rather than disengagement) is the rise of populist parties with a radical anti-establishment agenda and growing interest in conspiracy theories that confirm people’s hunch that the system is rigged.

The idea of self-serving politicians and civil servants is not new. This cynical view has been popularised by television series such as the BBC’s Yes Minister and the more recent US series House of Cards (and the original BBC series).

We may have lost faith in traditional politics but what alternatives do we have? Can we replace politicians with something better?

Machine thinking

One alternative is to design policy-making systems in such a way that policy-makers are sheltered from undue outside influence. In so doing, so the argument goes, a space will be created within which objective scientific evidence, rather than vested interests, can inform policy-making.

At first glance this seems worth aspiring to. But what of the many policy issues over which political opinion remains deeply divided, such as climate change, same-sex marriage or asylum policy?

Policy-making is and will remain inherently political and policies are at best evidence-informed rather than evidence-based. But can some issues be depoliticised and should we consider deploying robots to perform this task?

Those focusing on technological advances may be inclined to answer “yes”. After all, complex calculations that would have taken years to complete by hand can now be solved in seconds using the latest advances in information technology.

Such innovations have proven extremely valuable in certain policy areas. For example, urban planners examining the feasibility of new infrastructure projects now use powerful traffic modelling software to predict future traffic flows.

Those focusing on social and ethical aspects, on the other hand, will have reservations. Technological advances are of limited use in policy issues involving competing beliefs and value judgements.

A fitting example would be euthanasia legislation, which is inherently bound up with religious beliefs and questions about self-determination. We may be inclined to dismiss the issue as exceptional, but this would overlook the fact that most policy issues involve competing beliefs and value judgements, and from that perspective robot politicians are of little use.

Moral codes

A supercomputer may be able to make accurate predictions of numbers of road users on a proposed ring road. But what would this supercomputer do when faced with a moral dilemma?

Most people will agree that it is our ability to make value judgements that sets us apart from machines and makes us superior. But what if we could program agreed ethical standards into computers and have them take decisions on the basis of predefined normative guidelines and the consequences arising from these choices?

If that were possible, and some believe it is, could we replace our fallible politicians with infallible artificially intelligent robots after all?

The idea may sound far-fetched, but is it?

Robots may well become part of everyday life sooner than we think. For example, robots may soon be used to perform routine tasks in aged-care facilities and to keep elderly or disabled people company, and some have suggested robots could be used in prostitution. Whatever opinion we may have about robot politicians, the groundwork for this is already being laid.

A recent paper showcased a system that automatically writes political speeches. Some of these speeches are believable and it would be hard for most of us to tell if a human or machine had written them.

Politicians already use human speech writers so it may only be a small step for them to start using a robot speech writer instead.
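To make the idea of automatic speech writing concrete, here is a toy sketch in Python of statistical text generation, the family of techniques such systems build on. The tiny corpus and the simple Markov-chain approach are invented purely for illustration; the system described in the paper is far more sophisticated.

```python
# A toy Markov-chain text generator: a much simpler cousin of the
# statistical language models used for automatic speech writing.
# The tiny "corpus" below is invented purely for illustration.
import random
from collections import defaultdict

corpus = (
    "my fellow citizens we will build a stronger economy . "
    "we will build better schools for our children . "
    "our children deserve a stronger future . "
    "together we will deliver a better future for all ."
)

# Build a table mapping each word to the words that have followed it.
transitions = defaultdict(list)
words = corpus.split()
for current_word, next_word in zip(words, words[1:]):
    transitions[current_word].append(next_word)

def generate(seed="we", length=20):
    """Generate text by repeatedly sampling a plausible next word."""
    output = [seed]
    for _ in range(length):
        followers = transitions.get(output[-1])
        if not followers:
            break
        output.append(random.choice(followers))
    return " ".join(output)

print(generate())
```

Even this crude approach produces vaguely speech-like strings; the real systems learn from thousands of actual speeches rather than a few invented sentences.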

The same applies to policy-makers responsible for, say, urban planning or flood mitigation, who make use of sophisticated modelling software. We may soon be able to take humans out of the loop altogether and replace them with robots that have the modelling software built in.

We could think up many more scenarios, but the underlying issue will remain the same: the robot would need to be programmed with an agreed set of ethical standards allowing it to make judgements on the basis of agreed morals.

The human input

So even if we had a parliament full of robots, we would still need an agency staffed by humans charged with defining the ethical standards to be programmed into the robots.

And who gets to decide on those ethical standards? Well, we’d probably have to put that to a vote among various interested and competing parties.

This brings us full circle, back to the problem of how to prevent undue influence.

Advocates of deliberative democracy, who believe democracy should be more than the occasional stroll to a polling booth, will shudder at the prospect of robot politicians.

But free market advocates, who are more interested in lean government, austerity measures and cutting red-tape, may be more inclined to give it a go.

The latter appear to have gained the upper hand, so the next time you hear a commentator refer to a politician as being robotic, remember that maybe one day some of them really will be robots!

Frank Mols, Lecturer in Political Science, The University of Queensland and Jonathan Roberts, Professor in Robotics, Queensland University of Technology

This article was originally published on The Conversation. Read the original article.

Digital diagnosis: intelligent machines do a better job than humans

It takes time for a human to become good at diagnosing ailments, but that learning is lost when they retire. Shutterstock/Poproskiy Alexey

Ross Crawford, Queensland University of Technology; Anjali Jaiprakash, Queensland University of Technology, and Jonathan Roberts, Queensland University of Technology

Until now, medicine has been a prestigious and often extremely lucrative career choice. But in the near future, will we need as many doctors as we have now? Are we going to see significant medical unemployment in the coming decade?

Dr Saxon Smith, president of the Australian Medical Association NSW branch, said in a report late last year that the most common concerns he hears from doctors-in-training and medical students are, “what is the future of medicine?” and “will I have a job?”. The answers, he said, continue to elude him.

As Australian, British and American universities continue to graduate increasing numbers of medical students, the obvious question is where will these new doctors work in the future?

Will there be an expanded role for medical professionals due to our ageing populations? Or is pressure to reduce costs while improving outcomes likely to force the adoption of new technology, which will then likely erode the number of roles currently performed by doctors?

Driving down the costs

All governments, patients and doctors around the world know that healthcare costs will need to come down if we are to treat more people. Some propose making patients pay more but, however we pay for it, it’s clear that driving the cost down is what needs to happen.

The use of medical robots to assist human surgeons is becoming more widespread but, so far, they are being used to try and improve patient outcomes and not to reduce the cost of surgery. Cost savings may come later when this robotic technology matures.

It is in the area of medical diagnostics where many people see possible significant cost reduction while improving accuracy by using technology instead of human doctors.

It is already common for blood tests and genetic testing (genomics) to be carried out automatically and very cost effectively by machines. They analyse the blood specimen and automatically produce a report.

The tests can be as simple as a haemoglobin level (blood count) through to tests for diabetes, such as insulin or glucose levels. They can also be used for far more complicated tests, such as looking at a person’s genetic makeup.

A good example is Thyrocare Technologies Ltd in Mumbai, India, where more than 100,000 diagnostic tests from around the country are done every evening, and the reports delivered within 24 hours of blood being taken from a patient.

Machines vs humans

If machines can read blood tests, what else can they do? Though many doctors will not like this thought, any test that requires pattern recognition will ultimately be done better by a machine than a human.

Many diseases need a pathological diagnosis, where a doctor looks at a sample of blood or tissue to establish the exact disease: a blood test to diagnose an infection, a skin biopsy to determine whether a lesion is cancerous, or a tissue sample taken by a surgeon looking to make a diagnosis.

All of these examples, and in fact all pathological diagnoses, are made by a doctor using pattern recognition to determine the diagnosis.

Artificial intelligence techniques using deep neural networks, which are a type of machine learning, can be used to train these diagnostic machines. Machines learn fast and we are not talking about a single machine, but a network of machines linked globally via the internet, using their pooled data to continue to improve.

It will not happen overnight – it will take some time to learn – but once trained the machine will only continue to get better. With time, an appropriately trained machine will be better at pattern recognition than any human could ever be.
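As a rough illustration of what “training” means here, the following Python sketch (using the scikit-learn library, one common choice) fits a small neural network to synthetic data standing in for diagnostic measurements. Real systems are trained on vast sets of labelled specimens; the features, labels and network size below are invented.

```python
# Minimal sketch of training a neural-network classifier on synthetic
# "diagnostic" data. Real systems train deep networks on millions of
# labelled specimens; the measurements and labels here are made up.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# 1,000 fake specimens, each described by 20 numeric measurements.
X = rng.normal(size=(1000, 20))
# Invent a hidden rule ("disease" if a weighted sum is high) to label them.
y = (X @ rng.normal(size=20) > 0.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
model.fit(X_train, y_train)                 # the "learning" step
print("held-out accuracy:", model.score(X_test, y_test))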

Pathology is now a matter of multi-million dollar laboratories relying on economies of scale. It takes around 15 years from leaving high school to train a pathologist to function independently. It probably takes another 15 years for the pathologist to be as good as they will ever be.

Some years after that, they will retire and all that knowledge and experience is lost. Surely, it would be better if that knowledge could be captured and used by future generations? A robotic pathologist would be able to do just that.

Radiology, X-rays and beyond

Radiological tests account for over AUS$2 billion of the annual Medicare spend. In a 2013 report, it was estimated that in the 2014-15 period, 33,600,000 radiological investigations would be performed in Australia. A radiologist would have to study every one of these and write a report.

Radiologists are already reading, on average, more than seven times as many studies per day as they were five years ago. These reports, like those written by pathologists, are based on pattern recognition.

Currently, many radiological tests performed in Australia are being read by radiologists in other countries, such as the UK. Rather than having an expert in Australia get out of bed at 3am to read a brain scan of an injured patient, the image can be digitally sent to a doctor in any appropriate time zone and be reported on almost instantly.

What if machines were taught to read X-rays, working at first with, and ultimately instead of, human radiologists? Would we still need human radiologists? Probably. Improved imaging, such as MRI and CT scans, will allow radiologists to perform some procedures that surgeons now undertake.

The field of interventional radiology is rapidly expanding. In this field, radiologists are able to diagnose and treat conditions such as bleeding blood vessels. This is done using minimally invasive techniques, passing wires through larger vessels to reach the point of bleeding.

So the radiologists may end up doing procedures that are currently done by vascular and cardiac surgeons. The increased use of robotic assisted surgery will mean this is more likely than not.

There is a lot more to diagnosing a skin lesion, rash or growth than simply looking at it. But much of the diagnosis is based on the dermatologist recognising the lesion (again, pattern recognition).

If the diagnosis remains unclear then some tissue (a biopsy) is sent to the laboratory for a pathological diagnosis. We have already established that a machine can read the latter. The same principle applies to the recognition of the skin lesion.

Once a lesion has been recognised and learnt, it can be recognised again. Mobile phones with high-quality cameras will be able to link to a global database that will, like any other database with learning capability, continue to improve.

It’s not if, but when

These changes will not happen overnight, but they are inevitable. Though many doctors will see these changes as a threat, the chance for global good is unprecedented.

An X-ray taken in equatorial Africa could be read with the same reliability as one taken in an Australian centre of excellence. A photo of an infectious rash could be uploaded from a phone and the diagnosis given instantly. Many lives will be saved, and health care for the world’s poor can be delivered at minimal cost and, in many cases, for free.

For this to become a reality, it will take experts to work with machines and help them learn. Initially, the machines may be asked to do more straightforward tests but gradually they will be taught, just as humans learn most things in life.

The medical profession should grasp these opportunities for change, and our future young doctors should think carefully where the medical jobs of the future will lie. It is almost certain that the medical employment landscape in 15 years will not look like the one we see today.


Ross Crawford, Professor of Orthopaedic Research, Queensland University of Technology; Anjali Jaiprakash, Post-Doctoral Research Fellow, Medical Robotics, Queensland University of Technology, and Jonathan Roberts, Professor in Robotics, Queensland University of Technology

This article was originally published on The Conversation. Read the original article.

How do robots ‘see’ the world?

Disney’s WALL.E needed to see all the rubbish on Earth so it could clean it up. AAP Image/Tracey Nearmy

Jonathan Roberts, Queensland University of Technology

The world has gone mad for robots, with articles appearing almost every day about the coming of the robot revolution. But is all the hype, excitement and sometimes fear justified? Is the robot revolution really coming?

The answer is probably that in some areas of our lives we will see more robots soon. But realistically, we are not going to see dozens of robots out and about in our streets or wandering around our offices in the very near future.

One of the main reasons is simply that robots do not yet have the ability to really see the world. But before talking about how robots of the future might see, first we should consider what we actually mean by seeing.

I see you

Most of us have two eyes and we use those eyes to collect light that reflects off the objects around us. Our eyes convert that light into electrical signals that are sent down our optic nerves and immediately processed by our brain.

Our brain somehow works out what is around us from all of those electrical impulses and from our experiences. It builds up a representation of the world and we use that to navigate, to help us pick things up, to enable us to see one another’s faces, and to do a million other things we take for granted.

That whole activity, from collecting the light in our eyes, to having an understanding of the world around us, is what is meant by seeing.

Researchers have estimated that up to 50% of our brain is involved in the process of seeing. Nearly all of the world’s animals have eyes and can see in some way. Most of these animals, insects in particular, have far simpler brains than humans and they function well.

This shows that some forms of seeing can be achieved without the massive computing power of our mammal brains. Evolution has clearly found seeing to be quite useful.

Robot vision

It is therefore unsurprising that many robotics researchers predict that if robots can really see, we are likely to see a boom in robotics, and robots may finally become the helpers of humans that so many people have desired.

Early days: A vacuum cleaner that can ‘see’ where it needs to clean.

How then do we get a robot to see? The first part is straightforward. We use a video camera, just like the one in your smartphone, to collect a constant stream of images. Camera technology for robots is a large research field in itself, but for now just think of a standard video camera. We pass those images to a computer and then we have options.

Since the 1970s, robot vision engineers have thought about features in images. These might be lines, or interesting points like corners or certain textures. The engineers write algorithms to find these features and track them from image frame to image frame in the video stream.

This step is essentially reducing the amount of data from the millions of pixels in an image to a few hundred or thousand features.
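For readers who want to see what this looks like in code, here is a minimal Python sketch using the OpenCV library (a common choice, though no particular library is named above): it detects a few hundred corner-like ORB features in each camera frame and matches them to the previous frame. The camera index and feature count are illustrative assumptions.

```python
# Sketch of classic feature-based robot vision with OpenCV: detect a few
# hundred ORB keypoints in each frame and match them to the previous frame.
# Assumes the opencv-python package and a camera at index 0.
import cv2

orb = cv2.ORB_create(nfeatures=500)                  # corner-like feature detector
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

capture = cv2.VideoCapture(0)
prev_descriptors = None

while True:
    ok, frame = capture.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    keypoints, descriptors = orb.detectAndCompute(gray, None)

    if prev_descriptors is not None and descriptors is not None:
        matches = matcher.match(prev_descriptors, descriptors)
        # Millions of pixels reduced to a few hundred tracked features.
        print(f"{len(keypoints)} features, {len(matches)} matched to last frame")

    vis = cv2.drawKeypoints(frame, keypoints, None)  # show detected features
    cv2.imshow("features", vis)
    prev_descriptors = descriptors
    if cv2.waitKey(1) & 0xFF == ord("q"):            # press 'q' to stop
        break

capture.release()
cv2.destroyAllWindows()
```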

In the recent past when computing power was limited, this was an essential step in the process. The engineers then think about what the robot is likely to see and what it will need to do. They write software that will recognise patterns in the world to help the robot understand what is around it.

The local environment

The software may create a very basic map of the environment as the robot operates or it may try to match the features that it finds with a library of features that the software is looking for.

In essence the robots are being programmed by a human to see things that a human thinks the robot is going to need to see. There have been many successful examples of this type of robot vision system, but practically no robot that you find today is capable of navigating in the world using vision alone.

Such systems are not yet reliable enough to keep a robot from bumping into things or falling over for long enough to be of practical use. The driverless cars talked about in the media use either lasers or radar to supplement their vision systems.

In the past five to ten years a new robot vision research community has started to take shape. These researchers have demonstrated systems that are not programmed as such but instead learn how to see.

They have developed robot vision systems whose structure is inspired by how scientists think animals see. That is, they use the concept of layers of neurons, just like in an animal brain. The engineers program the structure of the system but they do not develop the algorithm that runs on that system. That is left to the robot to work out for itself.

This technique is known as machine learning and because we now have easy access to significant computer power at a reasonable cost, these techniques are beginning to work! Investment in these technologies is accelerating fast.
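As a sketch of what “layers of neurons” means in practice, here is a tiny, untrained example network in Python using the PyTorch library (an assumption; any similar framework would do). The engineer fixes the structure shown below; learning is what fills in the weights.

```python
# Sketch of the "layers of neurons" idea using PyTorch: the engineer fixes
# the structure (the layers below); the weights are left for learning to find.
# The network is tiny and untrained, purely to show the shape of the idea.
import torch
from torch import nn

vision_net = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # layer 1: simple edges and blobs
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # layer 2: larger patterns
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 10),                  # layer 3: scores for 10 object classes
)

fake_image = torch.randn(1, 3, 64, 64)            # one random 64x64 colour "image"
scores = vision_net(fake_image)
print(scores.shape)                               # torch.Size([1, 10])
```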

The hive mind

The significance of having robots learn is that they can easily share their learning. One robot will not have to learn from scratch like a newborn animal. A new robot can be given the experiences of other robots and can build upon those.

One robot may learn what a cat looks like and transfer that knowledge to thousands of other robots. More significantly, one robot may solve a complex task such as navigating its way around a part of a city and instantly share that with all the other robots.

Equally important is that robots which share experiences may learn together. For example, one thousand robots may each observe a different cat, share that data with one another via the internet and together learn to classify all cats. This is an example of distributed learning.
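Here is a toy Python sketch of that distributed idea, with synthetic data standing in for cat photos: a thousand simulated “robots” each fit a simple classifier locally, then pool what they learned by averaging their weights. Real shared learning is far more elaborate; this only illustrates the principle.

```python
# Toy sketch of distributed learning: each "robot" trains a linear classifier
# on its own local observations, then the robots pool what they learned by
# averaging their weights. The data is a synthetic stand-in for cat photos.
import numpy as np

rng = np.random.default_rng(1)
true_weights = rng.normal(size=10)   # the underlying "what a cat looks like" rule

def local_training(n_samples=50, steps=50, lr=0.5):
    """One robot fits logistic-regression weights to its own observations."""
    X = rng.normal(size=(n_samples, 10))
    y = (X @ true_weights > 0).astype(float)
    w = np.zeros(10)
    for _ in range(steps):
        predictions = 1 / (1 + np.exp(-X @ w))
        w -= lr * X.T @ (predictions - y) / n_samples   # gradient descent step
    return w

# One thousand robots each learn locally, then share by averaging their weights.
robot_weights = [local_training() for _ in range(1000)]
shared_weights = np.mean(robot_weights, axis=0)

# The pooled model agrees closely with the underlying rule (up to scale).
correlation = np.corrcoef(shared_weights, true_weights)[0, 1]
print(f"agreement with the true rule: {correlation:.3f}")
```

Each robot’s local estimate is noisy, but the average over the whole fleet is much better than any individual’s, which is exactly the appeal of pooled learning.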

The fact that robots of the future will be capable of shared and distributed learning has profound implications and is scaring some, while exciting others.

It is quite possible that your credit card transactions are being checked for fraud right now by a data centre self-learning machine. These systems can spot possible fraud that no human could ever detect. A hive mind being used for good.
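As an illustration of that kind of machine-learned anomaly checking (not the actual systems banks use), this Python sketch flags unusual transactions in synthetic data with a standard outlier detector from scikit-learn.

```python
# Minimal sketch of machine-learned anomaly detection on transactions,
# in the spirit of automated fraud checking. The "transactions" are
# synthetic: two features (amount, hour of day), with a few odd ones mixed in.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(2)

normal = np.column_stack([
    rng.normal(50, 20, size=1000),   # typical purchase amounts
    rng.normal(14, 3, size=1000),    # typical purchase times (hour of day)
])
odd = np.array([[2500.0, 3.0], [1800.0, 4.0]])   # large purchases at 3-4am
transactions = np.vstack([normal, odd])

detector = IsolationForest(contamination=0.01, random_state=0)
labels = detector.fit_predict(transactions)      # -1 marks a suspected outlier

print("flagged transactions:")
print(transactions[labels == -1][:5])
```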

The real robot revolution

There are numerous applications for robots that can see. It’s hard to think of a part of our lives where such a robot could not help.

The first uses of robots that can see are likely to be in industries that either have labour shortage issues, such as agriculture, or are inherently unattractive to humans and may be hazardous.

Examples include searching through rubble after disasters, evacuating people from dangerous situations or working in confined and difficult to access spaces.

Applications that require very long periods of attention, something humans find hard, will also be ripe to be done by a robot that can see. Our future home-based robot companions will be far more useful if they can see us.

And in an operating theatre near you, it is soon likely that a seeing robot will be assisting surgeons. The robot’s superior vision and super precise and steady arms and hands will allow surgeons to focus on what they are best at – deciding what to do.

Even that decision-making ability may be superseded by a hive mind of robot doctors. The robots will have it all stitched up!


Jonathan Roberts, Professor in Robotics, Queensland University of Technology

This article was originally published on The Conversation. Read the original article.