Digital diagnosis: intelligent machines do a better job than humans

It takes time for a human to become good at diagnosing ailments, but that learning is lost when they retire. Shutterstock/Poproskiy Alexey

Ross Crawford, Queensland University of Technology; Anjali Jaiprakash, Queensland University of Technology, and Jonathan Roberts, Queensland University of Technology

Until now, medicine has been a prestigious and often extremely lucrative career choice. But in the near future, will we need as many doctors as we have now? Are we going to see significant medical unemployment in the coming decade?

Dr Saxon Smith, president of the Australian Medical Association NSW branch, said in a report late last year that the most common concerns he hears from doctors-in-training and medical students are, “what is the future of medicine?” and “will I have a job?”. The answers, he said, continue to elude him.

As Australian, British and American universities continue to graduate increasing numbers of medical students, the obvious question is where will these new doctors work in the future?

Will there be an expanded role for medical professionals because of our ageing populations? Or will pressure to reduce costs while improving outcomes force the adoption of new technology, which would then erode the number of roles currently performed by doctors?

Driving down the costs

All governments, patients and doctors around the world know that healthcare costs will need to fall if we are to treat more people. Some propose making patients pay more but, however we fund it, it is clear that the cost of care has to come down.

The use of medical robots to assist human surgeons is becoming more widespread but, so far, they are being used to try to improve patient outcomes, not to reduce the cost of surgery. Cost savings may come later, when this robotic technology matures.

It is in medical diagnostics that many people see the potential for significant cost reductions, while improving accuracy, by using technology instead of human doctors.

It is already common for blood tests and genetic testing (genomics) to be carried out automatically and very cost effectively by machines. They analyse the blood specimen and automatically produce a report.

The tests range from something as simple as a haemoglobin level (blood count) through to tests for diabetes, such as insulin or glucose levels. They can also be used for far more complicated tests, such as examining a person's genetic makeup.

A good example is Thyrocare Technologies Ltd in Mumbai, India, where more than 100,000 diagnostic tests from around the country are done every evening, and the reports delivered within 24 hours of blood being taken from a patient.

Machines vs humans

If machines can read blood tests, what else can they do? Though many doctors will not like this thought, any test that requires pattern recognition will ultimately be done better by a machine than a human.

Many diseases need a pathological diagnosis, where a doctor looks at a sample of blood or tissue to establish the exact disease: a blood test to diagnose an infection, a skin biopsy to determine whether a lesion is a cancer, or a tissue sample taken by a surgeon seeking a diagnosis.

All of these examples, and in fact all pathological diagnoses, rely on a doctor using pattern recognition to reach the diagnosis.

Artificial intelligence techniques using deep neural networks, which are a type of machine learning, can be used to train these diagnostic machines. Machines learn fast and we are not talking about a single machine, but a network of machines linked globally via the internet, using their pooled data to continue to improve.

It will not happen overnight – it will take some time to learn – but once trained, the machine will only continue to get better. With time, an appropriately trained machine will be better at pattern recognition than any human could ever be.
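
For readers who want to see what this looks like in practice, here is a minimal sketch of training such a pattern-recognition system on labelled tissue images. It assumes the Python library PyTorch with a recent torchvision; the folder layout, class names and training settings are illustrative only, not any particular diagnostic product.

```python
# Minimal sketch: train a deep neural network to classify tissue images.
# Folder layout, class names and settings are illustrative assumptions.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Expects one folder per diagnosis, e.g. slides/train/benign, slides/train/malignant
train_data = datasets.ImageFolder("slides/train", transform=transform)
loader = DataLoader(train_data, batch_size=32, shuffle=True)

# Start from a network pre-trained on everyday photos, then retrain its
# final layer to separate the diagnostic classes in our own data.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(train_data.classes))

optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimiser.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimiser.step()
```

The more labelled examples a system like this sees – and, crucially, the more that many machines can pool – the better its pattern recognition becomes.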

Pathology is now a matter of multi-million dollar laboratories relying on economies of scale. It takes around 15 years from leaving high school to train a pathologist to function independently. It probably takes another 15 years for the pathologist to be as good as they will ever be.

Some years after that, they will retire and all that knowledge and experience is lost. Surely, it would be better if that knowledge could be captured and used by future generations? A robotic pathologist would be able to do just that.

Radiology, X-rays and beyond

Radiological tests account for over A$2 billion of the annual Medicare spend. A 2013 report estimated that 33.6 million radiological investigations would be performed in Australia in the 2014-15 period. A radiologist has to study every one of these and write a report.

Radiologists are already reading, on average, more than seven times as many studies per day as they did five years ago. These reports, like those written by pathologists, are based on pattern recognition.

Currently, many radiological tests performed in Australia are being read by radiologists in other countries, such as the UK. Rather than having an expert in Australia get out of bed at 3am to read a brain scan of an injured patient, the image can be digitally sent to a doctor in any appropriate time zone and be reported on almost instantly.

What if machines were taught to read X-rays working at first with, and ultimately instead of, human radiologists? Would we still need human radiologists? Probably. Improved imaging, such as MRI and CT scans, will allow radiologists to perform some procedures that surgeons now undertake.

The field of interventional radiology is rapidly expanding. Here, radiologists diagnose and treat conditions such as bleeding blood vessels using minimally invasive techniques, passing wires through larger vessels to reach the point of bleeding.

So radiologists may end up doing procedures that are currently done by vascular and cardiac surgeons. The increased use of robot-assisted surgery makes this more likely than not.

There is a lot more to diagnosing a skin lesion, rash or growth than simply looking at it. But much of the diagnosis is based on the dermatologist recognising the lesion (again, pattern recognition).

If the diagnosis remains unclear then some tissue (a biopsy) is sent to the laboratory for a pathological diagnosis. We have already established that a machine can read the latter. The same principle applies to the recognition of the skin lesion.

Once a lesion has been learnt, it can be recognised again. Mobile phones with high-quality cameras will be able to link to a global database that, like any other learning system, will continue to improve.
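
As a rough illustration of that workflow, the sketch below sends a phone photo of a lesion to a hypothetical classification service and reads back its answer. The URL and the shape of the response are invented for illustration; no real service is implied.

```python
# Sketch only: upload a skin-lesion photo to a (hypothetical) classification
# service and print its prediction. The URL and response format are invented.
import requests

API_URL = "https://example.org/classify-lesion"  # placeholder, not a real service

with open("lesion_photo.jpg", "rb") as f:
    response = requests.post(API_URL, files={"image": f}, timeout=30)

response.raise_for_status()
result = response.json()  # assumed shape: {"diagnosis": "...", "confidence": 0.97}
print(result["diagnosis"], result["confidence"])
```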

It’s not if, but when

These changes will not happen overnight, but they are inevitable. Though many doctors will see these changes as a threat, the chance for global good is unprecedented.

An X-ray taken in equatorial Africa could be read with the same reliability as one taken in an Australian centre of excellence. An infectious rash could be photographed on a phone and the diagnosis given instantly. Many lives would be saved, and the cost of health care to the world's poor could be minimal and, in many cases, zero.

For this to become a reality, it will take experts to work with machines and help them learn. Initially, the machines may be asked to do more straightforward tests but gradually they will be taught, just as humans learn most things in life.

The medical profession should grasp these opportunities for change, and our future young doctors should think carefully where the medical jobs of the future will lie. It is almost certain that the medical employment landscape in 15 years will not look like the one we see today.

The Conversation

Ross Crawford, Professor of Orthopaedic Research, Queensland University of Technology; Anjali Jaiprakash, Post-Doctoral Research Fellow, Medical Robotics, Queensland University of Technology, and Jonathan Roberts, Professor in Robotics, Queensland University of Technology

This article was originally published on The Conversation. Read the original article.

A fine balance: saving Australia’s unique wildlife in a contested land

A golden-tailed gecko – one of the inhabitants of the Brigalow Belt. Eric Vanderduys, Author provided.

Rocio Ponce-Reyes, CSIRO; Danial Stratford, CSIRO; Iadine Chadès, CSIRO; Jennifer Firn, Queensland University of Technology; Josie Carwardine, CSIRO; Sam Nicol, CSIRO; Stuart Whitten, CSIRO, and Tara Martin, CSIRO

The Brigalow Belt in Queensland is a national hotspot for wildlife, especially for birds and reptiles. Many of these, such as the black-throated finch, golden-tailed gecko and brigalow scaly-foot are found nowhere else in the world.

But the region is also one of the most transformed and contested areas in Australia. People want to use the Brigalow for many different things: conservation, grazing, agricultural production, mineral and gas extraction. This region also overlaps with the country’s largest reserves of coal and coal seam gas.

Together, the economic activities in the region bring land clearing, changes to water sources, invasion of exotic species and changed fire patterns, which threaten the region’s unique biodiversity.

Currently, at least 179 species of plants and animals are known to be threatened in the region. In research published today we look at the best way to conserve these species, attempting to balance the competing uses of this region.

The Brigalow Belt in Queensland.

Meet the locals

The Brigalow Belt bioregion takes its name from the Aboriginal word “brigalow” that describes the region’s dominant tree species (Acacia harpophylla). Brigalow trees can grow up to 25 metres in height and are characterised by their silver foliage.

Brigalow trees – a relative of the golden wattle, Australia’s national floral emblem.
Rocio Ponce-Reyes, Author provided

Brigalow ecosystems once formed extensive open-forest woodlands that covered 30% of the region, but since the mid-19th century about 95% of their original extent has been cleared, mostly for farming. The remaining 600,000 hectares of relatively small, isolated and fragmented remnants of brigalow forest are now protected as an endangered ecological community. The region's semi-evergreen vine thickets, or bottle tree scrub, are also listed as endangered.

Mammals are the most threatened group of the region. Eight species are already extinct, some of them locally (such as the eastern quoll and northern bettong) and others globally (such as the Darling Downs hopping mouse).

Other iconic mammals in the region include the bridled nail-tail wallaby and the northern hairy-nosed wombat. Both are listed as endangered at federal and state levels.

Long history of transformation

Traditional owners managed the region, including through burning practices, until the arrival of the first European settlers in the 1840s. Since then, management practices have changed markedly, especially with the establishment of the Brigalow and Other Lands Development Scheme in the 1950s.

This scheme provided new settlers, including many soldiers returning from the second world war, with infrastructure, financial assistance and a block of bushland. In return, they were expected to clear their land and establish a farm within 15 years to support the growing Queensland population.

Since then, the rate of clearing of Brigalow has varied in response to changes in legislation through time.

The black-throated finches of the Brigalow are regarded as endangered.
Eric Vanderduys, Author provided

The Brigalow’s silver lining

There are many ways of dealing with the threats facing the Brigalow’s biodiversity. But which gives us the most bang for our buck?

We worked with 40 key stakeholders from the region to answer this question.

You might think there’s a simple answer: stop development. However, native plants and animals in the Brigalow region are threatened by an accumulation of past, current and future land uses, and all need to be addressed to save these species.

Stakeholders focused on the strategies they believed to be the most feasible and achievable for minimising negative impacts and managing threats arising from all land uses in the region. The strategies, listed below, target several threats posed by industries in the region, such as agriculture, grazing, coal mining and coal seam gas.

  1. Protect remnant vegetation
  2. Protect important regrowth vegetation
  3. Establish key biodiversity areas, for example by identifying and managing areas of critical habitat
  4. Restore key habitats
  5. Manage pest animals such as feral cats, pigs and noisy miners
  6. Manage invasive plants
  7. Manage fire
  8. Manage grazing
  9. Manage water
  10. Manage pollution
  11. Build a common vision

The stakeholders included a strategy to “build a common vision” because they saw this as vital to achieving the other strategies. This common vision would be built by stakeholders to identify shared goals that balance environmental, social and economic considerations, such as the extent and nature of future developments.

Not a snake, but a legless lizard: the brigalow scaly-foot.
Eric Vanderduys, Author provided

We discovered that managing fire and invasive plant species would provide the best bang for our buck in terms of protecting the Brigalow Belt’s threatened plants and animals. Protecting remaining stands of vegetation offered high benefits to native wildlife, but came at high economic costs. We also discovered that building a common vision will improve the effectiveness of the other management strategies.

Experts estimated that it would cost about A$57.5 million each year to implement all 11 proposed management strategies in the Brigalow Belt. This is around A$1.60 per hectare each year.

If we don’t make this investment, it’s likely 21 species will disappear from the region over the next 50 years. But if we implement the 11 strategies, 12 of these species will likely survive (including the regent honeyeater, northern quoll and bridled nail-tail wallaby) and the outlook for many other species will improve. Species-specific recovery plans may help stop the other nine species (such as the northern hairy-nosed wombat and the swift parrot) from being lost from the region.

When it comes to saving species, working together with a common vision to balance the needs of wildlife and people will deliver the best outcomes in this contested region.

The Conversation

Rocio Ponce-Reyes, Postdoctoral Research Fellow, CSIRO; Danial Stratford, Senior experimental scientist, CSIRO; Iadine Chadès, Senior research scientist, CSIRO; Jennifer Firn, Associate professor, Queensland University of Technology; Josie Carwardine, Research Scientist, Ecosystem Sciences, CSIRO; Sam Nicol, Postdoctoral Researcher, Ecosystem Sciences, CSIRO; Stuart Whitten, Group Leader, Economics and Future Pathways, CSIRO, and Tara Martin, Principal Research Scientist, CSIRO

This article was originally published on The Conversation. Read the original article.

How do robots ‘see’ the world?

Disney’s WALL-E needed to see all the rubbish on Earth so it could clean it up. AAP Image/Tracey Nearmy

Jonathan Roberts, Queensland University of Technology

The world has gone mad for robots, with articles appearing almost every day about the coming robot revolution. But is all the hype, excitement and sometimes fear justified? Is the robot revolution really coming?

The answer is probably that in some areas of our lives we will see more robots soon. But realistically, we are not going to see dozens of robots out and about in our streets or wandering around our offices in the very near future.

One of the main reasons is simply that robots do not yet have the ability to really see the world. But before talking about how robots of the future might see, first we should consider what we actually mean by seeing.

I see you

Most of us have two eyes, and we use those eyes to collect light that reflects off the objects around us. Our eyes convert that light into electrical signals that are sent down our optic nerves and immediately processed by our brain.

Our brain somehow works out what is around us from all of those electrical impulses and from our experiences. It builds up a representation of the world and we use that to navigate, to help us pick things up, to enable us to see one another’s faces, and to do a million other things we take for granted.

That whole activity, from collecting the light in our eyes, to having an understanding of the world around us, is what is meant by seeing.

Researchers have estimated that up to 50% of our brain is involved in the process of seeing. Nearly all of the world’s animals have eyes and can see in some way. Most of these animals, insects in particular, have far simpler brains than humans and they function well.

This shows that some forms of seeing can be achieved without the massive computing power of our mammalian brains. Evolution has clearly found seeing to be extremely useful.

Robot vision

It is therefore unsurprising that many robotics researchers predict that once robots can see, we will witness a genuine boom in robotics, and robots may finally become the helpers of humans that so many people have long desired.

Early days: A vacuum cleaner that can ‘see’ where it needs to clean.

How then do we get a robot to see? The first part is straightforward. We use a video camera, just like the one in your smartphone, to collect a constant stream of images. Camera technology for robots is a large research field in itself, but for now just think of a standard video camera. We pass those images to a computer and then we have options.

Since the 1970s, robot vision engineers have thought about features in images. These might be lines, or interesting points like corners or certain textures. The engineers write algorithms to find these features and track them from image frame to image frame in the video stream.

This step is essentially reducing the amount of data from the millions of pixels in an image to a few hundred or thousand features.
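
A minimal sketch of that classical approach, using the OpenCV library, is shown below: detect corner features in one video frame, then track them into the next. The camera index and parameter values are just illustrative defaults, not settings from any particular robot.

```python
# Sketch: classical robot vision - find corner features in a frame, then
# track them into the next frame, reducing millions of pixels to a few
# hundred feature points. Parameter values are illustrative only.
import cv2

cap = cv2.VideoCapture(0)          # assumes a camera at index 0
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

# Find up to 500 strong corners in the first frame
corners = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                  qualityLevel=0.01, minDistance=7)

while True:
    ok, frame = cap.read()
    if not ok or corners is None:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Track each corner into the new frame using optical flow
    corners, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, corners, None)
    corners = corners[status.flatten() == 1].reshape(-1, 1, 2)
    prev_gray = gray
```

Everything the robot later "understands" about its surroundings is built on top of a stream of features like these.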

In the recent past when computing power was limited, this was an essential step in the process. The engineers then think about what the robot is likely to see and what it will need to do. They write software that will recognise patterns in the world to help the robot understand what is around it.

The local environment

The software may create a very basic map of the environment as the robot operates or it may try to match the features that it finds with a library of features that the software is looking for.

In essence the robots are being programmed by a human to see things that a human thinks the robot is going to need to see. There have been many successful examples of this type of robot vision system, but practically no robot that you find today is capable of navigating in the world using vision alone.

Such systems are not yet reliable enough to keep a robot from bumping into things or falling over for long enough to be practically useful. The driverless cars talked about in the media use either lasers or radar to supplement their vision systems.

In the past five to ten years a new robot vision research community has started to take shape. These researchers have demonstrated systems that are not programmed as such but instead learn how to see.

They have developed robot vision systems whose structure is inspired by how scientists think animals see. That is, they use layers of neurons, just like an animal brain. The engineers program the structure of the system, but they do not write the algorithm that runs on that system. That is left to the robot to work out for itself.

This technique is known as machine learning and because we now have easy access to significant computer power at a reasonable cost, these techniques are beginning to work! Investment in these technologies is accelerating fast.
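
The sketch below shows what "programming the structure but not the algorithm" means in practice: an engineer writes down a small stack of neuron layers, and the numbers inside those layers are then set by training on example images rather than by hand. It assumes PyTorch, and the layer sizes and the ten output classes are arbitrary choices for illustration.

```python
# Sketch: the engineer defines the layered structure; training data, not
# hand-written rules, determines the weights inside it. Sizes are arbitrary
# and assume 224 x 224 colour images as input.
import torch.nn as nn

vision_net = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # first layer of "neurons"
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # second layer
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 56 * 56, 10),                  # final layer: 10 object classes
)
# Nothing above says how to recognise a cat or a corridor; that behaviour
# emerges only once the weights are learned from example images.
```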

The hive mind

The significance of having robots learn is that they can easily share their learning. One robot will not have to learn from scratch like a newborn animal. A new robot can be given the experiences of other robots and can build upon those.

One robot may learn what a cat looks like and transfer that knowledge to thousands of other robots. More significantly, one robot may solve a complex task such as navigating its way around a part of a city and instantly share that with all the other robots.

Equally important is that robots which share experiences may learn together. For example, one thousand robots may each observe a different cat, share that data with one another via the internet and together learn to classify all cats. This is an example of distributed learning.
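
One simple way robots could pool what they each learn – broadly the idea behind what is now called federated learning – is sketched below: each robot trains on the cats it has seen, then the fleet averages the resulting model weights so every robot benefits from everyone's examples. This is a toy illustration in PyTorch, not a description of any deployed robot fleet.

```python
# Toy sketch of distributed learning: average the model weights learned
# separately by many robots into one shared model. Purely illustrative.
import copy
import torch

def average_models(robot_models):
    """Return a model whose weights are the mean of all the robots' weights."""
    merged = copy.deepcopy(robot_models[0])
    merged_state = merged.state_dict()
    for name in merged_state:
        stacked = torch.stack([m.state_dict()[name].float() for m in robot_models])
        merged_state[name] = stacked.mean(dim=0)
    merged.load_state_dict(merged_state)
    return merged

# shared = average_models([robot_1.model, robot_2.model, robot_3.model])
# Each robot then downloads `shared` and keeps learning from there.
```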

The fact that robots of the future will be capable of shared and distributed learning has profound implications and is scaring some, while exciting others.

It is quite possible that your credit card transactions are being checked for fraud right now by a data centre self-learning machine. These systems can spot possible fraud that no human could ever detect. A hive mind being used for good.

The real robot revolution

There are numerous applications for robots that can see. It is hard to think of a part of our lives where such a robot could not help.

The first uses of robots that can see are likely to be in industries that either have labour shortages, such as agriculture, or are inherently unattractive to humans and possibly hazardous.

Examples include searching through rubble after disasters, evacuating people from dangerous situations or working in confined and difficult to access spaces.

Applications that require very long periods of attention, something humans find hard, will also be ripe for robots that can see. Our future home-based robot companions will be far more useful if they can see us.

And in an operating theatre near you, it is soon likely that a seeing robot will be assisting surgeons. The robot’s superior vision and super precise and steady arms and hands will allow surgeons to focus on what they are best at – deciding what to do.

Even that decision-making ability may be superseded by a hive mind of robot doctors. The robots will have it all stitched up!

The Conversation

Jonathan Roberts, Professor in Robotics, Queensland University of Technology

This article was originally published on The Conversation. Read the original article.

Star Wars: these could be the droids we’re looking for in real life

BB-8 (left) is a new droid addition to the Star Wars universe. Disney

Jonathan Roberts, Queensland University of Technology

The latest episode of Star Wars is now upon us and has unleashed a new era of science fantasy robots, or “droids” as they are known.

One of the heroes of the new movie The Force Awakens is BB-8, a cute but capable spherical droid that is at the centre of the story (sorry, no spoilers).

But droids have been at the heart of the epic science fantasy saga since the original Star Wars movie back in 1977, when C-3PO uttered the immortal words:

I am C-3PO, human-cyborg relations. And this is my counterpart R2-D2.

Star Wars has always been a droid story, just as much as a story about the Skywalker family.

The old and the new: C-3PO (left), BB-8 (centre) and R2-D2 (right) from the Star Wars universe.
Reuters/Carlo Allegri

Even though we all know that Star Wars happened a long time ago, in a galaxy far, far away, just how good has it been at predicting the usefulness and development of robots on our own planet today?

Is that you R2?

For those non-Star Wars experts reading this, R2-D2 is an R-series astromech droid. Such droids work on spaceships and are particularly good at working outside in the vacuum of space. They are the mechanics of space travel and are packed with tools and know-how to fix things. They also seem to be fully waterproof, can fly short distances using deployable thrusters and somehow possess a cheeky character.

But did you know that working in orbit around Earth right now is NASA’s Robonaut 2, also known as R2? It is one of the International Space Station’s test-bed droids, with a humanoid shape and proportions so that it can undertake maintenance tasks originally designed for human astronauts.

Robonaut2 – or R2 for short – from NASA and General Motors, is a robot designed to work side-by-side with people in difficult or dangerous places on Earth and in space.
NASA

Perhaps in the future, when all spaceship maintenance is performed by droids, real R2 units will no longer need the humanoid form.

The diplomatic droid

The golden humanoid C-3PO is a protocol droid fluent in more than six million forms of communication. A protocol droid’s primary purpose in Star Wars is to help non-droids, creatures of all kinds, communicate with one another and generally avoid potentially dangerous misunderstandings.

If there were protocol droids in Mos Eisley’s Cantina then maybe no-one would have shot first! But as the bartender said of R2-D2 and C-3PO: “We don’t serve their kind here.”

We have human diplomats in our world to negotiate and attempt to head off conflict, and there seems no need for a mechanical interface such as a protocol droid.

But we are seeing translation apps on our phones, and their accuracy is improving to the point where live language translation between two people speaking to one another may not be far away. Until we find non-human sentient equals, there will be few diplomatic jobs for C-3PO-like droids here on Earth.

One place we are likely to see humanoid robots like C-3PO is as artificial companions and carers. The advantage of a humanoid robot is that it should be able to cope in our homes and care facilities, because these have all been designed for humans.

This is one of the great advantages of the humanoid robot form, although there is the so-called “uncanny valley” to deal with and the feeling by some that we should always ensure people have a human touch.

Best Star Wars Droids

A way of thinking about the dozens of droids of Star Wars is to classify them by how they are used. We have seen them being used in applications as diverse as farming, medicine, war, torture and space exploration.

Farming robots

When R2-D2 and C-3PO escape Darth Vader and land in their escape pod on the sand planet of Tatooine, they are picked up by the Jawas, who scavenge droids to sell to local moisture farmers. The lack of labour on Tatooine means droids are critical to the functioning of the farms.

Note to non-Star Wars experts: Darth Vader himself, or at least a young Anakin Skywalker, built C-3PO on Tatooine from spare parts.

In the past year alone, very capable agricultural robots have been demonstrated by Queensland University of Technology, The University of Sydney and by Swarm Farm Robotics.

Robots down on the farm.

Many other research organisations and companies are developing agricultural robotics as a way of overcoming labour availability issues, reducing the cost of inputs such as diesel and herbicide, and enabling the use of smaller machines that compact the soil less than the large tractors we see commonly used today.

Medical robots

In the Star Wars movies, medical droids appear at critical moments. The medical droids 2-1B and FX-7 twice patched up Luke in The Empire Strikes Back: once when he survived the Wampa attack on Hoth, and again at the end, when they grafted a robotic hand onto him after his father sliced his off.

Similar Imperial DD-13 medical droids created the droid-like Darth Vader from his battered body following his light sabre duel with Obi-Wan on the volcanic planet Mustafar in Revenge of the Sith.

An EW-3 midwife droid even helped Padmé give birth to the twins Luke and Leia just prior to her tragic death.

Here on Earth, Google has been talking about its plans for new medical robots. It’s teaming up with medical device companies to develop new robotic assistants for minimally invasive surgery.

Medical robotic assistants have already become a common sight in well-equipped modern hospitals and are being used to help surgeons during urology procedures and more recently for knee replacements. New research is also showing how novel tentacle-like robot arms may be used to get to difficult to reach places.

The hope is that medical robotics will enable shorter training times for surgeons, lengthen a surgeon’s career and improve outcomes for patients. All these benefits could drive the cost of these procedures down, giving access to more people around the world.

Killer robots

Unsurprisingly, there are many droids in the Star Wars universe dedicated to killing. In Episodes I-III, the Trade Federation used droid starfighters. These were spaceships that were droids themselves and the droid command ships housed thousands of them.

The Trade Federation were also fans of deploying thousands of humanoid shaped B1 Battle Droids. Although they were relatively well equipped, they seemed stupid and were even worse shots than Stormtroopers. The far more capable Destroyer Droids had deflector shields and rapid fire laser cannons.

Killer robots and their development is a hot topic right now on Earth. A campaign has been started with the aim of developing arms controls and some killer robots have already been deployed.

In the Middle East, drones are routinely used to deliver missiles. These are human controlled and are not autonomous but they are changing the face of conflict.

In the DMZ between the Koreas you will find fully autonomous robots equipped with heavy-duty, long-range machine guns. If they spot movement in the DMZ, they are capable of firing. There is no need for a human in the command chain. They are real Destroyer Droids.

Are these the utility droids you’re looking for?
Flickr/donsolo, CC BY-NC-SA

What is missing?

Even though we can see many examples of how the droids of Star Wars may have inspired the design of the robots of today, there is one major missing piece of technology that means our robots are nothing like a Star Wars droid. And that is the almost complete lack of reliable and capable artificial intelligence in our robots.

Nearly all of the human-created robots I have mentioned rely entirely on a human expert either to control them remotely or to program them to do a small range of very specific tasks. The robots we have today are not very autonomous.

Most of them cannot see, and even if they could, engineers have yet to develop artificial intelligence to the point where a robot by itself could solve a meaningful problem it may encounter in the world.

Really smart robots are coming, and many people are working hard to tackle the challenges, but we are not likely to see general-purpose droids in the near future. We have a long way to go, and are far, far away from welcoming cute robot companions such as R2-D2 and BB-8 into our homes and workplaces. Until then, let’s just all enjoy Star Wars.

The Conversation

Jonathan Roberts, Professor in Robotics, Queensland University of Technology

This article was originally published on The Conversation. Read the original article.

Sea level rise is real – which is why we need to retreat from unrealistic advice

In the aftermath of 2012’s deadly Hurricane Sandy, New York launched a US$20 billion plan to defend the city against future storms as well as rising sea levels. David Shankbone/Flickr, CC BY

Mark Gibbs, Queensland University of Technology

Coastal communities around the world are being increasingly exposed to the hazards of rising sea levels, with global sea levels found to be rising faster over the past two decades than for the bulk of the 20th century.

But managing the impacts of rising seas for some communities is being made more difficult by the actions of governments, homeowners – and even some well-intentioned climate adaptation practitioners.

Coastal adaptation policies usually carry political risk. One of the main risks is when communities end up divided between those wanting a response to the growing risks of coastal flooding, and those more concerned about how their own property values or insurance premiums might be hit in the short-term by such action. For some, the biggest threat is seen to be from sea level rise adaptation policies rather than sea level rise itself.

Some organisations and governments have side-stepped the political risk by commissioning or preparing adaptation plans – but then not implementing them.

A colleague of mine describes this as the “plan and forget” approach to coastal adaptation. It’s all too common, not only here in Australia but internationally. And it can be worse than completely ignoring the risk, because local communities are given the impression that the risk is being managed, when in fact it is not.


The Australian Broadcasting Corporation’s Catalyst program examines past and future sea level rise.

‘The road to hell is paved with good intentions’

Coastal adaptation researchers and practitioners (and I’m one of them) must reconsider some of the common recommendations typically contained in coastal adaptation studies.

In my experience, well-intentioned but poorly considered recommendations – such as advocating for highly urbanised city centres to be relocated inland – prevent many adaptation studies being implemented.

Relocating buildings and other built infrastructure further away from the coast to reduce or eliminate the risk of flooding might sound like a sensible, long-term option, and indeed it is in some cases.

But too often, the advice given to “retreat” or relocate established, highly built-up city blocks makes little economic or practical sense. Such advice can be inconsistent with well-established engineering disaster risk reduction frameworks such as Engineers Australia’s Climate Change Adaptation Guidelines in Coastal Management and Planning.

Much to the chagrin of many in the coastal adaptation science community, cities and owners of major coastal facilities around the world are voting with their feet – largely rejecting coastal retreat recommendations in favour of coastal protection.

Major cities choosing defence, not retreat

New York is perhaps the best example of governments and individuals alike choosing protection rather than retreat.

In October 2012, Hurricane Sandy left behind a trail of destruction of more than US$71 billion in the United States. In New York alone, 43 people were killed.

In June 2013, then Mayor Mike Bloomberg said rising temperatures and sea levels were only making it harder to defend New York, warning:

We expect that by mid-century up to one-quarter of all of New York City’s land area, where 800,000 residents live today, will be in the floodplain. If we do nothing, more than 40 miles of our waterfront could see flooding on a regular basis, just during normal high tides.

Yet even after acknowledging that threat, New York’s response wasn’t to retreat. Instead, the mayor launched a US$20 billion plan to protect the city with more flood walls, stronger infrastructure and renovated buildings. As that “Stronger, More Resilient New York” plan declared:

We can fight for and rebuild what was lost, fortify the shoreline, and develop waterfront areas for the benefit of all New Yorkers. The city cannot, and will not, retreat.

Similarly, none of the winners of Rebuild By Design – an international competition to make New York and surrounding regions more resilient to coastal inundation – focused on retreat strategies. In fact, some involve intensifying urban areas that were under water during Hurricane Sandy.

In the worst hit areas, even when given the choice of a state buy-out scheme relatively few New Yorkers chose to leave.


PBS Newshour looks at how New York and other world cities can better protect against rising seas and storm surges.

Although not directly related to climate change, the Japanese response to the devastating 2011 tsunami is another telling example.

There, some residents did choose to relocate to higher ground. However, the government did not relocate major facilities inland, including the Fukushima nuclear facility. Instead, Japan will spend US$6.8 billion to form a 400-kilometre-long chain of sea walls, towering up to four storeys high in some places.


In Melbourne, Australia, four local councils from the Association of Bayside Municipalities worked on the science-based Port Phillip Bay Coastal Adaptation Pathways Project to systematically identify the most effective adaptation responses. That project highlighted the effectiveness of accommodating and reducing flooding through established engineering approaches.

For example, the project concluded that while the popular Southbank waterfront in the City of Melbourne is likely to see even more common and extreme flooding in the coming decades, “retreat is not necessary”.

The Yarra River flows through the heart of Melbourne, in Australia, with Southbank on the left.
R Reeve/Flickr, CC BY-ND

More practical advice is crucial for greater action

Coastal adaptation studies and plans need to be based on practical, defensible and implementable recommendations.

That means climate adaptation practitioners need to refrain from recommending that major urbanised coastal centres be relocated further inland in coming decades, unless that really is the only viable option.

Instead, I think we can achieve more by concentrating on how lower- and medium-density coastal communities can adapt to higher sea levels. This is a more challenging problem, as economic analyses can produce very different recommendations depending on which so-called “externalities” are included in or left out of the analysis.

On the same note, adaptation studies that make recommendations without considering the impacts on present-day homeowners, or how adaptation plans will be financed, can also be unhelpful.

Florida, USA, photographed from space – one of many highly urbanised coastal areas around the world needing to adapt to rising seas.
NASA

Good adaptation strategies need to acknowledge the real political risks involved with any change involving people and property. Along with making recommendations, they also need to lay out an implementation plan showing how individual and community concerns will be taken into account.

So far the climate models have done a good job in estimating the likely future sea levels. The same cannot be said for our adaptation responses.

But if you’re looking for examples of how we can be better prepared for growing sea level risks, initiatives such as the Port Phillip Bay Coastal Adaptation Pathways Project and the Queensland Climate Adaptation Strategy (currently under development) seem to be heading in the right direction.

The Conversation

Mark Gibbs, Director: Knowledge to innovation, Queensland University of Technology

This article was originally published on The Conversation. Read the original article.

Your questions answered on artificial intelligence

Have questions about robots and artificial intelligence? Shutterstock

Toby Walsh, Data61; David Dowe, Monash University; Gary Lea, Australian National University; Jai Galliott, UNSW Australia; Jonathan Roberts, Queensland University of Technology; Katina Michael, University of Wollongong; Kevin Korb, Monash University; Robert Sparrow, Monash University, and Sean Welsh, University of Canterbury

Artificial intelligence and robotics have enjoyed a resurgence of interest, and there is renewed optimism about their place in our future. But what do they mean for us?

You submitted your questions about artificial intelligence and robotics, and we put them – and some of our own – to The Conversation’s experts.

Here are your questions answered:

  1. How plausible is human-like artificial intelligence, such as the kind often seen in films and TV?
  2. Automation is already replacing many jobs, from bank tellers today to taxi drivers in the near future. Is it time to think about making laws to protect some of these industries?
  3. Where will AI be in five-to-ten years?
  4. Should we be concerned about military and other armed robots?
  5. How plausible is super-intelligent artificial intelligence?
  6. Given what little we know about our own minds, can we expect to intentionally create artificial consciousness?
  7. How do cyborgs differ (technically or conceptually) from A.I.?
  8. Are you generally optimistic or pessimistic about the long term future of artificial intelligence and its benefits for humanity?

 

Q1. How plausible is human-like artificial intelligence?

A. Toby Walsh, Professor of AI:

It is 100% plausible that we’ll have human-like artificial intelligence.

I say this even though the human brain is the most complex system in the universe that we know of. There’s nothing approaching the complexity of the brain’s billions of neurons and trillions of connections. But there are also no physical laws we know of that would prevent us reproducing or exceeding its capabilities.

A. Kevin Korb, Reader in Computer Science

Popular AI, from Isaac Asimov to Steven Spielberg, is plausible. What the question doesn’t address is: when will it be plausible?

Most AI researchers (including me) see little or no evidence of it coming anytime soon. Progress on the major AI challenges is slow, if real.

What I find less plausible than the AI in fiction is the emotional and moral lives of robots. They seem to be either unrealistically empty, such as the emotion-less Data in Star Trek, or unrealistically human-identical or superior, such as the AI in Spike Jonze’s Her.

All three – emotion, ethics and intelligence – travel together; none is genuinely possible in any meaningful form without the others. But fiction writers tend to treat them as separate. Plato’s Socrates made a similar mistake.

A. Gary Lea, Researcher in Artificial Intelligence Regulation

AI is not impossible, but the real issue is: “how like is like?” The answer probably lies in applied tests: the Turing test was already (arguably) passed in 2014 but there is also the coffee test (can an embodied AI walk into an unfamiliar house and make a cup of coffee?), the college degree test and the job test.

If AI systems could progressively pass all of those tests (plus whatever else the psychologists might think of), then we would be getting very close. Perhaps the ultimate challenge would be whether a suitably embodied AI could live among us as J. Average and go undetected for five years or so before declaring itself.



 

Q2. Automation is already replacing many jobs. Is it time to make laws to protect some of these industries?

A. Jonathan Roberts, Professor of Robotics

Researchers at the University of Oxford published a now well-cited paper in 2013 that ranked jobs in order of how feasible it is to computerise or automate them. They found that nearly half of jobs in the USA could be at risk from computerisation within 20 years.

This research was followed in 2014 by the viral video hit, Humans Need Not Apply, which argued that many jobs will be replaced by robots or automated systems and that employment would be a major issue for humans in the future.

Of course, it is difficult to predict what will happen, as the reasons for replacing people with machines are not simply based around available technology. The major factor is actually the business case and the social attitudes and behaviour of people in particular markets.

A. Rob Sparrow, Professor of Philosophy

Advances in computing and robotic technologies are undoubtedly going to lead to the replacement of many jobs currently done by humans. I’m not convinced that we should be making laws to protect particular industries though. Rather, I think we should be doing two things.

First, we should be making sure that people are assured of a good standard of living and an opportunity to pursue meaningful projects even in a world in which many more jobs are being done by machines. After all, the idea that, in the future, machines would work so that human beings didn’t have to toil used to be a common theme in utopian thought.

When we accept that machines putting people out of work is bad, what we are really accepting is the idea that whether ordinary people have an income and access to activities that can give their lives meaning should be up to the wealthy, who may choose to employ them or not. Instead, we should be looking to redistribute the wealth generated by machines in order to reduce the need for people to work without thereby reducing the opportunities available to them to be doing things that they care about and gain value from.

Second, we should be protecting vulnerable people in our society from being treated worse by machines than they would be treated by human beings. With my mother, Linda Sparrow, I have argued that introducing robots into the aged care setting will most likely result in older people receiving a worse standard of treatment than they already do in the aged care sector. Prisoners and children are also groups who are vulnerable to suffering at the hands of robots introduced without their consent.

A. Toby Walsh, Professor of AI:

There are some big changes about to happen. The #1 job in the US today is truck driver. In 30 years’ time, most trucks will be autonomous.

How we cope with this change is a question not for technologists like myself but for society as a whole. History would suggest that protectionism is unlikely to work. We would, for instance, need every country in the world to sign up.

But there are other ways we can adjust to this brave new world. My vote would be to ensure we have an educated workforce that can adapt to the new jobs that technology creates.

We need people to enter the workforce with skills for jobs that will exist in a couple of decades’ time, when the technologies for those jobs have been invented.

We need to ensure that everyone benefits from the rising tide of technology, not just the owners of the robots. Perhaps we can all work less and share the economic benefits of automation? This is likely to require fundamental changes to our taxation and welfare system informed by the ideas of people like the economist Thomas Piketty.

A. Kevin Korb, Reader in Computer Science

Industrial protection and restriction are the wrong way to go. I’d rather we develop our technology so as to help solve some of our very real problems. That’s bound to bring with it economic dislocation, so a caring society will accommodate those who lose out because of it.

But there’s no reason we can’t address that with improving technology as long as we keep the oligarchs under control. And if we educate people for flexibility rather than to fit into a particular job, intelligent people will be able to cope with the dislocation.

A. Jai Galliot, Defence Analyst

The standard argument is that workers displaced by automation go on to find more meaningful work. However, this does not hold in all cases.

Think about someone who signed up with the Air Force to fly jets. These pilots may have spent their whole social, physical and psychological lives preparing or maintaining readiness to defend their nation and its people.

For service personnel, there are few higher-value jobs than serving one’s nation through rendering active military service on the battlefield, so this assurance of finding alternative and meaningful work in a more passive role is likely to be of little consolation to a displaced soldier.

Thinking beyond the military, we need to be concerned that the Foundation for Young Australians indicates that as many as 60% of today’s young people are being trained for jobs that will soon be transformed due to automation.

The sad fact of the matter is that one robot can replace many workers. The future of developed economies therefore depends on youth adapting to globalised and/or shared jobs that are increasingly complemented by automation within what will inevitably be an innovation and knowledge economy.




 

Q3. Where will AI be in five-to-ten years?

A. Toby Walsh, Professor of AI:

AI will become the operating system of all our connected devices. Apps like Siri and Cortana will morph into the way we interact with the connected world.

AI will be the way we interact with our smartphones, cars, fridges, central heating systems and front doors. We will be living in an always-on world.

A. Jonathan Roberts, Professor of Robotics

It is likely that in the next five to ten years we will see machine learning systems interacting with us in the form of robots. The next big technology hurdle in robotics is to give robots the power of sight.

This is a grand challenge and one that has filled the research careers of many thousands of robotics researchers over the past four or five decades. There is a growing feeling in the robotics community that machine learning using large datasets will finally crack some of the problems in enabling a robot to actually see.

Four universities have recently teamed up in Australia in an ARC funded Centre of Excellence in Robotic Vision. Their mission is to solve many of the problems that prevent robots seeing.



 

Q4. Should we be concerned about military and other armed robots?

A. Rob Sparrow, Professor of Philosophy

The last thing humanity needs now is for many of its most talented engineers and roboticists to be working on machines for killing people.

Robotic weapons will greatly lower the threshold of conflict. They will make it easier for governments to start wars because they will hold out the illusion of being able to fight without taking any casualties. They will increase the risk of accidental war because militaries will deploy unmanned systems in high threat environments, where it would be too risky to place a human being, such as just outside a potential enemy’s airspace or deep sea ports.

In these circumstances, robots may even start wars without any human being having the chance to veto the decision. The use of autonomous robots to kill people threatens to further erode respect for human life.

It was for these reasons that, with several colleagues overseas, I co-founded the International Committee for Robot Arms Control, which has in turn supported the Campaign to Stop Killer Robots.

A. Toby Walsh, Professor of AI:

“Killer robots” are the next revolution in warfare, after gunpowder and nuclear bombs. If we act now, we can perhaps get a ban in place and prevent an arms race to develop better and better killer robots.

A ban won’t uninvent the technology. It’s much the same technology that will go, for instance, into our autonomous cars. And autonomous cars will prevent the 1,000 or so deaths on the roads of Australia each year.

But a ban will associate enough stigma with the technology that arms companies won’t sell them, that arms companies won’t develop them to be better and better at killing humans. This has worked with a number of other weapon types in the past like blinding lasers. If we don’t put a ban in place, you can be sure that terrorists and rogue nations will use killer robots against us.

For those who argue that killer robots are already covered by existing humanitarian law, I profoundly disagree. We cannot correctly engineer them today not to cause excessive collateral damage. And in the future, when we can, there is little stopping them being hacked and made to behave unethically. Even used lawfully, they will be weapons of terror.

You can learn more about these issues by watching my TEDx talk on this topic.

A. Sean Welsh, Researcher in Robot Ethics

We should be concerned about military robots. However, we should not be under the illusion that there is no existing legislation that regulates weaponised robots.

There is no specific law that bans murdering with piano wire. There is simply a general law against murder. We do not need to ban piano wire to stop murders. Similarly, existing laws already forbid the use of any weapons to commit murder in peacetime and to cause unlawful deaths in wartime.

There is no need to ban autonomous weapons as a result of fears that they may be used unlawfully any more than there is a need to ban autonomous cars for fear they might be used illegally (as car bombs). The use of any weapon that is indiscriminate, disproportionate and causes unnecessary suffering is already unlawful under international humanitarian law.

Some advocate that autonomous weapons should be put in the same category as biological and chemical weapons. However, the main reason for bans on chemical and biological weapons is that they are inherently indiscriminate (cannot tell friend from foe from civilian) and cause unnecessary suffering (slow painful deaths). They have no humanitarian positives.

By contrast, there is no suggestion that “killer robots” (even in the examples given by opponents) will necessarily be indiscriminate or cause painful deaths. The increased precision and accuracy of robotic weapons systems compared to human operated ones is a key point in their favour.

If correctly engineered, they would be less likely to cause collateral damage to innocents than human operated weapons. Indeed robot weapons might be engineered so as to be more likely to capture rather than kill. Autonomous weapons do have potential humanitarian positives.



 

Q5. How plausible is super-intelligent AI?

A. David Dowe, Associate Professor in Machine Learning and Artificial Intelligence

We can look at the progress made at various tasks once said to be impossible for machines to do, and see them one by one gradually being achieved. For example: beating the human world chess champion (1997); winning at Jeopardy! (2011); driverless vehicles, which are now somewhat standard on mining sites; automated translation, etc.

And, insofar as intelligence test problems are a measure of intelligence, I’ve recently looked at how computers are performing on these tests.

A. Rob Sparrow, Professor of Philosophy

If there can be artificial intelligence then there can be super-intelligent artificial intelligences. There doesn’t seem to be any reason why entities other than human beings could not be intelligent. Nor does there seem to be any reason to think that the highest human IQ represents the upper limit on intelligence.

If there is any danger of human beings creating such machines in the near future, we should be very scared. Think about how human beings treat rats. Why should machines that were as many times more intelligent than us, as we are more intelligent than rats, treat us any better?



 

Q6. Given what little we know about our own minds, can we expect to intentionally create artificial consciousness?

A. Kevin Korb, Reader in Computer Science

As a believer in functionalism, I believe it is possible to create artificial consciousness. It doesn’t follow that we can “expect” to do it, but only that we might.

John Searle’s arguments against the possibility of artificial consciousness seem to confuse functional realisability with computational realisability. That is, it may well be (logically) impossible to “compute” consciousness, but that doesn’t mean that an embedded, functional computer cannot be conscious.

A. Rob Sparrow, Professor of Philosophy

A number of engineers, computer scientists, and science fiction authors argue that we are on the verge of creating artificial consciousness. They usually proceed by estimating the number of neurons in the human brain and pointing out that we will soon be able to build computers with a similar number of logic gates.

If you ask a psychologist or a psychiatrist, whose job it is to actually “fix” minds, I think you will likely get a very different answer. After all, the state-of-the-art treatment for severe depression still consists in shocking the brain with electricity, which looks remarkably like trying to fix a stalled car by pouring petrol over the top of the engine. So I’m sceptical that we understand enough about the mind to design one.



 

Q7. How do cyborgs differ (technically or conceptually) from A.I.?

A. Katina Michael, Associate Professor in Information Systems

A cyborg is a human-machine combination. By definition, a cyborg is any human who adds parts, or enhances his or her abilities by using technology. As we have advanced our technological capabilities, we have discovered that we can merge technology onto and into the human body for prosthesis and/or amplification. Thus, technology is no longer an extension of us, but “becomes” a part of us if we opt into that design.

In contrast, artificial intelligence is the capability of a computer system to learn from its experiences and simulate human intelligence in decision-making. A cyborg usually begins as a human and may undergo a transformational process, whereas artificial intelligence is imbued into a computer system itself predominantly in the form of software.

Some researchers have claimed that a cyborg can also begin in a humanoid robot and incorporate the living tissue of a human or other organism. Regardless, whether it is a human-to-machine or machine-to-organism coalescence, when AI is applied via silicon microchips or nanotechnology embedded into prosthetic forms like a dependent limb, a vital organ, or a replacement/additional sensory input, a human or piece of machinery is said to be a cyborg.

There are already early experiments with such cybernetics. In 1998, Professor Kevin Warwick named his first experiment Cyborg 1.0, surgically implanting a silicon chip transponder into his forearm. In 2002, in project Cyborg 2.0, Warwick had a one-hundred-electrode array surgically implanted into the median nerve fibres of his left arm.

Ultimately we need to be extremely careful that any artificial intelligence we invite into our bodies does not submerge the human consciousness and, in doing so, rule over it.


Cybernetics is already with us.
Shutterstock


 

Q8. Are you generally optimistic or pessimistic about the future of artificial intelligence and its benefits for humanity?

A. Toby Walsh, Professor of AI:

I am both optimistic and pessimistic. AI is one of humankind’s truly revolutionary endeavours. It will transform our economies, our society and our position at the centre of this world. If we get this right, the world will be a much better place. We’ll all be healthier, wealthier and happier.

Of course, as with any technology, there are also bad paths we might end up following instead of the good ones. And unfortunately, humankind has a track record of late of following the bad paths.

We know global warming is coming but we seem unable to turn away from this path. We know that terrorism is fracturing the world but we seem unable to prevent this. AI will also challenge our society in deep and fundamental ways. It will, for instance, completely change the nature of work. Science fiction will soon be science fact.

A. Rob Sparrow, Professor of Philosophy

I am generally pessimistic about the long term impact of artificial intelligence research on humanity.

I don’t want to deny that artificial intelligence has many benefits to offer, especially in supporting human beings to make better decisions and to pursue scientific goals that are currently beyond our reach. Investigating how brains work by trying to build machines that can do what they do is an interesting and worthwhile project in its own right.

However, there is a real danger that the systems that AI researchers come up with will mainly be used to further enrich the wealthy and to entrench the power of the powerful.

I also think there is a risk that the prospect of AI will allow people to delude themselves that we don’t need to do something about climate change now. It may also distract them from the fact that we already know what to do, but we lack the political will to do it.

Finally, even though I don’t think we’ve currently got much of a clue of how this might happen, if engineers do eventually succeed in creating genuine AIs that are smarter than we are, this might well be a species-level extinction threat.

A. Jonathan Roberts, Professor in Robotics

I am generally optimistic about the long-term future of AI to humanity. I think that AI has the potential to radically change humanity and hence, if you don’t like change, you are not going to like the future.

I think that AI will revolutionise health care, especially diagnosis, and will enable the customisation of medicine to the individual. It is very possible that AI GPs and robot doctors will share their knowledge as they acquire it, creating a super doctor that will have access to all the medical data of the world.

I am also optimistic because humans tend to recognise when technology is having major negative consequences, and we eventually deal with it. Humans are in control and will naturally try to use technology to make a better world.

A. Kevin Korb, Reader in Computer Science

I’m pessimistic about the medium-term future of humanity. I think climate change and attendant dislocations, wars etc. may well massively disrupt science and technology. In that case progress on AI may stop.

If that doesn’t happen, then I think progress will continue and we’ll achieve AI in the long term. Along the way, AI research will produce spin-offs that help the economy and society, so I think as long as it exists AI tech will be important.

A. Gary Lea, Researcher in Artificial Intelligence Regulation

I suspect the long-term future for AI will turn out to be the usual mixed bag: some good, some bad. If scientists and engineers think sensibly about safety and public welfare when making their research, design and build choices (and provided there are suitable regulatory frameworks in place as a backstop), I think we should be okay.

So, on balance, I am cautiously optimistic on this front – but there are many other long-term existential risks for humanity.


The Conversation

Toby Walsh, Professor of AI, Research Group Leader, Optimisation Research Group, Data61; David Dowe, Associate Professor, Clayton School of Information Technology, Monash University; Gary Lea, Visiting Researcher in Artificial Intelligence Regulation, Australian National University; Jai Galliott, Research Fellow in Indo-Pacific Defence, UNSW Australia; Jonathan Roberts, Professor in Robotics, Queensland University of Technology; Katina Michael, Associate Professor, School of Information Systems and Technology, University of Wollongong; Kevin Korb, Reader in Computer Science, Monash University; Robert Sparrow, Professor, Department of Philosophy; Adjunct Professor, Centre for Human Bioethics, Monash University, and Sean Welsh, Doctoral Candidate in Robot Ethics, University of Canterbury

This article was originally published on The Conversation. Read the original article.

Rise of the humans: intelligence amplification will make us as smart as the machines

Augmented reality technology could soon boost our intelligence. COM SALUD Agencia de comunicación/Flickr, CC BY

Alvin DMello, Queensland University of Technology

In January this year Microsoft announced the HoloLens, a technology based on virtual and augmented reality (AR).

HoloLens supplements what you see with overlaid 3D images. It also uses artificial intelligence (AI) to generate relevant information depending on the situation the wearer is in. The information is then overlaid onto your normal vision using virtual reality (VR).


Microsoft’s HoloLens in action.

It left a lot of us imagining its potential, from video games to medical sciences. But HoloLens might also give us insight into an idea that goes beyond conventional artificial intelligence: that technology could complement our intelligence, rather than replacing it, as is often the case when people talk about AI.

From AI to IA

Around the same time that AI was first defined, there was another concept that emerged: intelligence amplification (IA), which was also variously known as cognitive augmentation or machine augmented intelligence.

In contrast to AI, which is a standalone system capable of processing information as well as or better than a human, IA is actually designed to complement and amplify human intelligence. IA has one big edge over AI: it builds on human intelligence that has evolved over millions of years, while AI attempts to build intelligence from scratch.

IA has been around from the time humans first began to communicate, at least in a very broad sense. Writing was among the first technologies that might be considered as IA, and it enabled us to enhance our creativity, understanding, efficiency and, ultimately, intelligence.

For instance, our ancestors built tools and structures based on trial and error methods assisted by knowledge passed on verbally and through demonstration by their forebears. But there is only so much information that any one individual can retain in their mind without external assistance.

Today we build complex structures with the help of hi-tech survey tools and highly accurate software. Our knowledge has also much improved thanks to the recorded experiences of countless others who have come before us. More knowledge than any one person could remember is now readily accessible through external devices at the push of a button.

Although IA has been around for many years in principle, it has not been a widely recognised subject. But with systems such as HoloLens, IA can now be developed explicitly, and far faster than was possible in the past.

From AR to IA

Augmented reality is just the latest technology to enable IA, supplementing our intelligence and improving it.

The leap that Microsoft has taken with HoloLens is using AI to boost IA. Although this has also been done in various disparate systems before, Microsoft has managed to bring all the smaller components together and present them on a large scale with a rich experience.

Augmented Reality experience on HoloLens
Microsoft

For example, law enforcement agencies could use HoloLens to access information on demand. It could rapidly access a suspect’s record to determine whether they’re likely to be dangerous. It could anticipate the routes the suspect is likely to take in a pursuit. This would effectively make the officer more “intelligent” in the field.

Surgeons are already making use of 3D printing technology to pre-model surgical procedures, enabling them to conduct some very intricate surgeries that were never before possible. Similar simulations could be done by projecting the model through an AR device like HoloLens.

Blurred lines

Lately there has been some major speculation about the threat posed by superintelligent AI. Philosophers such as Nick Bostrom have explored many issues in this realm.

AI today is far behind the intelligence possessed by any individual human. However, that might change. Yet the fear of superintelligent AI is predicated on there being a clear distinction between the AI and us. With IA, that distinction is blurred, and so too is the possibility of there being a conflict between us and AI.

Intelligence amplification is an old concept, but is coming to the fore with the development of new augmented reality devices. It may not be long before your own thinking might be enhanced to superhuman levels thanks to a seamless interface with technology.

The Conversation

Alvin DMello, PhD Candidate, Queensland University of Technology

This article was originally published on The Conversation. Read the original article.

What a ‘digital first’ government would look like

The digital economy means people are no longer passive consumers. Image sourced from Shutterstock.com

Michael Rosemann, Queensland University of Technology

Australia’s new prime minister, Malcolm Turnbull, has announced what he calls a “21st century government”. This article is part of The Conversation’s series focusing on what such a government should look like.


When discussing the digital economy it’s easy to focus on technology, and its exponential uptake.

In reality, there’s been a shift from an “economy of corporations” to an “economy of people”. While previous technologies were largely dedicated to automating and streamlining business processes, digital technologies allow active citizen contributions.

In the economy of people, citizens are no longer passive consumers, but come with their own digital identities, maintain personal networks that give them the ability to influence, and contribute data, opinions and even apps to the economy.

The public sector, like any sector, is not immune to the serious implications of the digital economy. As a consequence, future governments have to keep up with the increasing digital literacy of their citizens and adopt new ways of thinking. This demands a “digital mind” that is technology-agnostic, but focused on the impact of the digital economy.

In the economy of corporations, governments, like most organisations, could rely on largely reactive service provision. Citizens would approach the government via offices, call centres or web pages and government services would be provided in response. A proactive government, however, is able to react to citizens’ life events without being prompted. This could be facilitated by the provision of data from third parties or by proactively providing services based on available data.

An example would be age-based welfare payments. Instead of relying on literate citizens who have awareness of government services, a proactive government would offer such services when they become relevant to the citizen. One step further is the vision of a predictive government. In this case, the government would offer services before a life event even occurs. Such services could be related to health care, (un)employment or (upcoming) disasters.
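
To make this concrete, here is a minimal sketch (in Python, with entirely hypothetical rules, field names and thresholds) of how a proactive service trigger might run over data the government already holds: a citizen’s date of birth alone is enough to offer an age-based payment without being asked.

    from datetime import date

    # Hypothetical rule for illustration only: proactively offer an age-based
    # payment once a citizen reaches a qualifying age, using records the
    # government already holds rather than waiting for an application.
    QUALIFYING_AGE = 67  # assumed value, not an official threshold

    def proactive_offers(citizen):
        """Return services to offer before the citizen asks for them."""
        offers = []
        age_years = (date.today() - citizen["date_of_birth"]).days // 365
        if age_years >= QUALIFYING_AGE and not citizen.get("receives_payment"):
            offers.append("age-based welfare payment, pre-filled from existing records")
        return offers

    print(proactive_offers({"date_of_birth": date(1955, 3, 1), "receives_payment": False}))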

What does a ‘digital mind’ look like?

Future governments will have to take part in the life of their citizens, as opposed to citizens taking part in the life of the government. This will require focusing on the following emerging trends.

Share of digital attention

“Share of digital attention” captures the relative time a citizen dedicates to a specific provider. Digitally minded corporations such as Google or Facebook have a detailed understanding of their share of digital attention, and how this leading indicator contributes to lagging indicators such as revenue. Most non-digitally minded companies do not measure it. Governments can compete for this share of attention by building mobile applications that bring citizens closer to government services. Proactive or predictive services can help them channel traffic away from web pages to mobile solutions.
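
As a rough sketch of what measuring this could look like (the provider names and minutes below are invented, and real analytics would be far richer), the share of digital attention is simply the fraction of a citizen’s total digital time spent with one provider:

    # Illustrative only: share of digital attention as the fraction of a
    # citizen's total digital time spent with a given provider.
    def share_of_attention(minutes_by_provider, provider):
        total = sum(minutes_by_provider.values())
        return minutes_by_provider.get(provider, 0) / total if total else 0.0

    daily_usage = {"government app": 12, "social media": 240, "news": 60}
    print(f"{share_of_attention(daily_usage, 'government app'):.1%}")  # about 3.8%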

Digital signals

Digital signals are the information streamed from citizens to organisations. In the health sector, medical device sensors allow citizens to share digital data with trusted health experts. Instead of patients (physically) coming to health care providers, their data travels instead, enabling medical advice at a distance. This trend will most likely flow on to other sectors of the economy, leading to an increased willingness to share digital signals with trusted providers. Citizens would no longer look for services, but simply share life events (e.g. my house is flooded, I lost my job, I am a first-time parent) and expect a government service in response.
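
A minimal sketch of this signal-driven pattern might look like the following, where the event names and services are invented for illustration: the citizen (or a trusted device) reports a life event, and the mapping to a service happens on the government’s side.

    # Hypothetical mapping from shared life events to proactive services.
    EVENT_TO_SERVICE = {
        "house_flooded": "disaster recovery payment and temporary housing",
        "lost_job": "unemployment support registration and retraining options",
        "first_time_parent": "parental leave pay and child health checks",
    }

    def respond_to_signal(event):
        """Return the service offered in response to a shared life event."""
        return EVENT_TO_SERVICE.get(event, "refer to a human case officer")

    print(respond_to_signal("house_flooded"))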

Digital identities

The economy of people will see the emergence of citizens who “bring their own data”. In such a world, a driver’s licence would simply be an attribute of a citizen and not a separate entity. Governments have grappled with their role in providing platforms for such digital identities, but it’s likely citizens will look for a single digital identity that can be used across all interactions spanning private and public sector providers. A prominent example is Estonia’s digital identity solution, which supports its citizens in daily interactions such as public transport, voting or picking up e-prescriptions.
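
One way to picture “bring your own data” is a single citizen-held identity record in which credentials such as a driver’s licence are just attributes. The structure below is a hypothetical sketch, not a description of Estonia’s or any other existing scheme.

    from dataclasses import dataclass, field

    # Hypothetical "bring your own data" identity: licences and entitlements
    # are attributes of one record rather than separate credentials issued
    # and stored by each agency.
    @dataclass
    class DigitalIdentity:
        citizen_id: str
        attributes: dict = field(default_factory=dict)

    identity = DigitalIdentity(citizen_id="example-123456")
    identity.attributes["drivers_licence"] = {"class": "C", "expires": "2027-10-01"}
    identity.attributes["transport_card"] = {"balance": 23.50}
    print(identity.attributes["drivers_licence"])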

The economy of things

We predict the emergence of an economy of things, with wide participation of smart devices in economic and societal activities. This could include smart cars notifying of accidents, smart homes asking for help in case of a flood or bushfire, or robots sharing information or triggering further activities. The emergence of such G2T (government-to-thing) relationships will require entirely new channels and interaction patterns, as “things” cannot read web pages.

The ambidextrous government

Whatever the future holds, the government, like any corporation, needs to establish innovation capabilities. This will demand new explorative, design-intensive capabilities in addition to the dominating ability to incrementally improve existing services and processes.

Explorative innovation services consist of environmental scanning (identifying emerging technologies), ideation (working out how these could be utilised), incubation (testing and prototyping) and implementation (rapid, agile, scalable roll-out). An ambidextrous government is characterised by low innovation latency, that is, the time it takes to convert emerging opportunities into available government services.

This skill set will require changes in existing recruitment practices to attract people who are driven by what is possible in the future as opposed to by what is broken today.

The Conversation

Michael Rosemann, Professor of Information Systems, Queensland University of Technology

This article was originally published on The Conversation. Read the original article.

The VW scandal exposes the high tech control of engine emissions

It’s the software that controls how VW’s diesel engines perform. EPA/Patrick Pleul

Ben Mullins, Curtin University; Richard Brown, Queensland University of Technology, and Zoran Ristovski, Queensland University of Technology

As the fallout continues from the emissions scandal engulfing Volkswagen, the car maker has said it will make its vehicles meet the United States emissions standards.

The company has also revealed it will fix more than 77,000 VW and Skoda vehicles sold in Australia, plus a number of Audi vehicles, adding to the 11 million vehicles already affected worldwide.

But why did the German car maker try to cheat the emissions testing in the first place?

Diesel engines as a whole are very thermally efficient, and consequently fuel efficient, compared to gasoline engines.

But they have the downside of generating a large quantity of ultrafine particulates, classified by the International Agency for Research on Cancer as a Group 1 carcinogen.

They also produce NOx – nitric oxide (NO) and nitrogen dioxide (NO2) – both of which are highly reactive atmospheric pollutants and highly toxic to humans.

For these reasons, emissions of NOx (as well as other gases) from vehicle engines are heavily regulated by legislation.

The software is in control

Most operations of modern engines are controlled by quite sophisticated software. The engine is managed by an Engine Management System, a specialised computer that receives input from the driver through the brake and accelerator pedals.

It also receives data from many sensors on the engine and at other important points in the vehicle. The software then makes decisions regarding the operation of the engine including the fuel volume injected, timing of injection and operation of emission aftertreatment systems.

Since engines can be used in many different applications – from automobiles to ships, boats or electric generators – the emission standards for these various applications may vary significantly. The Engine Management System is therefore able to vary the operation of the engine under a wide range of requirements to meet these different applications. These different settings are generally termed “maps” in the automotive world.
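
As a toy illustration of what a “map” is (the numbers below are invented; real calibration tables are far larger and cover many more variables), the Engine Management System can be thought of as interpolating an output, such as the fuel quantity to inject, from inputs like engine speed and accelerator position:

    import numpy as np

    # Invented calibration "map": milligrams of fuel injected per stroke,
    # indexed by engine speed (rpm) and accelerator pedal position (0 to 1).
    rpm_axis = np.array([1000, 2000, 3000, 4000])
    pedal_axis = np.array([0.0, 0.5, 1.0])
    fuel_map = np.array([
        [5, 15, 30],
        [6, 20, 40],
        [7, 25, 50],
        [8, 28, 55],
    ])

    def fuel_quantity(rpm, pedal):
        """Bilinear interpolation over the calibration map."""
        row = np.interp(rpm, rpm_axis, np.arange(len(rpm_axis)))
        col = np.interp(pedal, pedal_axis, np.arange(len(pedal_axis)))
        r0, c0 = int(row), int(col)
        r1, c1 = min(r0 + 1, len(rpm_axis) - 1), min(c0 + 1, len(pedal_axis) - 1)
        fr, fc = row - r0, col - c0
        top = fuel_map[r0, c0] * (1 - fc) + fuel_map[r0, c1] * fc
        bottom = fuel_map[r1, c0] * (1 - fc) + fuel_map[r1, c1] * fc
        return top * (1 - fr) + bottom * fr

    print(fuel_quantity(2500, 0.7))  # interpolated between the surrounding cells

Changing the “map” is then simply a matter of loading a different table tuned for a different application or emissions standard.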

There is actually negligible nitrogen in diesel fuel. All the NOx is generated through a reaction between nitrogen in the air and oxygen at the high temperatures reached during combustion. The problem is that low NOx combustion conditions are at odds with combustion conditions to generate maximum power and maximum fuel efficiency.

To reduce the emissions

Modern diesels use Exhaust Gas Recirculation (EGR) to combat NOx production and catalysts to (chemically) reduce NOx already generated. The use of Selective Catalytic Reduction (SCR) has become increasingly widespread.

SCR basically injects aqueous ammonia into the exhaust to reduce NOx to nitrogen (N2) and water. Manufacturers have found it necessary to fit SCR to many of the low-emission diesel vehicles on the market to meet the most stringent emissions targets (not yet in place in Australia).

There is a public perception (which has some foundation) that EGR is unhealthy for engines; as a result, there are many aftermarket “kits” for sale that remove or block EGR systems. Likewise, SCR adds cost, as diesel exhaust fluid (AdBlue) must be purchased.

The computer also controls the EGR and SCR in modern engines, including how much exhaust gas is recirculated and how much ammonia is injected.

Secrets in the system

Engine manufacturers are generally very protective of their engine management software as they do not wish their competitors to know their engine management or emission control strategies.

When regulators test the emissions from a vehicle they have no direct knowledge of the software operating in the Engine Management System. They can only operate the engine at various loads and engine speeds, then measure the emissions under these conditions.

If the software detects that the engine is in a vehicle undergoing an emissions test, there is little to stop it switching to a different “map” that conforms to emissions standards.
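
To be clear about what is and is not known: VW’s actual software has not been published, so the following is a purely hypothetical sketch of how a “defeat device” could work in principle, with invented signals and thresholds.

    # Purely hypothetical sketch: not VW's actual code, which is not public.
    def looks_like_dyno_test(speed_kmh, steering_angle_deg, driven_wheels_only):
        """Crude heuristic: wheels turning, steering dead straight and only the
        driven axle moving, as is typical on a roller dynamometer."""
        return speed_kmh > 20 and abs(steering_angle_deg) < 1 and driven_wheels_only

    def select_map(speed_kmh, steering_angle_deg, driven_wheels_only):
        if looks_like_dyno_test(speed_kmh, steering_angle_deg, driven_wheels_only):
            return "low_NOx_map"    # full emission controls, conforms to the test limits
        return "performance_map"    # more power and economy, higher NOx on the road

    print(select_map(50, 0.2, driven_wheels_only=True))    # low_NOx_map
    print(select_map(50, 12.0, driven_wheels_only=False))  # performance_map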

But why would a vehicle manufacturer do this? The figure below gives the best answer: power and performance.

NOx emitted as a function of air-fuel ratio in a diesel.
Ben Mullins, Author provided

We can see that higher air-to-fuel ratios reduce NOx generation, but this also results in lower power output and possibly lower fuel efficiency.

The VW fix may lead to other problems

VW has announced a recall of more than 11 million vehicles, but it is likely that affected owners will notice reduced power after the fix. This may then lead many owners to seek third-party options to restore the lost engine power.

Many aftermarket engine tuning companies already exist, and they can modify Engine Management Systems by altering, adding or rewriting “map” settings. Other companies provide electronic devices that alter sensor signals before they reach the computer, to fool the manufacturer’s Engine Management System. Many such devices reduce EGR and SCR activity or turn them off entirely, as well as reducing air-fuel ratios.

Note that this amounts to modification or removal of pollution control equipment, which carries heavy penalties for individuals and even heavier penalties for companies in Australia. Retailers of such systems generally avoid this legal issue by branding their products as being for racing or off-road use only.

The problem in Australia, and indeed much of the world, is that emissions tests during safety or roadworthy inspections are generally conducted at idle (if at all) for vehicles in use. Vehicle roadworthy inspections cannot (easily) determine if a software alteration has been conducted to render pollution controls inoperable.

Since the inspectors cannot access the software, they can only check if manufacturer pollution control devices appear to be present and appear to be in working order.

The VW recalls won’t alter the fact that emissions tests are not representative of normal driving conditions, and aftermarket modification of engine management computers is widespread. Just one modified vehicle could emit as much NOx as a thousand unmodified vehicles.

Without access to the source code, we don’t know what the difference is between the “map” for normal driving and the “map” for emissions testing in affected vehicles.

One solution would be to allow open access to the source code. If the US testing authorities had had access to that code in the VW case then any “defeat device” would likely have been obvious.

The Conversation

Ben Mullins, Associate Professor in School of Public Health, Curtin University; Richard Brown, Associate Professor in Mechanical and Environmental Engineering, Queensland University of Technology, and Zoran Ristovski, Professor, Queensland University of Technology

This article was originally published on The Conversation. Read the original article.

What human emotions do we really want of artificial intelligence?

The challenge in making AI machines appear more human. Flickr/Rene Passet, CC BY-NC-ND

David Lovell, Queensland University of Technology

Forget the Turing and Lovelace tests on artificial intelligence: I want to see a robot pass the Frampton Test.

Let me explain why rock legend Peter Frampton enters the debate on AI.

For many centuries, much thought was given to what distinguishes humans from animals. These days thoughts turn to what distinguishes humans from machines.

The British code breaker and computing pioneer, Alan Turing, proposed “the imitation game” (also known as the Turing test) as a way to evaluate whether a machine can do something we humans love to do: have a good conversation.

If a human judge cannot consistently distinguish a machine from another human by conversation alone, the machine is deemed to have passed the Turing Test.

Initially, Turing proposed to consider whether machines can think, but realised that, thoughtful as we may be, humans don’t really have a clear definition of what thinking is.

Tricking the Turing test

Maybe it says something of another human quality – deviousness – that the Turing Test came to encourage computer programmers to devise machines to trick the human judges, rather than embody sufficient intelligence to hold a realistic conversation.

This trickery climaxed on June 7, 2014, when Eugene Goostman convinced about a third of the judges in the Turing Test competition at the Royal Society that “he” was a 13-year-old Ukrainian schoolboy.

Eugene was a chatbot: a computer program designed to chat with humans. Or, chat with other chatbots, for somewhat surreal effect (see the video, below).


And critics were quick to point out the artificial setting in which this deception occurred.

The creative mind

Chatbots like Eugene led researchers to throw down a more challenging gauntlet to machines: be creative!

In 2001, researchers Selmer Bringsjord, Paul Bello and David Ferrucci proposed the Lovelace Test – named after 19th century mathematician and programmer Ada, Countess of Lovelace – that asked for a computer to create something, such as a story or poem.

Computer generated poems and stories have been around for a while, but to pass the Lovelace Test, the person who designed the program must not be able to account for how it produces its creative works.

Mark Riedl, from the School of Interactive Computing at Georgia Tech, has since proposed an upgrade (Lovelace 2.0) that scores a computer in a series of progressively more demanding creative challenges.

This is how he describes being creative:

In my test, we have a human judge sitting at a computer. They know they’re interacting with an AI, and they give it a task with two components. First, they ask for a creative artifact such as a story, poem, or picture. And secondly, they provide a criterion. For example: “Tell me a story about a cat that saves the day,” or “Draw me a picture of a man holding a penguin.”

But what’s so great about creativity?

Challenging as Lovelace 2.0 may be, it’s argued that we should not place creativity above other human qualities.

This (very creative) insight from Dr Jared Donovan arose in a panel discussion with roboticist Associate Professor Michael Milford and choreographer Professor Kim Vincs at Robotronica 2015 earlier this month.

Amid all the recent warnings that AI could one day lead to the end of humankind, the panel’s aim was to discuss the current state of creativity and robots. Discussion led to questions about the sort of emotions we would want intelligent machines to express.

Empathy – the ability to understand and share feelings of another – was top of the list of desirable human qualities that day, perhaps because it goes beyond mere recognition (“I see you are angry”) and demands a response that demonstrates an appreciation of emotional impact.

Hence, I propose the Frampton Test, after the critical question posed by rock legend Peter Frampton in the 1973 song “Do You Feel Like We Do?”

True, this is slightly tongue in cheek, but I imagine that to pass the Frampton Test an artificial system would have to give a convincing and emotionally appropriate response to a situation that would arouse feelings in most humans. I say most because our species has a spread of emotional intelligence levels.

I second that emotion

Noting that others have explored this territory and that the field of “affective computing” strives to imbue machines with the ability to simulate empathy, it is still fascinating to contemplate the implications of emotional machines.

This July, AI and robotics researchers released an open letter on the peril of autonomous weapons. If machines could have even a shred of empathy, would we fear these developments in the same way?

This reminds us, too, that human emotions are not all positive: hate, anger, resentment and so on. Perhaps we should be more grateful that the machines in our lives don’t display these feelings. (Can you imagine a grumpy Siri?)

Still, there are contexts where our nobler emotions would be welcome: sympathy and understanding in health care for instance.

As with all questions worthy of serious consideration, the Robotronica panellists did not resolve whether robots could perhaps one day be creative, or whether indeed we would want that to come to pass.

As for machine emotion, I think the Frampton Test will be even longer in the passing. At the moment the strongest emotions I see around robots are those of their creators.



Acknowledgement: This article was inspired by discussion and debate at the Robotronica 2015 panel session The Lovelace Test: Can Robots be Creative? and I gratefully acknowledge the creative insights of panellists Dr Jared Donovan (QUT), Associate Professor Michael Milford (QUT) and Professor Kim Vincs (Deakin).

The Conversation

David Lovell, Head of the School of Electrical Engineering and Computer Science, Queensland University of Technology

This article was originally published on The Conversation. Read the original article.