Rise of the humans: intelligence amplification will make us as smart as the machines

Augmented reality technology could soon boost our intelligence. COM SALUD Agencia de comunicación/Flickr, CC BY

Alvin DMello, Queensland University of Technology

In January this year Microsoft announced the HoloLens, a technology based on virtual and augmented reality (AR).

HoloLens supplements what you see with overlaid 3D images. It also uses artificial intelligence (AI) to generate information relevant to the situation the wearer is in, which is then overlaid onto their normal vision.

Microsoft’s HoloLens in action.

It left a lot of us imagining its potential, from video games to medical sciences. But HoloLens might also give us insight into an idea that goes beyond conventional artificial intelligence: that technology could complement our intelligence, rather than replacing it, as is often the case when people talk about AI.

From AI to IA

Around the same time that AI was first defined, another concept emerged: intelligence amplification (IA), also known variously as cognitive augmentation or machine-augmented intelligence.

In contrast to AI, which is a standalone system capable of processing information as well as or better than a human, IA is actually designed to complement and amplify human intelligence. IA has one big edge over AI: it builds on human intelligence that has evolved over millions of years, while AI attempts to build intelligence from scratch.

IA has been around from the time humans first began to communicate, at least in a very broad sense. Writing was among the first technologies that might be considered as IA, and it enabled us to enhance our creativity, understanding, efficiency and, ultimately, intelligence.

For instance, our ancestors built tools and structures based on trial and error methods assisted by knowledge passed on verbally and through demonstration by their forebears. But there is only so much information that any one individual can retain in their mind without external assistance.

Today we build complex structures with the help of hi-tech survey tools and highly accurate software. Our knowledge has also much improved thanks to the recorded experiences of countless others who have come before us. More knowledge than any one person could remember is now readily accessible through external devices at the push of a button.

Although IA has been around in principle for many years, it has not been a widely recognised subject. But with systems such as HoloLens, IA can now be developed explicitly, and far more rapidly than was possible in the past.

From AR to IA

Augmented reality is just the latest technology to enable IA, supplementing our intelligence and improving it.

The leap that Microsoft has taken with HoloLens is using AI to boost IA. Although this has been done before in various disparate systems, Microsoft has managed to bring the smaller components together and present them at scale in a rich experience.

Augmented Reality experience on HoloLens

For example, law enforcement agencies could use HoloLens to access information on demand. It could rapidly access a suspect’s record to determine whether they’re likely to be dangerous. It could anticipate the routes the suspect is likely to take in a pursuit. This would effectively make the officer more “intelligent” in the field.

Surgeons are already making use of 3D printing technology to model surgical procedures in advance, enabling them to conduct some very intricate surgeries that were never before possible. Similar simulations could be done by projecting the model through an AR device, like HoloLens.

Blurred lines

Lately there has been some major speculation about the threat posed by superintelligent AI. Philosophers such as Nick Bostrom have explored many issues in this realm.

AI today is far behind the intelligence possessed by any individual human. However, that might change. Yet the fear of superintelligent AI is predicated on there being a clear distinction between the AI and us. With IA, that distinction is blurred, and so too is the possibility of there being a conflict between us and AI.

Intelligence amplification is an old concept, but is coming to the fore with the development of new augmented reality devices. It may not be long before your own thinking might be enhanced to superhuman levels thanks to a seamless interface with technology.

The Conversation

Alvin DMello, PhD Candidate, Queensland University of Technology

This article was originally published on The Conversation. Read the original article.

What a ‘digital first’ government would look like

The digital economy means people are no longer passive consumers. Image sourced from Shutterstock.com

Michael Rosemann, Queensland University of Technology

Australia’s new prime minister, Malcolm Turnbull, has announced what he calls a “21st century government”. This article is part of The Conversation’s series focusing on what such a government should look like.

When discussing the digital economy it’s easy to focus on technology, and its exponential uptake.

In reality, there’s been a shift from an “economy of corporations” to an “economy of people”. While previous technologies were largely dedicated to automating and streamlining business processes, digital technologies allow active citizen contributions.

In the economy of people, citizens are no longer passive consumers, but come with their own digital identities, maintain personal networks that give them the ability to influence, and contribute data, opinions and even apps to the economy.

The public sector, like any sector, is not immune to the serious implications of the digital economy. As a consequence, future governments have to keep up with the increasing digital literacy of their citizens and adopt new ways of thinking. This demands a “digital mind” that is technology-agnostic, but focused on the impact of the digital economy.

In the economy of corporations, governments, like most organisations, could rely on largely reactive service provision. Citizens would approach the government via offices, call centres or web pages and government services would be provided in response. A proactive government, however, is able to react to citizens’ life events without being prompted. This could be facilitated by the provision of data from third parties or by proactively providing services based on available data.

An example would be age-based welfare payments. Instead of relying on citizens being aware that government services exist, a proactive government would offer such services when they become relevant to the citizen. One step further is the vision of a predictive government. In this case, the government would offer services before a life event even occurs. Such services could be related to health care, (un)employment or (upcoming) disasters.

What does a ‘digital mind’ look like?

Future governments will have to take part in the life of their citizens, as opposed to citizens taking part in the life of the government. This will require focusing on the following emerging trends.

Share of digital attention

“Share of digital attention” captures the relative time a citizen dedicates to a specific provider. Digitally minded corporations such as Google or Facebook have a detailed understanding of their share of digital attention, and how this leading indicator contributes to lagging indicators such as revenue. Most non-digitally minded companies do not measure it. Governments can compete for this share of attention by building mobile applications that bring citizens closer to government services. Proactive or predictive services can help them channel traffic away from web pages to mobile solutions.
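In its simplest form, the metric described above can be computed as the fraction of a citizen's total recorded digital time spent with one provider. The following sketch is purely illustrative; the provider names and minute counts are invented, not drawn from any real measurement system.

```python
def share_of_attention(minutes_by_provider, provider):
    """Return the fraction of total recorded time spent with one provider."""
    total = sum(minutes_by_provider.values())
    if total == 0:
        return 0.0  # no recorded activity at all
    return minutes_by_provider.get(provider, 0) / total

# Hypothetical daily usage log, in minutes
usage = {"gov.au": 12, "facebook": 95, "google": 48}
print(round(share_of_attention(usage, "gov.au"), 3))  # 0.077
```

A government tracking this leading indicator over time could see whether its mobile services are actually gaining ground against commercial platforms.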

Digital signals

Digital signals are the information that is streamed from citizens to organisations. In the health sector medical device sensors allow citizens to share digital data with trusted health experts. Instead of patients (physically) coming to health care providers, they let their data travel and enable medical advice. This trend will most likely flow on to other sectors of the economy leading to an increased willingness to share digital signals with trusted providers. Citizens would no longer look for services, but simply share life events (e.g., my house is flooded, I lost my job, I am a first time parent) and expect a government service in response.
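The interaction pattern sketched above — share a life event, receive a service in response — could be as simple as a lookup from events to relevant services. The event names and services below are invented for illustration and do not correspond to any real government API.

```python
# Hypothetical mapping from shared life events to proactive service offers
EVENT_SERVICES = {
    "house_flooded": ["emergency housing", "disaster relief payment"],
    "lost_job": ["unemployment benefit", "job search support"],
    "new_parent": ["parental leave payment", "child health checks"],
}

def respond_to_event(event):
    """Return the services a proactive government might offer for a life event."""
    # Unrecognised events fall through to a human rather than being dropped
    return EVENT_SERVICES.get(event, ["referral to a case officer"])

print(respond_to_event("lost_job"))
```

The design point is the inversion of responsibility: the citizen states what happened, and the matching of event to entitlement happens on the government side.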

Digital identities

The economy of people will see the emergence of citizens who “bring their own data”. In such a world, a driver’s licence would simply be an attribute of a citizen and not a separate entity. Governments have grappled with their role in providing platforms for such digital identities, but it’s likely citizens will look for a single digital identity that can be used across all interactions spanning private and public sector providers. A prominent example is Estonia’s digital identity solution, which supports its citizens in daily interactions such as public transport, voting or picking up e-prescriptions.

The economy of things

We predict the emergence of an economy of things, with wide participation of smart devices in economic and societal activities. This could include smart cars notifying of accidents, smart homes asking for help in case of a flood or bushfire, or robots sharing information or triggering further activities. The emergence of such G2T (government-to-thing) relationships will require entirely new channels and interaction patterns, as “things” cannot read web pages.

The ambidextrous government

Whatever the future holds, the government, like any corporation, needs to establish innovation capabilities. This will demand new explorative, design-intensive capabilities in addition to the dominant ability to incrementally improve existing services and processes.

Explorative innovation capabilities consist of environmental scanning (what are the emerging technologies?), ideation (how could these be utilised?), incubation (testing and prototyping) and implementation (rapid, agile, scalable roll-out). An ambidextrous government is characterised by low innovation latency, that is, the time it takes to convert emerging opportunities into available government services.

This skill set will require changes in existing recruitment practices to attract people who are driven by what is possible in the future as opposed to by what is broken today.


Michael Rosemann, Professor of Information Systems, Queensland University of Technology

This article was originally published on The Conversation. Read the original article.

The VW scandal exposes the high tech control of engine emissions

It’s the software that controls how VW’s diesel engines perform. EPA/Patrick Pleul

Ben Mullins, Curtin University; Richard Brown, Queensland University of Technology, and Zoran Ristovski, Queensland University of Technology

As the fallout continues from the emissions scandal engulfing Volkswagen, the car maker has said it will make its vehicles meet the United States emissions standards.

The company has also revealed it will fix more than 77,000 VW and Skoda vehicles sold in Australia, plus a number of Audi vehicles, adding to the 11 million vehicles already affected worldwide.

But why did the German car maker try to cheat the emissions testing in the first place?

Diesel engines as a whole are very thermally efficient, and consequently fuel efficient, compared to gasoline engines.

But they have the downside of generating a large quantity of ultrafine particulates, classified by the International Agency for Research on Cancer as a Group 1 carcinogen.

They also produce NOx, which includes nitric oxide (NO) and nitrogen dioxide (NO2), both highly reactive atmospheric pollutants that are highly toxic to humans.

For these reasons, NOx (along with other gases) emitted by vehicle engines is heavily regulated by legislation.

The software is in control

Most operations of modern engines are controlled by quite sophisticated software. The engine is managed by an Engine Management System, a specialised computer that receives input from the driver through the brake and accelerator pedals.

It also receives data from many sensors on the engine and at other important points in the vehicle. The software then makes decisions regarding the operation of the engine including the fuel volume injected, timing of injection and operation of emission aftertreatment systems.

Since engines can be used in many different applications – from automobiles to ships, boats or electric generators – the emission standards for these various applications may vary significantly. The Engine Management System is therefore able to vary the operation of the engine under a wide range of requirements to meet these different applications. These different settings are generally termed “maps” in the automotive world.
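The idea of a “map” can be pictured as a set of calibration values the Engine Management System selects according to the engine’s application. This is a heavily simplified sketch: real ECUs use multidimensional lookup tables over engine speed, load and temperature, and the application names and numbers below are invented for illustration.

```python
# Invented calibration "maps" for different engine applications
MAPS = {
    "automobile": {"max_injection_mg": 60, "egr_rate": 0.25},
    "generator":  {"max_injection_mg": 80, "egr_rate": 0.10},
    "marine":     {"max_injection_mg": 90, "egr_rate": 0.05},
}

def select_map(application):
    """Pick the calibration settings for the engine's application."""
    # Fall back to the most tightly regulated (road) settings if unknown
    return MAPS.get(application, MAPS["automobile"])

print(select_map("generator")["egr_rate"])  # 0.1
```

The key point is that the same physical engine can behave quite differently depending on which set of values the software loads.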

There is actually negligible nitrogen in diesel fuel. All the NOx is generated through a reaction between nitrogen in the air and oxygen at the high temperatures reached during combustion. The problem is that low NOx combustion conditions are at odds with combustion conditions to generate maximum power and maximum fuel efficiency.

To reduce the emissions

Modern diesels use Exhaust Gas Recirculation (EGR) to combat NOx production and catalysts to (chemically) reduce NOx already generated. The use of Selective Catalytic Reduction (SCR) has become increasingly widespread.

SCR basically injects aqueous ammonia into the exhaust to reduce NOx to nitrogen (N2) and water. Many low emission diesel vehicles on the market need SCR fitted to meet the most stringent emissions targets (not yet in place in Australia).

There is a public perception (which has some foundation) that EGR is unhealthy for engines. As a result, there are many aftermarket “kits” available for sale to remove or block EGR systems. Likewise, SCR adds cost, as diesel exhaust fluid (AdBlue) must be purchased.

The computer also controls the EGR and SCR in modern engines including the amount of exhaust gas which is recycled or ammonia injected.

Secrets in the system

Engine manufacturers are generally very protective of their engine management software as they do not wish their competitors to know their engine management or emission control strategies.

When regulators test the emissions from a vehicle they have no direct knowledge of the software operating in the Engine Management System. They can only operate the engine at various loads and engine speeds, then measure the emissions under these conditions.

If the software detected that the engine was in a vehicle undergoing an emissions test, there would be little to stop it switching to a different “map” that conforms to emissions standards.
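The map-switching described above could hinge on a few sensor readings that distinguish a dynamometer test from road driving. The signals and thresholds in this sketch are invented for illustration; reports on the VW case suggest inputs such as steering movement and speed profiles were involved, but the actual defeat-device code has not been published.

```python
def looks_like_emissions_test(speed_kmh, steering_moving, only_drive_wheels_spin):
    """Guess whether sensor readings resemble a test bench rather than a road."""
    # On a dynamometer the driven wheels turn while the car never steers
    return only_drive_wheels_spin and not steering_moving and speed_kmh > 0

def choose_map(test_detected):
    """Switch calibration depending on whether a test is suspected."""
    return "low_NOx_map" if test_detected else "performance_map"

print(choose_map(looks_like_emissions_test(50, False, True)))  # low_NOx_map
print(choose_map(looks_like_emissions_test(50, True, False)))  # performance_map
```

Because regulators only observe the emissions output, not the software, such a switch is effectively invisible to a standard test procedure.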

But why would a vehicle manufacturer do this? The figure below gives the best answer to the question: power and performance.

NOx emitted as a function of air fuel ratio in a diesel.
Ben Mullins, Author provided

We can see that higher air to fuel ratios reduce NOx generation, but this also results in lower power generation and possibly lower fuel efficiency.

The VW fix may lead to other problems

VW has announced a recall of more than 11 million vehicles, but it is likely the affected owners will notice reduced power after the fix. This may then lead many owners to seek third party options to restore the lost engine power.

Many aftermarket engine tuning companies already exist, and they can modify Engine Management Systems by altering, adding or rewriting “map” settings. Other companies provide electronic devices that alter sensor readings before they reach the computer, to fool the manufacturer’s Engine Management System. Many such devices reduce EGR and SCR activity or turn them off entirely, as well as reducing air-fuel ratios.

Note that this amounts to modification or removal of pollution control equipment, which carries heavy penalties for individuals and even heavier penalties for companies in Australia. Retailers of such systems generally avoid this legal issue by branding their products as for racing or off-road use only.

The problem in Australia, and indeed much of the world, is that emissions tests during safety or roadworthy inspections are generally conducted at idle (if at all) for vehicles in use. Vehicle roadworthy inspections cannot (easily) determine if a software alteration has been conducted to render pollution controls inoperable.

Since the inspectors cannot access the software, they can only check if manufacturer pollution control devices appear to be present and appear to be in working order.

The VW recalls won’t alter the fact that emissions tests are not representative of normal driving conditions, and aftermarket modification of engine management computers is widespread. Just one modified vehicle could emit as much NOx as a thousand unmodified vehicles.

Without access to the source code, we don’t know what the difference is between the “map” for normal driving and the “map” for emissions testing in affected vehicles.

One solution would be to allow open access to the source code. If the US testing authorities had had access to that code in the VW case then any “defeat device” would likely have been obvious.


Ben Mullins, Associate Professor in School of Public Health, Curtin University; Richard Brown, Associate Professor in Mechanical and Environmental Engineering, Queensland University of Technology, and Zoran Ristovski, Professor, Queensland University of Technology

This article was originally published on The Conversation. Read the original article.