Digital diagnosis: intelligent machines do a better job than humans

It takes time for a human to become good at diagnosing ailments, but that learning is lost when they retire. Shutterstock/Poproskiy Alexey

Ross Crawford, Queensland University of Technology; Anjali Jaiprakash, Queensland University of Technology, and Jonathan Roberts, Queensland University of Technology

Until now, medicine has been a prestigious and often extremely lucrative career choice. But in the near future, will we need as many doctors as we have now? Are we going to see significant medical unemployment in the coming decade?

Dr Saxon Smith, president of the Australian Medical Association NSW branch, said in a report late last year that the most common concerns he hears from doctors-in-training and medical students are, “what is the future of medicine?” and “will I have a job?”. The answers, he said, continue to elude him.

As Australian, British and American universities continue to graduate increasing numbers of medical students, the obvious question is where will these new doctors work in the future?

Will there be an expanded role for medical professionals due to our ageing populations? Or is pressure to reduce costs while improving outcomes likely to force the adoption of new technology, which will then likely erode the number of roles currently performed by doctors?

Driving down the costs

All governments, patients and doctors around the world know that healthcare costs will need to come down if we are to treat more people. Some propose making patients pay more but, however we pay for it, it’s clear that driving the cost down is what needs to happen.

The use of medical robots to assist human surgeons is becoming more widespread but, so far, they are being used to try to improve patient outcomes, not to reduce the cost of surgery. Cost savings may come later, when this robotic technology matures.

It is in the area of medical diagnostics that many people see the potential for significant cost reductions, and improved accuracy, by using technology instead of human doctors.

It is already common for blood tests and genetic testing (genomics) to be carried out automatically and very cost effectively by machines. They analyse the blood specimen and automatically produce a report.

The tests can be as simple as a haemoglobin level (blood count), through to tests for diabetes such as insulin or glucose levels. They can also be used for far more complicated tests, such as examining a person’s genetic makeup.

A good example is Thyrocare Technologies Ltd in Mumbai, India, where more than 100,000 diagnostic tests from around the country are done every evening, and the reports delivered within 24 hours of blood being taken from a patient.

Machines vs humans

If machines can read blood tests, what else can they do? Though many doctors will not like this thought, any test that requires pattern recognition will ultimately be done better by a machine than by a human.

Many diseases need a pathological diagnosis, where a doctor looks at a sample of blood or tissue to establish the exact disease: a blood test to diagnose an infection, a skin biopsy to determine whether a lesion is a cancer or not, or a tissue sample taken by a surgeon looking to make a diagnosis.

All of these examples, and in fact all pathological diagnoses, are made by a doctor using pattern recognition to determine the diagnosis.

Artificial intelligence techniques using deep neural networks, which are a type of machine learning, can be used to train these diagnostic machines. Machines learn fast and we are not talking about a single machine, but a network of machines linked globally via the internet, using their pooled data to continue to improve.
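
As a rough illustration of what “training” means here, the sketch below uses the open-source Keras library to build a small image classifier. Everything specific in it (the image size, the two classes and the labelled dataset, which is not shown) is an illustrative assumption, not a description of any real diagnostic product.

    # A minimal sketch of training a pattern-recognition model on labelled
    # tissue images (for example, benign versus malignant). Sizes are assumed.
    from tensorflow.keras import layers, models

    model = models.Sequential([
        layers.Conv2D(32, (3, 3), activation="relu", input_shape=(128, 128, 3)),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),  # probability the sample is abnormal
    ])

    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])

    # images and labels would come from a labelled pathology archive (hypothetical
    # here); retraining on pooled, worldwide data is what lets a networked system
    # keep improving.
    # model.fit(images, labels, epochs=10, validation_split=0.2)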

It will not happen overnight – it will take some time to learn – but once trained, the machine will only continue to get better. With time, an appropriately trained machine will be better at pattern recognition than any human could ever be.

Pathology is now a matter of multi-million dollar laboratories relying on economies of scale. It takes around 15 years from leaving high school to train a pathologist to function independently. It probably takes another 15 years for the pathologist to be as good as they will ever be.

Some years after that, they will retire and all that knowledge and experience is lost. Surely, it would be better if that knowledge could be captured and used by future generations? A robotic pathologist would be able to do just that.

Radiology, X-rays and beyond

Radiological tests account for more than AU$2 billion of the annual Medicare spend. A 2013 report estimated that 33.6 million radiological investigations would be performed in Australia in 2014-15. A radiologist has to study every one of these and write a report.

Radiologists are already reading, on average, more than seven times as many studies per day as they were five years ago. These reports, like those written by pathologists, are based on pattern recognition.

Currently, many radiological tests performed in Australia are being read by radiologists in other countries, such as the UK. Rather than having an expert in Australia get out of bed at 3am to read a brain scan of an injured patient, the image can be digitally sent to a doctor in any appropriate time zone and be reported on almost instantly.

What if machines were taught to read X-rays, working at first with, and ultimately instead of, human radiologists? Would we still need human radiologists? Probably. Improved imaging, such as MRI and CT scans, will allow radiologists to perform some procedures that surgeons now undertake.

The field of interventional radiology is rapidly expanding. In this field, radiologists are able to diagnose and treat conditions such as bleeding blood vessels. This is done using minimally invasive techniques, passing wires through larger vessels to reach the point of bleeding.

So radiologists may end up doing procedures that are currently done by vascular and cardiac surgeons. The increased use of robot-assisted surgery will make this more likely than not.

There is a lot more to diagnosing a skin lesion, rash or growth than simply looking at it. But much of the diagnosis is based on the dermatologist recognising the lesion (again, pattern recognition).

If the diagnosis remains unclear then some tissue (a biopsy) is sent to the laboratory for a pathological diagnosis. We have already established that a machine can read the latter. The same principle applies to the recognition of the skin lesion.

Once a lesion has been learnt, it can be recognised again. Mobile phones with high-quality cameras will be able to link to a global database that, like any other database with learning capability, will continue to improve.

It’s not if, but when

These changes will not happen overnight, but they are inevitable. Though many doctors will see these changes as a threat, the chance for global good is unprecedented.

An X-ray taken in equatorial Africa could be read with the same reliability as one taken in an Australian centre of excellence. A photo of an infectious rash could be taken on a phone, uploaded, and the diagnosis given instantly. Many lives will be saved, and health care for the world’s poor could be delivered at minimal cost and, in many cases, for free.

For this to become a reality, it will take experts to work with machines and help them learn. Initially, the machines may be asked to do more straightforward tests but gradually they will be taught, just as humans learn most things in life.

The medical profession should grasp these opportunities for change, and our future young doctors should think carefully about where the medical jobs of the future will lie. It is almost certain that the medical employment landscape in 15 years will not look like the one we see today.

The Conversation

Ross Crawford, Professor of Orthopaedic Research, Queensland University of Technology; Anjali Jaiprakash, Post-Doctoral Research Fellow, Medical Robotics, Queensland University of Technology, and Jonathan Roberts, Professor in Robotics, Queensland University of Technology

This article was originally published on The Conversation. Read the original article.

The VW scandal exposes the high tech control of engine emissions

It’s the software that controls how VW’s diesel engines perform. EPA/Patrick Pleul

Ben Mullins, Curtin University; Richard Brown, Queensland University of Technology, and Zoran Ristovski, Queensland University of Technology

As the fallout continues from the emissions scandal engulfing Volkswagen, the car maker has said it will make its vehicles meet the United States emissions standards.

The company has also revealed it will fix more than 77,000 VW and Skoda vehicles sold in Australia, plus a number of Audi vehicles, adding to the 11 million vehicles already affected worldwide.

But why did the German car maker try to cheat the emissions testing in the first place?

Diesel engines as a whole are very thermally efficient, and consequently fuel efficient, compared to gasoline engines.

But they have the downside of generating a large quantity of ultrafine particulates, classified by the International Agency for Research on Cancer as a Group 1 carcinogen.

They also produce NOx, which includes nitric oxide (NO) and nitrogen dioxide (NO2), both highly reactive atmospheric pollutants that are toxic to humans.

For these reasons, NOx (as well as other gases) emitted by vehicle engines is heavily regulated by legislation.

The software is in control

Most operations of modern engines are controlled by quite sophisticated software. The engine is managed by an Engine Management System, a specialised computer that receives input from the driver through the brake and accelerator pedals.

It also receives data from many sensors on the engine and at other important points in the vehicle. The software then makes decisions regarding the operation of the engine including the fuel volume injected, timing of injection and operation of emission aftertreatment systems.

Since engines can be used in many different applications – from automobiles to ships, boats or electric generators – the emission standards for these various applications may vary significantly. The Engine Management System is therefore able to vary the operation of the engine under a wide range of requirements to meet these different applications. These different settings are generally termed “maps” in the automotive world.

There is actually negligible nitrogen in diesel fuel. All the NOx is generated through a reaction between nitrogen in the air and oxygen at the high temperatures reached during combustion. The problem is that low-NOx combustion conditions are at odds with the conditions that generate maximum power and maximum fuel efficiency.

To reduce the emissions

Modern diesels use Exhaust Gas Recirculation (EGR) to combat NOx production and catalysts to (chemically) reduce NOx already generated. The use of Selective Catalytic Reduction (SCR) has become increasingly widespread.

SCR basically injects aqueous ammonia into the exhaust, where it chemically reduces NOx to nitrogen (N2) and water. Many of the low-emission diesel vehicles on the market have found it necessary to fit SCR to meet the most stringent emissions targets (not yet in place in Australia).

There is a public perception (which has some foundation) that EGR is unhealthy for engines. As a result, there are many aftermarket “kits” available for sale to remove or block EGR systems. Likewise, SCR adds cost, as diesel exhaust fluid (AdBlue) must be purchased.

The computer also controls the EGR and SCR in modern engines, including the amount of exhaust gas that is recirculated and the amount of ammonia injected.

Secrets in the system

Engine manufacturers are generally very protective of their engine management software as they do not wish their competitors to know their engine management or emission control strategies.

When regulators test the emissions from a vehicle they have no direct knowledge of the software operating in the Engine Management System. They can only operate the engine at various loads and engine speeds, then measure the emissions under these conditions.

If the software detected that the engine was in a vehicle undergoing an emissions test, there would be little to stop it switching to a different “map” that conforms to emissions standards.
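
To illustrate the idea only, here is a sketch of what such a switch could look like. This is not Volkswagen’s code, which has not been made public; the sensor names, thresholds and map values are entirely hypothetical.

    # Hypothetical sketch of a "defeat device": use a low-NOx map when the
    # sensor pattern looks like a laboratory test, and a power map otherwise.
    TEST_MAP = {"egr_rate": 0.35, "scr_dosing": 1.0, "air_fuel_ratio": 25.0}  # clean, less power
    ROAD_MAP = {"egr_rate": 0.10, "scr_dosing": 0.3, "air_fuel_ratio": 18.0}  # more power, more NOx

    def looks_like_lab_test(sensors):
        # e.g. drive wheels turning while the steering never moves, as on a dynamometer
        return sensors["wheel_speed"] > 0 and sensors["steering_angle"] == 0

    def select_map(sensors):
        return TEST_MAP if looks_like_lab_test(sensors) else ROAD_MAP

    # select_map({"wheel_speed": 50.0, "steering_angle": 0.0}) returns TEST_MAP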

But why would a vehicle manufacturer do this? The figure below gives the best answer to the question: power and performance.

NOx emitted as a function of air-fuel ratio in a diesel.
Ben Mullins, Author provided

We can see that higher air-to-fuel ratios reduce NOx generation, but this also results in lower power output and possibly lower fuel efficiency.

The VW fix may lead to other problems

VW has announced a recall of more than 11 million vehicles, but it is likely that affected owners will notice reduced power after the fix. This may then lead many owners to seek third-party options to restore the lost engine power.

Many aftermarket engine tuning companies already exist, and they can modify Engine Management Systems by altering, adding or rewriting “map” settings. Other companies provide electronic devices that alter sensor readings before they reach the computer, to fool the manufacturer’s Engine Management System. Many of these devices reduce EGR and SCR or turn them off entirely, as well as reducing air-fuel ratios.

Note that this amounts to modification or removal of pollution control equipment, which carries heavy penalties for individuals and even heavier penalties for companies in Australia. Retailers of such systems generally avoid this legal issue by branding their products as for racing or off-road use only.

The problem in Australia, and indeed much of the world, is that emissions tests during safety or roadworthy inspections are generally conducted at idle (if at all) for vehicles in use. Vehicle roadworthy inspections cannot (easily) determine if a software alteration has been conducted to render pollution controls inoperable.

Since the inspectors cannot access the software, they can only check if manufacturer pollution control devices appear to be present and appear to be in working order.

The VW recalls won’t alter the fact that emissions tests are not representative of normal driving conditions, and that aftermarket modification of engine management computers is widespread. Just one modified vehicle could emit as much NOx as a thousand unmodified vehicles.

Without access to the source code, we don’t know what the difference is between the “map” for normal driving and the “map” for emissions testing in affected vehicles.

One solution would be to allow open access to the source code. If the US testing authorities had had access to that code in the VW case then any “defeat device” would likely have been obvious.

The Conversation

Ben Mullins, Associate Professor in School of Public Health, Curtin University; Richard Brown, Associate Professor in Mechanical and Environmental Engineering, Queensland University of Technology, and Zoran Ristovski, Professor, Queensland University of Technology

This article was originally published on The Conversation. Read the original article.

Charged up: the history and development of batteries

Batteries have come a long way since their beginning back in 250BC. Flickr/Patty, CC BY-NC-SA

Jose Alarco, Queensland University of Technology and Peter Talbot, Queensland University of Technology

Batteries are so ubiquitous today that they’re almost invisible to us. Yet they are a remarkable invention with a long and storied history, and an equally exciting future.

A battery is essentially a device that stores chemical energy that is converted into electricity. Basically, batteries are small chemical reactors, with the reaction producing energetic electrons, ready to flow through the external device.

Batteries have been with us for a long time. In 1938 the director of the Baghdad Museum found what is now referred to as the “Baghdad Battery” in the basement of the museum. Analysis dated it to around 250BC and suggested it was of Mesopotamian origin.

Controversy surrounds this earliest example of a battery but suggested uses include electroplating, pain relief or a religious tingle.

American scientist and inventor Benjamin Franklin first used the term “battery” in 1749 when he was doing experiments with electricity using a set of linked capacitors.

The first true battery was invented by the Italian physicist Alessandro Volta in 1800. Volta stacked discs of copper (Cu) and zinc (Zn) separated by cloth soaked in salty water.

Wires connected to either end of the stack produced a continuous, stable current. Each cell (a set of one Cu and one Zn disc plus the brine) produces 0.76 volts (V). The voltage of the stack is a multiple of this value, given by the number of cells stacked together.

One of the most enduring batteries, the lead-acid battery, was invented in 1859 and is still the technology used to start most internal combustion engine cars today. It is the oldest example of a rechargeable battery.

A typical car battery.
Flickr/Asim Bharwani, CC BY-NC-ND

Today batteries come in a range of sizes, from megawatt-scale installations that store power from solar farms or substations to guarantee stable supply to entire villages or islands, down to the tiny batteries used in electronic watches.

Batteries are based on different chemistries, which generate basic cell voltages typically in the 1.0 to 3.6 V range. Stacking cells in series increases the voltage, while connecting them in parallel increases the current that can be supplied. This principle is used to build up to the required voltages and currents, all the way to megawatt scale.
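
As a rough worked example (the cell figures are illustrative assumptions rather than a particular product), a pack’s voltage, capacity and energy follow directly from the series/parallel arrangement:

    # Series cells add voltage; parallel strings add current and capacity.
    cell_voltage = 3.6             # volts per lithium-ion cell (assumed)
    cell_capacity = 2.5            # amp-hours per cell (assumed)

    cells_in_series = 100          # one string of 100 cells
    strings_in_parallel = 40

    pack_voltage = cell_voltage * cells_in_series           # 360 V
    pack_capacity = cell_capacity * strings_in_parallel     # 100 Ah
    pack_energy_kwh = pack_voltage * pack_capacity / 1000   # 36 kWh

    print(pack_voltage, pack_capacity, pack_energy_kwh)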

There is now much anticipation that battery technology is about to take another leap, with new models being developed that have enough capacity to store the power generated by domestic solar or wind systems and then run a home at more convenient times (generally at night), for a few days at a stretch.

How do batteries work?

When a battery is discharged, a chemical reaction produces electrons. An example of a reaction that produces electrons is the oxidation of iron to produce rust: iron reacts with oxygen and gives up electrons to the oxygen to produce iron oxide.

The standard construction of a battery is to use two metals or compounds with different chemical potentials and separate them with a porous insulator. The chemical potential is the energy stored in the atoms and bonds of the compounds, which is imparted to the moving electrons when these are allowed to flow through the connected external device.

A conducting fluid, such as salt dissolved in water, is used to transfer soluble ions from one metal to the other during the reaction; it is called the electrolyte.

The metal or compound that loses the electrons during discharge is called the anode and the metal or compound that accepts the electrons is called the cathode. This flow of electrons from the anode to the cathode through the external connection is what we use to run our electronic devices.

Primary vs rechargeable batteries

When the reaction that produces the flow of electrons cannot be reversed, the battery is referred to as a primary battery. Once one of the reactants is consumed, the battery is flat.

The most common primary battery is the zinc-carbon battery. It was found that when the electrolyte is an alkali, the batteries last much longer. These are the alkaline batteries we buy from the supermarket.

Typical alkaline batteries we use in everyday devices.
Flickr/Simon Greig, CC BY-NC-SA

The challenge with such primary batteries was to find a way to reuse them by recharging them. This becomes more important as batteries become larger, when frequently replacing them is not commercially viable.

One of the earliest rechargeable batteries, the nickel-cadmium battery (NiCd), also uses an alkali as an electrolyte. In 1989 nickel-metal hydride (NiMH) batteries were developed, with a longer life than NiCd batteries.

These types of batteries are very sensitive to overcharging and overheating during charging, so the charge rate is kept below a maximum. Sophisticated controllers can speed up the charge, but it still takes at least a few hours.

In most other simpler chargers, the process typically takes overnight.

Rechargeable batteries can be reused many times.
Flickr/Brian J Matis, CC BY-NC-SA

Portable applications – such as mobile phones and laptop computers – constantly look to store the maximum energy in the most compact form. While this increases the risk of a violent discharge, it is manageable using current-rate limiters in mobile phone batteries because of their small overall format.

But as larger applications of batteries are contemplated, the safety of large-format cells used in large quantities has become a more significant consideration.

The first mobile phone had a large battery and short battery life – modern mobile and smart phones demand smaller batteries but longer lasting power.

First great leap forward: lithium-ion batteries

New technologies often demand more compact, higher capacity, safe, rechargeable batteries.

In 1980, the American physicist Professor John Goodenough invented a new type of lithium battery in which the lithium (Li) could migrate through the battery from one electrode to the other as a Li+ ion.

Lithium is one of the lightest elements in the periodic table and has one of the largest electrochemical potentials, so this combination produces some of the highest possible voltages in the most compact and lightest packages.

This is the basis for the lithium-ion battery. In this new battery, lithium is combined with a transition metal – such as cobalt, nickel, manganese or iron – and oxygen to form the cathode. During recharging, when a voltage is applied, the positively charged lithium ions migrate from the cathode to the graphite anode, where they are stored between the graphite layers.

Because lithium has a strong electrochemical driving force to be oxidised if allowed, it migrates back to the cathode to become a Li+ ion again and gives up its electron back to the cobalt ion. The movement of electrons in the circuit gives us a current that we can use.
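
For a cobalt-based cell, the two electrode reactions during discharge can be written in the standard textbook form below, where x is the fraction of lithium shuttled between the electrodes; charging simply runs both reactions in reverse.

    \text{Anode (discharge): } \mathrm{Li}_x\mathrm{C}_6 \rightarrow x\,\mathrm{Li}^+ + x\,e^- + \mathrm{C}_6
    \text{Cathode (discharge): } \mathrm{Li}_{1-x}\mathrm{CoO}_2 + x\,\mathrm{Li}^+ + x\,e^- \rightarrow \mathrm{LiCoO}_2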

The second great leap forward: nano technology

Depending on the transition metal used in the lithium-ion battery, the cell can have a higher capacity but can be more reactive and susceptible to a phenomenon known as thermal runaway.

In the case of the lithium cobalt oxide (LiCoO2) batteries made by Sony in the 1990s, this led to many such batteries catching fire. Making battery cathodes from nano-scale material, which is even more reactive, was out of the question.

But in the 1990s Goodenough again made a huge leap in battery technology by introducing a stable lithium-ion cathode based on lithium iron phosphate.

This cathode is thermally stable. It also means that nano-scale lithium iron phosphate (LiFePO4) or lithium ferrophosphate (LFP) materials can now be made safely into large format cells that can be rapidly charged and discharged.

Many new applications now exist for these new cells, from power tools to hybrid and electric vehicles. Perhaps the most important application will be the storage of domestic electric energy for households.

Cordless power tools are possible thanks to advances in rechargeable batteries.
Flickr/People's Network, CC BY-NC

Electric cars

The leader in manufacturing this new battery format for vehicles is the Tesla electric vehicle company, which has plans for building “Giga-plants” for production of these batteries.

The size of the lithium battery pack for the Tesla Model S is an impressive 85kWh.

This is also more than enough for domestic household needs, which is why there has been so much speculation as to what Tesla’s founder Elon Musk is preparing to reveal this week.
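
A rough calculation shows why (the daily consumption figure is an assumed ballpark for an ordinary household, not an official statistic):

    pack_energy_kwh = 85            # Tesla Model S battery pack, as quoted above
    daily_household_kwh = 18        # assumed typical daily household consumption

    days_of_supply = pack_energy_kwh / daily_household_kwh
    print(round(days_of_supply, 1))   # roughly 4.7 days from one full pack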

A modular battery design may create battery formats that are somewhat interchangeable and suited to both vehicle and domestic applications without need for redesign or reconstruction.

Perhaps we are about to witness the next generational shift in energy generation and storage driven by the ever-improving capabilities of the humble battery.

The Conversation

Jose Alarco, Professorial Fellow, Queensland University of Technology and Peter Talbot, Professorial Fellow, Queensland University of Technology

This article was originally published on The Conversation. Read the original article.

How building codes save homes from cyclones, and how they don’t

By Wendy Miller, Queensland University of Technology

During Queensland’s preparations for Severe Tropical Cyclone Ita, Queensland Premier Campbell Newman advised residents who lived in older houses (those built before 1985) to evacuate their homes as they were not likely to stand up to the storm’s destructive winds.

In the event, the damage was largely to the electricity network, while Cooktown, very close to the path of the storm, suffered less destruction than had been feared.

But the episode still raises the question: what was so special about 1985?

That was the year that building regulations changed to require new houses in cyclone-prone areas to be able to withstand higher winds. But how were these regulations determined, what do they mean for modern homes, and why do regulators always seem to wait until after a severe storm before updating the codes?

Updating the regulations

Building codes are drawn up by the Australian Building Codes Board (ABCB), which cites its mission as addressing “issues of safety and health, amenity and sustainability”.

Its job is to set minimum standards for the design, construction and performance of buildings to “withstand extreme climate related natural hazard events”. It is then up to each state and territory to adopt the recommended standards.

After natural disasters, the ABCB examines the nature of building damage to decide whether the regulations provide enough protection. During Cyclone Tracy in 1974, 70% of Darwin’s houses suffered severe damage (90% in some areas), causing 65 deaths and damage worth hundreds of millions of dollars. It was obvious that existing building standards were not protecting the community.

As a result, the regulations were changed in the 1980s to improve the construction processes that attach the roof to the rest of the house, making homes more resistant to severe wind damage.

Analysis after cyclones Vance (1999), Larry (2006) and Yasi (2011) showed that the updated regulations have resulted in much less building damage and consequent loss of life. During Cyclone Yasi, for example, 12% of older homes suffered severe roof damage, but only 3% of newer homes.

Of course, this does not mean that newer homes are completely impregnable. Analysis of damage from Cyclone Yasi showed some remaining “weak points”: tiled roofs, sheds, garage doors, and doors/windows. It was found that more attention needs to be paid to the design, testing, installation, use and maintenance of products, components and fixings. Revised standards have since been developed for roof tiles, garage doors and shed design.

The right way to think about risk

Campbell Newman’s comment about older houses was partially correct – houses built to older standards were indeed more likely to suffer damage. But his statement is perhaps also misleading. Current building standards for Far North Queensland are designed to protect structural integrity in winds up to a Category 4 cyclone. If Ita had crossed the coast and maintained its Category 5 intensity, it is possible that all of Cooktown’s houses – old and new – would have been subject to severe damage.

It is crucial that the community understands what hazards and risks are being addressed by building regulations, and which ones are not.

For example, current regulations address wind loading associated with cyclones, but take no consideration of wind-driven rain (a major cause of water damage). There are also three specific hazards that are not addressed: hail, storm surges and heatwaves. The first two present risks to property; the third contributes to heat stress, which is in turn linked to health problems and deaths.

When deciding whether and how to update the regulations, the ABCB considers both costs and benefits. Regulations will only change if the ABCB and its stakeholders determine that the cost of the changes (such as higher building costs) is less than the benefits (the expected savings in reduced damage).

As a result, the regulations establish minimum standards, not best practice standards.

The community’s role

Home owners and residents therefore need to be aware of these limitations in the building regulations, and be much more proactive in determining what level of risk is appropriate to their circumstances. A simple risk assessment identifies three things:

  1. What hazards and risks might your house/unit/building be exposed to?
  2. What is the likelihood and frequency of those hazards?
  3. What are the consequences if the event happens?

Property damage is perhaps the first risk that people consider, particularly in relation to natural hazards such as cyclones, floods and bushfires. But buildings and their contents can also be damaged by heavy rain, hail, tidal surges, ground movement, and other phenomena.

There are also financial risks associated with not taking action, such as increased power prices and insurance premiums. These financial risks affect everyone in the community.

Meanwhile, people who are directly affected by property loss can also suffer resettlement costs and loss of income.

What should you do?

Listening to the authorities is important, but you should take responsibility for your own household risks too. One way to do this is to take out home insurance, but you need to be aware of what risks your insurance does and does not cover, and what your responsibilities are.

Another way to manage your risk is to take these things into consideration when you are building, buying or renting a property. Find out how a building has been designed and constructed to manage these risks. Ask the architect, designer, builder, estate agent, landlord, body corporate or local council for documentary evidence.

The insurance sector could also play a more proactive role in promoting better building design, perhaps by offering lower premiums for buildings with stronger construction.

Consideration also needs to be given to changing the way damaged buildings are evaluated and repaired after a disaster. New Zealand has acknowledged that the earthquake recovery process provides an opportunity for creating a more resilient city, not just restoring what was lost.

What is needed is a more collaborative approach to withstanding risks to our buildings, our property, and even our health. The ABCB seems to be moving in this direction, and it should not be expected to go it alone.

We all have a role to play in creating robust and resilient neighbourhoods that stand up to natural hazards.

The Conversation

Wendy Miller receives / has received funding from the Australian Research Council and the National Climate Change Adaptation Fund.

She is being sponsored by ICPS Australia to present her research findings at the Australian and New Zealand Disaster and Emergency Management Conference on the Gold Coast in May.

This article was originally published on The Conversation.
Read the original article.

How to keep your house cool in a heatwave

By Wendy Miller, Queensland University of Technology

Should you open or close your house to keep cool in a heatwave? Many people believe it makes sense to throw open doors and windows to the breeze; others try to shut out the heat. Listen to talk radio during a hot spell and you are likely to hear both views.

In a modern house the best advice is to shut up shop during the heat of the day, to keep the heat out. Then, throw open the windows from late afternoon onwards, as long as overnight temperatures are lower outside than inside.

But our research shows that opening and closing doors, windows and curtains is just one of the factors at play. To really stay cool when the heat is on, you also need to think about what type of house you have, and what its surroundings are like.

The traditional “Queenslander” house has long been seen as ideally suited for hot weather. Such houses have great design features for cooling, including shady verandas and elevated floors. But the traditional timber and tin construction provides very little resistance to heat transfer.

If uninsulated homes are closed up during a heatwave, they will very likely become too hot. This has led people to open up their houses, to stop them getting much hotter inside than outside.

But in temperatures of 40C and above, one could argue that both strategies (opening and closing) in an uninsulated house would result in very uncomfortable occupants. Such houses would also not meet current building regulations, as insulation has been required in new houses since 2003 (or earlier in some parts of Australia).

Our research explores the role of design and construction on occupant comfort in hot weather. We have looked at brick and lightweight houses, as well as those made from less common materials such as structural insulated panels, earth, straw, and advanced glass and roof coatings.

We found that three factors influence the comfort of people inside a house: whether it is opened or closed; its urban context; and its construction materials. Having a better understanding of these factors could help you to keep cool this summer – or prepare for the next one.

To breeze or not to breeze

Whether they have air-conditioning or not, we found that people usually approach hot weather in the same way: by opening doors and windows to capture breezes.

People in both groups also tended to shut up the house if it gets hot outside, or if there is no breeze, or before switching on the air-conditioner if they have one. Most participants in our survey, which looked at homes less than 10 years old, also used ceiling fans to create air movement.

Occupants tape foil to the inside of windows to try to stop their home from overheating in Queensland.

But our research showed that many people failed to take advantage of cooler overnight temperatures, meaning their homes were hotter than the outside during the night. This may mean that houses have not been designed to get rid of daytime heat. Or that people aren’t opening the windows overnight to allow the house to cool down.

The impact of context

The research shows that occupants first try natural ventilation to achieve comfort. But the success of this strategy depends on the urban context of the house. This includes factors such as housing density, streetscape and microclimate.

For example, during a hot spell in 2013 an Ipswich estate experienced minimum and maximum temperatures that were 3-4C hotter than at the local weather station. Restricted air movement due to nearby buildings, and radiant heat from hard surfaces such as concrete, can both drive temperatures up.

Built for comfort

Both the housing industry and occupants seem to have little understanding of the impact design and construction have on the temperature inside the building. As a result, air-conditioning is now seen not as desirable, but as a necessity. This does not have to be the case.

Most houses are built to minimum regulations (5-6 stars out of 10). There is also evidence that, with poor construction practices and virtually non-existent compliance testing, many would fail to meet even this level.

What does this mean for comfort year-round, and in a heatwave?

In inland southeast Queensland, a 6-star home will have an internal temperature of 18-28C for 80-85% of the time. In a typical year, its temperature will be above 30C for between 300 and 350 hours (3.5% of the time). Heat-wave conditions would result in more hours above 30C.

In contrast, a 9- or 10-star house in the same climate would deliver more “comfort” hours (85-95%) and would be above 30C less than 2% of the time. These houses are designed to slow down the transfer of heat, meaning they naturally stay cooler for longer. And there is no (or little) need for air-conditioning!
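
Translating those percentages into hours makes the comparison concrete (there are 8,760 hours in a year; the figures below simply restate the percentages given above):

    hours_per_year = 24 * 365                     # 8,760 hours

    six_star_above_30 = 0.035 * hours_per_year    # about 307 hours, matching the 300-350 quoted
    nine_star_above_30 = 0.02 * hours_per_year    # fewer than about 175 hours

    print(round(six_star_above_30), round(nine_star_above_30))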

This 9-star home uses 48% less electricity than the south-east Queensland average.

A wide variety of design and construction techniques and materials can be used to achieve such high performance houses in every climate zone in Australia.

Open and shut case

So when facing a heatwave, should we open up our houses or close them up? The answer is… it depends.

If your home is well insulated and shaded, it should be able to resist several days of extreme heat. Closing doors, windows and curtains during the heat of the day can help the house stay cooler than outside. Ceiling fans provide air movement to make you feel cooler.

Opening the house as much as possible from late afternoon to early morning is beneficial if overnight temperatures will fall below your inside temperature.

Air conditioning a poorly insulated house with little shading is expensive and futile. In a well-insulated and shaded house, air-conditioning can be used quite efficiently by using the same strategies as above. A higher thermostat setting (perhaps 26-28C), combined with ceiling fans, can provide comfort with lower running costs. This can also reduce strain on the electricity network.

Whether air-conditioned or not, houses can be designed specifically for their climate, to limit the flow of heat between the outside and inside. The higher the star rating of the house, the more effectively it stops unwanted heat from entering the house. Different strategies are required for different climates.

Of course, the knowledge that you might be more comfortable in a different house is likely to be cold comfort as you swelter through this summer. But perhaps you can prepare a “cool comfort” plan for next summer.

The Conversation

Wendy Miller has conducted consultancy research for Metecno Pty Ltd, the Australian Glass and Glazing Association and Ergon.

She has also received funding from the Sustainable Built Environment National Research Centre and the Australian Government through the National Climate Change Adaptation Research Facility.

This article was originally published on The Conversation.
Read the original article.

Weighing the environmental costs: buy an eReader, or a shelf of books?

Tom Rainey, Queensland University of Technology

Bookshelves towering floor to ceiling filled with weighty tomes, or one book-sized device holding hundreds of “books” in electronic form: which one of these options for the voracious reader creates the least damaging environmental footprint?

There is no easy answer to the question, dependent as it is on personal environmental values and a reader’s reading habits. eReaders tend to be popular not only amongst voracious readers but also amongst occasional readers, who might previously have only owned a handful of books, complicating the question further.

Regardless, more can be done to improve the environmental performance of both eReaders and paper publications.

The environmental consequences of pulp and paper manufacturing are well documented, even if the worst excesses have now been corrected. But at least once the paper is made and the book published, there are no significant further negative impacts and the carbon is captured.

eReaders have a higher environmental cost per unit – but unlike books, you can get by with only one.
Christchurch City Libraries

There are higher environmental costs involved in manufacturing an eReader unit compared to a unit of paper, and there are ongoing operational effects. However, one eReader can hold any number of eBooks, newspapers and magazines – which means that eReader users purchase fewer printed publications.

Trying to environmentally promote or denigrate – depending on your point of view – one form of reading over another is inevitably controversial, and perhaps futile. It is not just about numbers, such as tonnes of CO₂, raw materials and waste, but also about human behaviour and interpretation of the impacts.

For example, is the logging of (mostly plantation) trees of greater environmental significance than the extraction of limited resources of rare earth metals? Is it more important to consider the greenhouse effect of CO₂ emissions rather than the health effects of air and water quality? These are just a few of the many environmental issues involved.

Much of the discussion about eReaders versus paper books has taken place with the best of intentions and indeed makes the most of available information. But the fact remains that reliable information at the required scale (both micro and macro) is not available, and probably never will be because of the cost of acquiring that information in light of how quickly it becomes redundant.

The few areas where commentators are in agreement are that:

  1. eReaders will continue to increase their share of human reading needs regardless of environmental considerations – few people will make purchases based on environmental credentials;
  2. Paper based reading will continue to meet a significant proportion of reading needs;
  3. The more eBooks read on a single eReader, the greater the potential offset versus paper books (a simple break-even sketch follows this list). Depending on who you believe and what is being compared, that might be 20-100 paper books for equivalent CO₂ emissions, or 40-70 paper books taking into account other impacts like fuel, water, minerals and human health. But that does not mean either option has an impact that is good – both can improve; and
  4. The lowest long-term environmental impact remains sharing paper books, buying second-hand books and borrowing books from a library (provided you catch public transport there). While a feel-good option, this is an unlikely game changer.
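
A simple way to picture point 3 is as a break-even calculation. The footprint figures below are purely illustrative assumptions, chosen only so that they land inside the ranges quoted above; as noted, reliable per-unit data is hard to come by.

    # Break-even: how many printed books must one eReader displace before its
    # larger manufacturing footprint is "paid back"? Both figures are assumed.
    ereader_footprint_kg_co2 = 40.0   # whole-of-life footprint of one device (assumed)
    book_footprint_kg_co2 = 1.0       # footprint of one printed book (assumed)

    break_even_books = ereader_footprint_kg_co2 / book_footprint_kg_co2
    print(break_even_books)   # 40 books, within the 20-100 range quoted above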

Borrowing from libraries, sharing books or buying second-hand minimises the environmental footprint left by your reading habits.
Marcus Hansson

Inevitably the eReader and paper books (both including newspapers and magazines) have their environmental pluses and minuses. These cover the cradle to grave elements: sourcing and extraction of raw material sources; processing materials and manufacturing products (including byproducts and disposal); distribution and retailing; end user uses (including maintenance and replacement); disposal; and transport at all stages.

Each of these elements has within it considerations of sustainability, energy consumption (source of fuel and production of emissions), health and environmental hazards, air and water pollution, and waste disposal.

Then there are further individual human behaviour variables such as how the eReader or paper book is used, frequency of use, frequency of replacement (including planned obsolescence) and recycling/solid waste disposal.

For example, any environmental benefits arising from using an eReader and not buying paper books are likely to vanish if, like many of us, people give in to the temptation to update their reading device every year or two – long before it stops working.

A full Life Cycle Analysis of books versus eReaders might be desirable but is difficult and potentially misleading. These analyses rely on averages or a range of performance inputs and outputs. For the consumer it is difficult to evaluate all the issues let alone compare the different approaches to reading.

Both eReaders and paper publications are likely to be part of our reading future.
Annie Mole

The future will have both eReaders and paper publications. Rather than comparing one with the other for the “best” environmental credentials, it would be better to aim at improving the environmental performance of each.

We should require manufacturers to strive for the smallest possible footprint in a sustainable cradle-to-grave operating environment. If manufacturers transparently demonstrate they are meeting this objective, then consumers have the option to prefer their products. Responsible environmental behaviour by consumers is a further critical element in maintaining a sustainable reading environment.

Nonetheless, sharing a book appears to be the best way to ensure you minimise the impact of your reading habits.

This article was written with the assistance of Dr Bruce Allender, Microscopist & Environmental Specialist at Covey Consulting.

The Conversation

Tom Rainey is Research Fellow at Queensland University of Technology

This article was originally published on The Conversation. Read the original article.

Explainer: the Doppler effect

Gillian Isoardi, Queensland University of Technology

When an ambulance passes with its siren blaring, you hear the pitch of the siren change: as it approaches, the siren’s pitch sounds higher than when it is moving away from you. This change is a common physical demonstration of the Doppler effect.

The Doppler effect describes the change in the observed frequency of a wave when there is relative motion between the wave source and the observer. It was first proposed in 1842 by Austrian mathematician and physicist Christian Johann Doppler. While observing distant stars, Doppler described how the colour of starlight changed with the movement of the star.

To explain why the Doppler effect occurs, we need to start with a few basic features of wave motion. Waves come in a variety of forms: ripples on the surface of a pond, sounds (as with the siren above), light, and earthquake tremors all exhibit periodic wave motion.

Two of the common characteristics used to describe all types of wave motion are wavelength and frequency. If you consider the wave to have peaks and troughs, the wavelength is the distance between consecutive peaks and the frequency is the count of the number of peaks that pass a reference point in a given time period.

Snapshot of a moving wave showing the wavelength.
Gillian Isoardi

When we need to think about how waves travel in two- or three-dimensional space we use the term wavefront to describe the linking of all the common points of the wave.

So the linking of all of the wave peaks that come from the point where a pebble is dropped in a pond would create a series of circular wavefronts (ripples) when viewed from above.

Wavefronts emerging from a central source.
Gillian Isoardi

Consider a stationary source that’s emitting waves in all directions with a constant frequency. The shape of the wavefronts coming from the source is described by a series of concentric, evenly-spaced “shells”. Any person standing still near the source will encounter each wavefront with the same frequency that it was emitted.

Wavefronts surrounding a stationary source.
Gillian Isoardi

But if the wave source moves, the pattern of wavefronts will look different. In the time between one wave peak being emitted and the next, the source will have moved so that the shells will no longer be concentric. The wavefronts will bunch up (get closer together) in front of the source as it travels and will be spaced out (further apart) behind it.

Now a person standing still in front of the moving source will observe a higher frequency than before as the source travels towards them. Conversely, someone behind the source will observe a lower frequency of wave peaks as the source travels away from it.

Wavefronts surrounding a moving source.
Gillian Isoardi

This shows how the motion of a source affects the frequency experienced by a stationary observer. A similar change in observed frequency occurs if the source is still and the observer is moving towards or away from it.

In fact, any relative motion between the two will cause a Doppler shift (the Doppler effect) in the frequency observed.
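
For a sound wave this relationship has a standard textbook form. If v is the speed of the wave in the medium, v_o the speed of the observer and v_s the speed of the source (each taken as positive when moving towards the other), the observed frequency is:

    f_{\text{obs}} = f_{\text{src}} \, \frac{v + v_o}{v - v_s}

Motion apart simply flips the signs of v_o and v_s, lowering the observed frequency instead of raising it.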

So why do we hear a change in pitch for passing sirens? The pitch we hear depends on the frequency of the sound wave. A high frequency corresponds to a high pitch. So while the siren produces waves of constant frequency, as it approaches us the observed frequency increases and our ear hears a higher pitch.

After it has passed us and is moving away, the observed frequency and pitch drop. The true pitch of the siren is somewhere between the pitch we hear as it approaches us, and the pitch we hear as it speeds away.
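
As a rough worked example (the siren pitch and vehicle speed are assumed round numbers, and sound travels at roughly 343 m/s in air):

    # Observed siren pitch for a vehicle approaching and then receding
    v_sound = 343.0      # speed of sound in air, m/s (approximate)
    f_siren = 700.0      # frequency emitted by the siren, Hz (assumed)
    v_vehicle = 20.0     # vehicle speed, m/s (assumed, about 72 km/h)

    f_approaching = f_siren * v_sound / (v_sound - v_vehicle)  # about 743 Hz
    f_receding = f_siren * v_sound / (v_sound + v_vehicle)     # about 661 Hz

    print(round(f_approaching), round(f_receding))

A listener by the roadside hears the pitch drop by roughly a whole tone as the vehicle passes, even though the siren itself never changes.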

For light waves, the frequency determines the colour we see. The highest frequencies of light are at the blue end of the visible spectrum; the lowest frequencies appear at the red end of this spectrum.

If stars and galaxies are travelling away from us, the apparent frequency of the light they emit decreases and their colour will move towards the red end of the spectrum. This is known as red-shifting.

A star travelling towards us will appear blue-shifted (higher frequency). This phenomenon was what first led Christian Doppler to document his eponymous effect, and ultimately allowed Edwin Hubble in 1929 to propose that the universe was expanding when he observed that all galaxies appeared to be red-shifted (i.e. moving away from us and each other).

The Doppler effect has many other interesting applications beyond sound effects and astronomy. A Doppler radar uses reflected microwaves to determine the speed of distant moving objects. It does this by sending out waves with a particular frequency, and then analysing the reflected wave for frequency changes.

It is applied in weather observation to characterise cloud movement and weather patterns, and has other applications in aviation and radiology. It’s even used in police speed detectors, which are essentially small Doppler radar units.
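
The same relationship lets a radar unit turn a measured frequency shift back into a speed. A minimal sketch, assuming a 24 GHz operating frequency (typical of small traffic radars, though actual units vary):

    # Reflection off a moving target shifts the frequency by about 2*v*f/c,
    # so the speed can be recovered from the measured shift.
    c = 3.0e8           # speed of light, m/s
    f_radar = 24.0e9    # radar operating frequency, Hz (assumed)

    def target_speed(frequency_shift_hz):
        return frequency_shift_hz * c / (2 * f_radar)

    print(target_speed(4800.0))   # a 4.8 kHz shift corresponds to 30 m/s (108 km/h)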

Medical imaging also makes use of the Doppler effect to monitor blood flow through vessels in the body. Doppler ultrasound uses high frequency sound waves and lets us measure the speed and direction of blood flow to provide information on blood clots, blocked arteries and cardiac function in adults and developing fetuses.

Our understanding of the Doppler effect has allowed us to learn more about the universe we are part of, measure the world around us and look inside our own bodies. Future development of this knowledge – including how to reverse the Doppler effect – could lead to technology once only read about in science-fiction novels, such as invisibility cloaks.

See more Explainer articles on The Conversation.

The Conversation

Gillian Isoardi is Lecturer in Optical Physics, Science and Engineering Faculty, at Queensland University of Technology

This article was originally published on The Conversation. Read the original article.

Explainer: is recycled paper really better for the environment?

Tom Rainey, Queensland University of Technology

For many years, individual consumers, industries and governments have all purchased printing and writing paper made with a high recycled-fibre content.

Why? Because they believe it is the most responsible and environmentally friendly thing to do. But is it really better for the environment?

Traditionally, fibre for paper and paper packaging comes from the wood of trees, with other sources of fibre – such as sugarcane and straw – making a minor contribution.

In more recent times, waste paper has also been used as a source of fibres. These recycled fibres are processed to make paper products similar to those made from original (virgin) wood fibres.

Paper used in stationery products is often a blend of virgin and recycled fibres. This is to maintain the brightness of the paper – virgin fibres produce whiter paper – while minimising environmental impacts. Manufacturers also tend to “remix” the blend depending on whether there is more recycled or virgin fibre available.

Products made from 100% recycled content are usually of a lower quality than those from virgin fibres. But 100% recycled products are still very suitable for most stationery applications.

White paper in; white paper out

Recycled paper is now everywhere in our lives. But recycling paper and then reusing the fibres is not always as simple as it might seem.

There are real benefits in paper recycling.
dwwebber

To manufacture recycled paper, the source paper needs to be white and similar in composition to what the recycled paper will be used for. For instance, if you’re going to make recycled paper for use in office photocopiers, you’ll need waste office paper to begin with.

Daily newspapers and weekly magazines are usually not suitable. Although they are white, their “whiteness” is often from clay powder coating, and the fibres themselves discolour readily.

(Newspapers are generally recycled to make newspapers. Cardboard is usually recycled into other “brown” products, including paper bags and corrugated cardboard.)

Where does the fibre come from?

To make recycled paper that is good enough to meet consumer demand, manufacturers have to collect and sort enough high-quality white paper.

In relatively low-population-density countries such as Australia this can be a costly exercise. (Driving bundles of waste paper between cities is expensive in a country this big).

But when the international price of virgin white fibre goes up – making non-recycled paper prohibitively expensive – it becomes more attractive to pay for the local collection and processing of waste white paper into recycled white fibre.

When you head to your local stationery store, the products you see on the shelves are made from both imported – usually from south-east Asia, east Asia or the Americas – and locally-made paper.

Virgin fibres often come from plantation forests.
hannanik

These products may be marketed as having various levels of recycled fibre content up to 100%. Any virgin fibre content is usually labelled as coming from certified, sustainable wood sources – selected native forests and plantations – rather than from tropical rainforests or old growth native forests.

This is because consumers are now demanding more sustainable, certified papers, and tend to avoid papers from old-growth forests.

(Certification of a paper’s origin is done by an independent authority such as the Forest Stewardship Council (FSC)).

Product and brand credibility would be at risk if a supplier chose to make incorrect claims about the paper they sold.

But of course, not all claims are entirely watertight. For example, tropical rainforest tree species have recently been identified in stationery papers imported into Australia.

But the provenance of paper is difficult for consumers to independently confirm, and can be difficult to prove even by experts with specialist laboratory analysis of the paper.

Better for the environment?

From an environmental-impact point of view, using products made from recycled fibre or virgin fibre is not an either/or proposition.

It makes sense to recycle paper (and use recycled-paper products) where possible, even though there are some environmental penalties involved in the production of recycled paper. These penalties can include the fuel used to collect and transport waste paper over long distances.

There are significant costs involved in collecting paper for recycling.
AFP/Richard A. Brooks

Also, we can’t keep recycling the paper we have now forever. Virgin fibres have to be introduced into the process at some point.

On average, a fibre can be recycled seven times before it is too degraded to make paper. Because of this fact, there has to be a supply of virgin wood fibres to maintain the supply and quality of fibres and, therefore, the paper.

Fortunately, the modern processes for extracting virgin fibres are self-sufficient in biomass energy. When making stationery paper, the natural adhesive that binds the wood together (lignin) is dissolved from the wood and then burnt to power the process. Any shortfall in energy can be made up by burning forestry waste such as tree bark.

These processes have minimal environmental impact, provided renewable and certified wood sources are used.

Stationery papers are only a part of the paper and paper packaging industry, but consumers can choose to buy quality goods with recycled fibre content, according to their preference for quality, environmental impact and cost.

Consumers should be aware that recycled paper can have, but does not always have, an environmental advantage, and that they may be charged more for the privilege. The best protection is to buy paper produced entirely in Australia.

This article was written with the assistance of Dr Bruce Allender, Microscopist & Environmental Specialist at Covey Consulting.

The Conversation

Tom Rainey is Research Fellow at Queensland University of Technology

This article was originally published on The Conversation. Read the original article.