Well, it looks like the science fiction writers had some surprisingly accurate crystal balls when predicting what life would be like more than 50 years into the future. Autonomous vehicles seem very close for most of us, mixed reality is finding its way into niche solutions, mobile phones put a computer in the pocket of almost everyone, and 3D printing continues to evolve: all examples of this progress.
So where are we today in terms of artificial intelligence (AI) becoming part of our daily personal and work lives?
AI has promised the world, to the extent that there are valid concerns over the future of work. Will we, as a human race, no longer need to work? Which jobs will be replaced by AI, and which skills should we be learning to stay ahead as mundane tasks are automated?
It is anything but a level playing field when it comes to the adoption of AI by governments, organisations and individuals. The effect of this is that there are significant competitive opportunities available today, and equally significant risks for those who do not get involved in the AI revolution now.
What makes things even more difficult is that there is no generally accepted definition of AI. Some would say that an element of machine learning is essential, yet we continually see products marketed as containing AI that simply have a graphical component made to look ‘futuristic’ in some way.
AI can ingest enormous quantities of data, identify characteristics and trends, make predictions, and learn from the results to further refine its algorithms. It may be even more powerful when combined with other emerging technologies, such as robotic process automation (RPA). RPA can be utilised to automate processes, bringing about a higher level of consistency as well as increasing the performance of a system. To consider this another way, RPA can automate the decisions made by AI, turning insights into actions.
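To make the AI-plus-RPA pairing concrete, here is a minimal sketch of the idea in Python. It assumes a hypothetical invoice-approval workflow, and the ‘model’ is a simple stand-in rule rather than real machine learning; the point is only the pattern of a prediction feeding an automated action.

```python
# Illustrative sketch only (not a real RPA framework or ML model).
# All names and thresholds here are hypothetical.

def predict_risk(invoice: dict) -> float:
    """Stand-in for an AI model: returns a risk score between 0 and 1."""
    score = 0.0
    if invoice["amount"] > 10_000:          # large invoices look riskier
        score += 0.5
    if invoice["supplier"] not in {"Acme", "Globex"}:  # unknown supplier
        score += 0.4
    return min(score, 1.0)

def automate_decision(invoice: dict, threshold: float = 0.6) -> str:
    """RPA-style step: turn the model's insight into an action."""
    risk = predict_risk(invoice)
    return "hold_for_review" if risk >= threshold else "auto_approve"

invoices = [
    {"id": 1, "amount": 500, "supplier": "Acme"},
    {"id": 2, "amount": 25_000, "supplier": "Unknown Pty Ltd"},
]
actions = {inv["id"]: automate_decision(inv) for inv in invoices}
```

The design point is the separation of concerns: the prediction step can be swapped for a genuine machine-learning model without changing the automation step that acts on its output.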
What is the risk?
A few risks are present when considering AI for any application. One of these is the concept of explainability, which is sometimes referred to as the ‘black box’ issue. The real power of AI is its ability to make predictions or decisions that no human reasonably could, because it is able to ingest enormous amounts of data and perform complex computations quickly. Exactly how it arrives at its conclusions can be difficult (or impossible) to explain.
When we start our cars each day, we trust that turning the key (or pressing the start button) will cause a series of events to occur that will start the engine. We do not need to know how the system works – but we trust that all the complex systems will work together to achieve the expected goal.
Can it be trusted?
AI may be perceived to have a trust problem, and potentially with good reason. The international community is grappling with how to balance the explainability of AI algorithms with being able to trust that, although we do not know how an algorithm works, the value of its predictions and decisions outweighs the risks.
Do we have control?
Another challenge presented by AI is the fallacy that we have a large degree of control over when and where we will allow AI to make decisions. For us all, AI is not coming; it is here now. AI-powered decisions about which emails should be delivered directly to our mailboxes and which should be marked as junk are commonplace. Some robot vacuum cleaners sold through retail shops today ‘learn’ from previous routes taken through our homes to clean our floors.
Not only is AI finding its way into our mobile phones and many aspects of our daily lives, but it also presents another risk in our organisations. Users and teams do not always wait for centralised IT teams to approve applications and services for use. Shadow IT is the term for the use of unapproved technology without the knowledge of those responsible for providing IT services and protecting corporate systems and information. We are now presented with a new challenge of Shadow AI: the application of AI without the knowledge or approval of IT.
And even if we do not use a particular AI solution because of an intentional decision by ourselves or our teams, how do we know what systems already contain some form of embedded AI that may not be disclosed in any brochure or technical documentation?
Questions for you to consider in your own AI journey:
- In our team, do we have a clear understanding of what decisions are currently being made by, or influenced by, an AI algorithm?
- What external data sources could be utilised in an AI decision-making engine but are not currently used, because incorporating all this information manually would take too long?
- Where are our competitors with AI? Is there an emerging risk that a competitor may beat us to market with an AI-powered feature that our customers will value greatly?
So where do you continue your AI journey?
Every operational process, every new project, and every strategic planning discussion may benefit from consideration of AI risks and opportunities. Experts are available to guide us, new solutions are being developed every day, and your favourite search engine has more than a lifetime of information and videos available to improve your understanding of the technology – just be aware that there may be some AI helping to ensure that your search results are relevant to you personally. Scary? Maybe. Powerful? Absolutely!
Tim Timchur leads the QUTeX Queensland Government Digital Leadership: Digital Project Board Governance Masterclass. Find out more here.