Artificial intelligence is taking the consulting industry by storm – should we be concerned?
- Written by Tapani Rinta-Kahila, ARC DECRA Fellow / Lecturer in Business Information Systems, The University of Queensland
Artificial intelligence (AI) is enjoying a long moment in the spotlight. But debate continues over whether it’s a “shortcut to utopia” or possible harbinger of the end of the world.
Meanwhile, one group has quickly leapt on both the technology and all the hype – consulting firms. And they’ve been spending big.
Advocates of the technology are heralding a new era of professional efficiency for consultants. Once-tedious emails, presentations and reports can now be completed in a flash.
Many consulting firms have also seized the opportunity to professionally advise other businesses on making the most of new generative AI tools.
So why does the consulting industry see such potential for transformation – and is hurling itself headfirst at this technology a good idea?
Wait, what do consultants do?
The consulting industry is notoriously shrouded in mystique, despite regularly winning huge contracts from governments and major businesses.
But in simple terms, consultants aim to offer their clients expert advice and solutions to help improve their performance, solve problems and achieve certain goals.
They often possess specialised knowledge, skills or experience relevant to a particular client, so the nature of their work can vary significantly.
Clients often seek consulting services because they want help with problem-solving and decision-making on a particular project, or want external validation for their decisions and need an independent report.
A wide range of professional services firms offer consulting services, including the “big four”: Deloitte, PricewaterhouseCoopers (PwC), Ernst & Young (EY) and KPMG.
There are also many specialist consulting firms, including McKinsey, Bain & Company, Boston Consulting Group, Kearney and L.E.K. Consulting.
Much of the demand for consultancy services is driven by the increasing complexity of doing business, due to globalisation, digitalisation, changing regulations and many other factors.
However, growth in Australia’s consulting sector has slowed this year amid the fallout from the PwC tax leaks scandal and sluggish economic growth.
How can AI help?
Artificial intelligence (AI) technology has been around in a range of forms for a while now. But until recently, it was mainly used internally by organisations and required specific training. This changed with the public launch of “generative AI” models, such as OpenAI’s ChatGPT.
These models differ from traditional AI in their capability to generate something new, such as text that is virtually indistinguishable from text written by a human, or other types of output such as images, videos or sounds.
Large language models like ChatGPT were some of the first to give the general public a sense of what AI could be used for.
But this has had some specific implications for consulting businesses. The models can analyse large amounts of data quickly and cheaply to generate tailored feedback.
With generative AI offering such efficient services for business analysis and strategic planning, many would-be clients might now be questioning whether buying consulting services will remain worthwhile in the long run – particularly as these technologies improve.
Getting ahead of the curve
It should therefore come as no surprise that the consulting industry is investing heavily in generative AI. Just take the big four, for example.
Deloitte and EY have already deployed conversational AI assistants aimed at boosting staff productivity.
KPMG’s customised version of ChatGPT – KymChat – was launched in March to speed up the preparation of sales proposals for consulting work, by quickly identifying relevant experts.
In May, PwC became OpenAI’s biggest enterprise customer after purchasing more than 100,000 licences for the AI giant’s latest models.
Other players operating in the knowledge work space have also been hyping up the productivity-boosting potential of generative AI for similar tasks.
US finance behemoth JPMorgan Chase recently rolled out its own large language model called LLM Suite, which it says can “do the work of a research analyst”.
What does this mean for the business model?
To harness this technology’s potential, firms will need to use it effectively. This means ensuring they maintain a human-centred value proposition for clients that goes beyond what the technology alone can offer.
Generative AI tools will not replace the human trust that is often crucial for successful consulting, nor provide the depth of specialised knowledge (and access to relevant human experts) that a seasoned consultant currently can.
But the technology will streamline a lot of everyday tasks. It can also provide a sounding board for decisions and business strategies, and suggest solutions.
At least in the near term, firms are likely to use AI to “augment” human consultants, rather than replace them.
What are the risks?
The uptake of generative AI also presents risks to the consultancy business.
One big one concerns creativity. Since the models produce their outputs based on past data, the range of potential solutions they can identify will always be limited to their training data.
Excessive reliance on the same models could start eroding consulting firms’ ability to innovate, diluting their distinct competitive advantages and making them increasingly resemble one another.
Such a phenomenon has already been observed in research looking into generative AI’s effect on student creativity.
My own research has shown that excessive reliance on automation technologies like generative AI can lead to the erosion of professional expertise. In the long run, this effect could seriously damage organisations’ knowledge and business reputation.
If younger consultants still in training offload too much of their thinking and analytical work to generative AI, they may fail to develop their own analytical abilities.
And of course, the technology itself isn’t perfect. Generative AI is known to make mistakes and even “hallucinate” – that is, completely make things up.
All of this highlights the importance of using generative AI thoughtfully, and perhaps above all, not losing sight of the unique value humans can bring.