Colonial First State sees a future role for generative AI in cutting through the complexity and cost of financial planning, making good advice more accessible.
Transformation, technology and operations group executive Jeroen Buwalda told the Microsoft AI Tour that he saw generative AI as a way to “help clients understand the complexities” of superannuation and wealth management.
“We believe that generative AI’s going to help us simplify the regulated world we live in and help create more access to the advisor industry.”
Buwalda likened the current task of staying abreast of information and rules to “drinking from a fire hydrant”.
“At the moment, it’s almost impossible for anyone in this room to understand the detailed rules around superannuation and taxation, and because it’s so complex, because of the regulatory settings, it’s costing about $3000 to $4000 for a person to tackle proper financial planning.
“If you only have $100,000 in your superannuation account, you’re not going to pay three to four percent for financial planning.
“So just project forward and think through how GenAI could play a role in creating advice accessibility and advisor efficiencies to do that better. We’re hopeful we can get there.”
Buwalda was cognisant of the significant amount of work that lay between that vision and its achievement.
“There’s a long way to go, including making sure we work very closely with our colleagues in government to enable these types of things going forward,” he said.
“We would love to work with the government to make sure that the regulatory settings across our industry are being tuned to where we can experiment more freely with GenAI, given the enormous upside that the technology promises to not only us but to the general population.”
The technology would also need to develop alongside internal capability at Colonial First State.
On that note, the institution is attempting to skill up, setting up a “safe environment” to run generative AI experiments relatively quickly.
Buwalda said the company had made a conscious decision to “work concurrently” on generative AI strategy development, foundational work and experimentation.
“We didn’t want to sequence it,” he said.
Part of the reason was to harness some of the excitement among staff about the technology’s potential.
“[Like] everybody on the back of the launch of ChatGPT, we’ve seen huge demand from our staff to use this capability,” Buwalda said.
“We decided that, rather than trying to protect our organisation and prevent [staff] from using it, we’d very quickly create an environment for them to learn, plan and experiment in, and so that’s exactly what we have done.
“Our mission has been to not block them from using it but do it within the confines of Colonial First State and do it in a safe environment … where we know that our proprietary information is not going to end up on the public cloud somewhere and all the other complications that can arise from that.”
Buwalda said the sandbox-style environment also helped to allay regulatory and risk concerns.
“Being in a regulated industry means that trust is incredibly important, and so that is where we are very cautious in terms of how we use AI, because there’s little room in our industry for error. We need to make sure we deploy these capabilities in a way we feel very comfortable with,” he said.
“That’s why we’re starting inside the org – to gain more knowledge, to experiment, to learn these technologies, to embrace them, before we put them closer to the customer with a human in the loop and in an assisted way.
“Down the track, when we build up confidence and when things like hallucinations and auditability hopefully become less of an issue, that’s where we can consider using it in a different way – in an unassisted way. But we’re some way from that.”
Buwalda noted that auditability was not a problem unique to machine-assisted decision-making; in some settings, humans too must be asked to explain how they reached a decision.
But, he added, it was important to have that explainability.
“We need to try and find out what the rationale for an answer is – and, [like] you would do with a human, ask the question: ‘Would you please explain to me how you reached that answer?’
“We’re not there yet, but similarly we’re trying to understand what the rationale behind an outcome from GenAI was – particularly what the source documents associated with it were – and that way build up our confidence that, over time, we can have a very high level of precision around the answers that come out of it.”
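Buwalda did not describe Colonial First State’s implementation, but the source-document traceability he alludes to is commonly achieved with retrieval-augmented generation, where every answer is returned alongside references to the documents it was grounded in. A minimal, illustrative sketch follows – the corpus, function names and keyword-overlap scoring are all hypothetical stand-ins, not CFS’s actual stack:

```python
from dataclasses import dataclass

@dataclass
class Document:
    doc_id: str
    text: str

# Toy corpus standing in for a firm's internal policy/knowledge base.
CORPUS = [
    Document("super-001", "Concessional super contributions are capped per financial year."),
    Document("super-002", "Withdrawals from superannuation are generally restricted until preservation age."),
    Document("tax-001", "Capital gains on assets held over twelve months may attract a discount."),
]

def retrieve(query: str, corpus: list[Document], k: int = 2) -> list[Document]:
    """Rank documents by naive keyword overlap with the query (a stand-in for a real retriever)."""
    q_terms = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda d: len(q_terms & set(d.text.lower().split())),
        reverse=True,
    )
    return scored[:k]

def answer_with_sources(query: str) -> dict:
    """Return an answer stub plus the source documents it was grounded in, so the rationale is auditable."""
    sources = retrieve(query, CORPUS)
    # A production system would pass `sources` to a language model as grounding
    # context; here we only echo them to show the attribution structure.
    return {
        "query": query,
        "answer": f"Based on {len(sources)} internal document(s): ...",
        "source_documents": [d.doc_id for d in sources],
    }

if __name__ == "__main__":
    result = answer_with_sources("What are the rules on super contributions?")
    print(result["answer"], "| sources:", result["source_documents"])
```

Because each response carries its source document IDs, a reviewer can check the cited material against the answer – the machine analogue of asking a human adviser to show their working.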
Ry Crozier attended the Microsoft AI Tour event in Sydney as a guest of Microsoft.