Conor D’Arcy, Interim Chief Executive, Money and Mental Health Policy Institute

What AI could mean for our money and mental health

16 June 2023

  • Artificial Intelligence (AI) can potentially support people with mental health problems by analysing transaction data to identify when they are struggling with their mental health and finances.
  • AI can also alleviate the administrative burden by automating tasks such as form filling, email drafting, and fund management, which can be challenging for people experiencing poor mental health.
  • Personalising customer experiences through AI can help bridge the service gap faced by people with mental health problems, making mainstream services more accessible and inclusive.
  • There are also potential dangers: sensitive data, including mental health information, could be misused for exploitative purposes, and AI brings risks of biased decision-making and of reduced access to human support for the people who rely on it.

It’s hard to get through a meeting these days without Artificial Intelligence (AI) being mentioned. Just last week, its impact – for better or for worse – cropped up in discussions I was in on pensions (launching our new report), debt advice and recruitment. 

With the mind-boggling pace of change, predicting what might be possible even a few months down the line feels foolish. But when it comes to financial services and the experiences of people with mental health problems, it’s already clear there are huge opportunities and risks that need grappling with, sooner rather than later.

The opportunities

We’ve long been excited by the potential to use transaction data to spot when a customer is struggling. Missed payments, high gambling losses or a sudden drop in income could all be triggers for banks and others to deliver proactive support. While that could benefit anyone, a common symptom many of us with mental health problems face is difficulty asking for help. Breaking down those barriers, and using AI to understand which messages are most effective at which moments, could help to nip money worries in the bud.
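To make that concrete, here is a minimal sketch of what a rule-based first pass over transaction data might look like. It is illustrative only: the Transaction fields, categories and thresholds are assumptions rather than any firm’s actual system, and a real deployment would sit behind informed consent and careful testing of the messages it triggers.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Transaction:
    """Hypothetical transaction record; field names are illustrative."""
    posted: date
    amount: float         # negative values are money out
    category: str         # e.g. "gambling", "income", "direct_debit"
    missed_payment: bool  # set by the bank when a payment bounces

def support_triggers(transactions: list[Transaction],
                     gambling_threshold: float = 200.0,
                     income_drop_ratio: float = 0.5) -> list[str]:
    """Return reasons a proactive check-in might help.

    Assumes transactions are in chronological order; the thresholds
    are made up for illustration and would need careful tuning.
    """
    reasons = []

    # Trigger 1: any missed payment in the period.
    if any(t.missed_payment for t in transactions):
        reasons.append("missed payment detected")

    # Trigger 2: gambling spend above a threshold.
    gambling_spend = -sum(t.amount for t in transactions
                          if t.category == "gambling" and t.amount < 0)
    if gambling_spend > gambling_threshold:
        reasons.append(f"high gambling spend ({gambling_spend:.2f})")

    # Trigger 3: latest income much lower than the previous one.
    incomes = [t.amount for t in transactions if t.category == "income"]
    if len(incomes) >= 2 and incomes[-1] < income_drop_ratio * incomes[-2]:
        reasons.append("sudden drop in income")

    return reasons
```

In practice, which of these flags leads to an actual message, and how that message is worded, is exactly where the customer testing discussed later in this piece matters most.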

But some members of our Research Community – 5,000 people, all with personal experience of mental health problems – told us they’d be supportive of their bank using their data to look beyond their financial health.

“In an ideal world… [financial services firms] could help me to understand what areas of spend indicate that I am heading for an episode of depression. I think that there may be a pattern that I follow but can’t always see it.” Expert by experience

Easing the admin burden is another exciting area. When your mental health is poor, finding the energy to fill in forms or make sure there’s enough money in the right account can be impossible. AI helpers that can do some of the legwork – pre-filling forms, drafting emails or shuffling funds around – could be transformative.

Somewhat ironically, AI also provides the chance for a more ‘personalised’ customer experience. Whenever we speak to firms, we emphasise the importance of inclusive design: if your website, app, phone lines or processes don’t work for the one in four of us who experience mental health problems in any given year, they don’t work. Making mainstream services accessible will continue to be vital, but using AI to adapt them more easily to individuals’ needs could close this ‘service gap’ that many of us with mental health problems encounter.

The dangers

Given how broad the applications of AI might be, understanding how and where your data will be deployed is crucial. Long and confusing terms and conditions or permissions that are excessively broad raise the risk that sensitive information – including about our mental health – is used in ways we didn’t want or realise we were allowing.

Health data is always sensitive, but that’s particularly true when it comes to mental health. The added risks that symptoms of mental health problems can expose us to have been a big focus of our work.

For instance, we’ve found that people with mental health problems are three times more likely to have been scammed online and are more likely to struggle with compulsive spending. If data relating to a person’s condition is available, or a condition can be inferred, there’s a real risk that – legally or illegally – they’re targeted by exploitative ads or fraudulent messages.

Bias is one of the most-discussed dangers of AI. For people with mental health problems, one major concern is being locked out of products without the opportunity to provide context on past behaviour. That removal of humans from the decision-making loop might also be replicated in general customer support.

For the reasons mentioned above and more, people with mental health problems often prefer to use face-to-face support, for instance in a bank branch. If increasingly powerful AI makes it harder and harder to speak to a real person, we run the risk of people who rely on those services getting left behind.

Positives won’t just happen on their own

The truth about many of the potential upsides is that versions of them could, and should, already be happening – but progress seems limited. What’s holding firms back?

It’s several factors, but fear of customer reactions, muddy regulatory waters and a lack of profit motive strike me as the biggest. The fear of negative responses is understandable: there will undoubtedly be some customers who are not up for their data being used like this. Testing products with a wide range of customers to understand their concerns and the risks, and getting informed consent, should help to minimise backlash. As for the desire for more regulatory clarity on what’s allowed and what’s not, ongoing dialogue between firms and regulators will be key, and initiatives like ‘sandboxes’, which allow firms to test out ideas in a safe way, offer an important route forward.

The lack of a profit motive might be the biggest hurdle. And it’s where AI starts to look less like a break from recent history and more like a continuation of it. The comparison with Open Banking is instructive. We were optimistic about what tools it could lead to that would help people with mental health problems to better manage their money. But over the last few years, we’ve seen relatively few of the exciting ideas out there reach any sort of broad market. 

Research on one of them suggests the problem wasn’t that the idea failed to deliver a real benefit to people in more vulnerable circumstances; it was the difficulty of commercialising it. Ideas like prizes and targeted funding from large research councils will help, but more imaginative responses from government and regulators may be needed.

When AI comes up in those recurring conversations, I’m still torn on whether I think it’ll be, on balance, a good or a bad thing for people with mental health problems. It’s a new set of questions but the same old challenge: how can we make change benefit everyone, not just those who the system already works for?