Can I trust AI with my money?

That's the first question people ask when they hear what we're building. It's a good question, because the answer depends entirely on how the AI is used.

AI is genuinely transformative across much of our lives, personal finance included. It can understand patterns, interpret context, spot things a person might miss, and explain what matters in plain language. These are exactly the capabilities that make something like Lucie Money possible. For the first time, technology can do more than show you a dashboard and leave you to figure it out: it can engage with you about your money.

But AI is also new and still evolving. Large language models hallucinate. They generate information that sounds authoritative but is simply wrong. And people know this technology isn't perfect. Research from the University of Arizona, published in 2025 across 13 experiments with over 5,000 participants, found that simply labelling content as "AI-generated" can reduce trust by up to 20 percent, even when the content was accurate.

In many contexts, getting something wrong may be little more than an embarrassment or frustration. In finance, it's a serious problem. A wrong balance, a fabricated transaction, an incorrect spending figure… these undermine the very thing a financial product exists to provide: a trustworthy picture of where you stand.

So the question isn't whether AI should be involved in personal finance. It should. The question is how.

This is where the design of Lucie comes in. While on the surface Lucie looks and operates like an engaging AI-powered collaborator, we've architected it so that financial data flows through deterministic, verifiable paths rather than being generated by the AI. We're building Lucie so that your account balance comes directly from your bank. When Lucie tells you your food spending is up 21.5% this month, that figure will be calculated from your bank data by purpose-built tools, not generated by a language model.

The AI's role is to understand what you're asking, work out what analysis is relevant and explain the results in plain language. It reasons. It contextualises. It speaks in Lucie's voice. The numbers themselves are designed to come from a controlled pipeline: bank data in, verified computation, results out.
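To make the division of labour concrete, here is a minimal sketch of what a deterministic figure pipeline can look like. All names (`Transaction`, `category_change_pct`) are hypothetical, not Lucie's actual implementation; the point is that the number is computed by ordinary code, and the language model only receives the verified result to phrase.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Transaction:
    category: str
    amount: float  # spend in dollars, from the bank feed

def category_change_pct(this_month, last_month, category):
    """Deterministically compute the month-over-month spending change
    for one category. No language model is involved in this figure."""
    cur = sum(t.amount for t in this_month if t.category == category)
    prev = sum(t.amount for t in last_month if t.category == category)
    if prev == 0:
        return None  # no baseline to compare against
    return round((cur - prev) / prev * 100, 1)

# The verified figure is then handed to the model, which only phrases it:
# "Your food spending is up 21.5% this month."
```

If last month's food spend was $200 and this month's is $243, the function returns 21.5: the kind of figure the AI explains but never invents.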

To make this work, we're building an orchestration framework where a deterministic engine coordinates specialised agents, manages structured data access and governs every stage of the process. Every step is logged. The AI operates within this framework, not outside it.
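The orchestration idea can be sketched in a few lines. This is an illustrative toy under assumed names (`Orchestrator`, `register`, `run`), not Lucie's framework: a deterministic engine routes each request to a registered tool and logs every stage, so the language model only ever sees the tools' outputs.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("orchestrator")

class Orchestrator:
    """Deterministic engine: routes requests to specialised tools and
    logs every stage. Figures come from the tools, never from the AI."""

    def __init__(self):
        self.tools = {}

    def register(self, name, fn):
        self.tools[name] = fn

    def run(self, name, **kwargs):
        if name not in self.tools:
            raise KeyError(f"unknown tool: {name}")
        log.info("stage=start tool=%s args=%s", name, kwargs)
        result = self.tools[name](**kwargs)
        log.info("stage=done tool=%s result=%s", name, result)
        return result
```

Because every call passes through `run`, each step leaves an audit trail, and an unregistered or misrouted request fails loudly instead of letting the model improvise.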

What does this mean if you're using Lucie?

When Lucie shows you your balance, that will come from your bank. When Lucie tells you your food spending is higher than usual, the figures behind that observation will be computed from your bank data; Lucie's words will be its interpretation. And we're designing the experience so you can see what is fact and what is Lucie's perspective, without needing a label or a disclaimer.
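One way to keep fact and interpretation separable in software is to give every number provenance. The structure below is a hypothetical sketch, not Lucie's data model: verified figures carry their source, and the narrative lives in a separate field, so the interface can always tell the two apart.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Figure:
    """A number with provenance: always computed, never generated."""
    value: float
    source: str  # e.g. "bank_feed" or "spend_analysis" (assumed labels)

@dataclass(frozen=True)
class Response:
    facts: tuple     # verified Figures from the deterministic pipeline
    narrative: str   # the AI's interpretation, kept clearly separate
```

A renderer can then style `facts` as data and `narrative` as Lucie's voice without relying on labels or disclaimers.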

This matters beyond Lucie. Anyone building AI into financial services, or any domain where accuracy is non-negotiable, faces the same fundamental question: what should the AI be responsible for, and what should it not?

Our approach is to design the system so that financial figures flow through verified, deterministic processes rather than being generated by the AI. Safety features are then layered on top of that foundation. Trust isn't something you declare. It's something you design.

AI is extraordinary. It's also new and still evolving. We're working to make it safe and trusted, because that's the only way something like Lucie should be built.

We're building the Lucie Money MVP now and targeting launch in Australia later this year. If you're interested in the work we're doing, feel free to get in touch, and if you want early access, there's a waitlist at lucie.money.