There’s a growing unease about AI in business. Not about whether it’s useful – that argument is largely settled – but about whether it can be trusted. Does it make things up? Is it learning from my company’s data? Is it nudging people in ways that serve the vendor’s interests, not theirs?
These are good questions, and as a company using AI to change how businesses travel, we want to answer them directly.
Does our AI hallucinate?
Hallucination is the term used when an AI model generates plausible-sounding but factually wrong information. It’s a real and well-documented problem – particularly when AI is asked to answer open-ended questions from scratch, with no grounding in verified facts.
This is precisely the scenario we’ve designed around. When EngageAI tells a traveller that their 20km taxi ride to the airport produces 4.7kg of CO2e, or that booking their next flight two weeks earlier could save them 21%, those aren’t estimates generated by a language model working from memory. They’re calculations drawn from our verified carbon database, assured to the ISO 14083:2023 standard, and built on data from Defra, ICAO, IATA, and the EPA. The AI’s job is to communicate that data clearly and personally, not to invent it.
Think of the difference between asking a friend “roughly how far is New York from London?” and asking a navigation app the same question. Our AI is the navigation app: it works from a map, not intuition.
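For the technically minded, here’s a minimal sketch of that pattern. The factor values and function names below are illustrative, not our production code; the point is the shape of the system. The emission figure is computed deterministically from a verified factor table, and the only job left for generative AI is to phrase a number that already exists.

```python
# Illustrative sketch only: the values and names here are hypothetical,
# not Thrust Carbon's actual schema. The pattern is what matters -- the
# number comes from a verified lookup, never from the language model.

# Verified emission factors (kg CO2e per passenger-km). In a real system
# these would come from an assured database; hard-coded here for clarity.
EMISSION_FACTORS_KG_PER_KM = {
    "taxi": 0.235,              # example: 20 km * 0.235 = 4.7 kg CO2e
    "rail": 0.035,
    "short_haul_flight": 0.150,
}

def co2e_kg(mode: str, distance_km: float) -> float:
    """Deterministic calculation from verified factors; no model involved."""
    return round(EMISSION_FACTORS_KG_PER_KM[mode] * distance_km, 1)

def traveller_message(mode: str, distance_km: float) -> str:
    """Phrase a figure that was computed before any text generation.
    A template stands in for the language model here; either way, the
    number is fixed first and only the wording is generated."""
    kg = co2e_kg(mode, distance_km)
    return f"Your {distance_km:.0f} km {mode} ride produces about {kg} kg of CO2e."

print(traveller_message("taxi", 20))  # -> about 4.7 kg of CO2e
```

The figures the traveller sees are settled before any text is generated: the model can phrase them well or badly, but it can’t change them.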
Are we training on your data?
No. The data your company provides – booking records, travel patterns, expense receipts – is used to generate interventions for your travellers. It is not fed into a model that learns across your organisation and others. Your travel behaviour stays yours.
This matters because the alternative – models that improve by absorbing client data – creates real risks: competitive information leaking across organisations, employees’ habits being used in ways they never consented to, and audit trails that are impossible to unpick. We’ve avoided that architecture entirely.
Is nudging people ethical?
This is the most interesting question, and the one we think about most carefully.
EngageAI is a behaviour change platform. It identifies the travellers most likely to make high-cost, high-emission choices, and it intervenes – with a message, at the right time, through the channel they prefer – to offer a better option. Some people call this nudging. Others might call it manipulation.
We think the distinction matters, and it comes down to whose interests the nudge serves.
When a gambling app uses behavioural science to keep you playing longer, the incentives are misaligned: the platform wins when you lose. When EngageAI suggests taking the train instead of flying, or booking two weeks earlier, the traveller genuinely benefits: they spend less time in transit, their company saves money, and the planet is a little better off. The interests of the traveller, their employer, and the environment are, unusually, all pointing in the same direction.
We’re also transparent about what we’re doing. Travellers receive a message explaining why they’re being contacted and what we’re suggesting. They can ignore it.
There’s a version of AI ethics where “ethical AI” means AI that does nothing consequential. We don’t think that’s good enough. The more interesting challenge, and the one we’ve taken on, is building AI that acts consequentially but in ways that are honest, grounded in data, and genuinely in the interests of the people it’s trying to influence.
What we’re not doing
It’s also worth being clear about a few things EngageAI isn’t. It isn’t scoring employees against each other or creating league tables of “worst offenders.” It isn’t sharing individual travel data across organisations. It isn’t making decisions on behalf of travellers – it’s making suggestions. The human is still in the loop.
Trustworthy AI for a climate that needs it
AI in sustainability is only useful if it’s trustworthy. A hallucinating carbon calculator is worse than no calculator at all: it gives false confidence and misleads the decisions built on it. An AI that learns from one company’s sensitive data and leaks signals to another isn’t just unethical, it’s a liability.
We’ve built Thrust Carbon’s AI products with these constraints front of mind, not because trust is good marketing, but because the climate problem we’re trying to solve requires organisations to make genuine, evidence-based decisions. That only works if the evidence is solid.
The AI safety question, in the end, isn’t just about preventing harm. It’s about building tools that people can rely on when it actually matters.