Automated. Audited. Not Trusted.

When our AI system recognized "Eichenholz natur" and "Oak untreated" as identical materials, we were impressed. Then it classified a bar stool as a dining chair – and just like that, the trust was gone. Automation doesn’t usually fail because of mistakes. It fails when it doesn’t say: “Be cautious here.”

We built our system to turn manufacturer data into structured product information for e-commerce. It seemed simple: read some files, extract dimensions, map names to categories. Done.

Except it never is.

Input data comes in every shape and format. Some manufacturers send clean spreadsheets. Others send PDFs, Word docs, or plain text, with key info buried between marketing fluff and formatting glitches. Product names are all over the place. Categories feel arbitrary. Attributes like dimensions or materials switch between languages, units – or are missing altogether.

In this kind of landscape, rule-based systems break quickly. Too many edge cases. Too many exceptions. Too much implicit meaning to encode.

So we took a different approach. Not rule-based logic. Not hardcoded assumptions. Instead, we built a system that works probabilistically – a model that recognizes patterns, understands relationships, and makes decisions based on likelihood, not fixed rules. It knows that “Eichenholz natur” and “Oak untreated” mean the same material – even from completely different sources. It sees that “Couchtisch 120x60” contains dimensions – even without units, labels, or context.
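
To make that kind of probabilistic matching concrete, here is a minimal sketch using multilingual sentence embeddings. The library, the model name, and the example pairs are illustrative assumptions, not a description of our production pipeline:

```python
# Minimal sketch: cross-language material matching via embedding similarity.
# The sentence-transformers library and the model name are illustrative
# assumptions, not our production setup.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

def material_similarity(label_a: str, label_b: str) -> float:
    """Cosine similarity between two material labels, regardless of language."""
    emb = model.encode([label_a, label_b], convert_to_tensor=True)
    return util.cos_sim(emb[0], emb[1]).item()

# "Eichenholz natur" and "Oak untreated" should score far higher than an
# unrelated pair such as "Eichenholz natur" vs. "Stahl verchromt".
print(material_similarity("Eichenholz natur", "Oak untreated"))
print(material_similarity("Eichenholz natur", "Stahl verchromt"))
```

The point of the sketch is the shift in mindset: the system compares meanings and returns a likelihood, rather than matching strings against a rulebook.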

Technically, it works well. With 94% accuracy, our system handles most products correctly. Yet our teams still review every result by hand – not because of the 6% error rate, but because no one knows which 6% of products it will hit. The system isn’t broken. It’s just not yet trustworthy enough to be left alone.

And that’s the crux: It’s not the error rate that determines success. It’s the question of whether you trust the system enough to let it operate without constant oversight.


From Technology to Trust

To be trusted, a system needs more than good performance. It needs to make itself understandable, verifiable – and eventually, independent.

That means three things:

1. Expose uncertainty

A system shouldn’t act confident when it isn’t. It should signal when it’s on firm ground – and when it’s not.

For example: If the system isn’t sure what to make of a name like “Designkonsole Industrial Raw,” it flags the item, highlights it visually, adds a note like “ambiguous material description,” and routes it for manual review.

This combination – a simple visual marker, a clear rationale, and a defined handoff to human review – is how automation earns trust.
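
As a sketch of what that handoff could look like in code – the confidence threshold, field names, and review queue below are illustrative assumptions, not our production schema:

```python
# Sketch of the uncertainty handoff: flag low-confidence extractions, attach a
# short rationale, and route them to a human reviewer.
# The threshold, field names, and review queue are illustrative assumptions.
from dataclasses import dataclass, field

CONFIDENCE_THRESHOLD = 0.85  # assumed cutoff; tune against real review outcomes

@dataclass
class Extraction:
    product_name: str
    attribute: str
    value: str
    confidence: float
    notes: list[str] = field(default_factory=list)
    needs_review: bool = False

def route(extraction: Extraction, review_queue: list[Extraction],
          reason: str = "ambiguous material description") -> Extraction:
    """Mark uncertain results instead of silently passing them through."""
    if extraction.confidence < CONFIDENCE_THRESHOLD:
        extraction.needs_review = True      # visual marker in the UI
        extraction.notes.append(reason)     # clear rationale
        review_queue.append(extraction)     # defined handoff to a human
    return extraction

queue: list[Extraction] = []
item = Extraction("Designkonsole Industrial Raw", "material", "steel?", confidence=0.42)
route(item, queue)
print(item.needs_review, item.notes)  # True ['ambiguous material description']
```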

2. Explain decisions

It’s not enough to give an answer. The system should show how it got there.

For instance: The system assigns a product to the “side table” category and explains that the decision is based on three signals: the title, the product description (“small, next to the sofa”), and a typical dimension range under 60 cm.

Now you know what it saw. You can agree, disagree – or improve it. But it’s no longer a black box. It’s a conversation. This makes the result understandable – and more importantly, defensible.
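
One way to make that conversation concrete is to ship the decision together with the signals that produced it. The structure below is a minimal sketch; the field names, weights, and confidence values are assumed for illustration:

```python
# Sketch: carry the category decision alongside the evidence behind it,
# so a reviewer sees what the system saw. Field names and weights are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class Signal:
    source: str      # where the evidence came from
    evidence: str    # what was actually observed
    weight: float    # assumed contribution to the final decision

@dataclass(frozen=True)
class CategoryDecision:
    category: str
    confidence: float
    signals: tuple[Signal, ...]

    def explain(self) -> str:
        lines = [f"Category: {self.category} (confidence {self.confidence:.0%})"]
        lines += [f"  - {s.source}: {s.evidence} (weight {s.weight:.2f})"
                  for s in self.signals]
        return "\n".join(lines)

decision = CategoryDecision(
    category="side table",
    confidence=0.91,
    signals=(
        Signal("title", "product title suggests a small occasional table", 0.5),
        Signal("description", "'small, next to the sofa'", 0.3),
        Signal("dimensions", "height below 60 cm, typical for side tables", 0.2),
    ),
)
print(decision.explain())
```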

3. Feed back corrections

If a human steps in, the system should listen. Someone corrects “solid wood” to “veneer”? That’s not just a one-off. It’s a chance to learn.

Next time it sees a similar case – same phrasing, same supplier – it adjusts.
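
A minimal version of that loop is a correction store keyed by supplier and phrasing. The keying scheme, supplier name, and phrase below are hypothetical; a production system would typically also feed these corrections back into training data:

```python
# Sketch of the correction feedback loop: remember human fixes keyed by supplier
# and phrasing, and prefer them over the model's guess next time.
# Supplier name and phrase are hypothetical examples.

class CorrectionStore:
    def __init__(self) -> None:
        # (supplier, normalized phrase) -> corrected value
        self._corrections: dict[tuple[str, str], str] = {}

    @staticmethod
    def _key(supplier: str, phrase: str) -> tuple[str, str]:
        return supplier.strip().lower(), phrase.strip().lower()

    def record(self, supplier: str, phrase: str, corrected_value: str) -> None:
        """A human stepped in – keep the correction, not just the one-off fix."""
        self._corrections[self._key(supplier, phrase)] = corrected_value

    def apply(self, supplier: str, phrase: str, model_guess: str) -> str:
        """Use a known human correction if one exists, otherwise the model's guess."""
        return self._corrections.get(self._key(supplier, phrase), model_guess)

store = CorrectionStore()
store.record("MöbelWerk GmbH", "Massivholz-Optik", corrected_value="veneer")

# Same supplier, same phrasing: the earlier correction wins over the model's guess.
print(store.apply("MöbelWerk GmbH", "Massivholz-Optik", model_guess="solid wood"))  # veneer
```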

That’s the shift: From static tool to adaptive partner.


Conclusion: Trust Is the Bottleneck

Automation doesn’t fail because it’s not accurate enough. It fails when it hides what it knows – and what it doesn’t.

As long as a system keeps that hidden, people will keep checking its work. Not because they don’t like automation – but because the system hasn’t earned their trust.

If you're working on automation systems, ask yourself one central question: Would you trust this system if your reputation were at stake? If not, focus first on transparency and trust signals – not on more accuracy. The future belongs not to systems that err the least, but to those that are most honest about their limitations.

A system isn’t truly successful when people stop doing the work. It’s successful when they stop asking: “Did the AI get this right?”