Tailored Not Templated: Helping Explorers Ditch Data Chaos and Make Smarter Decisions from Day One

In an industry awash with plug-and-play software and AI buzzwords, Oliver “Olly” Willetts, senior geologist and resource estimation consultant at SRK Consulting, stands out for his clear-eyed, problem-first approach to geoscientific data management.

Olly, who has been with SRK for six years and in consulting for nearly two decades, doesn’t just build databases – he builds trust. Through carefully tailored systems rooted in practical field experience, he helps explorers structure their field data management to support the challenges and decisions their projects will face.

“It’s not about pushing a technological solution upfront,” he explains to The Rock Wrangler. “It’s about deeply understanding the problem first, and only then selecting and developing appropriate components to create the right system to solve it.”

The burden of choice

One of the biggest challenges facing exploration teams today, Olly says, is the overwhelming choice of data solutions. From cloud-based modelling to field-logging apps to AI-assisted assay interpreters, the technological landscape is expanding faster than most teams can keep pace with.

“There’s a new tool being announced almost weekly. For a lot of clients, it’s hard to know what to trust or even where to begin,” he says. “That’s where we come in.”

SRK’s role, according to Olly, is to act as both translator and tailor – understanding the client’s goals, constraints, and existing infrastructure, and then recommending a data strategy that’s fit-for-purpose.

“For some, that means starting from scratch – setting up field-logging systems and back-end databases before the first hole is drilled. Others come to us with legacy systems and just want something that works better. Our job is to scale the solution to the actual need.”

Fit-for-purpose over full-fat

Olly has spent much of his career developing bespoke SQL Server databases, often starting from field-level requirements and working up. The result is a lean, efficient data model that dovetails neatly with the project.

“I’ve built maybe 50 or 60 of these over the years. The goal is always the same – create a system that’s powerful and detailed enough to support the resource team, but simple and transparent enough for the team in the field to use confidently.”

“A lot of the major data management solutions are very feature-rich and well-marketed, but realising their full value requires a certain scale of project. What works for an operating mine with an in-house data team might be complete overkill for a smaller explorer,” he says.
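
What does that lean, field-first alternative look like in practice? The sketch below is a hypothetical illustration only, not a schema from any SRK project: a collar table, a geology interval table, and a lookup table that constrains alteration logging to an agreed vocabulary. The connection string, table names and columns are all assumptions.

```python
import pyodbc

# Hypothetical Azure SQL connection string; server, database and
# credentials are placeholders, not real project details.
CONN_STR = (
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=explorer.database.windows.net;DATABASE=Exploration;"
    "UID=geo_admin;PWD=change_me;Encrypt=yes"
)

# A deliberately lean collar-and-interval model: two core tables plus a
# lookup table that holds the agreed logging vocabulary.
DDL = """
CREATE TABLE AlterationCode (
    Code        NVARCHAR(16)  PRIMARY KEY,  -- e.g. 'CHL' for chlorite
    Description NVARCHAR(100) NOT NULL
);

CREATE TABLE Collar (
    HoleID     NVARCHAR(20) PRIMARY KEY,
    Easting    FLOAT NOT NULL,
    Northing   FLOAT NOT NULL,
    RL         FLOAT NOT NULL,              -- collar elevation
    TotalDepth FLOAT NULL                   -- set when the hole is closed out
);

CREATE TABLE GeologyInterval (
    HoleID     NVARCHAR(20) NOT NULL REFERENCES Collar(HoleID),
    DepthFrom  FLOAT NOT NULL,
    DepthTo    FLOAT NOT NULL,
    Lithology  NVARCHAR(32) NOT NULL,
    Alteration NVARCHAR(16) NULL REFERENCES AlterationCode(Code),
    PRIMARY KEY (HoleID, DepthFrom),
    CHECK (DepthTo > DepthFrom)             -- intervals need positive length
);
"""

with pyodbc.connect(CONN_STR) as conn:
    conn.execute(DDL)  # pyodbc shortcut: opens a cursor and runs the batch
```

The foreign keys and CHECK constraint do quiet enforcement work: an interval can’t reference a hole that doesn’t exist, and an alteration code outside the lookup table is rejected at the database level rather than discovered months later.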

Start clean, stay clean

One of the keys to long-term success, Olly says, is instilling good data practices early – especially when it comes to field logging.

“Field data capture is all about consistency and accuracy,” he says. “If the logging geologist is calling it ‘chlorite alteration’ today, call it the same thing tomorrow. And if it’s wrong, pick the error up early and communicate with the entire logging team.”

Olly’s preferred systems push responsibility back to the logging geologist, with built-in checks and balances. Holes can’t be closed out or uploaded to the database until all required fields are completed correctly. It’s a subtle but powerful verification step in data custody.

“When the person logging the hole is also accountable for data quality, you get cleaner data from day one. That improves everything downstream: QA/QC, interpretation, modelling, decision-making. It also helps to communicate to field geologists the value of their logging data and its contribution to the project.”
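
A close-out check of this kind is straightforward to sketch. The example below is illustrative only, using field names from the hypothetical schema sketched earlier rather than any actual SRK implementation.

```python
REQUIRED_FIELDS = ("HoleID", "DepthFrom", "DepthTo", "Lithology")
VALID_ALTERATION = {"CHL", "SER", "SIL", None}  # assumed agreed vocabulary

def interval_errors(interval: dict) -> list[str]:
    """Return the problems with one logged interval; an empty list means it passes."""
    errors = []
    for field in REQUIRED_FIELDS:
        if interval.get(field) in (None, ""):
            errors.append(f"missing required field: {field}")
    if interval.get("Alteration") not in VALID_ALTERATION:
        errors.append(f"unknown alteration code: {interval.get('Alteration')!r}")
    frm, to = interval.get("DepthFrom"), interval.get("DepthTo")
    if frm is not None and to is not None and to <= frm:
        errors.append("DepthTo must be greater than DepthFrom")
    return errors

def can_close_hole(intervals: list[dict]) -> bool:
    """The hole is only closed out and uploaded once every interval validates."""
    return all(not interval_errors(iv) for iv in intervals)
```

Because the check runs at the point of logging, the geologist sees the error while the core is still in front of them, which is exactly the accountability Olly describes.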

Cloud-first, always

Long before cloud computing became mainstream in the mining sector, Olly saw its potential. Every database he’s built since 2012 has been cloud-hosted, and the benefits are undeniable.

“I’ve got one Microsoft Azure-based exploration database that’s been running for nearly 15 years. I’ve never had to update it. The data model is always current, always compatible. If that were hosted on an unmaintained physical server, it’d represent a costly upgrade by now.”

Beyond convenience, cloud systems also simplify collaboration, version control, and security – despite early scepticism from some clients.

“Data security is always a concern, but you must ask yourself: is your office server room really less vulnerable than an enterprise-grade data centre?”

Ownership matters

Olly is especially passionate about ensuring that clients retain full ownership of their data, without vendor lock-in or restrictive licensing.

“When I set up a system, the client owns the subscription. I’m just the administrator. If they get fed up with me, they can kick me out – and I tell them exactly how to do it.”

That transparency, he says, builds trust and prevents the frustration that arises when clients feel boxed in by proprietary systems.

“Data is a company’s primary asset, and an effective database represents more than simple static tables of data – there’s enormous value in the data model, relationships, systems and functions that augment and enhance the dataset. If you want to change systems and have to downgrade to tabular data, all of that project value is lost and that’s not a sustainable solution.”

From Mongolia to machine learning

Olly’s recent project experiences illustrate the full power of a well-architected data system.

Working on a Mongolian uranium resource over three years, he audited everything from sample chain-of-custody to logging protocols to data integration – right down to field visits and verification.

“It was a Russian-style logging system with over 110 numeric codes. If I hadn’t been on site to learn the system firsthand from the geological team, I would have neither understood nor trusted the data. You have to get close to the metal, so to speak.”

Recent work on the Gassat phosphate project in Tunisia – built on one of his first cloud databases – demonstrated the benefits of machine learning workflows: geochemical clustering and classification tools, written in Python, that run directly against the centralised database.

“With multivariate data and the right modelling environment, you can do incredible things – like distinguishing waste from ore, or optimising product blends. But none of this is possible without clean, accessible, and thoughtfully structured data at the core.”
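
The Gassat workflow itself isn’t published, but a geochemical clustering pass of this general shape can be sketched with pandas and scikit-learn. The connection string, table name and oxide columns below are illustrative assumptions, not the actual project schema.

```python
import pandas as pd
from sqlalchemy import create_engine
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Hypothetical DSN-based connection to the centralised cloud database.
engine = create_engine("mssql+pyodbc://geo_user:change_me@exploration_dsn")

assays = pd.read_sql(
    "SELECT HoleID, DepthFrom, DepthTo, P2O5, CaO, MgO, SiO2, Fe2O3 FROM Assay",
    engine,
)
elements = ["P2O5", "CaO", "MgO", "SiO2", "Fe2O3"]

# Standardise so no single oxide dominates the distance metric.
X = StandardScaler().fit_transform(assays[elements])

# Partition intervals into geochemical domains, e.g. ore / waste / transitional.
assays["domain"] = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Inspect each domain's mean signature to attach geological meaning to it.
print(assays.groupby("domain")[elements].mean())
```

Once every interval carries a domain label, distinguishing waste from ore or testing candidate product blends becomes a query rather than a manual relogging exercise.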

Looking ahead

When asked about future trends in exploration data management, Olly doesn’t hesitate.

“Core imagery, hyperspectral scanning, real-time geochemical assaying, digital field mapping – it’s all creating much richer and denser datasets. For the first time, geological exploration data is approaching what you might call ‘big data’.”

That shift, he says, will demand new thinking around storage, processing, and analytics. But it will also unlock powerful capabilities: using machine learning to measure geological and geotechnical properties such as RQD, colour and texture, and integrating this ‘fast data’ into cloud-native modelling platforms.

“I’ve been working on uranium projects where we use colour logs to identify alteration through redox fronts. Why would anyone log something as subjective as colour manually anymore? Just take a consistent photo and let the software do the rest.”
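
The software he has in mind isn’t named, but the core idea is simple enough to sketch: sample the colour of each interval from a consistently lit core-tray photograph instead of asking a geologist to judge it by eye. The file name and equal-strip geometry below are illustrative assumptions.

```python
import numpy as np
from PIL import Image

def mean_interval_colour(photo_path: str, n_intervals: int) -> list[tuple]:
    """Split a core-tray photo into equal strips along the core axis and
    return each strip's mean RGB, an objective stand-in for logged colour."""
    img = np.asarray(Image.open(photo_path).convert("RGB"), dtype=float)
    strips = np.array_split(img, n_intervals, axis=1)  # split across the width
    return [tuple(s.reshape(-1, 3).mean(axis=0).round(1)) for s in strips]

# A drop in red relative to green and blue down-hole can flag a redox front
# far more repeatably than a subjective 'pale grey-green' log entry.
for i, rgb in enumerate(mean_interval_colour("tray_001.jpg", n_intervals=10)):
    print(f"interval {i}: mean RGB = {rgb}")
```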

Structural and field mapping from the desktop is now possible thanks to tools like the SRK-developed HiveMap, which lets geologists draw features directly onto photorealistic, textured 3D surfaces of outcrop, deepening a project’s geological understanding and modelling capabilities.

He points to emerging field exploration tools like Boart Longyear’s TruScan and the CSIRO-developed HyLogger, which provide near real-time geochemical and hyperspectral results. Drill core and chip imaging tools such as KORE Geosystems’ Spector Geo and CoreSafe’s Casper Pro integrate with Seequent’s Imago cloud service, enabling machine learning analysis.

In the cloud analytical and modelling space, Micromine’s Grade CoPilot, Seequent’s Evo, Resource Modelling Solutions’ RMSP, and Datarock’s suite of solutions are very much on the radar. Rock Flow Dynamics is bringing technology from the oil and gas industry to the minerals space that can model complex stratigraphy and improve our use of geophysical data.

These tools elevate exploratory data analysis and will help teams interrogate the increasingly expansive datasets of the future.

“The next generation of exploration won’t just capture more data – it will interpret more of it automatically, faster, and with greater confidence. That’s where the opportunity lies.”

Digital structural mapping using HiveMap software.

Building smarter teams

But even the best tools, Olly insists, are only as good as the people using them.

“Exploration teams need to understand the problem they’re trying to solve. That should drive every decision about what data to collect and how to store it.”

He advocates for better analytical skills among field geologists – not necessarily to turn them into coders, but to help them understand the possibilities within the data they capture.

“If someone on the team can write a bit of Python, or even just use ChatGPT to generate a script, they can extract insights directly from the database. You don’t need to be a wizard. You just need to ask the right questions.”
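
As a hypothetical example of the kind of question a field geologist might answer in a few lines (the connection, table and column names follow the sketches above, not any real project):

```python
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("mssql+pyodbc://geo_user:change_me@exploration_dsn")

# 'Which logged lithologies actually carry grade?' answered straight from
# the live database instead of a stale exported spreadsheet.
query = """
    SELECT g.Lithology, AVG(a.P2O5) AS MeanP2O5, COUNT(*) AS Samples
    FROM GeologyInterval AS g
    JOIN Assay AS a
      ON  a.HoleID    = g.HoleID
      AND a.DepthFrom >= g.DepthFrom
      AND a.DepthTo   <= g.DepthTo
    GROUP BY g.Lithology
    ORDER BY MeanP2O5 DESC
"""
print(pd.read_sql(query, engine))
```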
