AI on Site:
The New “Engineer’s Assistant” No One Admitted Hiring Yet
It starts quietly.
Not in the arbitration hearing. Not in the final award.
It starts at 11:47 pm. Someone on the project team drops 2,000 pages of correspondence into a tool and says, “just tell me what we’re missing.”
Next morning, the narrative looks cleaner. The delay looks sharper. The claim suddenly feels… structured.
No one says it out loud. But AI has already entered the project.
Let’s strip the noise.
This is not about robots replacing arbitrators. It is about something far more practical.
AI is becoming the invisible junior associate on every dispute.
And right now, the contracts haven’t caught up.
What the institutions are doing is interesting. What they are not doing is more important.
On the FIDIC side, the debate has finally come out into the open. At the December 2025 conference, the industry was asked a blunt question: should AI sit inside the dispute resolution chain? Engineer. DAAB. Arbitration.
The response split right down the middle.
One camp said: “not yet.”
The other said: “everywhere.”
That tells you everything about where we are. Not resistant. Not ready. Just… cautious.
And then came the real signal. The AAA-ICDR example.
An AI arbitrator trained on around 1,500 construction awards. Used for low-value disputes. Faster outcomes. Roughly 30 percent cost savings. Human oversight still in place.
That is not science fiction. That is a pilot already on the runway.
But notice the positioning. It is not replacing judgment. It is supporting it.
That distinction will decide the next decade.
ICC has taken a different route. Less noise. More plumbing.
Instead of jumping into “AI decision-making”, they have built infrastructure. Case Connect. Centralised. Structured. Everything flowing through one digital spine.
It is like cleaning up a messy site before bringing in heavy machinery.
Because here is the truth: you cannot layer AI on chaos.
If your documents are scattered, your records inconsistent, your version control broken, AI will not fix your dispute. It will amplify your confusion.
ICC is quietly solving that first.
The guidance on how AI should be used by tribunals and counsel is still pending. Which means, again, the real decisions are happening in practice, not policy.
SIAC sits somewhere in between.
Procedurally ready. Flexible. Open architecture.
No explicit AI rulebook. But nothing stopping you either.
And if you’ve run disputes, you know exactly what that means.
The rules won’t save you. Your process will.
Now step back from institutions and come to site reality.
Because this is where it actually matters.
Imagine this.
The Engineer is making a determination under Sub-Clause 3.7.
The record is messy. Emails contradict site reports. Delay notices are late. Quantum is fuzzy.
Someone uses AI to reconstruct timelines, cluster documents, identify gaps.
The determination improves. Faster. Cleaner.
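To make that step concrete: a minimal Python sketch of the kind of timeline reconstruction and gap-spotting described above. The records, dates, and sub-clause references here are invented for illustration; a real tool would work over thousands of documents, but the logic is the same.

```python
from datetime import date

# Hypothetical fragments of project correspondence: (date, source, note).
records = [
    (date(2024, 3, 14), "email",       "Rebar delivery delayed by supplier"),
    (date(2024, 3, 2),  "site report", "Excavation complete, zone B"),
    (date(2024, 4, 20), "notice",      "EOT notice issued"),
    (date(2024, 3, 18), "email",       "Crane breakdown, two days lost"),
]

def build_timeline(records, max_gap_days=14):
    """Sort records chronologically and flag stretches longer than
    max_gap_days with no contemporaneous record -- the gaps a tribunal
    will later ask the project team to explain."""
    timeline = sorted(records, key=lambda r: r[0])
    gaps = []
    for prev, curr in zip(timeline, timeline[1:]):
        silent_days = (curr[0] - prev[0]).days
        if silent_days > max_gap_days:
            gaps.append((prev[0], curr[0], silent_days))
    return timeline, gaps

timeline, gaps = build_timeline(records)
for start, end, days in gaps:
    print(f"No record between {start} and {end}: {days} silent days")
```

The point of the sketch is the last loop: the tool does not fill the gap, it surfaces it. Whether the team can explain those silent days is the part no model answers for them.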
But here’s the uncomfortable question.
Was AI used? Was it disclosed? Does it matter?
Because if that determination is later challenged, you can already see the argument:
“Was independent judgment exercised, or was it influenced by an unverified tool?”
No contract today answers that cleanly.
Same story at DAAB level.
A member runs document analytics through AI to understand patterns.
Perfectly logical. Efficient. Probably better than manual review.
But now layer in due process.
What if the tool introduced bias? What if it filtered incorrectly? What if one party used it and the other didn’t?
No rulebook. No disclosure norm. No agreed boundary.
And then arbitration.
Quantum models built using AI. Delay analysis refined using predictive tools. Submissions drafted with machine assistance.
This is already happening. Quietly. Widely.
The tribunal may not even know.
This is where the real risk sits.
Not in AI itself. But in asymmetry.
One side uses it well. The other doesn’t. One side discloses. The other doesn’t. One side understands its limits. The other treats it like gospel.
That is where disputes will shift.
What does this look like in practice?
A contractor on a project used AI to rebuild a disrupted programme narrative from fragmented site records. The output was sharp, compelling, and internally consistent.
But under cross-examination, gaps emerged. The underlying assumptions were not fully understood by the team presenting it.
The model was strong. The ownership was weak.
The claim lost credibility, not because it was wrong, but because no one could defend how it was built.
That is the lesson most people are missing.
AI will not kill bad claims.
It will expose poorly understood ones faster.
So where does this leave project teams?
Right now, in a very familiar place.
Think of AI like early BIM days.
Everyone said they were using it. Half actually were. Very few understood it properly. Contracts lagged behind. Disputes caught up later.
Same playbook.
If you’re sitting on a live project, the practical questions are not philosophical.
They are brutally simple:
Are your teams using AI already without a protocol?
Do you know where it is influencing analysis?
Can you defend outputs if challenged?
Should your contracts start addressing disclosure?
Because sooner than you think, someone will ask in a hearing:
“Can you explain how this analysis was generated?”
And “we used a tool” will not be a sufficient answer.
AI is already on your project. The only question is whether you are managing it, or discovering it during cross-examination.
If this resonates with you, join the ConstructHER WhatsApp network, a space for women in construction and infrastructure law to exchange ideas, share opportunities, and continue the conversation beyond this piece.

