When Audit Logs Lie: Misattributed Actions and the Collapse of Trust in Entra

Jan 28, 2026

Hero image generated by ChatGPT

This is a personal blog and all content herein is my own opinion and not that of my employer.


Enjoying the content? If you value the time, effort, and resources invested in creating it, please consider supporting me on Ko-fi.


What The Entra Fudge?!

There’s a special kind of unease that sets in when:

  • you trust the audit logs,
  • you follow least privilege,
  • you investigate an unexpected change,

…and the platform calmly tells you:

“Yes, that happened — but not like you think.”

This post is about one of those moments.

Specifically:

A Microsoft-managed Conditional Access policy change appeared in Entra audit logs as being performed by one of our administrators — even though Microsoft later confirmed that admin did not do it.

No exploit claims.
No breach narrative.
No pitchforks.

Just an uncomfortable truth about identity attribution, portal behaviour, and why audit logs stop being authoritative the moment causality breaks.

Welcome back to What The Entra Fudge?!


The incident that triggered the question

We noticed a new Conditional Access policy appear in our tenant.

So far, so normal.

But the Entra Audit Logs showed something specific:

  • The policy creation event
  • A named administrator
  • A real UPN
  • A privileged identity we could talk to

The problem?

That admin hadn’t created the policy.

No recent change window.
No remembered action.
No matching activity elsewhere.

So we did the right thing:
we asked Microsoft Support to investigate.
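Before escalating, the gap can be demonstrated mechanically: take the audit entries at face value and cross-reference them against the windows in which the named admin was actually working. The sketch below does exactly that in plain Python. The record shape loosely follows Microsoft Graph directoryAudits entries, but every value (UPNs, timestamps, activity windows) is invented for illustration.

```python
from datetime import datetime

# Hypothetical audit records, loosely modelled on the shape of
# Microsoft Graph directoryAudits entries. All values are invented.
audit_events = [
    {
        "activityDisplayName": "Add conditional access policy",
        "activityDateTime": "2026-01-20T03:14:00Z",
        "initiatedBy": {"user": {"userPrincipalName": "admin@contoso.example"}},
    },
]

# Windows in which the named admin actually performed interactive work,
# e.g. reconstructed from sign-in logs or change tickets (invented here).
known_activity = {
    "admin@contoso.example": [
        (datetime(2026, 1, 19, 9, 0), datetime(2026, 1, 19, 17, 0)),
    ],
}

def parse(ts: str) -> datetime:
    return datetime.strptime(ts, "%Y-%m-%dT%H:%M:%SZ")

def suspicious_attributions(events, activity):
    """Return events attributed to a human outside any known activity window."""
    flagged = []
    for ev in events:
        user = ev.get("initiatedBy", {}).get("user")
        if not user:
            continue  # attributed to an app/system actor: not our concern here
        upn = user["userPrincipalName"]
        when = parse(ev["activityDateTime"])
        windows = activity.get(upn, [])
        if not any(start <= when <= end for start, end in windows):
            flagged.append(ev)
    return flagged

for ev in suspicious_attributions(audit_events, known_activity):
    print(ev["activityDisplayName"], "attributed to",
          ev["initiatedBy"]["user"]["userPrincipalName"])
```

In our case the equivalent check came back empty: a policy-creation event at a time when the named admin had no corresponding activity anywhere else.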


Microsoft’s response (paraphrased, but accurate)

After reviewing the logs, Microsoft confirmed two things:

  1. The policy was not manually created by the admin shown in the audit log
  2. The admin’s UPN appeared because of how Microsoft-managed policies are deployed

Specifically:

When Microsoft deploys a Microsoft-managed Conditional Access policy, the Azure Portal backend may associate the action with the identity of the currently authenticated administrator’s portal session, even if that admin is not actively performing an action.

Read that again.

Slowly.


The uncomfortable implication

What this means in practice is:

  • A human identity is recorded as the actor
  • Even though that human did not initiate the change
  • Because their session token was available
  • And the platform needed someone to attribute the action to

In other words:

Presence ≠ causality

And once that line is crossed, audit logs stop being evidence; they become narrative suggestions.


Why this is a serious problem in regulated environments

In lightly regulated environments, misattributed audit events are confusing.

In heavily regulated industries, they are something else entirely.

Financial services, healthcare, critical infrastructure, and government environments are typically subject to requirements such as:

  • SOX
  • PCI DSS
  • ISO 27001
  • SOC 2
  • GDPR / UK GDPR
  • FCA / PRA operational resilience expectations

Across all of these frameworks, there is a shared assumption:

Audit logs must provide a reliable, defensible record of who performed privileged actions.

Not “who happened to be logged in at the time.”
Not “who owned a reusable session token.”
But who actually caused the change.

When a platform attributes system-initiated activity to a named human identity, several things break immediately.


1. Non-repudiation is no longer defensible

In regulated environments, audit logs are not just operational tooling — they are evidence.

If a named administrator can truthfully say:

“That wasn’t me — the platform reused my session”

Then:

  • The organisation cannot prove intent
  • The individual cannot reliably defend themselves
  • The audit trail fails its primary purpose

At that point, the log entry is no longer authoritative.

It is contestable.

That alone is enough to fail an audit review.


2. Insider-threat investigations become unreliable

Insider-threat programmes rely on high-confidence attribution.

Misattributed actions introduce a toxic ambiguity:

  • Was this a malicious insider?
  • Was this a platform automation?
  • Was this a background service acting opportunistically?

When system actions are logged as human actions, investigators lose the ability to:

  • distinguish malice from automation
  • establish behavioural baselines
  • correlate intent across events

False positives increase.
Real signals get buried.
Trust in the telemetry erodes.

In regulated environments, that is not acceptable.


3. Segregation of duties is silently violated

Many regulated organisations enforce strict separation between:

  • policy authors
  • approvers
  • operators
  • reviewers

If the platform attributes its own changes to a logged-in administrator, it creates the appearance that:

  • a single individual authored and executed a change
  • approval boundaries were crossed
  • controls were bypassed

Even if none of that is true.

From an auditor’s perspective, appearance matters — because appearance is all they have.


4. Regulatory attestations become harder to stand behind

At some point, a senior leader signs a statement that says:

“Privileged access and changes are logged accurately and reviewed.”

That statement assumes:

  • audit logs are faithful
  • attribution is intentional
  • identities are not reused as convenience wrappers

Once you know that system activity can be logged against arbitrary humans, that attestation becomes uncomfortable, if not misleading.

This is how technical shortcuts turn into governance risk.


Why this isn’t “just an audit UX issue”

It’s tempting to frame this as:

“The action happened; the attribution is just a bit off.”

That framing is dangerous.

In security architecture, who did something is often more important than what was done.

Audit logs answer questions like:

  • Who authorised this?
  • Who executed this?
  • Was this expected?
  • Was this approved?
  • Was this abuse?

If the answer to those questions depends on portal session coincidence, then the audit system is no longer fit for purpose in regulated contexts.

This is not about perfection.

It’s about defensibility under scrutiny.


The pattern nobody wants to talk about

What makes this harder to dismiss is that this isn’t isolated.

Multiple senior engineers and MVPs independently recognised the behaviour as familiar:

  • Similar misattribution patterns have appeared in other Microsoft platforms
  • Portal-driven operations blur system vs user actions
  • The same design shortcut appears across products

That suggests a deeper architectural pattern:

Portal-mediated actions reusing user context without a distinct system actor

It’s convenient.

It’s expedient.

And it’s fundamentally at odds with Zero Trust principles.


What should have happened instead

There are cleaner, safer alternatives:

  • A first-class system identity (for example, an explicit Entra service principal)
  • Explicit actorType: system attribution in audit events
  • Clear separation between:
    • initiatedBy
    • executedBy
    • attributedTo
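To make the separation concrete, here is a minimal sketch of an event shape that keeps those three roles distinct. None of these field names are claimed to exist in Entra today; this is the structure the post is arguing for, with invented identifiers throughout.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical audit-event shape keeping the three roles distinct.
@dataclass
class AuditEvent:
    action: str
    initiated_by: str   # who or what caused the change (e.g. "system:microsoft-managed")
    executed_by: str    # the principal whose token performed the call
    attributed_to: Optional[str] = None  # identity surfaced in the portal UI, if any

def defensible_human_actor(ev: AuditEvent) -> Optional[str]:
    """Return a human actor only when causality and execution agree."""
    if ev.initiated_by == ev.executed_by and not ev.initiated_by.startswith("system:"):
        return ev.initiated_by
    return None  # system-initiated or mixed: no human should carry the blame

managed_rollout = AuditEvent(
    action="Add conditional access policy",
    initiated_by="system:microsoft-managed",
    executed_by="admin@contoso.example",  # session reused by the portal
    attributed_to="admin@contoso.example",
)

print(defensible_human_actor(managed_rollout))  # None: not pinned on the admin
```

With that separation in place, a reviewer can ask "who caused this?" and "whose token ran it?" as two different questions, which is exactly what the current behaviour collapses.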

Other platforms manage this.

Other Microsoft services already do this.

Which makes the current behaviour all the more puzzling.


The real takeaway

When a platform attributes system-initiated actions to human identities based on session availability rather than causality, it creates a silent but serious failure mode:

You can no longer prove who did what — only who happened to be nearby.

In heavily regulated industries, that’s not a minor flaw.

That’s a foundational trust problem.

Audit logs must distinguish clearly between:

  • human-initiated actions
  • system-initiated actions
  • automated enforcement
  • background policy deployment

Anything else undermines non-repudiation, weakens investigations, and places organisations in an indefensible position during audits.

Design for causality.
Log system actors as system actors.
And never treat a human identity as a convenient placeholder.

Because the moment audit logs stop being trustworthy…

…the organisation inherits the risk, not the platform.

Welcome to What The Entra Fudge?!
