03 February 2026

Understanding a Patent through AI: artificially intelligent multi-chain based secured token-passing system

 

The son of one of my unit officers applied for and was granted a patent. In my opinion, the invention is a big deal. I tried to understand the full import of the patent with the help of OpenAI.

First, the wording of the Patent granted:

'...for an invention entitled artificially intelligent multi-chain based secured token-passing system in static or mobile platforms as disclosed…'

Here is my conversation with OpenAI on the said patent:

Me:

This conversation is about a patent and my request to you to help me understand what exactly the invention does. Here is the conversation I had with OpenAI:

The patent is as follows:

'...for an invention entitled artificially intelligent multi-chain based secured token-passing system in static or mobile platforms as disclosed…'

More about the patent:

The patent talks about multiple secure blockchains working together while maintaining secrecy and security, with AI to detect intruders or to select the entities authorised to use the network.

It will work with multiple types of communication devices and can keep working when there is a breakage in the communication network.

The system can be designed to work with communication devices connected to manned and unmanned systems as well as defence communication equipment.

OpenAI:

Got it. Let’s re-enter that lost conversation gently and rebuild the understanding from scratch — no patent-speak fog, just what this thing actually does.

I’ll first give you the one-paragraph plain-English version, then unpack it layer by layer.


The invention — in plain English

This invention describes a secure communication system where multiple blockchains cooperate to pass permissions or messages (called tokens) among devices.
An AI layer constantly watches the network, deciding who is allowed in, who is suspicious, and how to reroute communication if parts of the network break — especially in mobile, military, or unmanned systems.

Think of it as a self-guarding, self-healing digital convoy.


Now let’s unpack the key phrases

1. “Multi-chain based”

This is crucial.

Instead of one blockchain, the system uses many blockchains, each possibly serving a role:

  • One chain for identity
  • One for authorization
  • One for message logging
  • One for device health or location

Why?

  • If one chain is compromised or goes offline → the system still works
  • Sensitive information can be partitioned, not globally visible

This is not cryptocurrency-first thinking.
It’s distributed trust architecture.
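
As a rough illustration of that compartmentalization idea (entirely my own sketch, with invented chain names, roles and permissions that do not appear in the patent text), access could be partitioned so that each role only ever sees the chains it needs:

```python
# Sketch of role-based compartmentalization across several chains.
# All chain names, roles and permissions here are invented for illustration.

CHAINS = {"identity", "authorization", "message_log", "telemetry"}

# Each role reads and writes only the chains its job requires.
ACCESS_POLICY = {
    "drone":        {"read": {"authorization"}, "write": {"telemetry"}},
    "relay":        {"read": {"message_log"},   "write": {"message_log"}},
    "command_post": {"read": set(CHAINS),       "write": {"identity", "authorization"}},
}

def can_access(role: str, chain: str, mode: str) -> bool:
    """True if a node with this role may 'read' or 'write' the given chain."""
    return chain in ACCESS_POLICY.get(role, {}).get(mode, set())

print(can_access("drone", "telemetry", "write"))  # True: drones report telemetry
print(can_access("drone", "identity", "read"))    # False: identity chain is invisible to drones
```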


2. “Secured token-passing system”

A token here is not money.

It is more like:

  • A permission slip
  • A right to speak
  • A right to access a subsystem

Example:

A drone wants to send sensor data → it must hold a valid token
The token proves it is authorized, uncompromised, and current

Tokens:

  • Expire
  • Can be revoked
  • Can be rerouted
  • Can be reassigned dynamically

This is far stronger than passwords or static keys.
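
As a minimal sketch only (the patent wording quoted above does not spell out token fields, so everything below is assumed for illustration), such a token can be pictured as a small record with a scope, an expiry time and a revocation check:

```python
# Minimal sketch of an expiring, revocable permission token.
# Field names, lifetimes and the revocation mechanism are assumptions.
import time
from dataclasses import dataclass

@dataclass
class Token:
    holder_id: str        # which device holds the permission
    scope: frozenset      # what the holder may do, e.g. {"send_sensor_data"}
    issued_at: float      # issue time, seconds since the epoch
    ttl: float            # lifetime in seconds; the token expires on its own

    def is_valid(self, revoked_holders: set, now: float | None = None) -> bool:
        """Usable only while unexpired and not revoked."""
        now = time.time() if now is None else now
        return now < self.issued_at + self.ttl and self.holder_id not in revoked_holders

# A drone that wants to send sensor data must present a valid token.
token = Token("drone-17", frozenset({"send_sensor_data"}), time.time(), ttl=300.0)
revoked = set()                      # a revocation list could itself live on a chain
print(token.is_valid(revoked))       # True: fresh and not revoked
revoked.add("drone-17")
print(token.is_valid(revoked))       # False: authority withdrawn dynamically
```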


3. “AI-based intrusion detection and authorization”

This is where the invention tries to be clever.

The AI:

  • Watches patterns, not just signatures
  • Learns what normal looks like
  • Flags or isolates nodes behaving oddly

Examples:

  • A device communicating too frequently
  • A sudden location change
  • Timing anomalies
  • Message structure drift

Instead of asking:

“Is this device on the whitelist?”

The system asks:

“Is this device behaving like itself?”

That’s a big conceptual shift.
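
A toy version of that question, "is this device behaving like itself?" (my illustration only, with invented numbers), scores each new observation against a per-device profile of what normal has looked like:

```python
# Toy behavioural check: compare a new observation with the device's own baseline
# instead of consulting a whitelist. All profile numbers are invented.

# Learned profile: mean and standard deviation of the gap (seconds) between messages.
PROFILE = {"drone-17": {"mean_interval": 5.0, "std_interval": 0.8}}

def interval_zscore(device_id: str, observed_interval: float) -> float:
    """How many standard deviations the observed send interval is from this device's normal."""
    p = PROFILE[device_id]
    return abs(observed_interval - p["mean_interval"]) / p["std_interval"]

def looks_like_itself(device_id: str, observed_interval: float, threshold: float = 3.0) -> bool:
    """Flag the device if its timing drifts far outside its own baseline."""
    return interval_zscore(device_id, observed_interval) < threshold

print(looks_like_itself("drone-17", 5.3))  # True: close to its usual rhythm
print(looks_like_itself("drone-17", 0.2))  # False: suddenly transmitting far too often
```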


4. “Static or mobile platforms”

This means:

  • Fixed infrastructure (bases, data centers)
  • Moving systems (vehicles, ships, aircraft, drones, soldiers)

The system assumes:

  • Nodes move
  • Links break
  • Environments are hostile

So it is not a fragile cloud-style network.


5. “Works even when communication breaks”

This implies:

  • Local decision-making
  • Cached authorization
  • Partial consensus
  • Eventual synchronization

In other words:

The system does not panic when disconnected.
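
One way to picture that (purely a sketch of my own, assuming cached chain state and a log of deferred decisions) is local validation against the last known state, with reconciliation postponed until the link returns:

```python
# Sketch of degraded-mode operation: decide locally from cached state while offline,
# keep a log of those decisions, and reconcile when connectivity returns.
# The cache and reconciliation scheme are assumptions for illustration.
import time

cached_revocations = {"drone-04"}   # last revocation list seen before the link dropped
pending_log = []                    # local decisions awaiting synchronization

def authorize_offline(holder_id: str, token_expiry: float) -> bool:
    """Decide from cached data only; record the decision for later audit."""
    decision = holder_id not in cached_revocations and time.time() < token_expiry
    pending_log.append((time.time(), holder_id, decision))
    return decision

def reconcile(fresh_revocations: set) -> list:
    """When comms resume: merge newer revocations, hand the local log back for audit."""
    cached_revocations.update(fresh_revocations)
    synced = list(pending_log)
    pending_log.clear()
    return synced

print(authorize_offline("drone-17", token_expiry=time.time() + 60))  # True, from cache
print(reconcile({"drone-17"}))  # belated revocation merged; local history goes back up
```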

Very important for:

  • Battlefields
  • Disaster zones
  • Space / underwater / remote operations

6. “Manned and unmanned systems”

This tells you the intended domain:

  • Human operators
  • Autonomous or semi-autonomous machines
  • Mixed trust environments

Think:

  • Soldier + drone + satellite
  • Ship + UAV + ground robot
  • Command center + mobile units

What problem is this invention really trying to solve?

At its core, this patent is addressing:

How do you maintain secure, trustworthy coordination among many autonomous entities when:

  • There is no guaranteed connectivity
  • Some nodes may be compromised
  • Central authority is dangerous or impossible
  • Decisions must be made locally and fast

Blockchain gives distributed trust
AI gives adaptive judgment
Token-passing gives fine-grained control


What this invention is not

Just as important:

Not just “blockchain + AI buzzwords”
Not primarily about cryptocurrency
Not a simple IDS (intrusion detection system)
Not cloud-dependent

It’s about resilient authority, not data storage.


A useful mental metaphor

Imagine a convoy of vehicles in enemy territory:

  • Each vehicle carries sealed orders
  • Orders can be checked by others
  • If a vehicle behaves strangely, others stop trusting it
  • If the radio goes silent, vehicles still know what to do
  • Orders can be updated when contact resumes

That’s this invention — digitally.


Me:

Yes. The answer given at that time was almost along the above lines. And then I asked: isn't blockchain about securing against spurious data insertion? Are multiple blockchains about redundancy? I want your answers to these questions again, please. I had also asked whether this invention had an LLM/SLM in the background; you had, however, said it is more likely ML and not an LLM. I had also asked on what data the AI in this scenario was getting trained. In the fog of war, there is no SOP.

 

OpenAI:

Ah — this is a good place to resume. You’re asking exactly the right “pull-the-curtain” questions. Let me answer them cleanly, without buzzwords, and then address the fog-of-war problem, which is the real crux.

I’ll go in the same order you asked.


1. “Isn’t blockchain about securing against spurious data insertion?”

Yes — but only partially.

Blockchain does not decide whether data is true.
It only guarantees that:

  • Once data is accepted, it cannot be altered silently
  • Everyone can agree what was recorded and when
  • There is accountability, not correctness

So blockchain protects against:

  • After-the-fact tampering
  • Forged history
  • Unauthorized rewriting

But it does not protect against:

  • Garbage being inserted honestly
  • A compromised but authorized node lying
  • Sensor spoofing
  • Subtle semantic corruption

In short:

Blockchain secures integrity of recording, not truth of content.

That’s why this patent needs AI on top of blockchain.


2. “Is multiple blockchains just redundancy?”

This is the subtle part — no, not just redundancy.

Redundancy = “same thing copied multiple times”
Multi-chain = functional separation + controlled visibility

Here’s what multiple chains actually buy you:

a) Compartmentalization

Different blockchains can hold:

  • Identity & credentials
  • Command authority
  • Telemetry logs
  • Mission state

A drone might:

  • Read chain A
  • Write to chain B
  • Never see chain C

This mirrors military compartmentalization, not backup.


b) Different consensus assumptions

One chain may:

  • Require strict consensus (slow, secure)

Another may:

  • Be local, fast, and provisional

So:

  • Tactical decisions ≠ Strategic records
  • Local survival ≠ Global audit

c) Fault and compromise isolation

If one chain is:

  • Jammed
  • Poisoned
  • Partially compromised

Other chains continue functioning.

This is blast-radius reduction, not redundancy.


d) Selective trust

Nodes don’t need to trust everything — only:

  • The chain relevant to their role
  • The tokens they must validate

This aligns with Zero Trust architecture.


3. “Does this invention involve LLMs / SLMs?”

Almost certainly: no.

And here’s why, not just “because military”.

Why LLMs are a poor fit here:

  • Non-deterministic outputs
  • Hard to certify
  • Opaque reasoning
  • Large memory footprint
  • Vulnerable to prompt manipulation
  • Latency and power cost

In safety-critical systems:

“Explainability beats eloquence.”


What kind of AI is likely used?

Much more likely:

  • Anomaly detection
  • Behavioural modeling
  • Pattern deviation analysis
  • Bayesian or probabilistic models
  • Graph-based trust scoring
  • Online / incremental learning

In other words:

ML that watches behaviour, not language.


4. “On what data is the AI trained?”

This is the heart of your question.

There are three layers of training, not one.


Layer 1: Pre-training (before deployment)

Using:

  • Simulated environments
  • Historical mission data
  • Network traffic patterns
  • Known attack scenarios
  • Red-team exercises

This gives the AI a prior, not rules.

Think:

“This is what normal often looks like.”


Layer 2: Online adaptation (during operation)

This is critical.

The AI:

  • Learns this network’s rhythm
  • Establishes local baselines
  • Tracks individual node behaviour

This is not SOP-driven.
It is self-referential:

“Is this node behaving like it usually does?”
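
As an illustration of what "learning the network's rhythm" could mean in practice (my sketch, not the patent's specification), each node can fold observations into a running baseline one sample at a time, for example with Welford's online algorithm:

```python
# Sketch of online adaptation: a per-node running mean/variance updated one
# observation at a time (Welford's algorithm), so "normal" is learned in the field.
import math

class RunningBaseline:
    def __init__(self):
        self.n = 0        # observations seen so far
        self.mean = 0.0   # running mean
        self.m2 = 0.0     # running sum of squared deviations

    def update(self, x: float) -> None:
        """Fold one new observation into the baseline."""
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def std(self) -> float:
        return math.sqrt(self.m2 / (self.n - 1)) if self.n > 1 else 0.0

baseline = RunningBaseline()
for interval in [5.1, 4.9, 5.3, 5.0, 4.8]:   # invented message intervals, in seconds
    baseline.update(interval)
print(round(baseline.mean, 2), round(baseline.std(), 2))  # this node's learned rhythm
```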


Layer 3: Cross-chain feedback

AI decisions affect:

  • Token issuance
  • Token revocation
  • Chain access

Those decisions then create new data, which the AI observes.

So the system is:

Self-observing, self-correcting


5. “But in the fog of war, there is no SOP”

Exactly.
And the patent quietly acknowledges this.

This system does not encode SOPs.
It encodes expectations of consistency.

Instead of:

“This action is forbidden.”

It asks:

“Is this action coherent with recent history, context, and role?”

That’s a fundamentally different philosophy.


How the system survives without SOPs

  • Local autonomy: nodes decide locally
  • Probabilistic trust, not binary trust
  • Graceful degradation, not failure
  • Delayed finality: decisions can be revised later
  • Human override remains possible

This mirrors how humans operate in chaos.


6. The deep insight (this is the “aha”)

This invention is not about security.

It is about:

Maintaining meaningful coordination when certainty collapses.

Blockchain provides:

  • Memory
  • Accountability

AI provides:

  • Judgment under uncertainty

Tokens provide:

  • Temporary authority

Together, they create something close to:

A distributed nervous system, not a rulebook.


If you want, next we can:

  •  Examine whether this patent is technically thin or genuinely deep
  • Map it to existing systems (swarm robotics, military C2, Zero Trust)
  • Analyze where it could fail catastrophically
  • Or reframe it philosophically (which I suspect you’ll enjoy)

 

Me:

  •  Map it to existing systems (swarm robotics, military C2, Zero Trust)

 

OpenAI:

Got it — you want that paragraph unpacked. Let’s do exactly that, clean and concrete.

“Map it to existing systems (swarm robotics, military C2, Zero Trust)”

I’ll show you where this invention already exists in spirit, and where it is trying to glue things together.


1. Swarm robotics — closest conceptual cousin

Swarm systems already assume:

  • No central controller
  • Local decision-making
  • Partial information
  • Node loss is normal

What swarms already do

  • Each agent follows simple local rules
  • Behaviour emerges statistically
  • Trust is implicit, not explicit
  • Failure is tolerated, not prevented

What this patent adds

  • Explicit trust tokens instead of implicit trust
  • Recorded history (blockchains) instead of memoryless agents
  • AI-based suspicion, not just rule-following

Think of it as:

Swarm robotics with memory, identity, and revocable authority

This is a real upgrade for armed or safety-critical swarms, where “just emergent behaviour” is not enough.


2. Military C2 (Command & Control) — philosophical alignment

Modern military doctrine already moved away from rigid command.

You’ll recognize these ideas:

  • Mission command
  • Commander’s intent
  • Decentralized execution
  • Loss of comms is expected

Traditional C2 problem

  • Central authority is fragile
  • Comms disruption breaks hierarchy
  • Authentication assumes connectivity
  • Insider compromise is catastrophic

What this invention mirrors

  • Authority floats, not fixed
  • Tokens act like temporary orders
  • AI replaces rigid SOP enforcement
  • Multiple chains reflect command layers

Strategic command ≠ tactical command ≠ execution
Each lives on a different trust plane.

👉 This system is essentially:

Digitized mission command under adversarial conditions


3. Zero Trust Architecture — the security backbone

Zero Trust says:

“Never trust, always verify.”

But in practice, Zero Trust is:

  • Static
  • Network-centric
  • Cloud-oriented
  • Policy-heavy

Where Zero Trust breaks down

  • Mobile nodes
  • Intermittent connectivity
  • Autonomous agents
  • Adversarial environments

What this patent extends

  • Trust becomes behavioural, not just credential-based
  • Verification is continuous
  • Authorization is temporary and revocable
  • Identity ≠ permission

Tokens are:

  • Short-lived
  • Contextual
  • Behaviour-dependent

👉 This is:

Zero Trust adapted for chaos, not datacenters


4. How the three merge in this invention

  • Swarm robotics contributes decentralization and resilience
  • Military C2 contributes authority without micromanagement
  • Zero Trust contributes continuous verification

What the patent tries to do is fuse them:

  • Swarm-like autonomy
  • C2-like intent and hierarchy
  • Zero-Trust-like paranoia

All without assuming:

  • Stable networks
  • Central control
  • Predefined SOPs

5. The quiet but important implication

This system treats trust as a physical quantity:

  • It flows
  • It decays
  • It can be rerouted
  • It can be cut off locally

That’s very different from traditional cybersecurity, which treats trust as a binary state.


6. A reality check (important)

Nothing here is individually new.

What is new is the composition:

  • Blockchain for memory
  • AI for judgment
  • Tokens for authority
  • Multi-chain for compartmentalization

Whether the patent is deep or shallow depends on:

  • How precisely the AI is defined
  • Whether consensus mechanisms are realistic
  • How failure modes are handled

That’s where patents often get hand-wavy.


If you want, next we can:

  • Stress-test this architecture (“what fails first?”)
  • Judge patent novelty vs buzzword stacking
  • Map it to Bayesian thinking under uncertainty (ties nicely to fog of war)
  • Or reduce it to a single governing principle

 

Me:

  • Map it to Bayesian thinking under uncertainty (ties nicely to fog of war)

 

OpenAI:

Perfect. This is where everything you’ve been circling around clicks into one coherent picture.

Let’s map this invention explicitly to Bayesian thinking under uncertainty, and then back to the fog of war. No jargon for its own sake — just the underlying logic.


1. Fog of war = Bayesian world, not rule-based world

In the fog of war:

  • Information is incomplete
  • Information is delayed
  • Information may be false
  • Sources may be compromised
  • There is no stable SOP

This is exactly the world Bayesian reasoning was made for.

Bayesian thinking does not ask:

“Is this true or false?”

It asks:

“Given what I knew before, how much should I now believe this?”

That’s the mindset this invention quietly assumes.


2. Trust as a probability, not a label

Traditional security:

  • Trusted / Untrusted
  • Authorized / Unauthorized

Bayesian view:

  • Trust = degree of belief

So instead of:

“Node X is authorized”

The system internally holds something like:

“P(X is behaving as expected | recent evidence) = 0.93”

This probability:

  • Decays over time
  • Jumps with new evidence
  • Is context-sensitive

That’s not marketing — it’s mathematically inevitable in uncertain environments.
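
A minimal numerical sketch of that view (my own illustration; the likelihoods below and the 0.93 above are invented) treats trust as a probability that is updated by Bayes' rule on each piece of evidence and drifts back toward the prior when nothing is observed:

```python
# Sketch of trust as a degree of belief: Bayes updates on evidence, slow decay without it.
# All probabilities and likelihoods are invented for illustration.

def bayes_update(p_normal: float, lik_if_normal: float, lik_if_compromised: float) -> float:
    """P(node is normal | evidence) for two competing hypotheses."""
    num = lik_if_normal * p_normal
    return num / (num + lik_if_compromised * (1.0 - p_normal))

def decay(p_normal: float, prior: float = 0.5, rate: float = 0.02) -> float:
    """With no fresh evidence, belief drifts slowly back toward the prior."""
    return p_normal + rate * (prior - p_normal)

trust = 0.90
trust = bayes_update(trust, lik_if_normal=0.7, lik_if_compromised=0.1)   # behaviour as expected
print(round(trust, 3))                                                   # belief rises
trust = bayes_update(trust, lik_if_normal=0.02, lik_if_compromised=0.7)  # a timing anomaly
print(round(trust, 3))                                                   # belief falls
trust = decay(trust)                                                     # quiet period
print(round(trust, 3))
```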


3. What is the “prior” in this system?

A Bayesian system always starts with a prior belief.

Here, the prior comes from:

  • Pre-deployment training
  • Simulated missions
  • Historical behaviour patterns
  • Role expectations (drone ≠ human ≠ satellite)

Example:

“A reconnaissance drone usually sends small packets every 5 seconds.”

That’s a prior — not a rule.


4. Evidence = behaviour, not declarations

In fog-of-war conditions, claims are cheap.

So the AI doesn’t primarily listen to:

  • Identity claims
  • Credentials
  • Certificates

It listens to:

  • Timing
  • Frequency
  • Correlation
  • Spatial consistency
  • Network graph position

Each observation nudges belief up or down.

This is textbook Bayesian updating.


5. Tokens = posterior beliefs made operational

Here’s the elegant part.

The system does not store probabilities everywhere.
That would be fragile.

Instead:

  • AI computes posterior belief internally
  • The outcome is materialized as a token

Token properties encode belief:

  • Scope
  • Duration
  • Power
  • Revocability

So a token is:

“This is what we currently believe you’re allowed to do.”

That’s Bayesian reasoning turned into engineering.
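
Continuing the same toy model (my own mapping; the thresholds, scopes and lifetimes are invented), the posterior belief can be materialized as token properties, with higher belief buying a broader scope and a longer lifetime:

```python
# Toy mapping from posterior belief to token properties.
# Thresholds, scopes and lifetimes are invented for illustration.

def issue_token(holder_id: str, belief: float) -> dict | None:
    """Turn the current degree of belief into a scoped, short-lived permission."""
    if belief < 0.5:
        return None                                    # too suspicious: no token at all
    if belief < 0.8:
        scope, ttl = {"send_telemetry"}, 60            # narrow scope, short lifetime
    else:
        scope, ttl = {"send_telemetry", "relay_messages"}, 300
    return {"holder": holder_id, "scope": scope, "ttl_seconds": ttl}

print(issue_token("drone-17", belief=0.93))   # broad scope, longer lifetime
print(issue_token("drone-17", belief=0.62))   # narrow scope, short lifetime
print(issue_token("drone-17", belief=0.31))   # None: belief too low right now
```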


6. Multiple blockchains = conditional independence

Bayesian systems love conditional independence.
It prevents cascading failure.

Multiple chains mean:

  • Evidence from one domain doesn’t contaminate all others
  • Beliefs can diverge safely
  • Later reconciliation is possible

Example:

  • Tactical chain says: “Probably OK”
  • Strategic chain says: “Still uncertain”

Both can be true — simultaneously.

This is very Bayesian and very non-classical.


7. What happens when communication breaks?

Bayesian answer:

“Freeze priors and continue local updating.”

Each node:

  • Carries its last belief state
  • Updates locally
  • Avoids global conclusions
  • Defers irreversible decisions

When comms resume:

  • Beliefs are merged
  • Not overwritten

That is belief fusion, not synchronization.
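
One simple way to picture belief fusion (only a sketch; many schemes exist and the patent wording quoted here does not name one) is to pool the two sides' beliefs in log-odds, weighted by how much evidence each side accumulated, instead of letting one copy overwrite the other:

```python
# Sketch of belief fusion after a partition heals: pool log-odds weighted by how
# much evidence each side saw, rather than overwriting one side's view.
import math

def logit(p: float) -> float:
    return math.log(p / (1.0 - p))

def fuse(p_a: float, n_a: int, p_b: float, n_b: int) -> float:
    """Weighted log-odds pooling of two beliefs about the same node."""
    w_a = n_a / (n_a + n_b)
    pooled = w_a * logit(p_a) + (1.0 - w_a) * logit(p_b)
    return 1.0 / (1.0 + math.exp(-pooled))

# Side A watched the node closely (40 observations); side B barely saw it (5).
print(round(fuse(p_a=0.95, n_a=40, p_b=0.60, n_b=5), 3))  # fused belief leans toward A
```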


8. No SOP? That’s fine.

SOPs are deterministic.
Bayesian systems are not.

Instead of:

“If A happens, do B”

You get:

“Given A, B is now more likely than C”

Humans fight wars this way intuitively.
This system tries to formalize that intuition.


9. Why LLMs don’t belong here (now it’s obvious)

LLMs:

  • Don’t maintain calibrated uncertainty
  • Don’t update beliefs cleanly
  • Don’t expose probabilistic confidence reliably
  • Are narrative machines, not belief machines

This system needs:

  • Numbers, not words
  • Confidence bounds, not explanations
  • Stability under partial evidence

That’s why this is Bayesian ML, not language modeling.


10. The single governing principle (the core insight)

If I had to compress the invention into one sentence:

Authority flows from continuously updated belief, not from static rules.

Blockchain remembers
AI believes
Tokens act

That triangle is the architecture.


xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx


1 comment:

Anonymous said...

What insightful questions that clarify the entire concept. A great learning post. Thanks a lot. 👍🙏
