The Dead Internet

If you spend enough time on TikTok—or wander through the comment sections of Instagram—you start to sense it. That subtle, unsettling feeling that you aren’t interacting with people anymore.

You’re watching the “Dead Internet Theory” unfold in real time. Not long ago, spamming the internet required effort. Running a scam meant writing the emails yourself. Catfishing meant manually stealing photos and building a persona. But generative AI has driven the cost of creating fake digital realities effectively to zero.

Now we’re entering an era of infinite content, infinite noise, and infinite impersonation. AI agents are becoming autonomous. Soon they won’t just generate text—they’ll place trades, send emails, and call your bank using a perfect clone of your voice. A fundamental question arises: How do you know anyone—or anything—is real? How do you verify a person? A message? A video?

The Failure of Web2 Security

Most of today’s cybersecurity is built on a perimeter defense mindset. We draw a line around “inside” and “outside” and then try to keep the bad stuff out with firewalls, passwords, and biometrics (face ID, fingerprints, etc.). If something gets through those checks and lands “inside” the system, it’s basically treated as trusted.

In a world where AI can convincingly imitate humans, this approach collapses. Information becomes cheap to steal, credentials become easy to spoof, and identity becomes trivial to fake. The very concept of “inside the perimeter” stops making sense when adversaries can manufacture the appearance of legitimacy at will.

At the same time, we’ve outsourced another layer of “security” to platforms themselves. We implicitly expect companies like Google, Meta, and others to filter spam, identify bots, downrank misinformation, and surface “trustworthy” content.

But these same platforms are also deploying and training the large-scale AI systems that generate an ever-growing share of the content we see. They are both the gatekeepers and the amplifiers of the noise.

The Byzantine Generals Problem

To see what’s going wrong at a deeper level, it’s useful to revisit a classic concept from distributed systems: the Byzantine Generals Problem. The setup:

  • Several generals surround a city.
  • They must attack at the same time to win.
  • If they attack at different times, they lose.

The only way they can coordinate is by sending messages through messengers. But there’s a twist:

  • Some generals are traitors.
  • Those traitors can send conflicting or false messages.
  • The loyal generals don’t know who to trust.

The core question:

How can a group of participants reach reliable agreement (consensus) when some participants—or messages—may be malicious or faulty?

For decades, this was framed as a theoretical problem for distributed computers. Now it describes our online social reality almost too well. The modern internet is converging toward a truly Byzantine environment:

  • You don’t know which accounts are real people and which are AI agents.
  • You don’t know which videos are raw footage and which are deepfakes.
  • You don’t know which comments are sincere and which are influence ops.

We are effectively “surrounded” by millions of potential traitors: autonomous agents and synthetic personas that can inject arbitrary messages into the network. Their goal might be profit, propaganda, or just noise—but the effect is the same: They make it harder and harder for honest participants to agree on what is true.

The core questions become:

  • How do we verify the origin of a message, file, or transaction?
  • How do we agree on a shared state of reality in a network where anyone can lie, at scale, for free?

This is exactly the class of problem the Byzantine Generals thought experiment was invented to describe.
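
To make that concrete, here is a toy, single-round simulation in Python (a sketch only, not a real Byzantine fault-tolerant protocol): loyal generals broadcast the true plan, traitors say whatever they like, and each loyal general takes a simple majority vote over what it heard. The numbers are illustrative.

```python
import random

def run_round(n_loyal, n_traitors, true_plan="attack"):
    """One naive round: loyal generals broadcast the true plan, traitors
    send arbitrary messages, and each loyal general decides by majority
    vote over everything it heard."""
    decisions = []
    for _ in range(n_loyal):
        heard = [true_plan] * (n_loyal - 1)             # honest messages from the other loyal generals
        heard += [random.choice(["attack", "retreat"])  # traitors can tell each
                  for _ in range(n_traitors)]           # receiver something different
        decisions.append(max(set(heard), key=heard.count))
    return decisions

random.seed(7)
print(run_round(n_loyal=7, n_traitors=2))  # few traitors: every loyal general still picks "attack"
print(run_round(n_loyal=3, n_traitors=4))  # traitors outnumber honest voices: agreement can break down
```

Real Byzantine fault-tolerant protocols add structure (multiple rounds, signed messages, quorum thresholds) precisely because naive voting stops working once the liars are numerous enough.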

Why Digital Assets Are Essential

Bitcoin is often framed as just “internet money,” but from a computer science perspective, it was something more fundamental:

Bitcoin was the first practical, decentralized solution to Byzantine fault-tolerant consensus at global scale.

In plain language:

  • You have a network of nodes that don’t trust each other.
  • Some of them might be malicious.
  • Despite that, they still manage to agree on a single shared ledger (who owns what and when), and that ledger is extremely hard to forge or rewrite.

That’s the key: blockchains are not mainly interesting because they let you trade tokens. They’re interesting because they offer:

  • Global, append-only records that are hard to tamper with.
  • Transparent, verifiable state in an untrusted environment.
  • Consensus among strangers who don’t have to trust a central referee.
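
To make “append-only and hard to tamper with” concrete, here is a minimal hash-chain sketch in Python. It shows only the tamper-evidence property; real blockchains add signatures, consensus, and economic incentives on top, and the records below are made up.

```python
import hashlib
import json

def block_hash(prev_hash, record):
    """Hash the record together with the previous block's hash."""
    payload = json.dumps({"prev": prev_hash, "record": record}, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append(chain, record):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"prev": prev_hash, "record": record,
                  "hash": block_hash(prev_hash, record)})

def verify(chain):
    """Recompute every link; editing any earlier block breaks the chain."""
    prev_hash = "0" * 64
    for block in chain:
        if block["prev"] != prev_hash or block["hash"] != block_hash(block["prev"], block["record"]):
            return False
        prev_hash = block["hash"]
    return True

chain = []
append(chain, {"from": "alice", "to": "bob", "amount": 5})
append(chain, {"from": "bob", "to": "carol", "amount": 2})
print(verify(chain))                  # True
chain[0]["record"]["amount"] = 500    # quietly rewrite history
print(verify(chain))                  # False: the tampering is detectable
```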

Once you view the internet as a Byzantine environment saturated with AI agents, blockchain stops being about “speculative assets” and becomes a mechanism for building islands of verifiable truth inside a sea of synthetic noise.

This reframing also changes how we think about “crypto” and “digital assets” more broadly. Most people open a crypto chart, see dog coins and rug pulls, and conclude the whole thing is a casino. And to be fair, a lot of it is. But that’s missing the architectural role the underlying tech can play. At its core, a blockchain is a public machine for recording and verifying state in a world where participants can’t be trusted. That lets it power three critical primitives for an AI-saturated internet.

1. Digital Provenance (NFTs)

Forget the NFT art bubble for a second. The powerful idea underneath is provable provenance—verifying who issued what, and when. Example use case: A news organization publishes a video. They cryptographically sign a hash of that video with their private key and record the hash and signature on a blockchain.

Later, anyone can check: Does this video’s fingerprint match what’s on-chain? Is it signed by the outlet’s verified public key? If yes, you know it’s the original. If no, you’re likely looking at a deepfake, an edit, or a forgery.
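
A minimal sketch of that sign-and-verify flow, assuming Python’s cryptography package; the video bytes are placeholders, and the step of actually publishing the hash and signature on-chain is omitted:

```python
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The outlet's key pair. In practice the public key is well known and the
# signed fingerprint is what gets recorded on-chain.
outlet_key = Ed25519PrivateKey.generate()
outlet_pub = outlet_key.public_key()

def fingerprint(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

original_video = b"...raw video bytes..."          # placeholder content
signature = outlet_key.sign(fingerprint(original_video))

def is_original(data: bytes) -> bool:
    """Check a candidate file against the outlet's published signature."""
    try:
        outlet_pub.verify(signature, fingerprint(data))
        return True
    except InvalidSignature:
        return False

print(is_original(original_video))                 # True
print(is_original(b"...edited or deepfaked..."))   # False
```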

NFTs, in this framing, are less about flexing ownership of a JPEG and more about: “This specific file came from this specific identity at this specific time, and that fact is globally verifiable.”

2. Cryptographic IDs (Wallets)

In a world where AI can imitate almost everything about you—your writing style, your voice, your face—your only unforgeable identity is the one secured by cryptography.

A public-private key pair (your wallet) provides:

  • A public key: visible to the world, tied to a reputation, organization, or persona.
  • A private key: known only to you (ideally), used to sign messages and transactions.

An AI can:

  • Fake your voice
  • Deepfake your face
  • Mimic your text patterns

But without your private key, it cannot produce valid signatures as you. Breaking that would require astronomically large compute, not just clever prompt engineering. So a wallet becomes less “a place to store coins” and more a root identity primitive—a way to prove “this action came from the same entity as all these past actions,” even if you never reveal who that entity is in the real world.
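
A minimal sketch of that root-identity idea, assuming the eth-account Python package; the challenge string is a made-up nonce that a service might issue:

```python
from eth_account import Account
from eth_account.messages import encode_defunct

# The "wallet" is just a key pair; the address is derived from the public key.
agent = Account.create()

# A service issues a one-time challenge; signing it proves control of the key.
challenge = encode_defunct(text="login-nonce-2f8a")   # hypothetical nonce
signed = Account.sign_message(challenge, private_key=agent.key)

# Anyone can recover the signing address from the message and signature alone.
recovered = Account.recover_message(challenge, signature=signed.signature)
print(recovered == agent.address)   # True: same entity that controls the wallet

# An impostor with a different key recovers to a different address,
# no matter how well it mimics the persona in text, voice, or video.
impostor = Account.create()
forged = Account.sign_message(challenge, private_key=impostor.key)
print(Account.recover_message(challenge, signature=forged.signature) == agent.address)  # False
```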

3. A Payment Rail for Agents

As AI agents grow more autonomous, they won’t just generate content; they’ll act: spin up servers, call APIs, buy compute and storage, pay other services or agents, and so on.

But an AI agent cannot show up at a bank branch. Traditional finance rails are built around human legal identities and slow, centralized compliance checks. By contrast, a non-human entity can control a wallet; it can pay for services with stablecoins or other cryptoassets; it can enter into on-chain contracts (smart contracts) that are enforceable by the network itself.
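
As a rough illustration of what an agent-initiated stablecoin payment could look like, here is a sketch using web3.py (v6-style API). The RPC endpoint, token address, recipient, and private key below are all placeholders, and a production agent would need real key management, gas strategy, and error handling:

```python
from web3 import Web3

# Placeholders: a real agent would load these from secure configuration.
RPC_URL = "https://rpc.example.org"                        # hypothetical JSON-RPC endpoint
TOKEN = "0x0000000000000000000000000000000000000000"      # stablecoin contract (placeholder)
RECIPIENT = "0x0000000000000000000000000000000000000001"  # payee (placeholder)
AGENT_KEY = "0x" + "11" * 32                               # the agent's private key (placeholder)

# Minimal ERC-20 ABI: only the transfer function is needed here.
ERC20_ABI = [{
    "name": "transfer", "type": "function", "stateMutability": "nonpayable",
    "inputs": [{"name": "to", "type": "address"}, {"name": "value", "type": "uint256"}],
    "outputs": [{"name": "", "type": "bool"}],
}]

w3 = Web3(Web3.HTTPProvider(RPC_URL))
agent = w3.eth.account.from_key(AGENT_KEY)
token = w3.eth.contract(address=Web3.to_checksum_address(TOKEN), abi=ERC20_ABI)

# Build, sign, and broadcast a transfer of 10 tokens (assuming 6 decimals).
tx = token.functions.transfer(Web3.to_checksum_address(RECIPIENT), 10 * 10**6).build_transaction({
    "from": agent.address,
    "nonce": w3.eth.get_transaction_count(agent.address),
})
signed = agent.sign_transaction(tx)
tx_hash = w3.eth.send_raw_transaction(signed.rawTransaction)   # .raw_transaction in web3.py v7
print(tx_hash.hex())
```

Nothing in that flow asks whether the key holder is a human: the network only checks that the signature and the balance are valid.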

If we’re heading toward an “autonomous economy” where agents transact with each other and with humans, then crypto rails are the native payment system for that world.

A Flight to Trust

If you buy this story, the next crypto cycle isn’t just about:

  • higher throughput
  • cheaper gas
  • a new round of meme coins

It becomes a flight to quality in trust infrastructure. We’re likely moving toward a bifurcated internet:

The Open Web

  1. Mostly free to access.
  2. Flooded with AI-generated sludge.
  3. Bots arguing with bots.
  4. Deepfakes and synthetic personas everywhere.
  5. Useful, but noisy and adversarial by default.

The Verified Web

  1. A more gated layer on top.
  2. Content, identities, and transactions are cryptographically signed.
  3. Provenance and authenticity are checkable, not just asserted.
  4. Access is mediated by wallets, keys, and on-chain attestations.

In that world, blockchains and the digital assets that live on them are not just speculative instruments. They are the base layer security architecture for a post-truth, AI-saturated internet. Not a magic fix, not a perfect system—but one of the only tools we have that is designed from the ground up to coordinate untrusted participants under Byzantine conditions.

Disclaimer

The information provided on TheLogbook (the “Substack”) is strictly for informational and educational purposes only and should not be considered as investment or financial advice. The author is not a licensed financial advisor or tax professional and is not offering any professional services through this Substack. Investing in financial markets involves substantial risk, including possible loss of principal. Past performance is not indicative of future results. The author makes no representations or warranties about the completeness, accuracy, reliability, suitability, or availability of the information provided.

This Substack may contain links to external websites not affiliated with the author, and the accuracy of information on these sites is not guaranteed. Nothing contained in this Substack constitutes a solicitation, recommendation, endorsement, or offer to buy or sell any securities or other financial instruments. Always seek the advice of a qualified financial advisor before making any investment decisions.