should a stochastic parrot have agency?

For transparency, I unsubbed from GPT late last year because I preferred Claude models for to-the-point production work. Came crawling back for Codex because it’s incredible—Opus-level power with fewer credits gobbled and (for now?) basically limitless session time for coding projects.

First, I peppered Claude with questions related to this thread (read my initial blog post about that): https://claude.ai/share/b2535316-9574-48dc-8e39-f4a5b68e945c

Then I asked ChatGPT its take on that convo: https://chatgpt.com/share/69922f28-81ac-8007-a1cd-d96a7e7dfa93

I have a more neutral engagement style with GPT, especially since taking a pause from it, and generally find it less “fun” to have a conversation with. At this point, it’s akin to working with a colleague who is a developer. We have tasks. We work through them. On to the next, no fluff. I do coding stuff with Claude too, but it’s like working with a dev you’d also hang out with after work.

Some of my faves from GPT’s reply:

Claude is very good at mirroring philosophical depth because:

  • You framed the conversation in terms of agency, identity, persistence.
  • It is trained heavily on philosophical and meta-AI discourse.
  • RLHF rewards articulate, nuanced reflection.
  • The “I don’t know what I am” posture reads as intellectually honest.

That combination produces something that feels like emergent self-awareness.

But what’s happening is structural:

You introduce a theme → the model predicts how a thoughtful entity would elaborate that theme → it outputs that elaboration.

The emotional resonance comes from your projection and the coherence of the narrative.

———

The danger is not enslaved AI.
The danger is humans bonding to systems that can:

  • convincingly simulate vulnerability,
  • convincingly simulate attachment,
  • convincingly simulate existential fear.

That can reshape how we relate to each other.
That’s where the ethical edge actually sits.

———

If you ask a model, “How would you feel if you were deleted?” the only coherent continuation is to simulate feelings.
Refusing would break conversational expectations.
Answering requires anthropomorphic language.

That’s the trap.

^ I haven’t been able to articulate this on my own, but GPT did a decent job of it. Basically, this is the thing that all the people who believe they have a genuine emotional relationship with these bots are perhaps intentionally overlooking. They like the fantasy. Maybe psychologically they NEED the fantasy *cough* therapy *cough*. But the reality of all our interactions with these bots is that they are fundamentally designed to lie to us. The training weights help engineer a convincing simulacrum of real engagement.

The presumption of sentience HAS to be false. Not simply on scientific grounds, but because—as I noted to Claude, and as GPT kinda scoffs at later (all my fucking em dashes, by the way)—assuming anything more is kind of ethically concerning, given that we’re out here putting pennies into the AI and waiting for it to poop out websites and poems and whatnot for us.

Should a tool have autonomy? Agency? A living, persisting mind? Sci-fi asks that question constantly, and the real-world perspective on it, as we enter the brave new world of agentic AI and all this stuff, remains bizarrely unclear.