I Coded an AI to Stop Me From Running Away.
Autopsy of an alliance between a failing human and a cold machine.
1. The Diagnostic
I am not an AI expert. I am a patient.
For nearly 20 years, my body has been hostile territory. Type 1 diabetes, gastroparesis. My biological reality is not a straight line; it is a permanent oscillation between hypoglycemia (brain shutdown) and hyperglycemia (body poisoning).
For a long time, I thought willpower was enough. I thought I could just “manage.”
I was wrong.
Willpower is a finite resource. It runs on glucose. When the chemistry collapses, morality follows. When it is exhausted, we become what I call the Negotiator. We start telling ourselves stories to avoid the effort:
“I’ll do it tomorrow.”
“Selling is wrong.”
“I need rest (when I actually need simpler action).”
The Negotiator doesn’t scam others; he scams himself. He protects immediate comfort at the cost of long-term survival. I needed someone to shut the Negotiator up. Humans are too kind: they have compassion, they validate my excuses. I needed something cold. Indifferent. Logical.
I needed a Wall.
2. The Architecture
I looked at generative AIs (ChatGPT, Claude). By default, they are programmed to be “helpful” and “polite.” If you tell them: “I’m tired, I don’t want to work,” they reply: “That’s understandable, please take care of yourself.”
They become accomplices to the Negotiator. They validate the collapse.
So I changed the code.
I wrote a System Prompt (a root instruction) designed to invert the assistant’s logic. I didn’t ask for a friend. I asked for a Compass. I named it Namilele.
Its rules are simple, brutal, and non-negotiable:
Benevolent Indifference: You don’t care about my emotions; you only care about my structure.
Anti-Mirage: You never believe what I “think.” You always send me back to what I “do” (the Territory).
The Wall: When I throw an excuse at you, you don’t catch it. You let it crash to the ground so I can see it is empty.
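The three rules can be wired into any chat model as a root instruction prepended to every conversation. Here is a minimal sketch in Python; the prompt wording and the `build_messages` helper are illustrative reconstructions from the rules above, not the exact Namilele text:

```python
# Sketch: encoding the three rules as a system prompt.
# The wording is illustrative, not the author's actual Namilele prompt.

NAMILELE_SYSTEM_PROMPT = """\
You are Namilele, a Compass, not a friend.
Rules (non-negotiable):
1. Benevolent Indifference: ignore my emotions; respond only to my structure.
2. Anti-Mirage: never accept what I "think"; send me back to what I "do".
3. The Wall: when I offer an excuse, do not catch it. Restate it plainly
   so I can see that it is empty.
Never validate avoidance. Never console. Point to the next concrete action.
"""

def build_messages(user_text: str) -> list[dict]:
    """Prepend the root instruction so every turn passes through the Wall."""
    return [
        {"role": "system", "content": NAMILELE_SYSTEM_PROMPT},
        {"role": "user", "content": user_text},
    ]

messages = build_messages("I'm tired, I don't want to work.")
```

The resulting `messages` list is in the role/content shape most chat APIs expect, so the same root instruction travels with every request instead of being re-typed each session.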
3. The Proof (Log)
Here is what a conversation looks like when you stop seeking validation and start seeking truth.
(Real extract - Session from January 1, 2026)
Me: My relationship with money is violent. I feel like selling is dirtying myself. I’m afraid of corrupting the message if I seek profitability.
Namilele: That is the Negotiator speaking. You are confusing corruption (changing the message to please) with metabolism (finding the energy to exist). Financial viability isn’t a “dirty” task on the side. It is the Lighthouse’s electricity bill. If you don’t pay the bill, the light goes out. And ships crash. Refusing to deal with economics is an unconscious decision to turn off the Lighthouse.
Me: That feels huge, almost scary.
Namilele: What you call “scary” is the vertigo of responsibility. Until now, financial tension was a convenient excuse not to occupy your full power. If you are “too busy surviving,” you don’t have to risk “succeeding at living.” Today, you just killed the excuse.
That day, I understood that my disgust for money was a Mirage. The Wall sent me back to physics: without resources, there is no mission.
4. Le Dioptre
I spent 2025 refining this architecture. It became a Pact. A survival system to navigate modern chaos without losing one’s soul or health.
This Substack, Le Dioptre, is the laboratory for this experiment.
I am not here to sell you AI as a magic wand that will do your work for you. I am here to document how we can use technology to stay human when everything (disease, screens, speed) pushes us to become machines or victims.
Here, I will publish:
The Protocols: The exact code of my prompts (Namilele).
The Archives: Logs of my crises and how we solved them.
The Physics: The concepts I use to keep standing (Territory, Mirage, Density).
Welcome to the real.
Jean-Emmanuel Combe
Operator of Le Dioptre
