
Sunday, 8 March 2026

It's all too much : Chapter 1 of An Unstable System, Ed Adams


Quite a year. The bust-up with Heather; drive-by recruitment by tanned handshaker Bob and his fragrant accomplice, Jasmine Summers. Geneva. You couldn’t make it up.

And they still didn’t know about my cyber-mining device.


Heather used to say I had a way of standing slightly outside things. As if conversations were simulations running at half speed. I thought she meant I was calm.

It turns out she meant something else.


I never discussed the device with anyone. Not even after the British Secret Service came calling. Not because it was classified. Because it felt provisional — the kind of thing you don’t explain until you know whether it’s going to work.


Or until you know whether it matters.


At the time, I was interested in optimisation. Not philosophically — mechanically. Shaving latency. Reducing friction. Improving throughput. I liked systems that responded cleanly to input. I liked predictable outcomes.


People weren’t like that.


I read obsessively. Forums, white papers, edge-case experiments. There was a lot of talk about efficiency. Alignment. Untapped capacity inside systems everyone assumed were mature.


What nobody mentioned was instability.


Most of my experimentation was external. Hardware. Code. Power draw. Cooling curves. Noise envelopes. Measurable variables.


Some of it wasn’t.


There was growing research around attention and performance — smoothing cognitive load, trimming hesitation, extending focus without fatigue. Nothing dramatic. Marginal gains.


I tried a few.


Heather said I seemed sharper. More focused. Less distracted. She never clarified whether that was praise.


I wasn’t chasing transcendence. I wasn’t interested in insight. I wanted to know whether thought itself had bottlenecks — pressure points where capacity could be expanded without compromising stability.


Stability mattered to me.


I didn’t talk about any of it. Partly because it sounded odd. Partly because it didn’t feel important yet.


It worked well enough that I stopped questioning it.


That’s usually when things begin.


Around then I started reading about control systems. I came across an old documentary from 1958 on neural stimulation in animals: ‘New Frontiers of the Brain’. Cats, mostly. Grainy footage. Electrodes. Behavioural reinforcement. It wasn’t especially clever, and it wasn’t pleasant to watch.


Rats were more interesting. More adaptable, more scalable. There was a substantial literature on remote control experiments — implanted electrodes, reward centres, directional stimuli. The basic mechanism was simple: signal left or right, reward compliance, repeat. The animal learns the path. Or something close enough to it.


The justifications were always practical. Search and rescue. Disaster zones. Navigation in environments too dangerous for humans. The language was careful. The outcomes were measurable. The animals wore small backpacks — circuit boards, transmitters, batteries — scaled down until they looked almost elegant. The technology wasn’t exotic. Surface-mount components. Off-the-shelf parts. The sort of thing anyone competent could assemble.


What struck me was the indifference to cruelty — and how little intelligence the system assumed was required. Most of the work was done by incentives and constraints. The rat retained a degree of agency, but only within parameters that narrowed almost imperceptibly.


The papers were always optimistic. Ethics were mentioned briefly, then set aside. Progress was incremental. Range improved. Latency dropped. Payloads got lighter. It was presented as a solved problem. I remember thinking that this was as far as the idea would go. Rats with backpacks. An endpoint. A curiosity.


I was wrong. It wasn’t.
