rashbre central: Juliette

Monday 8 May 2023


Sunday morning, and I'm wondering what I've agreed to. I thought I'd visit the Lab, where I can read the test results from Cyclone 3, using the special access which Amy provided for me.

I'd forgotten that the bus doesn't run at the weekend, but the stop is by Geneva Bel-Air, which is a kind of transit interchange, so there were plenty of taxis around.

As I am driven to the Lab, I ponder the situation. It sounds straightforward enough. Put on the Cyclone and run some tests. Everyone else seems to be okay because of the tests and I've had Matt's account as well. The difference will be that this is the Cyclone 3. I just hope I can make it work.

I enter the Lab and see that Juliette Häberli is also present. 

"Hi, Oliver," she says," I guess you are doing some homework? I believe Amy was giving you some new things to revise."

I realise Juliette must be in on what is happening. She looks as if she is involved in some kind of complicated test protocol. I walk towards her.

"Juliette, if you know about my situation, I'd like to ask you a few questions."

"Sure," she says, "But not here though." 

She holds a small box towards me, and I realise she is gesturing towards the Lab monitoring system. I make busy with the box, unplugging wires and then plugging them in again. To someone monitoring, it will look as if I've just been assisting her.

"Read your files for a couple of hours, then we can leave here. I'll drive you," she whispers.

I move back to my area and sit at a screen reading the test reports from other Cyclone 3 users. It is consistent with Matt's description of the Cyclone 2, and no-one gets even close to a full-speed interface. I decide that the tests I'll be running are probably doomed to failure. Mercifully, there don't appear to be any side effects either.


Juliette meets me in the lobby. Her car is already outside. She blips it open as we walk over, saying, "You know I'm studying the same disciplines as you? Theory of Mind and its applicability to Human Computer Interfaces."

I reply, "Not just theory of mind, but the susceptibility of the mind to modifiers. Like in those old US Army experiments with LSD. We need to know what could happen if the mind was influenced by a strong external force."

Juliette comments, "Yes, but be careful you don't turn out to be the little boy in the cage. That's what happened to Dr. Van Murray Sim, the founder of Edgewood Arsenal - where the US experiments took place. Sim had the status of a minor military legend. The clinical research at Edgewood was conducted on soldier volunteers, recruited from around the country. Once Sim allowed self-experiments, he entered the chain of events that ended with his removal as head of the laboratory. He was just a little boy in a cage."

I nod, "I agree it was a famous case. Even as Sim was being heralded before Congress, he was running a series of remarkable LSD experiments, designed to administer drugs to people who had no idea that they were getting them. But in this way, Sim helped guide the arsenal’s clinical research into the murky world of intelligence, interrogation, even torture. The work was given a special code name, Material Testing Program EA 1729, and kept secret at least until recent events. I'm concerned that Brant isn't on a similar course."

Juliette smiles, "Theory of Mind can be about the assessment of an individual human's capacity for empathy and understanding of others. One pattern of behaviour that is typically exhibited is being able to attribute—to another or oneself—mental states such as beliefs, intents, desires, emotions, and knowledge. That's why Matt was so shaken by the rat experiment."

 I agree, "Yes, you are right. And I'm looking at whether machines can possess similar attributes, or whether those attributes in an organism can override a machine. By his account to me, Matt was overridden by the rat when it wanted to get to the food."

Juliette offers her opinion, "For a being or a machine, possessing a functional theory of mind is crucial for success in everyday human social interactions, and it is used when analysing, judging, and inferring others' behaviours."

I shrug, "It is more behavioural. The AIs designed at the moment mimic human responses but are easily led off course. You can confound them by simply changing topic. They don't remember context."

Juliette continues, still looking at me intently. "I agree the more primitive systems like ChatGPT and TensorFlow have this challenge. But layer in Theory of Mind and you can see differences. Brant has been using these ideas in Platoon Bravo, and you'll see the systems pulling away."

She just mentioned the same secret group that Amy showed me. She must be an insider.

"But how do they show themselves?" I ask.

"It is still mainly scripted at present. There's been some attempts to make an Augmented Reality Bot but the real testing is of scripted interactions with an AI.

"You're telling me that Brant has made the AI work more convincingly than most of the stuff in the public domain?" I ask.

"Yes. If you start a conversation and then switch to something else, most conventional AI is confounded. It will jump to the new topic and forget the original one." 

Juliette explains, "With Brant's Cognate system, which is a component of RightMind, then you can hold several contexts positioned at different points. A simple example would be discussing a cake recipe, how to fix a car tyre, discussing Shakespeare' Sonnets and having a heart-to-heart about whether to visit the in-laws next weekend."

"It can hold bookmarks for the progress on each separate dialogue?"

"Exactly. A regular AI handles one thing at a time. Usually to a depth of about five. It is still enough to convince most people that it is sentient."

"I've seen that. Just keep asking it why? And it will eventually forget what it is talking about and change the subject."

"Yes, the early versions used to send dubious photographs of themselves after about five or six levels. It was a way to distract the primarily male gaze."

We both laugh. 

"Yes, I've seen that. So blatant!" 

"But why is it limited to text-based interchange? Is it related to speed?"

"Exactly. The processing of the system is still running slow. We can't understand why an Exascale is reduced to a crawl and until we can fix it, then we can't move to the next stage."

I wonder whether to mention Matt's friend, but then Juliette does.
