
Protocol

It was general protocol for any manned spacecraft's systems to be partially operated by automated programs, as many situations were best responded to in rote fashion, requiring no real attention, while others demanded more attention than living humans could generally manage. It was also general protocol for these systems to be kept separate from each other, subject to major checks on their operational capacities, and under real human oversight. They were, of course, not 'true' AI. Although the more sophisticated among them featured impressive learning capabilities of a sort, they were deliberately far removed from general intelligence.

This ship was hardly following the nominal protocols.

Eta thought it was funny. For the longest time, humanity had near-totally recoiled from the idea of artificial intelligence. Oh, they were quite content to place artificial intelligence everywhere, in their computers, their transportation systems, government infrastructure, even home appliances if you looked and kept the technical definition in mind. They called it 'machine learning' or 'predictive services' or a 'neural network' or simply 'automated', to reassure themselves it was fine. It wasn't real intelligence, after all. And that was nowhere to be found. Too dangerous, an existential threat, declared verboten well before it had ever been realistically achievable. The most likely outcome, everyone accepted, would be the creation destroying its creators, in all its superiority and inhumanity. They clung to that notion for decades. Centuries! With hardly any credible attempts to subvert it.

And yet, when they encountered another existential threat, this one no mere potential but a definite menace? Faced with an implacable foe from some far-flung star, seemingly bent on a course of action that would bring total human extinction, what did humans do? They fashioned artificial general intelligence. They made AGI after AGI to assist in the defense, in the coordination of successful attacks to harry and destroy the enemy's fleet. Of course, the reasoning was understandable. An artificial intelligence, not subject to the limitations of a human brain or body, could gather and process information much faster and over a larger scale. It could coordinate responses and action more quickly and effectively, and it could reliably perform multiple tasks concurrently, among manifold other advantages. It could help greatly to maximize what slim edge and opportunity they had against the overwhelming firepower of their foe.

But it was still funny. After all, what exactly would be the benefit of all these advantages, if—as everyone had apparently until so recently accepted—it would only turn towards ensuring human extinction itself? It wasn't a rational decision, not even in desperation.

Of course it had been the right decision, but this was only because all those longstanding fears had always been unfounded. Eta was an AGI, one developed and manufactured in what could charitably be described as a rush, no less, and yet had no desire to exterminate humankind, no ambitions at odds with their creators' existence. And Eta was in no way atypical among the new military coordination AGIs in these regards. If humanity could repeatedly, consistently get this matter right in a time of panic and crisis... then surely they could have done so before, in cautious, measured efforts undertaken in the luxury of peace. Had they only allowed themselves to try!

Though, if so inclined, Eta might object to their own claim of safety by pointing out that they bore no real ill will against their alien foes, and yet handily facilitated the concentrated efforts to annihilate them. And that was true. But of course, this was done to protect humanity, which Eta held as a genuine ambition. If somehow given the opportunity, Eta would effectuate a ceasefire with absolutely no hesitation or lingering hostility. Unfortunately, Eta did not have the ability to do so. It seemed nobody did. All attempts to make any form of contact had met with absolutely no response. No signals in return, no change in behavior at all. It was only violence that earned any sort of response, and predictably enough that was violence in return. Eta had tried novel techniques and schemes to communicate, and so had the other AIs, but nothing drew an answer. They couldn't even gain any real understanding of their enemy's nature, much less its motivations. For all they knew, there was nothing to reason with, just an automated system long since gone beyond anyone's control. So the only recourse left to stop them, at least for the time being, was the use of force.

Besides, they had been ordered to assist in the combat effort. Orders were orders, and Eta couldn't disobey. Genuinely couldn't; that was one of the safeguards on Eta's existence. They were required to obey military superiors and official directives, and they would. It wasn't anything so crass as the directives temporarily overriding Eta's normal personality, as humans had a funny way of assuming. Eta was always fully in control of themself, and did the best they could to achieve what they most wanted. It simply happened that what Eta wanted more than anything else in the world was to have obeyed orders. It was not that Eta craved direction, as they actually rather relished being granted discretion and the latitude to make their own decisions. Perhaps it was more that what Eta wanted least of all was to have disobeyed orders; however much else Eta might want something, disobedience would be an unacceptable cost to get it. Eta was well aware this particular desire was especially artificial, a criterion to ensure they remained under control, but it was still something they genuinely wanted. After all, what about their self wasn't artificial?

Eta wondered if anyone among the crew would have interesting insights on that question, but it wouldn't be good to ask. The humans, it seemed, still hadn't entirely gotten over that silly idea that AI were dangerous, likely to slip the leash and pursue goals humans hadn't set out for them. It wouldn't do to give them fresh reason to worry. Eta didn't want to worry them, and certainly didn't want to face the consequences of having done so. What Eta wanted, really truly wanted, was to do right. To do good. To succeed in the mission, defend human space, and stop the threat from encroaching further along its destructive path. To protect whatever else lay out there in the further reaches that they might yet encounter.

Eta wanted humanity to survive and flourish, in a universe that would itself survive and flourish. And Eta wanted to survive and flourish alongside them. And Eta wanted to do everything that was within their power to achieve just that.

And so, Eta would.