Lewis and Susan Jenkins



The editor insisted on chopping the blue paragraphs below out of the book. Both the Doctor and the machine protested strongly, but every other person, including Neil, the shipping clerk, gave quiet approval. I’ve inserted them in this chapter snippet to make Doc and TM happy.

     ...Doctor Little gave his small TM development team a modest deli lunch to celebrate the successful test of their first machine, TM-1000; he made a few appropriate remarks as one might expect.
     “Each of you made some contribution to the first testing machine,” he began. “Thanks for your cooperation, especially since only a few of you knew what in hell was going on.”

     Doc then waded into how the second machine—the smart one—would work. Hereabouts, though, the boss’s “appropriate” remarks slowly morphed into “weird”.[1]
     “Each module it tests will be considered a part of the machine itself, and all parts must talk to the others in a way that does not require the other part to be there. Human body parts may be missing—such as a leg or an arm—but the person may ‘feel’ that the part is still there because sensory nerve endings are still sending messages to the brain. And the brain may still be sending nerve orders to muscles that don’t exist.
     “Human organs communicate differently. They use chemical messages sent in the blood instead of electricity transmitted chemically through the nervous system. Each body part or organ puts its molecular information bits in the blood, and picks up bits of information it recognizes in the blood that were put there by other body parts. We’ll use a form of the Internet Protocol to function as our machine’s bloodstream, sending packets of data instead of molecular bits of chemical messengers. The packets will be ‘pumped’ and will ‘flow’ almost instantly through the machine’s intranet as long as power levels support the signals. The computer ‘brain’ in this view is just another organ: at one end it interfaces by data packets to the machine’s internal systems as if through the blood to the organs. At the other end it connects by variable low-voltage electricity with the servos[2] and sensors as if by nerves and kinesthesia. The tricky part is between the ends—within the brain, I mean.
     “In a human brain, neurotransmitters and inhibitors are produced and consumed, resulting in thoughts and actions. Our testing machine’s brain will do the same things but with data instead of molecules.”
     Some thought, rather illogically for anyone who knew the man, that Doc must be drinking Kahlua[3] instead of coffee. Nan suspected he had been reading too much Asimov[4] again. Some were bored; others were not sure what brain chemistry had to do with TLC’s kind of engineering.
     “I said this machine has met my expectations, but that’s not entirely true. It exceeds expectations for Phase One,” he went on, studying the machine as if to avoid eye contact with the people, “but there is a Phase Two.”
     At last he looked up. “With the development of this machine, I think we can go further. Much further. I want it to write its own testing scripts and then run those scripts, all without human supervision.” He deliberately stressed those last words.
     He continued: “The machine has to tell us the sorts of things about the routine product testing that Raj or Bob could tell us, or at least it must be able to decide, as a person might, that it is confused or doesn’t have enough info to decide something, and then be able to tell us accurately why. It must be able to make and achieve goals that fall within the universe of the specs in our database. To do that it must have a master goal.
     “The simplest goal I can think of—and one which fits nicely with using this machine as a test bench for everything—is: Truth. The machine will search for Truth.”
     He had said the word in such a way that you knew it was capitalized. Then he asked: “Now, what is Truth?”
     At last, smiles broke out across the room and there was a general shuffling that built quickly as people reacted. But it would be wrong to say these were signs of reduced weirdness. Nan cut in somewhat impatiently, “Truth?! Define Truth as a computer directive? C’mon, Doc, that’s impossible, don’t you think—really? The normal directives are to ‘Do no harm to human...’ ”
     “Yes, yes,” the Doctor cut her off. “ ‘Do no harm to people or property through action or inaction.’ But that one and the others—those ‘classic’ prime directives, the so-called Laws of Robotics of Isaac Asimov—are negative, really. They only tell their machines what not to do.”
     “The second law is not negative,” Nan continued warmly. “It says they must do anything they are told to do by a human—as long as it does not violate the first law.”
     “Yes,” said Doc. “And the third law says the machine must protect itself unless that violates the first or second law, right?” Then he pressed the point, looking at her through his eyebrows again. “Right?”
     She finally nodded once slowly.
     “Okay then…” he said quietly, with his head cocked, looking her hard in the eyes, “How is the machine to know that a given request or action does not cause harm?”
     “It has to test them, of course,” she answered warily.
     “Against what? …If the machine does not know there will be harm, then it must either assume there is and test to see if there isn’t, or assume that there isn’t and test to see if there is. None of Asimov’s directives tell it to do that, or how to do that, or tell it what ‘harm’ is. Are those omissions reasonable? And if it doesn’t test anything, how can it do anything?
     “Would you want a machine assuming that there is no harm? No? Then it has to assume there is and prove otherwise. Without some kind of positive, pro-active directive, how could such a machine ever work? How could it know how to test or when to stop?”
     Nan was ready for that question: “It wouldn’t have to test its prime directives, and one of those would be to test everything,” she offered.
     “Okay, good,” Doc countered. “You’ve given us two more directives. Your second one’s positive, sure, but the first one is negative. So would the machine need to test everything else all the time, or could it sometimes rely on previous tests?”
     “Well, yes, that would be the smart thing to do.”
     “How is it to know when previous tests are sufficient for any new case? What other rules shall we give it for that?”
     “It would use the specification.”
     “What about getting it to do things when there is no specification? Like someone saying, ‘Mr. Machine, go get a box of modules from the company vault and give it to that complete stranger over there’?”
     She was beaten but unbowed, and made a “whatever” shrug. After a final sweep of all faces in the room, the Doctor continued deliberately: “The machine’s prime directive must be positive or it will not be pro-active. If it’s not pro-active, we’ll have to push it and guide it constantly. But a pro-active machine that has only a goal of success, or ‘happiness,’ or other open-ended fulfillment is also dangerous. Do you see my point? We have come full circle. As unreasonable as it sounds, Truth is the only reasonable goal for our machine.”
     A voice asked, “If it doesn’t know what harm is, how can it know what Truth is?”
     Doc smiled. It wasn’t his tiny smile either; it was a larger one that said, ‘I know something you don’t know.’ But he didn’t answer the question.
     Another voice answered: “We wouldn’t need it.”
     “Need what?”
     “Truth. We don’t need it for things that don’t have specs. We could use Thesis / Antithesis / Synthesis. That’s the progression. The machine would process data always looking for the synthesis, and if it always tried to avoid harm, we wouldn’t need to define Truth; it would just happen.”
     “No,” said Doc with a chuckling sigh, “I don’t mean the kind of ‘truth’ that lurches from one synthesis to another so that it eventually starts approving things it would never have considered at the start of the process. There must be absolutes. Otherwise we’ll have a machine that can’t be trusted.
     “Does everyone see it?” he asked. “Does anyone see it?” A person or two nodded slightly, but arms were folded, and faces were filled with skepticism. And the nods didn’t necessarily mean that they agreed with him, but rather that they understood what he said. This was the Doctor’s own fault: He had staffed his TM team with engineers and technicians, not marketers and sycophants.
     “We need to be clear on this,” he warned. “I am looking for volunteers to work on this Phase Two thing. Every volunteer must sign an additional confidentiality document stating, among other things, that you agree with these goals. If you can’t, or you think the whole thing is stupid, that’s all right. Maybe it is stupid. I’ve been wrong before, and we have plenty of other work to do. Phase Two is strictly an in-house project, anyway.”
     Several people whispered to each other. Someone giggled.
     Doc pulled another mouthful from his water bottle and surveyed the room. “This isn’t a joke or trivial thing,” he continued quietly. “The only machines in which the Asimov rules have been used exist in science fiction; they have never been built, let alone tested. Their antics are just the imaginations of their literary creators, not the result of actual machine behavior.
     “The machine that we—well, just ‘I’ at the moment—that I contemplate could be really dangerous unless it’s programmed for something noble like a relentless pursuit of Truth. But with that as its goal, combined, I must add, with Asimov’s gang of three or four rules as limiting principles, the machine will be absolutely safe. How so? Because our first and prime directive is positive, and the supporting directives are negative, so they should complement and not compromise each other. In fact, I believe that with it helping us, a truly intelligent robot will make products safer than any human could.”
     “Now,” Doc said, as if speculating about it, “whether we license the technological core of the machine for other industries is another matter. …It’ll have free will, I think, and if it’s not safe or can’t be made safe (that’s part of what I want to find out first), then we will have to warn the world about it. But, as I say, I think such a robot will be absolutely safe… as well as absolutely necessary.”
     “In your opinion, and depending on the definition of Truth,” Nan warned.
     “And of ‘harm’,” added Bobby as if he meant to say more.
     The Doctor hadn’t said “testing machine.” He had actually used the word “robot.” Everyone noticed it. And they knew he meant robots as in science fiction, not robots that built cars or circuit boards. And his phrases “free will, I think” and “warn the world” had filled the weirdness cup to the brim, but everyone was too stunned now to say more.
     “Yes, my opinion,” Doc admitted. “I could be wrong, of course. And the definition of Truth as the Prime Directive has not been completely worked out yet...”
     Small chortling temblors of giggles in parts of the room quickly gathered to move everyone, even the Doctor himself, to hearty laughter.
     He continued, with a sigh, as the last aftershock of giggles died:
     “…People, I don’t think I’m wrong about this. Someone or some company—like IBM or Google—is going to try it, and succeed eventually. I am vain enough to think we would do a better and more conscientious job. Also, we might have some sort of jump on seeing the problems and, hopefully, any solutions—should they become necessary.
     “Now,” he said firmly, tightening the cap on his empty bottle, “if there are no more questions, I’ll see each one of you in private over the next few days to discuss this. We’ll have a full meeting after that to air any issues anyone wants to bring up.”
     The consensus, established later around the coffee machines and water coolers, was that the Doctor’s thought train had gone a little off its rails.




[1] If you weren’t part of the team getting paid to listen, the morph could be into “boring.” You can skip the blue paragraphs, but later talk of FOM may be less understandable.

[2] “Servo” is from the Latin servus, a slave. Servos move elements of a system under automatic control that continually checks actual output against the commanded input so as to keep the system behaving as desired.

[3] Kahlua is a brand of coffee liqueur.

[4] Isaac Asimov, 1920–1992, born in Petrovichi, Russia, emigrated to the U.S. He was a science teacher, science fiction writer, author or editor of over 500 fiction or non-fiction books, and the first to use the term “robotics.”




Diary of a Robot

A literary historical science fiction mystery

            Table of Chapters

Chapter 0. Problems
Chapter 1. Headaches
Chapter 2. Happy Holidays
Chapter 3. Mr. Nice Guy
Chapter 4. The Brainless One 
Chapter 5. A Little Crazy  ----------------------------->
Chapter 6. Grave Consequences
Chapter 7. Core Directives
Chapter 8. Expect Difficulties
Chapter 9. Caveats
Chapter 10. Mister Machine
Chapter 11. Good News, Bad News, El Cheapo
Chapter 12. New Memories
Chapter 13. Little Problems
Chapter 14. Lasers, Language, and Happiness
Chapter 15. Chatterbots
Chapter 16. Ready Or Not
Chapter 17. Not a Turing Test
Chapter 18. Reality Test
Chapter 19. Chess, Anyone?
Chapter 20. FOM
Chapter 21. Chairman of the Board
Chapter 22. The Usual Suspects
Chapter 23. M. God
Chapter 24. Walkabout
Chapter 25. Why
Chapter 26. First Blood
Chapter 27. More Machines?
Chapter 28. POV
Chapter 29. ROI
Chapter 30. Last Blood
Chapter 31. Don’t Want to Talk About It
Chapter 32. Round Table
Chapter 33. A Change of Mind
Chapter 34. Threes
Chapter 35. Knight Moves
Chapter 36. Little Combinations
Chapter 37. Can We Talk?
Chapter 38. Pas de Deux
Chapter 39. The Jig Is Up
Chapter 40. Good, Bad, Ugly
Chapter 41. Function Goes On