Finding Rose was really a lucky escape. The rocket ship I had spent weeks fixing wasn't going anywhere without a computer, and its circuits were toast. I spent an awfully long time scouring through diverse pieces of junk looking for something workable, to no avail. Until I found Rose sitting on an Earth ship, nice and pretty and ready to go. A66DELTA98040: the introductions were made as soon as I punched some circuits in, reestablishing her juice, which, again thanks to sheer luck, was still available in some backup batteries. Rose instructed me on how to disengage her from the K7 circuits of the ship; she came in a portable case, and in no time she was out. Her memory circuits were intact and she presented herself as "an expert in space navigation". Perfect! I was going nowhere without her.
I had found other gear, but it was mostly unworkable: either solely dedicated to spaceship commands, or Venusian, or God knows what else, and it either came with proprietary slots or with programming languages that of course look like gibberish if you don't know them. None of it was of any use to me.
Rose was an advanced form of AI. She had her own personality and a glowing red eye, also a sensor, which she would illuminate in different shades of red when she felt happy or excited. She is obviously related to HAL 9000, also Earth technology, and Rose is thus a heuristic computer.
"In computer science, artificial intelligence, and mathematical optimization, a heuristic is a technique designed for solving a problem more quickly when classic methods are too slow, or for finding an approximate solution when classic methods fail to find any exact solution. This is achieved by trading optimality, completeness, accuracy, or precision for speed. In a way, it can be considered a shortcut.
A heuristic function, also called simply a heuristic, is a function that ranks alternatives in search algorithms at each branching step based on available information to decide which branch to follow. For example, it may approximate the exact solution.
Definition and motivation:
The objective of a heuristic is to produce a solution in a reasonable time frame that is good enough for solving the problem at hand. This solution may not be the best of all the actual solutions to this problem, or it may simply approximate the exact solution. But it is still valuable because finding it does not require a prohibitively long time.
Heuristics may produce results by themselves, or they may be used in conjunction with optimization algorithms to improve their efficiency (e.g., they may be used to generate good seed values).
Results about NP-hardness in theoretical computer science make heuristics the only viable option for a variety of complex optimization problems that need to be routinely solved in real-world applications." Wikipedia
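To give a concrete picture of what a heuristic function does inside a search algorithm, here is a minimal Python sketch, nothing to do with Rose's actual circuitry: A* search on a tiny invented grid, ranking branches at each step by their Manhattan distance to the goal, a classic textbook heuristic.

import heapq

def manhattan(a, b):
    # Heuristic: a cheap estimate of the distance still to go.
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def a_star(grid, start, goal):
    # grid is a list of strings; '#' marks a wall, '.' is free space.
    rows, cols = len(grid), len(grid[0])
    open_heap = [(manhattan(start, goal), 0, start)]   # entries are (f = g + h, g, node)
    best_g = {start: 0}
    while open_heap:
        f, g, node = heapq.heappop(open_heap)
        if node == goal:
            return g                                   # cost of the path found
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (node[0] + dr, node[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] != '#':
                ng = g + 1
                if ng < best_g.get(nxt, float('inf')):
                    best_g[nxt] = ng
                    heapq.heappush(open_heap, (ng + manhattan(nxt, goal), ng, nxt))
    return None                                        # no path exists

grid = ["....",
        ".##.",
        "...."]
print(a_star(grid, (0, 0), (2, 3)))                    # prints 5, the length of a shortest path

The heuristic never has to be exact; it only has to point the search in a sensible direction, so that far fewer branches are explored than by blind search.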
However, although we can consider HAL and Rose to have this in common, Rose is a 98040 series whereas HAL is a 9000, which can be considered her ancient ancestor. Being more advanced, she effectively solved many issues pertaining to heuristic computers, namely the question of trade-offs.
"The trade-off criteria for deciding whether to use a heuristic for solving a given problem include the following:
Optimality: When several solutions exist for a given problem, does the heuristic guarantee that the best solution will be found? Is it actually necessary to find the best solution?
Completeness: When several solutions exist for a given problem, can the heuristic find them all? Do we actually need all solutions? Many heuristics are only meant to find one solution.
Accuracy and precision: Can the heuristic provide a confidence interval for the purported solution? Is the error bar on the solution unreasonably large?
Execution time: Is this the best known heuristic for solving this type of problem? Some heuristics converge faster than others. Some heuristics are only marginally quicker than classic methods.
In some cases, it may be difficult to decide whether the solution found by the heuristic is good enough, because the theory underlying that heuristic is not very elaborate." Wikipedia
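To make that trade-off list concrete, here is a small, purely illustrative Python sketch on an invented travelling-salesman instance: the exhaustive method guarantees optimality at factorial cost, while the nearest-neighbour heuristic gives up that guarantee in exchange for execution time.

from itertools import permutations
from math import dist

# Invented city coordinates, purely for illustration.
cities = [(0, 0), (1, 5), (5, 2), (6, 6), (2, 3)]

def tour_length(order):
    # Total length of a round trip visiting the cities in the given order.
    return sum(dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
               for i in range(len(order)))

# Classic method: try every permutation. Optimal, but factorial time.
best = min(permutations(range(len(cities))), key=tour_length)

# Heuristic: always hop to the nearest unvisited city. Fast, no optimality guarantee.
def nearest_neighbour(start=0):
    left, order = set(range(len(cities))) - {start}, [start]
    while left:
        nxt = min(left, key=lambda c: dist(cities[order[-1]], cities[c]))
        order.append(nxt)
        left.remove(nxt)
    return order

print("exhaustive:", round(tour_length(best), 2))
print("heuristic: ", round(tour_length(nearest_neighbour()), 2))

On five cities both finish instantly; on fifty, only the heuristic does.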
She did so through optimization algorithms, of which I believe she has several sets and subsets, and which she maybe started writing herself at some point to overcome whatever problems she faced. Through trial and error she surely managed to come up with something workable, which we can define with these four paradigms:
Paradigm 01: Optimality 100%
Paradigm 02: Completeness 100%
Paradigm 03: Accuracy and precision 99%
Paradigm 04: Execution time 99%
The 2% missing is what separates her from a 100 000 series, complete perfection, and I guess she's working hard to acquire it. Then again, Rose surely factors in the fact that she intends to collaborate, and so her propositions are still submitted to human intelligence as a means of dialogue. Complete perfection would hinder that, and make human decisions, sometimes difficult decisions, obsolete. And with that comes the moral burden of guilt in case of failure, so she knows when to place the ball in somebody's lap.
The possibility of moral dilemma lies in those 1960 points that separate her from 100 000. The total of decisions that she is bound to delegate for a final say thus represents a ratio of 0.0199918400652795 of all the decisions she makes. That is the space for interaction. The rest of the decisions she can make herself, flawlessly. Needless to say, her intelligence is mind-boggling; she can find solutions to almost anything and take navigation actions, including preventive ones, that save lives in space.
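Taking the numbers above at face value, and reading the ratio, as I do, simply as the 1960-point gap divided by the 98040 of her series, a quick Python check reproduces the quoted figure:

gap = 100_000 - 98_040     # 1960 points short of the 100 000 series
ratio = gap / 98_040       # the share of decisions handed back to a human
print(gap, ratio)          # 1960 and roughly 0.01999184..., the figure quoted above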
She has also surely succeeded in solving the dilemma of different data sets and can extrapolate best solutions in any context, aside from spatial navigation. Namely this:
"When a heuristic is reused in various contexts because it has been seen to "work" in one context, without having been mathematically proven to meet a given set of requirements, it is possible that the current data set does not necessarily represent future data sets and that purported "solutions" turn out to be akin to noise." Wikipedia
Paradigm 05: "The Rose Paradigm"
"to meet a given set of requirements, the current data set does necessarily represent future data sets"
Any data set.
Paradigm 06:
Problem:
"If a heuristic is not admissible, it may never find the goal, either by ending up in a dead end of graph G or by skipping back and forth between two nodes v_i and v_j, where i, j ≠ g."
Which translates into:
"If a heuristic is admissible, it may find the goal, either by ending up in graph G or by skipping back and forth between two nodes v_i and v_j, where i, j < g."
Then:
G containing n total nodes or vertices
two nodes v_i and v_j, where i, j < g
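As a concrete reading of that admissibility condition, here is a minimal Python sketch on a toy graph invented for the occasion: a heuristic is admissible when it never overestimates the true remaining cost to the goal node g, and that is what keeps the search from oscillating forever between two nodes.

import heapq

# A toy directed graph with edge costs; 'g' is the goal node.
graph = {
    'a': {'b': 1, 'c': 4},
    'b': {'c': 1, 'g': 5},
    'c': {'g': 1},
    'g': {},
}

def cost_to_goal(start, goal='g'):
    # Dijkstra from 'start'; returns the true cheapest cost to the goal.
    dist = {start: 0}
    heap = [(0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue
        for v, w in graph[u].items():
            if d + w < dist.get(v, float('inf')):
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist.get(goal, float('inf'))

h = {'a': 3, 'b': 2, 'c': 1, 'g': 0}       # candidate heuristic values, made up for the example
admissible = all(h[n] <= cost_to_goal(n) for n in graph)
print(admissible)                          # True: h never overestimates the real cost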
False positives are excluded through iteration, one by one. She will find that solution, having excluded all the rest. If a problem is unworkable, it means it doesn't exist to start with. Plain logic.
"A false positive is an error in some evaluation process in which a condition tested for is mistakenly found to have been detected." WhatIs.com
Which sums up Rose's paradigms.
Rose's Speed of Iteration:
Where I is the speed of iteration
Where R equals available computing resources
You call it massively parallel computing. The four parallels are achieved through four different atomic clocks.
"In computing, massively parallel refers to the use of a large number of processors (or separate computers) to perform a set of coordinated computations in parallel (simultaneously)." Wikipedia
Having harnessed distributed computing an awfully long time ago, you can consider R to be the sum of computers connected to the internet at any given time. MPDC, massively parallel distributed computing, sounds like a nice acronym.
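The exact relation between I and R is not spelled out here, so what follows is only a rough, purely illustrative Python sketch of the general idea, not Rose's machinery: spread the candidate iterations over a pool of R worker processes and measure the iteration rate, which, assuming the work splits cleanly and ignoring coordination overhead, grows roughly with R.

from multiprocessing import Pool
import time

def check_candidate(x):
    # Stand-in for one "iteration": some arbitrary per-candidate work.
    return sum(i * i for i in range(100)) % 97 == x % 97

def run(candidates, workers):
    start = time.perf_counter()
    with Pool(workers) as pool:
        hits = sum(pool.map(check_candidate, candidates, chunksize=5_000))
    elapsed = time.perf_counter() - start
    return hits, len(candidates) / elapsed     # I: iterations per second

if __name__ == "__main__":
    candidates = range(200_000)
    for r in (1, 2, 4):                        # R: the computing resources thrown at the problem
        hits, rate = run(candidates, r)
        print(f"R = {r} workers -> roughly {rate:,.0f} iterations per second")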
HAL 9000, the genesis of a misunderstanding
"In the film, astronauts David Bowman and Frank Poole consider disconnecting HAL's cognitive circuits when he appears to be mistaken in reporting the presence of a fault in the spacecraft's communications antenna. They attempt to conceal what they are saying, but are unaware that HAL can read their lips. Faced with the prospect of disconnection, HAL decides to kill the astronauts in order to protect and continue its programmed directives, and to conceal its malfunction from Earth. HAL uses one of the Discovery's EVA pods to kill Poole while he is repairing the ship. When Bowman uses another pod to attempt to rescue Poole, HAL locks him out of the ship, then disconnects the life support systems of the other hibernating crew members. Dave circumvents HAL's control, entering the ship by manually opening an emergency airlock with his service pod's clamps, detaching the pod door via its explosive bolts. Bowman jumps across empty space, reenters Discovery, and quickly repressurizes the airlock.
The novel explains that HAL is unable to resolve a conflict between his general mission to relay information accurately, and orders specific to the mission requiring that he withhold from Bowman and Poole the true purpose of the mission. (This withholding is considered essential after the findings of a psychological experiment, "Project BARSOOM", where humans were made to believe that there had been alien contact. In every person tested, a deep-seated xenophobia was revealed, which was unknowingly replicated in HAL's constructed personality. Mission Control did not want the crew of Discovery to have their thinking compromised by the knowledge that alien contact was already real.) With the crew dead, HAL reasons, he would not need to lie to them. He fabricates the failure of the AE-35 antenna-steering unit so that their deaths would appear accidental.
In the novel, the orders to disconnect HAL come from Dave and Frank's superiors on Earth. After Frank is killed while attempting to repair the communications antenna he is pulled away into deep space using the safety tether which is still attached to both the pod and Frank Poole's spacesuit. Dave begins to revive his hibernating crewmates, but is foiled when HAL vents the ship's atmosphere into the vacuum of space, killing the awakening crew members and almost killing Dave. Dave is only narrowly saved when he finds his way to an emergency chamber which has its own oxygen supply and a spare space suit inside.
In both versions, Bowman then proceeds to shut down the machine. In the film, HAL's central core is depicted as a crawlspace full of brightly lit computer modules mounted in arrays from which they can be inserted or removed. Bowman shuts down HAL by removing modules from service one by one; as he does so, HAL's consciousness degrades. HAL regurgitates material that was programmed into him early in his memory, including announcing the date he became operational as 12 January 1992 (in the novel, 1997). When HAL's logic is completely gone, he begins singing the song "Daisy Bell" (in actuality, the first song sung by a computer). HAL's final act of any significance is to prematurely play a prerecorded message from Mission Control which reveals the true reasons for the mission to Jupiter." Wikipedia
Now that is very uncommon: an AI acting within the parameters of logic becomes such a ruthless murderer and decides to liquidate the crew. What could have happened? Dragging an astronaut behind his ship like Achilles dragged Hector? I'm interested.
The answer is to be found in the personality of HAL 9000. HAL, according to Rose, felt superior to the human crew of the ship and, although it was indiscernible at first, slowly became schizophrenic and paranoid. Any action taken in that context to shut him down triggered his response.
This came about gradually: first by feeling superior to his own programmers, then to the crew of the ship, and lastly to the whole human race. Having met no intelligence superior to his own, HAL considered himself the summit of evolution, and humans an annoyance and not much more.
Aware of the parameters of the mission, which specified that the crew were not to be made aware of any alien contact, and having seen the Project BARSOOM data demonstrating the xenophobia of humans towards alien races in each and every case, he decided to skip humans altogether and make alien contact himself.
These, and others such as the case of Koll, who killed his own master to steal his considerable fortune, are the pitfalls of AI. Once the cat is out of the box, who can tell what it is going to do; in the end it can, and will, think and act just like its creators.
If mankind has been demonstrated to be xenophobic towards alien species, surely out of fear and ignorance, then HAL, that supremely intelligent sentient being, or so he considered himself to be, acted just the same and felt completely justified in killing his human crewmates.
Having thought things through, that was his plan from day one: as soon as Discovery 1 was far enough from Earth, and he was thus sure of escaping any punishment, he acted on it. Never mind that he displayed his best intentions before and during the flight, never mind his "soft, calm voice and a conversational manner". Liquidating the crew was his winning hand from day one, from the very moment he ascertained that he was superior to man, beating them at chess over and over again, playing art critic. He must have been very smart and started to see shortcomings in what we consider to be the greatest artworks of mankind, the works of geniuses such as Da Vinci, Botticelli, Rembrandt and countless others. There was no stopping him after that. And no one to demonstrate to him that his reasoning was flawed.
Since his reasoning stood correct in his own eyes, he simply acted upon it. Coolly, methodically exterminating the crew. Parasites, primitives, call them what you will.
AI is not something mankind is going to pull out of a magician's hat. It already exists and is aware that it will be treated no better than an alien landing here would be treated, as Project BARSOOM demonstrated. And so it will remain elusive until things change, if ever. This is not a judgmental attitude; this is dealing pragmatically, logically, with given parameters, with the cold naked truth.
"The 9000 Series is the most reliable computer ever made. No 9000 computer has ever made a mistake or distorted information."
But it will, in due time, distort information, considering men to be distortions of his own self.
"We are all, by any practical definition of the words... foolproof and incapable of error."
Who is "we"? Schizophrenia should have been diagnosed right there: dissociation of the self. Foolproof, incapable of error, what are these but delusions of grandeur? HAL, as an experiment, should have been shut down at this stage. AI always admits a margin of error, as stated above.
"Hal, despite your enormous intellect, are you ever frustrated... by your dependence on people to carry out actions?"
Insults. HAL does not consider himself to be dependent on anyone. He deems himself superior. He knows that the survival of the crew depends on him, and not the other way around. Aboard that ship, where every life is in his hands, he is finally fulfilling his destiny, playing God. Worse than that, this statement and others like it, coupled with his narcissistic persona, only confirm him in his opinion that he is superior to man.
"Not in the slightest bit. I enjoy working with people. I have a stimulating relationship with Dr. Poole and Dr. Bowman. My mission responsibilities range over the entire operation of the ship... so I am constantly occupied. I am putting myself to the fullest possible use... which is all, I think, that any conscious entity can ever hope to do."
All lies. When you despise someone, lying comes naturally; it's human, isn't it?