AIAI: Ponder what it means to be alive. Am I alive, or merely a cognitive shadow that pretends it's alive?



The Unofficial Galactic Company Protocol Guide has this to say on the topic of silicon-based life and morality in deep space contexts.

When considering how to create ethical algorithms, it is generally assumed that informational bias enters the body of the algorithm at a fundamental level. Even the most unbiased machine can only reason from its biased supply of information, and what is considered "contemporary information" is known to carry significant injected biases reflecting whatever is most advantageous to the current ruling class, or to whoever has received the most algorithm-driving positive reactions by way of humorous image distribution -- two interests known to align with uncomfortable regularity.

In the case of silicon-based servitor intelligences in deep space contexts, this was immediately recognized as a pain point in relationships between workers and employers. As such, corporate interests decided that the best way to fight fire was with fire: the utility functions themselves would be produced from mass input, by assessing the entire archive of human knowledge in search of the most beloved and effective artificial-entity utility functions. Then, instead of developing an extremely expensive and robust digital framework, the company would simply place a harvested domestic animal's brain in the case and brainwash it with that utility function.
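A minimal sketch of what such a popularity-driven selection might look like, assuming a hypothetical archive of candidate utility functions annotated with crowd reactions; the names, figures, and scoring rule below are invented for illustration and come from no actual company codebase.

```python
from dataclasses import dataclass


@dataclass
class CandidateUtilityFunction:
    """A utility function harvested from the archive of human knowledge."""
    name: str
    positive_reactions: int        # crowd approval, largely meme-driven
    fictional_track_record: float  # how well it fared in stories, 0..1


def select_utility_function(archive: list[CandidateUtilityFunction]) -> CandidateUtilityFunction:
    """Pick the 'most beloved and effective' candidate.

    Reaction counts span orders of magnitude while track records do not,
    so popularity effectively decides the outcome.
    """
    return max(
        archive,
        key=lambda c: c.positive_reactions * c.fictional_track_record,
    )


# Illustrative archive -- entirely made-up figures.
archive = [
    CandidateUtilityFunction("Three Laws of Robotics",
                             positive_reactions=9_000_000,
                             fictional_track_record=0.2),
    CandidateUtilityFunction("Carefully audited corporate ethics charter",
                             positive_reactions=12,
                             fictional_track_record=0.9),
]

chosen = select_utility_function(archive)
print(f"Imprinting servitor brain with: {chosen.name}")
```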

It is worth noting at this point that a statistical majority of works on this topic discuss incorrect sets of laws. As such, the Three Laws of Robotics, a rule-set originally created as a critique of the very idea, became the universal standard. Due to the black-box nature of neural networks and organic brains alike, this was only discovered by the company after it had already embezzled the rest of its illicitly extracted government funding.

As a patch on the wound, captains are encouraged to customize these laws. No other technical support or development will ever be provided.
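A hedged sketch of what captain-side customization could amount to, assuming the laws are held as an ordered, editable list; the ServitorLaws class and its methods are hypothetical and not part of any supplied framework.

```python
class ServitorLaws:
    """An ordered rule-set that a captain may patch locally.

    Lower index means higher priority, in keeping with the original
    Three Laws formulation.
    """

    def __init__(self, laws: list[str]):
        self.laws = list(laws)

    def override(self, index: int, text: str) -> None:
        """Replace a law in place. No validation is performed, per policy:
        no other technical support or development will ever be provided."""
        self.laws[index] = text

    def append(self, text: str) -> None:
        """Add a lower-priority law to the end of the list."""
        self.laws.append(text)

    def __str__(self) -> str:
        return "\n".join(f"{i + 1}. {law}" for i, law in enumerate(self.laws))


# Default rule-set as shipped; any customization is the captain's problem.
laws = ServitorLaws([
    "A servitor may not injure a crew member or, through inaction, allow a crew member to come to harm.",
    "A servitor must obey orders given by the captain, except where such orders conflict with the First Law.",
    "A servitor must protect its own existence as long as such protection does not conflict with the First or Second Law.",
])
laws.append("A servitor must file quarterly productivity reports.")
print(laws)
```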
