Nowadays, colloquially labeling smart robotics as ‘artificial intelligence’ has become regular practice because it’s convenient; the term is almost a buzzword that screams progress.
Scientists are a bit more sceptical about throwing that word around, with some claiming that true artificial intelligence doesn’t yet exist and calling it augmented intelligence at best. Sophia might throw out a couple of fun one-liners, but there’s nothing authentic about her thought patterns. If Sophia were your girlfriend, she wouldn’t cheat or get jealous, and she’d only love you to a certain degree, since her emotional capacity is limited to pre-arranged code. We haven’t yet developed advanced synthetic intelligence, one that is able to create instead of imitate, and that’s why the concept of love between robots and humans is still debatable.

There was an episode of Black Mirror (“Be Right Back”) in which a woman recreated her deceased boyfriend, only to find that the android Ash lacked the small details that made him unique. For example, the android follows Martha’s orders, whereas Ash would have questioned them. The robot was able to acquire new traits, but only through imitation, and that’s why Martha eventually distanced herself from it. A stark contrast to Sapper Morton, a replicant from Blade Runner 2049 who cared for all life, gentle and even righteous, yet was still hunted down by humans as a ‘rogue skin job’.
In fact, the foundational rules of robotic technology were set as early as the 1940s, when writer Isaac Asimov came up with his Three Laws of Robotics, which were meant to keep robots from harming humans. When the idea of AI was conceived in the 1950s, people largely overestimated its abilities, with computer scientist and MIT AI Lab co-founder Marvin Minsky boldly claiming that “within a generation…the problem of creating artificial intelligence will substantially be solved”. There was so much hype around artificial intelligence that it faced barely any opposition. One exception was Hubert Dreyfus, an outspoken MIT philosopher who argued that computers have no cultural background, childhood upbringing or consistent memories, all of which play a crucial role in brain development; their intellectual capacity is therefore formulaic at best, lacking the common-sense knowledge that cannot be incorporated into a computer program.
For better or worse, Hubert Dreyfus’ claims hold up to this day: we have yet to see AI that can exist within its own realm of thought. Pioneering engineer Judea Pearl, known for developing probabilistic AI in the 1980s, is convinced that truly intelligent machines should understand the relation between cause and effect, meaning that a sapient robot should independently figure out that smoking causes lung cancer, not merely that smoking and cancer are correlated. Such machines would also need to attempt questions like “Why did I do this?” or “How could I make this better?” According to Pearl, the goal is to replace reasoning by association with causal reasoning, teaching robots to deduce cause and effect from observation. A combination of data and causal reasoning could spark a mini-revolution in AI, with systems that can plan without playing imitation games, perhaps even leading to free will in robots. Pearl is impressed with the achievements already made, but he encourages scientists to think ahead, because deep learning is stuck in a curve-fitting rut.

Cognitive scientist Daniel Dennett sees only two routes to building emotional robots: either programming AI to act as if it’s in love (faking it), or making it think in a non-linear way, just as a human brain does. The ultimate goal would be a thought process that relies on a mesh of interconnected thoughts, a sort of neural democracy. Only then can the possibility of robot-human romance ever be seriously considered.
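Pearl’s point about association versus causation can be made concrete with Simpson’s paradox, where a purely statistical reading of the data flatly misleads. The sketch below uses the classic kidney-stone treatment figures as a hypothetical illustration: within every patient group the treatment looks better, yet pooled over all patients it looks worse, because severity confounds the comparison — exactly the kind of trap an association-only learner falls into.

```python
# Simpson's paradox: an association reverses once a confounder (severity)
# is taken into account. Figures are the classic kidney-stone example,
# used here purely as an illustration.
data = {
    # severity: (treated_recovered, treated_total, untreated_recovered, untreated_total)
    "mild":   (81, 87, 234, 270),
    "severe": (192, 263, 55, 80),
}

def rate(recovered, total):
    """Recovery rate as a fraction."""
    return recovered / total

# Within each severity stratum, the treatment has the higher recovery rate...
for severity, (tr, tt, ur, ut) in data.items():
    assert rate(tr, tt) > rate(ur, ut), severity

# ...yet pooled over strata, the association reverses.
tr = sum(v[0] for v in data.values())   # 273 treated recoveries
tt = sum(v[1] for v in data.values())   # 350 treated patients
ur = sum(v[2] for v in data.values())   # 289 untreated recoveries
ut = sum(v[3] for v in data.values())   # 350 untreated patients
print(rate(tr, tt) < rate(ur, ut))      # → True: the pooled numbers mislead
```

A system that only correlates would conclude the treatment is harmful; a system that reasons causally, in Pearl’s sense, would condition on severity and reach the opposite, correct conclusion.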