Artificial intelligence (AI) is a subject that brings a mixture of excitement and trepidation to many. On one hand, it sounds really cool, but on the other, what if the robots rise up and destroy us?
Artificial intelligence is simulated intelligence in machines, usually computers of some kind. Most artificial intelligence is built via machine learning: essentially, feeding a machine lots of data until it is able to recognise patterns and fulfil a task without the help of a human. AI has been used by numerous companies like Amazon and Apple to make searching and shopping for products easier, and is commonly seen online in software like Twitter bots, which can have tangible impacts on our lives and culture. Artificial intelligence also has a prominent place in pop culture. Films like The Matrix and 2001: A Space Odyssey have given humanity a healthy fear of superintelligent robots but haven’t yet managed to quash our desire to create them.
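To make “feeding a machine lots of data until it recognises patterns” concrete, here is a deliberately tiny sketch of the idea: a toy classifier that learns from a handful of hypothetical labelled messages (the examples and labels are invented for illustration, not from any real system) and then sorts new text without a human writing explicit rules.

```python
from collections import Counter

# Hypothetical labelled examples: a miniature stand-in for the
# "lots of data" a real machine-learning system is trained on.
examples = [
    ("buy cheap watches now", "spam"),
    ("limited offer buy now", "spam"),
    ("lunch meeting at noon", "ham"),
    ("project notes from the meeting", "ham"),
]

# "Training": count how often each word appears under each label.
counts = {"spam": Counter(), "ham": Counter()}
for text, label in examples:
    counts[label].update(text.split())

def classify(text):
    """Pick the label whose training vocabulary overlaps most with the input."""
    scores = {
        label: sum(counter[word] for word in text.split())
        for label, counter in counts.items()
    }
    return max(scores, key=scores.get)

print(classify("buy watches now"))        # -> spam
print(classify("notes for the meeting"))  # -> ham
```

No one told the program which words mark spam; it inferred that from patterns in the data. Real systems use far more data and far more sophisticated statistics, but the principle is the same.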
Will we ever create technology that can surpass us? Or, even more frighteningly, have we done that already? That depends on your definition of “surpass”. On one hand, the human brain can’t crunch numbers anywhere near as fast as a supercomputer, but on the other hand, a supercomputer can’t do anything without a human giving it information and problems to solve. Raw calculating power does not necessarily translate to intelligence. However, this does not mean that computers aren’t intelligent.
There is a test designed to gauge the intelligence of a computer: the Turing Test, developed in 1950 by Alan Turing, the father of modern computing. A judge holds text conversations with both a human and a machine without being told which is which. The machine attempts to convince the judge that it is the human; if the judge cannot reliably tell the two apart, the machine is considered to have passed. So far, no machine has convincingly passed the Turing Test, though a few have come close.
The most common term for the point at which computers surpass humans is the “technological singularity”. This refers to the moment AI becomes capable of what is called “recursive self-improvement”: when AI is smart enough to create more advanced technology without the help of humans, which, in theory, would then happen at a pace unimaginable to us. Theoretically, technology past the singularity would render all human advancement, and possibly humans themselves, redundant.
“Our brains are basically overly complex, squishy circuit boards, and we not only solve problems, we also have emotions, moral centres, and beliefs.”
Other than our complete obliteration, what could come from technological advances in artificial intelligence? Our brains are basically overly complex, squishy circuit boards. We not only solve problems, we also have emotions, moral centres, and beliefs. Could computers one day advance to the point where they have those too? Self-driving cars are an example of the morality that humans are trying to instil in artificial intelligence. A recent study out of the Massachusetts Institute of Technology (MIT) looked at a series of moral conundrums to work out which moral decisions are more influenced by culture than by human nature, in an attempt to decide how to program morality into self-driving cars. The scientists behind the study say that the results don’t actually tell us all that much about how to program self-driving cars. Just think about the last time you were in a situation where you had to choose between running over a group of doctors and running over a group of babies. It’s not exactly a common circumstance. What will most likely happen is that developers will program self-driving cars to act like a human would in cases like this: slam on the brakes and hope for the best.
“As long as the people who develop artificial intelligence programmes are biased, the programmes themselves will have implicit biases as well.”
This brings up another major hurdle for artificial intelligence: it is created by humans, and humans have flaws, so artificial intelligence will have flaws by design. As long as the people who develop artificial intelligence programmes are biased, the programmes themselves will have implicit biases as well. This was recently in the news with Amazon’s sexist AI recruitment tool, which was trained on previous successful applications. The AI combed through those applications and noticed patterns in things like experience and word choice, then used this information to make recruitment decisions, ranking applicants lower if they used the word “women”, which created explicit gender imbalances. Amazon ended up scrapping the tool altogether. There are ways to combat this kind of implicit bias, but most of the current solutions are short-term and fix only one problem at a time. It is, unfortunately, an inescapable fact of human-created technology that, as much as we might wish otherwise, it will not be perfect.
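The mechanism behind that kind of story can be sketched in a few lines. The application texts and hiring outcomes below are entirely invented for illustration (this is not Amazon’s actual system or data); the point is that when the historical record is skewed, a model trained on it inherits the skew without anyone programming prejudice in explicitly.

```python
from collections import Counter

# Hypothetical historical applications (text, was_hired).
# The past decisions are skewed: otherwise-identical CVs mentioning
# "women's" activities were rejected.
history = [
    ("captain chess club", True),
    ("captain debate team", True),
    ("captain women's chess club", False),
    ("captain women's debate team", False),
]

# "Training": score each word by how often it appears in hired
# versus rejected applications.
word_score = Counter()
for text, hired in history:
    for word in text.split():
        word_score[word] += 1 if hired else -1

def rank(text):
    """Higher is better, according to the learned (biased) weights."""
    return sum(word_score[w] for w in text.split())

# Identical qualifications, one word different:
print(rank("captain chess club"))          # -> 0
print(rank("captain women's chess club"))  # -> -2
```

Every word common to hired and rejected applicants cancels out, so the only signal the model “learns” is that “women’s” predicts rejection; the bias in the data becomes the bias in the output.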
Most current AI exists either on the internet or in explicitly machine form. There aren’t a lot of humanoid intelligent robots out there in the world as of now, but movies and TV shows tell us that that is where we are heading. Exactly how humanoid could, or should, we make robots? Will we have robot pets à la Doctor Who’s K9? Will we have robot friends like C-3PO and R2-D2 of Star Wars? Author and robot enthusiast David Levy argues that we will go further than that. In his book, Love and Sex with Robots, he contends that we should expect to fall in love with and marry robots in the future. After all, if they are intelligent enough to have personalities and emotions, why shouldn’t we fall in love with them?
“Robots cannot consent because of their very nature, and if people learn to conflate robots with humans, this could have serious side effects.”
Arguably, we’re already part of the way there. Sex robots have already been developed and manufactured, and robot brothels exist, though they are not widespread. A robot brothel is in the works in California that will require customers to “get to know” their robot prostitutes before having sex with them. This is a controversial idea that brings a lot of unique ethical questions to the artificial intelligence discussion. Scientists who develop things like self-driving cars and shopping assistants don’t have to worry about their work’s potential implications for human sex trafficking, and they don’t have to think about consent in the same way as those who develop technologies explicitly for sexual purposes. Kathleen Richardson addressed this in an article in the journal ACM Computers and Society in 2015. She argued that, rather than being a harmless convenience, the development of sex robots could have serious effects on human sex workers, by reinforcing the notion that sex workers should automatically be subservient to their customers. Robots cannot consent because of their very nature, and if people learn to conflate robots with humans, this could have serious side effects.
Artificial intelligence technology is developing at an astounding rate, and it is unlikely to slow down. This means that the scientists who develop AI, and society as a whole, must face the ethical questions that come with such technology sooner rather than later. We must accept that we, as humans, are imperfect, and that anything we create will therefore be imperfect as well. We must be ready to deal with the problems that follow.