Artificial intelligence has come a long way since Alan Turing first speculated about the concept of the “thinking machine” in 1950, but there’s still a significant gap between the popular conception of AI and the reality of this burgeoning technology. Despite the proliferation of AI applications, the phrase “artificial intelligence” is still more likely to evoke thoughts of HAL 9000 and Lt. Cmdr. Data than anything to do with machine learning or natural language processing.
Indeed, the legacy of AI in entertainment has conditioned us to think of it as technology that operates without human input. No wonder so many have been shocked to discover that Google Assistant relies on human help to improve its understanding of voice conversations or that numerous tech startups hire human workers to prototype and imitate AI functionality.
The reality is that we are still far from achieving generalized AI that is functionally equivalent to the human mind. As the cofounder and CEO of a customer support automation platform that helps enterprises launch and train virtual agents, I’ve realized that whether the first generalized AI is born a year or 100 years from now, AI will always require human input and expertise — technical and otherwise — to operate at its full potential in a way that’s ethical, responsible and safe.
Below, we’ll take a look at some examples of how AI relies on human input across a variety of established and emerging applications and explain why even the smartest AI will still require human assistance.
Social media: Humans course-correct for algorithmic extremism.
For years, social media platforms relied solely on algorithms to sort and recommend content to users. These recommendation engines inferred each user's preferences from observed behavior and then surfaced more content matching those preferences.
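To make the mechanics concrete, here is a deliberately simplified sketch of how such a recommendation engine can work: score candidate posts by their similarity to a profile averaged from the user's past engagement. Everything below (item names, feature vectors, weights) is invented for illustration and does not reflect any platform's actual system.

```python
# Toy sketch of a preference-based recommender: score candidate posts by
# cosine similarity to a profile averaged from past engagement.
# All vectors and item names are invented for illustration.
import numpy as np

def build_profile(engaged_item_vectors):
    """Average the feature vectors of items the user engaged with."""
    return np.mean(engaged_item_vectors, axis=0)

def recommend(profile, candidates, top_k=3):
    """Rank candidate items by cosine similarity to the user profile."""
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    scored = [(name, cosine(profile, vec)) for name, vec in candidates.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:top_k]

# Feature dimensions might encode topic, tone, format, etc.
history = [np.array([0.9, 0.1, 0.4]), np.array([0.8, 0.2, 0.5])]
candidates = {
    "post_a": np.array([0.85, 0.15, 0.45]),  # similar to past engagement
    "post_b": np.array([0.10, 0.90, 0.20]),  # dissimilar
}
print(recommend(build_profile(history), candidates))
```

Notice the feedback loop: each new engagement pulls the profile further toward similar content, which is one mechanical reason such systems tend to amplify whatever drives engagement.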
Unfortunately, according to a report published by Social Media Today, these algorithms helped deepen political divisions by favoring divisive content that drives engagement. A report published by Time magazine noted that critics blamed the algorithms for breathing new life into conspiracy theories in the pursuit of new audiences.
Now, many platforms are turning to human workers for some much-needed perspective. In August, Facebook announced plans to hire a team of editors to curate stories on its platform in an effort to restore its reputation as a trusted information source. Other tech and entertainment brands have made a point of highlighting the human element in their content curation systems.
Algorithms may be great at finding content that fits our preferences, but they simply cannot account for the broader societal implications of the suggestions they make. Social media platforms are discovering something that will likely remain true no matter how advanced AI technology becomes: Humans are best suited to curate content for their fellow humans.
Medicine: Combining human and machine intelligence.
In 2016, a team of Harvard pathologists developed an AI-powered technique designed to identify breast cancer cells. On its own, the automated diagnostic method was able to accurately identify cancer cells roughly 92% of the time. This figure stands just shy of the success rate of human pathologists, who accurately identify cancer cells about 96% of the time.
However, the most notable results from the study came when the team behind the diagnostic system combined analysis from human pathologists with their automated computational methods. The combined analysis generated near-perfect accuracy, correctly identifying 99.5% of cancer cells. In this case, human input didn’t just compensate for machine error; instead, the combined efforts of human and machine produced a greater effect than either could achieve alone.
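The study itself combined a deep learning system's predictions with pathologists' assessments; as a minimal sketch of the general ensemble idea (not the Harvard team's actual method), one can imagine blending the two probability estimates for each sample:

```python
# Minimal sketch of combining a machine score with a human assessment by
# weighted averaging. This illustrates the general ensemble idea only; the
# Harvard study's actual combination method is not reproduced here.

def combined_diagnosis(model_prob, human_prob, model_weight=0.5):
    """Blend two probability estimates that a sample is cancerous."""
    return model_weight * model_prob + (1 - model_weight) * human_prob

# Hypothetical slide where the model is uncertain but the pathologist is not:
model_prob = 0.55   # model: weak evidence of cancer
human_prob = 0.95   # pathologist: strong evidence of cancer
print(combined_diagnosis(model_prob, human_prob))  # 0.75 -> flag for review
```

The intuition is that the machine's errors and the human's errors are largely uncorrelated, so blending the two estimates tends to cancel mistakes that either would make alone.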
The Harvard system may one day progress to achieve 99.5% or greater diagnostic accuracy independently. However, as long as there is a fundamental divide between AI and biological intelligence, humans will continue to offer unique viewpoints that can be helpful to AI.
Self-driving cars: AI handles corners, while humans handle corner cases.
One of AI’s most exciting emerging applications comes from the world of autonomous vehicles, which have the potential to completely transform the way we move through the world. Autonomous vehicles, or self-driving cars, use sensors to detect their surroundings and then algorithmically determine how best to navigate from point A to point B.
The technology requires human experts to train it — in both the tactical process of operating a vehicle and the cognitive processes needed to recognize common traffic signs and obstacles on the road. Many companies have now developed autonomous cars that are ready to take to the streets, but for the time being, these vehicles still require human input. One startup is developing technology that would allow human operators — expert drivers observing a vehicle’s progress from afar — to take over for the autonomous system in difficult driving conditions.
Thanks to its quick reaction times and its ability to recall and obey traffic laws perfectly, AI is superior to human drivers under standard road conditions. That superiority, however, is a product of training: the moment a situation arises that the AI has not been trained to handle, those advantages are all but erased, and AI processing power must take a backseat to human creativity and adaptability.
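A skeletal version of that handoff logic, purely illustrative and not drawn from any vendor's actual system, might gate on the driving model's self-reported confidence: below a threshold, control escalates to a remote human operator.

```python
# Illustrative fallback logic for remote human takeover: if the driving
# model's confidence in its plan drops below a threshold, hand control to
# a human teleoperator. Names and thresholds are invented for the sketch.
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.8

@dataclass
class Plan:
    action: str
    confidence: float  # model's self-reported certainty, 0.0-1.0

def next_action(model_plan: Plan, request_human_takeover) -> str:
    """Use the model's plan unless confidence is low, then defer to a human."""
    if model_plan.confidence >= CONFIDENCE_THRESHOLD:
        return model_plan.action
    # Unfamiliar scene (e.g., unusual construction layout): escalate.
    return request_human_takeover(model_plan)

def remote_operator(plan: Plan) -> str:
    # Stand-in for the teleoperation link; a person chooses the maneuver.
    return "slow_and_follow_flagger"

print(next_action(Plan("proceed_through_intersection", 0.95), remote_operator))
print(next_action(Plan("merge_around_obstacle", 0.40), remote_operator))
```

The design choice worth noting is that the machine never has to be perfect; it only has to know when it is out of its depth and defer accordingly.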
AI will always need human input.
AI is capable of doing many things that human beings simply cannot, such as calculating enormous sums or sifting through tremendous amounts of data. It’s no wonder that many have grown anxious over the role that humanity will play once these thinking machines are capable of operating independently.
However, as the examples above remind us, AI has always stood on the shoulders of human intelligence. It may be far superior in its ability to predict our preferences or obey every traffic law, but it still has much to gain from our uniquely imperfect perspectives. Ultimately, as long as AI technology is used for humans — as long as there are new applications for AI to learn and new tasks for it to master — AI will need human input.
This article was originally published in Forbes.