In Part 8 of our Convergence Or Collision? AI and Content Marketing series, we move beyond content marketing and operations to ask a bigger question: how else does AI affect our daily lives?
And what do experts, pundits, and futurists predict about AI, humans, and the future?
The survey says…
According to a Pew Research Center survey of 979 technology pioneers, developers, business and policy leaders, researchers, and activists, these experts hold a vast array of questions, concerns, and hopes for the future of AI.
Based on the survey, the top concerns about AI and the future were:
- Human agency – Will we lose control over our lives, thanks to increased use of AI?
- Data abuse – What are the values, ethics, and regulations surrounding AI? How do we ensure companies or governmental agencies do not use AI for personal or political gain?
- Job loss – As more jobs become automated, unemployment and civic unrest could rise in tandem.
- Dependence lock-in – The fear that increased use of AI will make humans less likely to think for themselves, interact with others, or act independently of automated systems.
- Mayhem – Growing cybercrime and weaponized misinformation could flood our society.
However, the survey was not all doom and gloom. Pundits and futurists also believed in the potential and possibility of AI to improve the lives of humans in the future.
The top positive predictions for AI in the future included:
- Global good is #1 – AI could be used to increase communication and collaboration across borders and groups. Digital cooperation could help communities join forces to tackle the biggest issues facing us right now.
- Values-based system – New policies could be developed to ensure AI is designed and used for the common good.
- Expand human capacity – Human and AI collaboration could help expand the possibilities for humans everywhere.
In short? Experts believe AI has the potential to both hinder and improve humanity.
But, let’s move beyond larger trends and dive into what specific futurists of note predict for AI.
Ranging from fears that intelligent robots will eventually destroy us to hopes that AI could solve our toughest medical challenges, futurists run the gamut on their predictions about the future of AI and humanity.
Let’s explore what these six pundits predict for AI:
- Ray Kurzweil
- Bill Joy
- danah boyd
- Ben Goertzel
- Sonia Katyal
- Elon Musk
Ray Kurzweil
Position: author of The Singularity Is Near, inventor of the first flatbed scanner (and many other inventions), chancellor and co-founder of Singularity University. Bill Gates said: “Ray is the best person I know at predicting the future of artificial intelligence.”
Prediction: Kurzweil believes AI will surpass humanity by 2045. But, this is not an alarmist viewpoint. Instead, Kurzweil predicts AI will continue to improve human life moving ahead.
“AI will be so smart that it will come up with ideas that mere humans can’t even comprehend. He suggests that this brilliant AI could solve all of our problems — including all medical problems. Instead of being afraid that technology will turn on us, Ray Kurzweil is excited about the potential to expand our intelligence,” according to Kelly McSweeney.
Learn more: Watch The TED Interview, Ray Kurzweil on what the future holds next
Bill Joy
Position: computer scientist, co-founder of Sun Microsystems, venture capitalist
Prediction: Joy believes that our advances in genetic engineering and technology could bring risk to humanity. In fact, Joy supports the idea of a “grey goo nightmare,” or a scenario where “out-of-control self-replicating nanobots destroy the biosphere by endlessly producing replicas of themselves and feeding on materials necessary for life,” according to Britannica. Yikes!
Learn more: Read his Wired essay “Why the Future Doesn’t Need Us”
danah boyd
Position: principal researcher at Microsoft Research and founder and president of the Data & Society Research Institute
Prediction: “AI is a tool that will be used by humans for all sorts of purposes, including in the pursuit of power. There will be abuses of power that involve AI, just as there will be advances in science and humanitarian efforts that also involve AI. Unfortunately, there are certain trend lines that are likely to create massive instability…we will see AI be used in harmful ways in light of other geopolitical crises.”
Learn more: Watch her SXSW EDU Keynote | What Hath We Wrought?
Dr. Ben Goertzel
Position: author, artificial intelligence researcher, CEO and founder of SingularityNET, a project that combines AI and blockchain to democratize access to artificial intelligence
Prediction: Dr. Goertzel believes the AI research field is about to shift from highly specialized narrow AIs toward artificial general intelligence (AGI): single, generally intelligent systems that can think and act much like humans.
“Any other problem humanity faces – including extremely hard ones like curing death or mental illness, creating nanotechnology or femtotechnology assemblers, saving the Earth’s environment or traveling to the stars — can be solved effectively via first creating a benevolent AGI and then asking the AGI to solve that problem,” Dr. Goertzel said, according to Forbes.
Learn more: Read his book “A Cosmist Manifesto”
Sonia Katyal
Position: co-director of the Berkeley Center for Law and Technology, member of the inaugural U.S. Commerce Department Digital Economy Board of Advisors
Prediction: “In 2030, the greatest set of questions will involve how perceptions of AI and their application will influence the trajectory of civil rights in the future. Questions about privacy, speech, the right of assembly and technological construction of personhood will all re-emerge in this new AI context, throwing into question our deepest-held beliefs about equality and opportunity for all. Who will benefit and who will be disadvantaged in this new world depends on how broadly we analyze these questions today, for the future.”
Learn more: Read her paper “Private Accountability in the Age of Artificial Intelligence”
Elon Musk
Position: CEO of Tesla and SpaceX, co-founder of OpenAI, owner of X (formerly Twitter)
Prediction: Despite his early investments in AI start-ups, Elon Musk has repeatedly sounded the alarm about the potential harm AI could cause humanity. Musk has even stated that AI could overtake humans within the next five years.
“The thing that is the most dangerous — and it is the hardest to … get your arms around because it is not a physical thing — is a deep intelligence in the network. You say, ‘What harm can a deep intelligence in the network do?’ Well, it can start a war by doing fake news and spoofing email accounts and doing fake press releases and by manipulating information,” Musk said at a bipartisan gathering of U.S. governors.
However, despite his prominent doomsday predictions, many in the AI community disagree with Musk’s opinions.
In our next post, we’ll explore the evolving AI landscape and the specific roles AI is being adopted for today. Plus: where will AI progress in the next 5-10 years?