In Part 3 of our Convergence or Collision? AI and Content Marketing series, we’re taking a deep dive into the history of AI.
A brief history of AI
As we mentioned before, the concept of “artificial intelligence” is rather difficult to pin down in historical terms. The roots of the field go back to at least the 1940s, yet the idea of AI was crystallized in Alan Turing’s famous 1950 paper, “Computing Machinery and Intelligence.”
Turing’s paper posed the question: can machines think? It also proposed “the imitation game,” later known as “the Turing Test,” as a way of answering that question. The paper even raised the possibility that a machine might be programmed to learn from experience much as a young child does.
However, the term “artificial intelligence” itself wasn’t coined until 1956, by John McCarthy. McCarthy later complained that “as soon as it works, no one calls it AI anymore.”
So what exactly do we consider AI, then?
Well, it’s a bit complicated. According to the White House, “there is no single definition of AI that is universally accepted by practitioners.”
Some define AI as a computerized system that exhibits behavior generally considered to require intelligence. Others define it as a system that can rationally solve complex problems, or take appropriate action to achieve its goals, in whatever real-world circumstances it encounters.
Okay, we know what you’re thinking. Sounds like robots! But let’s try not to get hung up on that. A robot is, in essence, just the outer shell; the artificial intelligence is the “mind” inside it. Think of the AI in our everyday lives: Siri and Alexa are common examples of AI software, but neither is a robot in the way robots are typically envisioned.
AI research began to progress rapidly in the late 1990s, and by roughly 2010 it had produced the kind of progress and enthusiasm for AI that is prevalent in today’s world. This was driven by three intertwined factors: the availability of big data from a variety of sources, including social media, government, e-commerce, and science; improved and more advanced machine learning approaches that could make use of that raw material; and the more powerful computers needed to run them.
A few notable moments in AI history include:
- In 1997, IBM’s Deep Blue computer played chess against world champion Garry Kasparov and won
- DARPA’s Cognitive Assistant that Learns and Organizes (CALO) project ran from May 2003 to 2008
- CALO led to many spinoffs, the most significant of which hit the App Store in 2010 – Siri
- In 2011, IBM’s question-answering computer Watson won the TV game show “Jeopardy!” against two of its all-time champions, Ken Jennings and Brad Rutter
- In 2014, Eugene Goostman, a chatbot designed to simulate a 13-year-old boy, became the first program claimed to have “passed” the Turing Test – though whether it truly did is still up for debate, depending on who you ask
Similar, but different
It’s also important to note that several terms are often used interchangeably with artificial intelligence even though they actually mean different things. The three most common are:
- Cognitive Computing – the simulation of human thought processes in complex situations using computerized models. It involves self-learning systems that use pattern recognition, data mining, and natural language processing to imitate the way the human brain works.
- Machine Learning (ML) – a type of AI that allows computers to learn without being explicitly programmed. It focuses on programs that can change and improve when exposed to new data. Essentially, it stems from the idea that we should be able to give machines access to data and let them learn for themselves (see the short sketch after this list).
- Deep Learning – a subfield of machine learning that focuses on algorithms inspired by the structure and function of the brain. Artificial neural networks (ANNs) are computing systems that loosely simulate the way the human brain analyzes and processes information.
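To make the “learning without being explicitly programmed” idea concrete, here is a minimal sketch in Python using the scikit-learn library. Everything in it is illustrative: the tiny spam-detection dataset is invented, and scikit-learn is just one common tool for this, not anything the definitions above prescribe.

```python
# A toy "spam filter" that is never given explicit rules; it learns them
# from labeled examples. The dataset and labels are invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Labeled training examples: 1 = spam, 0 = not spam
messages = [
    "win a free prize now",
    "claim your free reward today",
    "meeting moved to 3pm",
    "can you review the draft",
]
labels = [1, 1, 0, 0]

# Convert each message into word counts, then fit a Naive Bayes classifier.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)

# The model was never told "free means spam" -- it inferred that pattern
# from the four examples above.
print(model.predict(["free prize inside"]))        # expected: [1] (spam)
print(model.predict(["notes from the meeting"]))   # expected: [0] (not spam)
```

Feed the model more examples and it gets better without any change to the code itself, which is exactly the difference between machine learning and traditional, explicitly programmed software.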
In our next post, we’ll explore another form of AI: Natural Language Generation.