When Did Artificial Intelligence Begin?

Over the past year artificial intelligence has taken on an increasingly prominent role worldwide. We keep hearing about AI that can speed up our work—or even replace us. But when did artificial intelligence actually begin? If you’re asking yourself this question, you’re in the right place. Below we’ll retrace every milestone of this technological innovation, examining how artificial intelligence was born and how it has evolved on the global stage. Enjoy the read!

Where and when did artificial intelligence begin?

Artificial intelligence traces its roots to 1956, when John McCarthy organised a summer research conference at Dartmouth College and coined the term "artificial intelligence." The new discipline grew out of experiments in getting machines to prove mathematical theorems and solve problems step by step; the objective was to show that machines could think. But how could anyone talk about artificial intelligence back in 1956, and what made those researchers believe the discipline had a future?

How did artificial intelligence get started?

Some researchers introduced programs capable of performing intelligent behaviours, including:
  • Logic Theorist: because AI is built on mathematics, the first step was to demonstrate, through this software, that a machine could prove mathematical theorems.
  • General Problem Solver: humans constantly face problems and naturally want to spend as little time as possible on them; the underlying idea was to create software that could emulate human problem-solving across general tasks.
  • In 1959 H. Gelernter presented a program able to prove geometry theorems, followed shortly afterwards by a program for symbolic integration. A toy sketch of this style of symbolic reasoning follows the list.
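These early programs did not learn from data; they manipulated symbols with hand-written inference rules. Here is a minimal backward-chaining sketch in Python that captures the flavour of that approach. It is only an analogy: the real Logic Theorist was written in IPL and worked on propositional formulas from Principia Mathematica, while the statements and rules below are invented labels.

```python
# Toy backward-chaining "prover": to establish a goal, find a rule
# whose conclusion matches it and recursively prove its premises;
# axioms count as already proven. All statements are invented labels.

axioms = {"A", "B"}
rules = {
    "C": [{"A", "B"}],  # A and B together imply C
    "D": [{"C"}],       # C implies D
}

def prove(goal):
    if goal in axioms:
        return True
    for premises in rules.get(goal, []):      # rules concluding goal
        if all(prove(p) for p in premises):   # prove each premise
            return True
    return False

print(prove("D"))  # True: D follows from C, which follows from A and B
```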

AI Winter

Like every technological revolution, artificial intelligence has gone through several downturns known as "AI winters." The first arrived in the 1970s: in the UK, the Lighthill Report (1973) prompted severe funding cuts, and the United States followed suit, though with less immediate impact. Expectations had to be scaled back.

During the 1970s and 80s, researchers refocused on expert systems: software able to emulate the decision-making of a human specialist in a specific field (finance, medicine, geology, etc.). This shift from general-purpose to domain-specific systems revived investment and spawned successful programs such as MYCIN (medical diagnosis) and XCON (computer configuration).

The term "AI winter" itself was coined in 1984 at the AAAI annual meeting, when researchers Roger Schank and Marvin Minsky warned that AI was about to enter a tunnel of pessimism: "We are facing a chain reaction—much like a 'nuclear winter'—that will start with pessimism within the AI community, spread to the press, lead to drastic funding cuts, and finally end all serious research." Three years later the billion-dollar expert-systems industry began to collapse: high maintenance costs and the systems' limited "intelligence" brought on a second AI winter as expectations crashed once again.
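To make "expert system" concrete, here is a minimal rule-based sketch in Python: a specialist's know-how encoded as explicit if-then rules that are checked against a case. The symptoms and conclusions are invented for illustration; a real system such as MYCIN held hundreds of rules plus certainty factors.

```python
# Toy expert-system sketch: domain knowledge as explicit if-then
# rules, checked from most specific to most general. The medical
# "knowledge" below is invented purely for illustration.

RULES = [
    ({"fever", "stiff_neck"}, "urgent referral: possible meningitis"),
    ({"fever", "cough"}, "suspected flu"),
    ({"cough"}, "likely common cold"),
]

def diagnose(symptoms):
    for findings, conclusion in RULES:
        if findings <= symptoms:   # all required findings present?
            return conclusion
    return "no rule fired: consult a human specialist"

print(diagnose({"fever", "cough"}))  # suspected flu
```

Systems like XCON worked the same way at far larger scale, with thousands of such rules; keeping them all consistent is exactly the maintenance burden that helped trigger the second winter.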

The Advent of Machine Learning

In the early 1990s researchers changed course: instead of hand-coded rules, they embraced machine learning, which relies on automatic learning from data. Two key factors enabled this shift:
  • Neural networks—revived after years of neglect in favour of rule-based AI.
  • The rise of the internet—machines now had access to far more data, paving the way for Big Data.
Coupled with growing computational power, new statistical techniques (SVMs, Random Forests, advanced probabilistic models) and, later, deep learning began to flourish. Breakthroughs such as AlexNet (2012) in image recognition and voice assistants like Alexa and Google Home pushed AI into mainstream products. Success drew new investment and new players: OpenAI, Google DeepMind (Gemini), Anthropic (Claude), DeepSeek, and others.
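To show how this differs from the rule-based sketch above, here is a minimal machine-learning example using scikit-learn (one of the free frameworks mentioned later in this article): a Random Forest learns its own decision rules from labelled examples instead of having them written by hand. The built-in iris dataset is used purely for illustration.

```python
# Minimal learning-from-data sketch: no hand-written rules; the
# model infers patterns from labelled examples and is then scored
# on data it has never seen.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)   # "training" = estimating the rules
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```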

Future Developments

What challenges and opportunities await?
  • Large-scale AI: today’s “Large Language Models” (such as GPT) show increasingly advanced performance in text understanding and generation, pattern recognition and other complex tasks.
  • Ethical debate and regulation: as AI grows, topics such as privacy, algorithmic transparency, job impact and long-term risks have surfaced, prompting governments and international bodies to discuss new regulations.
  • Ongoing research: despite recent breakthroughs, AI research still faces challenges such as generalisation, explainability (Explainable AI) and bias mitigation.
  • Weak AI or strong AI? Research today focuses on weak (narrow) AI, but some believe strong AI could arrive sooner than expected; I cover the distinction in a dedicated article.

FAQ

When and how did artificial intelligence originate?

The term “artificial intelligence” was coined in 1956 at the Dartmouth Summer Research Project organised by John McCarthy and other researchers. That conference laid the theoretical groundwork for getting computers to solve problems that require human reasoning.

Who is considered the founder of artificial intelligence?

John McCarthy is regarded as the father of AI: besides coining the term, he created the LISP language and played a pivotal role in early automated-reasoning systems.

In what years did people start talking about artificial intelligence?

The first experiments date back to the 1950s (Alan Turing's 1950 paper on machine intelligence, Arthur Samuel's self-learning checkers program begun in 1952), but the field became official in 1956 with the Dartmouth conference.

When did artificial intelligence arrive?

From 1950s theory we moved to practical applications in the 1980s (expert systems) and to the current boom driven by deep learning from 2012 onward, culminating in the mainstream spread of generative models from 2022.

What is artificial intelligence?

AI is the branch of computer science that aims to create systems capable of tasks that normally require human intelligence, such as perception, language, planning and learning.

Free artificial-intelligence tools

Many platforms have free or open-source options: ChatGPT Free, Google Gemini Basic, generative-graphics tools like DALL·E 2 in trial mode, and frameworks such as TensorFlow, PyTorch and scikit-learn.

Generative artificial intelligence

This category of models (e.g., GPT-4, DALL·E, Stable Diffusion) creates new text, images, audio or video from prompts instead of merely analysing or classifying existing data.
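As a hedged sketch of what "generating from a prompt" means in code, here is the Hugging Face transformers pipeline with GPT-2, a small open model that runs locally (far weaker than GPT-4, but free and keyless); the continuation will differ on every run.

```python
# Minimal generative-AI sketch: the model continues a prompt rather
# than classifying existing data. GPT-2 is used only because it is
# small, open, and needs no API key.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator(
    "Artificial intelligence began in 1956 when",
    max_new_tokens=30,
    num_return_sequences=1,
)
print(result[0]["generated_text"])
```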

Artificial-intelligence apps

Mobile and web apps powered by AI include voice assistants (Siri, Google Assistant), translators (DeepL), AI-enhanced image editors (Adobe Firefly), coding tools (GitHub Copilot) and productivity solutions (Notion AI).

Artificial intelligence: examples

Everyday examples include facial recognition, Netflix recommendations, medical-image diagnosis, self-driving cars, customer-service chatbots and banking fraud detection.

How does artificial intelligence work?

Models collect data, extract patterns using algorithms (machine learning, deep learning) and optimise parameters to minimise error; the result is a system that generalises to new data.
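As a minimal numeric sketch of that "optimise parameters to minimise error" loop, here is gradient descent fitting a single weight of a linear model; the toy data (roughly y = 2x) and learning rate are invented for illustration.

```python
# Minimal training loop: nudge the weight w so that predictions w*x
# match the targets y, by stepping against the gradient of the
# mean squared error. Toy data, invented for illustration.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 8.0]   # roughly y = 2x

w, lr = 0.0, 0.01
for step in range(200):
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad          # move w downhill on the error surface

print(f"learned w = {w:.2f}")  # close to 2.0
```

The same collect-data, fit-parameters, generalise pattern underlies everything from this one-weight example to deep networks with billions of parameters.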


Author
Nicolò Caiti
I’ve made MarTech my career, focusing on artificial intelligence for digital marketing. In this blog I analyse how AI is transforming the sector—improving web performance, optimising digital strategies and speeding up everyone’s work. With years of experience in marketing automation and advanced customer-journey management, I share practical insights, case studies and best practices to help people harness AI’s potential in their roles. I hope you find the answers you’re looking for!