The History of AI

June 17, 2023 · 4 min read

Artificial Intelligence has become an integral part of the modern world, with its use in day-to-day life growing rapidly. Previously outlandish dreams like self-driving cars and robot soldiers are fast becoming reality in the 21st century. But the concept of AI is not a modern development; its roots can be traced back to before the Second World War. In this article, we will explore the fascinating history of AI, from its origins through the key milestones in its development.
The story of AI starts in the 1930s, when the renowned mathematician Alan Turing proposed the concept of a "Turing machine," a hypothetical device capable of following any given recipe to complete a task. He originally conceived it as a solution to a mathematical problem of the day, but the idea of a programmable machine that could follow a recipe to completion laid the foundation for modern computers. During World War II, ENIAC, one of the first modern computers, was built on Turing's idea and used to solve complex military calculations. It was a powerful tool, but it was not intelligent: it could only follow instructions. Tasks that today's AI handles routinely, like recognising faces or translating languages, were far beyond the reach of these first computers.
The 1950s marked the start of the first golden age of AI research, in which researchers took a "divide and conquer" approach to cracking the mystery of intelligence. They identified characteristics of intelligence, such as perception, problem-solving, and language understanding, and decided to tackle each one individually, aiming to create useful programs that could accomplish tasks previously only humans could do.
One of the most notable projects of this golden age was the SHRDLU system, which showcased AI's potential in problem-solving and language understanding. It worked in a simulated environment containing a set of blocks, which the program could rearrange from one configuration to another. It could also understand instructions in English and give feedback to the user in English. It was a landmark project, but it had clear limitations: the simplicity of its simulated world and the narrow subset of language it could understand.
One significant obstacle faced by early AI researchers was the "combinatorial explosion" in problem-solving: as problems grow more complex, the time and computing power required to find solutions increase exponentially. This issue brought AI research to a roadblock, and an "AI winter" set in from the early 1970s into the 1980s, with AI research facing heavy criticism and reduced funding.
In response to these challenging times, however, a new style of AI emerged: knowledge-based AI. It employed "expert systems" that captured human knowledge to solve problems. These systems represented knowledge as rules in a "knowledge base," entered manually by experts, and an inference engine then drew conclusions from that knowledge. One notable expert system was MYCIN, a doctor's assistant. With the help of physicians, a knowledge base about blood infections and the patterns that indicated particular diseases was entered into MYCIN. Using its inference engine, the system could then propose a diagnosis for a blood sample it was given. MYCIN showcased AI's ability to approach, and sometimes surpass, human expert performance and to help people in a meaningful way.
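The rule-plus-inference-engine pattern described above can be sketched in a few lines of Python. This is a hypothetical toy, not MYCIN's actual rule language: each rule maps a set of required facts to a conclusion, and a simple forward-chaining engine fires any rule whose conditions are all satisfied.

```python
# Minimal forward-chaining inference engine (illustrative sketch,
# not MYCIN's real rule format).
def infer(facts, rules):
    """Repeatedly fire rules whose conditions are all known facts."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if set(conditions) <= facts and conclusion not in facts:
                facts.add(conclusion)  # the rule fires: record its conclusion
                changed = True
    return facts

# Hypothetical knowledge base, as an expert might enter it.
rules = [
    (["gram_positive", "clusters"], "staphylococcus"),
    (["staphylococcus", "coagulase_positive"], "staph_aureus"),
]

print(infer(["gram_positive", "clusters", "coagulase_positive"], rules))
```

Note how the second rule only fires after the first has added "staphylococcus" to the fact set: conclusions chain, which is what let expert systems reason several steps beyond the raw observations.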
Additional approaches to artificial intelligence, such as behavioural AI and agent-based AI, were explored in this period. In behavioural AI, a program has a hierarchy of behaviours, each with a different priority; at any moment it performs the highest-priority behaviour that applies. This scheme was used in the creation of robot cleaners. Agent-based AI followed: an agent is fed the user's preferences as numeric values and works rationally to maximise them. Agents can also sense their environment and make rational choices based on it, in the user's interest. Whereas the first golden age of AI used a divide-and-conquer approach to build individual capabilities, AI agents had a complete, integrated set of abilities. Prominent examples include the software agent Siri and Deep Blue, which beat world champion Garry Kasparov at chess in 1997.
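The priority-ordered behaviour scheme described above can be sketched as a simple control loop. The sensor fields and behaviour names here are invented for illustration, loosely modelled on a robot cleaner:

```python
# Behavioural AI sketch: run the highest-priority behaviour whose
# trigger condition currently holds. All names are illustrative.
behaviours = [
    # Listed in priority order: the first matching behaviour wins.
    (lambda s: s["battery_low"], "return_to_dock"),
    (lambda s: s["obstacle_ahead"], "turn_away"),
    (lambda s: True, "keep_cleaning"),  # default, lowest-priority behaviour
]

def choose_action(sensors):
    """Walk the hierarchy top-down and pick the first behaviour that applies."""
    for condition, action in behaviours:
        if condition(sensors):
            return action

print(choose_action({"battery_low": False, "obstacle_ahead": True}))
# -> turn_away
```

Because the list is checked top-down on every cycle, an urgent behaviour (low battery) always pre-empts routine ones, which is the essence of the hierarchy the text describes.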
Recent times have witnessed the dominance of machine learning in AI research. Machine learning can be done in two main ways: supervised learning and reinforcement learning. In supervised learning, the AI is trained on large amounts of data so it can identify patterns in that data. In reinforcement learning, the AI experiments by making decisions and receiving feedback on them, learning which decisions work best. These models use "neural nets": layers of 'neurons' with active and inactive inputs, each input carrying a numeric weight. The weights of the active inputs are summed, and when the total reaches a numeric activation threshold, the neuron's output becomes active. Deep learning extended this idea with many more layers of neurons, each more densely interconnected with the others. This approach demanded far more processing power, which is why it only rose to prominence recently.
AI has had a topsy-turvy history, with many ups and downs, and few can predict what lies in store for the future.
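As a closing illustration, the threshold neuron described above can be sketched directly. The weights and threshold here are made-up numbers; in a real network they are learned from data:

```python
# A single artificial neuron: sum the weights of the active inputs
# and fire if the total reaches the activation threshold.
def neuron(inputs, weights, threshold):
    total = sum(w for x, w in zip(inputs, weights) if x)  # only active inputs count
    return 1 if total >= threshold else 0

# Illustrative values: with these weights the neuron behaves like an AND gate.
weights = [0.6, 0.6]
threshold = 1.0
print(neuron([1, 1], weights, threshold))  # fires: 0.6 + 0.6 reaches 1.0
print(neuron([1, 0], weights, threshold))  # stays inactive: 0.6 falls short
```

Stack layers of these units, feed each layer's outputs into the next as inputs, and you have the deep networks that modern machine learning is built on.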