Cast your mind back to the end of last year and think about your Instagram, Facebook, and Twitter (as it still was) feeds. If they were anything like mine, they were likely full of friends’ AI-generated photos. Or, at least, the ones that made them look smarter, prettier, taller, thinner, etc. Generative AI had exploded into the mainstream.
You could be forgiven for thinking it was invented around then too – and you may be surprised to learn that artificial intelligence is as old as the aunties and grandmothers who asked you about it around the dining table at Christmas 2022.
Putting AI to the test
AI is around 70 years old. Its roots can be traced back to Alan Turing (of ‘The Imitation Game’ fame), the British WWII codebreaker. Turing was a leading mathematician, a pioneer of computer science, and later a contributor to developmental biology. His earliest work laid the foundations for AI as we know it, and his eponymous Turing Test, proposed in 1950, assesses a machine’s ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human.
Turing proposed that a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses. The evaluator would be aware that one of the two partners in conversation was a machine, and all participants would be separated from one another.
The conversation would be limited to a text-only channel, such as a computer keyboard and screen, so the result would not depend on the machine’s ability to render words as speech. If the evaluator could not reliably tell the machine from the human, the machine would be said to have passed the test. The test results would not depend on the machine’s ability to give correct answers to questions, only on how closely its answers resembled those a human would give.
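To make that setup a little more concrete, here is a minimal, purely illustrative sketch in Python of the text-only protocol described above. Everything in it (the `machine_reply` and `human_reply` stand-ins, the single-session ‘pass’ check) is a hypothetical simplification, not Turing’s own formulation.

```python
import random

def machine_reply(prompt: str) -> str:
    """Hypothetical stand-in for the machine under evaluation."""
    return "That's an interesting question. What do you think?"

def human_reply(prompt: str) -> str:
    """Hypothetical stand-in for the hidden human participant."""
    return input(f"(hidden human) {prompt}\n> ")

def imitation_game(questions: list[str]) -> bool:
    """One text-only session: the evaluator reads typed answers from
    partners 'A' and 'B' without being told which one is the machine."""
    machine_is_a = random.random() < 0.5
    for q in questions:
        answer_a = machine_reply(q) if machine_is_a else human_reply(q)
        answer_b = human_reply(q) if machine_is_a else machine_reply(q)
        print(f"Q: {q}\n  A: {answer_a}\n  B: {answer_b}")
    guess = input("Which partner is the machine, A or B? ").strip().upper()
    # The machine 'passes' this session if the evaluator guesses wrongly,
    # i.e. cannot tell it apart from the human.
    return (guess == "A") != machine_is_a

if __name__ == "__main__":
    passed = imitation_game(["What is your favourite poem, and why?"])
    print("Machine passed this session" if passed else "Machine was identified")
```

In Turing’s formulation, of course, ‘reliably’ implies many evaluators over many sessions, not a single guess.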
A thought experiment
The Golden Age of AI followed, spanning roughly 1956–1976. During this period, scientists and researchers were optimistic about the potential of AI to create intelligent machines that could solve complex problems by matching human intelligence – or even surpassing it.
Whilst the era fizzled out, it delivered many a ‘first’ that still holds value today. ChatGPT’s ‘great-great-grandmother’ could be considered to be ELIZA – one of the first chatbots (then called ‘chatterbots’) and an early, if informal, passer of the Turing Test – created at MIT by Joseph Weizenbaum between 1964 and 1966.
Moving into the next decade, John Searle (a prominent American philosopher) set the tone with his Chinese Room thought experiment. Searle proposed it as an argument against what he called ‘strong AI’, aiming to illustrate that running a program is not, by itself, enough to give a machine understanding.
Searle uses the following scenario to demonstrate his argument:
“Imagine a room in which a man, who understands no Chinese, receives, through a slot in the door, questions written in Chinese. When he receives a question, the man carefully follows detailed instructions written in English to generate a response to the question, which he passes back out through the slot. Now suppose the questions and responses are part of a Chinese Turing Test, and the test is passed”.
Chess and penguins
The years that followed this ’downer’ of a start to the 1980s were low in ambition and confined to what we now look back on as ‘Behavioural AI’. Knowledge-based systems, sometimes called ‘expert systems’, were built to reproduce the knowledge and/or performance of a human expert in a specific field. They mostly used ‘if this, then that’ logic flows, and they didn’t always get it right – the classification of penguins (birds, but flightless ones) is an oft-cited example of the basic errors of the time.
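To illustrate that ‘if this, then that’ style of reasoning (and how the penguin tripped it up), here is a minimal, hypothetical sketch in Python. It is not drawn from any particular expert system of the era; it simply shows how a naive general rule (‘birds fly’) goes wrong until an explicit exception rule is hand-coded.

```python
# Naive 1980s-style rule base: a general rule plus a hand-written exception.

def can_fly(facts: set[str]) -> bool:
    # Exception rule (added after the penguin error was spotted):
    # flightless birds override the general rule below.
    if "penguin" in facts:
        return False
    # General rule: if it is a bird, then it can fly.
    if "bird" in facts:
        return True
    # Default: no rule fired, so conclude it cannot fly.
    return False

print(can_fly({"bird"}))             # True  - a typical bird
print(can_fly({"bird", "penguin"}))  # False - correct only because the
                                     #         exception rule exists
```

The brittleness is the point: every exception had to be anticipated and written in by hand, which is why these systems didn’t always get it right.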
This era produced a few big wins, especially in the efficiency space – like Digital Equipment Corporation’s ‘R1’ (also known as XCON) expert system, which saved the company a reported $40m per year by optimising computer system configurations. But this was before the advances in computerised automation that made corporate adoption commonplace. It’s also widely acknowledged as the period that birthed the first recognised instances of bias in AI.
A lot happened in the world of AI in the 1990s, with major advances in defence, space, financial services, and robotics. So it’s perhaps surprising that most AI historians and computer scientists point to the same turning point for machine intelligence. In 1996, ‘Deep Blue’, a chess-playing computer from IBM, became the first machine to beat then-world champion Garry Kasparov in a game under tournament conditions, and in 1997 it won a full six-game rematch. Prior to this, chess had been singled out as a ‘frontier’ for machine vs. human intelligence, with many people believing the human brain to be the only one capable of mastering a game whose game-tree complexity is estimated at between 10¹¹¹ and 10¹²³ possible games. (For scale, a ‘googol’, the inspiration behind Google’s name, is 10 to the 100th power: a 1 followed by 100 zeros.) Machine intelligence had arrived.
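For a rough sense of those magnitudes, here is a quick, purely illustrative back-of-the-envelope comparison in Python (the estimates themselves are just the ones quoted above):

```python
# Scale check: chess game-tree complexity estimates vs. a googol.
googol = 10 ** 100             # 1 followed by 100 zeros
low_estimate = 10 ** 111       # lower estimate quoted above
high_estimate = 10 ** 123      # upper estimate quoted above

print(low_estimate // googol)   # 10**11: even the low estimate is
                                # a hundred billion googols
print(high_estimate // googol)  # 10**23 googols
```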
A new era emerges
As the use of computers in domestic settings proliferated, there was an exponential surge in Internet usage in the mid-1990s, with the last few years of the decade renowned today for the dotcom bubble (1995–2000) and its ultimate implosion. Throughout this time AI took a backseat in social contexts, despite already starting to power many early consumer applications, software products, and websites. Commercially, the focus was on automation and efficiency – neither of which was particularly “sexy” or fun.
Enter… the self-driving car. A long-held obsession and science-fiction staple, it was about to usher in a whole new age between March 2004 and October 2005. The DARPA Grand Challenge was a competition for autonomous vehicles funded by the Defense Advanced Research Projects Agency, the research arm of the United States Department of Defense. The 2004 race saw 21 teams, each with their own self-driving vehicle, compete over a course of roughly 150 miles / 240 km.
A grand total of zero entrants finished the race in 2004. But in the 2005 race, five vehicles successfully completed the course, and of the 23 entrants, all but one surpassed the 7.32 miles / 11.78 km covered by the best vehicle in 2004. The winner on the day was Stanley, the Stanford Racing Team’s vehicle, but the overall winner was AI itself, with optimism rallying and the machine-intelligence conversation growing in reach and volume.
Humanoid robots and sci-fi dreams
In the late 2000s, AI entered its ‘modern era’. A number of humanoid robots brought AI closer to science fiction, driverless car projects became abundant, AI was being built into consumer and commercial applications, and the Internet of Things (IoT) emerged – with the ratio of connected things to people growing from 0.08 in 2003 to 1.84 by 2010.
The 2010s were when we really saw the mass proliferation of AI in society. Much of the mainstream tech we take for granted today was born (or matured) in this decade. Virtual assistants like Siri. Machine learning tools. Chatbots capable of human-quality conversation. Mobile phone features. Photography aids. In-car innovations like satnav and adaptive cruise control. Smartwatches. Smart appliances. Real-time share-trading platforms that everyone can use, not just financial giants. Even the humble product recommendation engine. They all use AI.
We arrived in the 2020s with 70 years’ build-up in artificial intelligence, machine learning, and deep learning. The past few years have seen significant advances, and the next few undoubtedly will too.
Embracing AI’s evolution
When you next ask ChatGPT to write that report for you, when you use Papirfly’s Generative AI to create hundreds of illustrations in an instant, or when you think about the potential pitfalls of AI (we’ll address these in an upcoming article), do remember that we’re not dealing with a brand-new toy here.
Instead, we are working with technology that started its journey in the 1950s – a journey that has seen more change than its early creators could ever have predicted in their wildest dreams, and one that is likely to transform almost every aspect of human life in the next decade.