A Brief History Of AI

    The world became familiar with the concept of AI-driven robots in the first half of the 20th century, thanks to science fiction. The Tin Man of The Wizard of Oz set the ball rolling, and the trend continued with the humanoid robot in Fritz Lang’s film Metropolis that impersonated the real Maria. By the 1950s, what had been the stuff of science fiction was showing signs of becoming reality, as a generation of mathematicians, scientists, and philosophers took up the idea of artificial intelligence (AI). Today it’s normal to talk about the massive computing power of supercomputers, the field of data science that makes data available and analyzable, and AI that can mimic human mental processes. But the road to the modern world’s AI, big data, and deep learning has been a long one. Let’s take a tour through the history of AI to find out how it evolved into what it is today.

    The 1950s — Early Days of AI

    It all started in 1950 with Alan Turing, a young British polymath who explored the mathematical possibility of artificial intelligence. Turing suggested that, just like humans, machines could use available information and reasoning to make decisions and solve problems. This was the logical framework of his 1950 paper on creating thinking machines, ‘Computing Machinery and Intelligence’, in which he discussed how intelligent machines could be built and how their intelligence could be tested.

    So why couldn’t Turing start work on his ideas right away? The problem lay with the computers of the day, which needed to change before such work was possible. Before 1949, computers lacked a key precondition for intelligence: they could not store commands, only execute them. In other words, a computer could be told what to do, but it could not remember what it had done. Computing was also exceptionally expensive; in the early 1950s, leasing a computer could cost as much as $200,000 a month. Only big technology companies and prestigious universities could afford to explore this unfamiliar and uncertain field. Anyone wishing to pursue AI therefore needed a proof of concept, along with the backing of high-profile advocates, to persuade funding sources to invest in the endeavor.

    The Conference Where It All Began

    It took five more years for that proof of concept to arrive, in the form of the Logic Theorist, a program written by Allen Newell, Cliff Shaw, and Herbert Simon and funded by the RAND (Research and Development) Corporation. Designed to mimic a human’s problem-solving skills, the Logic Theorist is widely considered the first AI program. It was presented in 1956 at the Dartmouth Summer Research Project on Artificial Intelligence (DSRPAI), hosted by Marvin Minsky and John McCarthy. It was at this conference that McCarthy coined the term ‘artificial intelligence’ and, by bringing together some of the top researchers from different fields, opened an open-ended discussion on AI.

    Though McCarthy envisioned a great collaborative effort, the conference fell short of his expectations: attendees came and went as they pleased, and no consensus was reached on standard methods for the field. Despite this setback, everyone enthusiastically agreed that AI was attainable. The conference was a significant milestone in the history of AI because it catalyzed the next twenty years of AI research.

    The Golden Years of AI

    As computers became cheaper, more accessible, faster, and able to store more information, machine learning algorithms also improved, and people got better at knowing which algorithm to apply to a given problem. Early demonstrations such as Newell and Simon’s General Problem Solver (GPS), whose first version ran in 1957 (though work on the project continued for almost a decade), could use trial and error to solve a remarkable range of puzzles. But GPS had no ability to learn: its intelligence was entirely second-hand, coming from whatever information the programmer had explicitly built in.

    In the mid-1960s, Joseph Weizenbaum created ELIZA at the MIT Artificial Intelligence Laboratory, a computer program that carried on natural-language conversations between human and machine. These successes, together with the backing of leading researchers (notably the DSRPAI attendees), persuaded government agencies such as DARPA (the Defense Advanced Research Projects Agency) to fund AI research at numerous institutions.

    It’s important to note that the government’s interest lay mainly in machines capable of high-throughput data processing and of translating and transcribing spoken language. Optimism about the future of AI was high, but expectations ran even higher.

    The First AI Winter and Subsequent Revival

    Public interest in AI declined in the early 1970s, and research funding was cut after the promises made by the field’s leading scientists failed to materialize. More than a few reports criticized the lack of progress. This first AI winter lasted from roughly 1974 to 1980.

    In the 1980s, AI research resumed when the British and U.S. governments restored funding in order to compete with Japan’s effort to become the global leader in computer technology through its Fifth Generation Computer Project (FGCP). By then, Japan had already built WABOT-1 (in 1972), an intelligent humanoid robot.

    AI also got a boost in the 1980s from two sources. One was John Hopfield and David Rumelhart, who popularized neural network techniques that let computers learn from experience, the line of work that would later grow into deep learning. The other was Edward Feigenbaum, who pioneered expert systems that imitated the decision-making process of a human expert.

    It was also in the 1980s that XCON, an expert system used by Digital Equipment Corporation (DEC) to configure customer orders, was put into production, applying AI techniques to a real-world problem. By 1985, corporations around the world had started using expert systems.

    The Second AI Winter

    From 1987 to 1993, the field experienced another major setback: a second AI winter, triggered by reduced government funding and the collapse of the market for specialized AI hardware such as Lisp machines.

    The 1990s and 2000s

    Several landmark AI goals were achieved during this period. In 1997, IBM’s Deep Blue, a chess-playing computer system, defeated grandmaster Garry Kasparov, then the reigning world chess champion, in a huge step forward for AI-driven decision-making programs. The same year, Dragon Systems’ speech recognition software was implemented on Windows. In the late 1990s, Dr. Cynthia Breazeal’s Kismet, developed at MIT’s Artificial Intelligence Laboratory, was another major achievement: the robot could recognize and display emotions.

    In 2002, AI entered the home in the form of Roomba, launched by iRobot, the first commercially successful robot vacuum cleaner. In 2004, NASA’s two robotic geologists, Spirit and Opportunity, navigated the Martian surface without human intervention. In 2009, Google quietly began developing and testing its self-driving car technology (which later passed Nevada’s self-driving test in 2014).

    2010 to Present Day

    AI has developed by leaps and bounds and is now embedded in our daily lives. In 2011, Watson, IBM’s natural-language question-answering system, won the quiz show Jeopardy! by defeating two former champions, Brad Rutter and Ken Jennings. A few years later, in 2014, the chatbot Eugene Goostman captured headlines when it was claimed to have convinced judges in a Turing test that it was human.

    Also in 2011, Apple released Siri, a virtual assistant powered by natural language processing (NLP) that could infer, learn, answer questions, and make suggestions while personalizing the experience for each user. Similar assistants from other companies followed in 2014: Microsoft’s Cortana and Amazon’s Alexa.

    Some other pioneering developments in the field of AI during this period were:

    • Sophia, created by Hanson Robotics in 2016 and granted Saudi Arabian citizenship in 2017 as the first robot citizen; she can make facial expressions, see (via image recognition), and converse using AI.
    • In 2017, Facebook designed two chatbots to carry out start-to-finish negotiations with each other, using machine learning to continuously improve their negotiating tactics. As they conversed, the chatbots drifted away from human language and invented a shorthand of their own to communicate, a striking (if unintended) demonstration of machine learning at work.
    • In 2018, Google developed BERT, which uses transfer learning to handle a wide range of natural language tasks.

    Wrapping up

    Today, we live in the age of big data, in which the speed of data generation and the abundance of data sources, coupled with the massive computing power of modern machines, have allowed AI and deep learning technologies to find successful applications across many domains. From banking, technology, and healthcare to marketing and entertainment, AI has achieved what once seemed inconceivable. The future of AI is bright, as it is poised to keep improving and to significantly change how we live and work.
