Artificial Intelligence

What is Artificial Intelligence? An overview of AI history, mechanics, and practical use

Today, AI is everywhere: from Facebook’s digital face recognition tools to a casual morning chat with Siri. But let’s face it: there are so many artificial intelligence tools right now that it is easy to lose track of what this technology is actually about. Here is a brief overview of AI and machine learning, along with a look at the technology’s history and applications.

What is Artificial Intelligence?

Broadly speaking, the definition goes like this: artificial intelligence is a branch of computer science concerned with creating technologies able to perform tasks that usually require human intelligence. In other words, AI tries to reproduce some (or all) features of human intelligence in machines. But how is that supposed to work?

How does Artificial Intelligence work?

Connectionism vs Symbolism in AI

Modern AI tools are inspired by the connectionist approach, the idea that artificial intelligence should mimic the biological structure of the human brain, using interconnected neural networks to process data, learn, and react to a changing environment. That is precisely where machine learning takes its roots: in constant data processing through neural networks aimed at perfecting a specific task.
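
To make the connectionist idea a bit more concrete, here is a minimal Python sketch of a single artificial “neuron”: it combines weighted input signals and fires through an activation function. The input values, weights, and bias below are made up purely for illustration.

```python
import numpy as np

# One artificial "neuron": weighted inputs summed and squashed by an activation.
# Inputs, weights, and bias are illustrative made-up values.

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

inputs = np.array([0.5, 0.1, 0.9])     # signals arriving from other nodes
weights = np.array([0.4, -0.6, 0.2])   # connection strengths, tuned by learning
bias = 0.1

activation = sigmoid(np.dot(inputs, weights) + bias)
print(f"neuron output: {activation:.3f}")
```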

Interestingly, up until the turn of the twenty-first century, an entirely different line of thinking dominated the field. The symbolist approach viewed artificial intelligence as a machine’s ability to apply basic rules of logic to manipulate symbols and representations of real-world objects and phenomena. In a way, this approach copies how we handle language and abstract thinking: by decoding symbols, communicating with them, and giving them meaning.

Still, human intelligence is neither just raw neural networking nor abstract juggling of symbols; it is both, and there are probably other qualities to it. While some studies successfully combine these two approaches to make machines learn essentially like human kids, by observing the world and asking questions about it, the day when AI outplays the versatility of the human mind still seems quite distant.

Narrow vs General Artificial Intelligence

An AI whose “mind” becomes as adaptable and flexible as a human’s would be called Artificial General Intelligence. AGI would be able to learn any task in any environment, equalling and possibly surpassing humans in their inventive use of the mind. Scientific advances in this area attract plenty of fear and criticism, with even Elon Musk calling artificial intelligence our “biggest existential threat.” Although such a pessimistic scenario is speculative and far from the modern state of AI, many experts in the field do suggest that a superintelligent technology might come around in a few decades, not centuries.

Still, when talking about modern commercial AI tools, we always mean Narrow, not General AI. Narrow AI is everywhere: it understands human speech, drives some of our cars, detects plagiarism, gives us personalized shopping suggestions, and so on. The point of Narrow AI is not to make a machine broadly intelligent. Instead, these AI technologies concentrate on mastering a very specific field by processing enormous amounts of data. Such tools are not exactly new: as far back as 1997, IBM’s chess-playing machine Deep Blue was able to beat Garry Kasparov, one of the best players in the history of chess. However, with the rise of the Internet and the enormous amount of open data it provided, AI has become much more precise and effective. But the Internet was not the only factor behind the modern use of Narrow AI. Progress in machine learning, the method behind this approach, is what made artificial intelligence look so smart in our eyes.

Machine Learning and Deep Learning

Machine learning is at the core of everything we call Artificial Intelligence these days. Its basic principle is that it is more effective to let a machine teach itself than to have humans write code for every iteration and context. We will not go deep into the mechanics of machine learning in this text, but what you need to know is that it is built similarly to the brain’s neuron system: multiple interconnected nodes constantly exchanging tiny bits of information. As these systems process massive amounts of data, the machine learns by trying, making errors, and analyzing its own effectiveness. For example, which shapes, corners, lines, and angles matter when decoding handwritten text? How variable does the letter “W” look when written by a thousand people? A machine learning system would process millions of handwritten pages to get better at picking the right letter each time, which is much easier and faster than writing endless code to decode every shape ever written.
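
As a rough illustration of this trial-and-error learning, here is a small sketch using scikit-learn’s built-in handwritten digit images (standing in for the handwritten letters above, since that toy dataset ships with the library). The network size and training settings are arbitrary choices for the example, not a recipe.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Tiny handwritten-digit recognizer: instead of hand-coding rules for every
# shape, we let the model learn which pixel patterns matter.
digits = load_digits()                      # 8x8 grayscale images of digits 0-9
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.2, random_state=42)

model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=42)
model.fit(X_train, y_train)                 # learning by trial, error, and correction

print(f"accuracy on unseen digits: {model.score(X_test, y_test):.2f}")
```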

The “data feeding” process is usually referred to as supervised or unsupervised learning. Supervised learning occurs when the artificial intelligence software is told what it is learning: every data unit is labeled and structured. Unsupervised learning involves unlabeled data, leaving it entirely to the machine to figure out what it is looking at. Now, deep learning refers to the number of layers of interconnected nodes and reflects how intensively data is processed. More layers let the network capture more complex patterns, which is what makes it capable of harder tasks. The first neural networks had only a single layer of nodes; only during the past two decades has machine learning become truly deep, operating with more than three layers. Almost all modern AI technologies involve deep learning, because contemporary problems require machines to perform complicated tasks.
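
The contrast between the two learning modes is easy to show in code. The sketch below uses scikit-learn’s small iris dataset purely as a stand-in: the supervised model is given the labels, while the unsupervised one has to group the same data on its own.

```python
from sklearn.datasets import load_iris
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

iris = load_iris()

# Supervised: the algorithm is told what each data unit is (the labels).
classifier = LogisticRegression(max_iter=1000).fit(iris.data, iris.target)

# Unsupervised: no labels at all; the algorithm groups the data by itself.
clusterer = KMeans(n_clusters=3, n_init=10, random_state=0).fit(iris.data)

print("supervised prediction:", classifier.predict(iris.data[:1]))
print("unsupervised cluster:  ", clusterer.labels_[:1])
```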

How is AI used?

Applying AI technologies to real life is where the real fun begins. Artificial intelligence software has enormous potential, but how can AI-powered companies teach machines to actually help people?

Natural language processing (NLP) is one of the most common ways artificial intelligence companies find their way into the market. Teaching AI to understand and analyze text or speech for various purposes is a great way to optimize human working routines. Unicheck is one such product: it detects plagiarism in texts and provides authorship verification tools. For universities, it is a great way to battle academic dishonesty using modern AI. For students, Unicheck is a great way to double-check themselves and perfect their academic writing skills.
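
Unicheck’s actual matching algorithms are not public, but one common building block of plagiarism detection, measuring how similar two texts are, can be sketched with TF-IDF vectors and cosine similarity. The two sample sentences below are invented for the example.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy similarity check: turn each text into a TF-IDF vector and compare them.
source = "Artificial intelligence tries to reproduce features of human intelligence in machines."
submission = "AI attempts to reproduce features of human intelligence in machines."

vectors = TfidfVectorizer().fit_transform([source, submission])
similarity = cosine_similarity(vectors[0], vectors[1])[0, 0]

print(f"similarity score: {similarity:.2f}")   # closer to 1.0 means more overlap
```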

NLP can also work with audio. Each time you ask Siri about the weather, tell Alexa to call your mother, or ask Google Home about tomorrow’s appointments, these devices decode the sounds of your voice into a precise message and react to your request. It is mind-blowing to think about the cognitive complexity required to perform these seemingly mundane tasks. Still, things do not always go smoothly when you make three of these devices talk to each other. So there is still some room for improvement, isn’t there?
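
The assistants themselves run proprietary speech pipelines, but the basic speech-to-text step can be approximated with the open-source SpeechRecognition library. Note that "request.wav" is a placeholder file name, and the example assumes you have a short audio clip and an Internet connection for the free web recognizer.

```python
import speech_recognition as sr

# Rough speech-to-text sketch: turn a short audio clip into a text message.
# "request.wav" is a placeholder; record any short voice request and point to it.
recognizer = sr.Recognizer()

with sr.AudioFile("request.wav") as source:
    audio = recognizer.record(source)          # read the whole clip into memory

try:
    text = recognizer.recognize_google(audio)  # send the clip to a free web recognizer
    print("decoded request:", text)
except sr.UnknownValueError:
    print("could not understand the audio")
```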

Apart from language processing, image recognition is a completely different battlefield for deciding how AI should and should not be used. We are now accustomed to social media platforms detecting our faces in random photos and reporting them to us. However, face recognition can go beyond the norms of privacy, identifying people in real time and putting our daily lives under extra surveillance. Ensuring that face recognition technology serves our security without demolishing our privacy is one of the biggest challenges of modern AI studies.
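
Modern platforms identify faces with deep neural networks, which are far beyond a short snippet, but classic face detection (just finding faces, not identifying them) can be sketched with OpenCV’s bundled Haar cascade. Here "group_photo.jpg" is a placeholder path; point it at any local image.

```python
import cv2

# Classic face detection with OpenCV's bundled Haar cascade.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

image = cv2.imread("group_photo.jpg")           # placeholder path, see lead-in
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
print(f"found {len(faces)} face(s)")

for (x, y, w, h) in faces:                      # draw a box around each face
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("group_photo_faces.jpg", image)
```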

History of AI

Modern types of artificial intelligence did not just appear out of thin air: decades of struggle and scientific breakthroughs paved the way to today’s booming AI startup market. Getting some sense of AI history is essential if you want to better understand state-of-the-art machine learning tools. Here’s a (very) short overview of the history of AI.

1950 Alan Turing published “Computing Machinery and Intelligence,” where he presented the “Turing Test” that shaped the future of AI studies.
1958 Frank Rosenblatt invented “Perceptron,” the first analogue neural network able to (relatively poorly) distinguish basic geometric forms.
1969 Marvin Minsky and Seymour Papert wrote “Perceptrons,” a book that criticized the framework behind Rosenblatt’s neural network machine. The book had an enormous impact on AI history, marking the rise of skepticism and anticipating the beginning of the first “AI Winter.”
1970s The first AI Winter sets in. During this time, it was widely believed that powerful AI was simply beyond humanity’s reach, which led to declining scientific interest in the field.
Early 1980s The emergence of US and Japanese investors’ interest in AI-driven expert systems effectively ends the first AI Winter. Expert systems were introduced to optimize complicated business and production processes.
Late 1980s Due to limited computing power, scarce data, and the inflated expectations set at the beginning of the decade, many government programs and corporate investors lose faith in AI and pull out, starting the second AI Winter.
1997 IBM’s Deep Blue machine beats Garry Kasparov in a game of chess. This event sparked a wave of conversations about whether AI has the potential to overpower human intellect.
2006 The second AI Winter comes to an end. Giants like Google, Apple, and Microsoft, along with smaller tech startups, join the race for better commercial use of AI.
2016 Google DeepMind’s AlphaGo beats world champion Lee Sedol at Go, one of the world’s most complex board games, breaking another wall in the remains of AI skepticism.

So What’s Next?

If there is one lesson to be learned from the history of AI, it is that the technology’s potential was always incredible; the main obstacle was the scientific and economic context needed to fulfill it. If there is such a thing as an “AI Summer,” we are experiencing one right now. In fact, the hype around AI is so high that many companies exaggerate their use of it just to project a more innovative public image.

There’s no way to know what the next big thing for AI will be. Will we use AI to make better public policies or to create dystopian surveillance states? Will AI become a luxury lifestyle product, or will it reinforce sustainable development trends? We simply don’t know yet. It may be that, as the current peak of AI interest runs out of ideas and technological capacity, we will witness another AI Winter. For now, the most we can do is make these tools serve people’s well-being and take care of the beautiful ideas encapsulated in this technology.