AI Uncovered: Part 1 - What Is AI?

As one of the recurring themes of science fiction, Artificial Intelligence (AI) is something we all have at least a vague idea of. In countless novels and movies, often in the guise of an ominous and threatening machine, at other times portrayed as a technology so futuristic as to seem almost unattainable, AI has risen to the role of a mysterious entity that belongs mostly to fiction. At least, this was the scenario a few years ago. More recently, we have all come to know AI by witnessing its expansion and pervasiveness to the point where it constitutes a substantial part of our daily interactions, whether via smart assistants (e.g., Amazon’s Alexa or Apple’s Siri, just to name a few), through the AI-powered features of our smartphones (e.g., facial and fingerprint screen unlocking, or photo enhancement), or even in self-driving cars. However, despite being already deployed in millions of products and services around the world, AI remains something obscure. This is in part due to the nature of AI itself, which is often regarded as a complex and expensive technology that is out of reach for most people. As a byproduct of this misconception, many Small and Medium-sized Enterprises (SMEs) are wary of adopting AI solutions that could actually represent a game changer for their businesses.

What follows is the first part of a series of articles aimed at shedding light on AI and its applications. We begin our journey by looking at what AI actually stands for and at the differences between the most popular AI techniques that most people might have heard of. It is worth mentioning that this series represents a piece in the mosaic of our company’s mission. In fact, Expandigo is working to demystify, democratize, and deliver AI-enabled tools to SMEs in an affordable and practical way that helps them achieve their goals as technology continues to trend in this new direction.

What Is AI?

Given the context of a specific problem, AI is a set of algorithms and techniques that enables a system (a digital or analog device, or a computer program) to automatically generate the rules that lead to an optimal solution. This is in clear contrast to traditional hardware or software design, where an expert (i.e., a software programmer or a hardware engineer) has to manually design the rules and instructions that solve a specific problem. In other words, AI gives computers the ability to learn without being explicitly programmed. However, it is important to understand that AI is a broad and generic term that encompasses different flavours of machine intelligence. In fact, as depicted in the figure below, AI can be seen as the bottom level of an abstract hierarchy, with Machine Learning (ML) as a subset of it.
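To make the contrast concrete, here is a minimal, purely illustrative Python sketch (the fruit-classification scenario and all names are hypothetical, not taken from any real system): first a rule hard-coded by an expert, then the same kind of rule derived automatically from labeled examples.

# A minimal, hypothetical sketch: classifying fruit as "cherry" or "apple" by weight.

# Traditional programming: an expert hard-codes the decision rule.
def classify_by_hand(weight_g):
    return "cherry" if weight_g < 40 else "apple"  # threshold chosen by a human

# Learning from data: the rule is derived automatically from labeled examples.
training_data = [(8, "cherry"), (10, "cherry"), (12, "cherry"),
                 (150, "apple"), (170, "apple"), (200, "apple")]

def learn_threshold(examples):
    cherries = [w for w, label in examples if label == "cherry"]
    apples = [w for w, label in examples if label == "apple"]
    # Place the decision boundary halfway between the two class averages.
    return (sum(cherries) / len(cherries) + sum(apples) / len(apples)) / 2

threshold = learn_threshold(training_data)  # roughly 92 g, inferred from the data

def classify_learned(weight_g):
    return "cherry" if weight_g < threshold else "apple"

print(classify_by_hand(120), classify_learned(120))  # apple apple

The learned threshold is of course a toy: real ML models derive far richer rules, but the principle, rules inferred from data rather than written by hand, is exactly the same.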

Levels of AI

The main difference between traditional non-ML AI and ML techniques lies in how they operate. While the former mainly consists of specific algorithms that solve specific problems by mimicking human intelligence (e.g., finding the shortest path in a maze via the A* and Dijkstra’s algorithms, or playing Tic-tac-toe with Minimax, just to name a few examples), the latter achieves a similar result by also employing (usually large amounts of) data. In this case, the term “data” embodies the “past experience” of the intelligent system. Suppose, for instance, that we want to create a program that predicts the variations of the financial market. In this case, our ML solution would leverage data collected by stock markets across the globe over, say, the past 20 years. By relying on time-series analysis and forecasting techniques, it would then be possible to estimate financial performance for the foreseeable future. As years go by, all past information is encapsulated in what we could call the “previous knowledge” of our program, thus creating a self-sustaining system capable of ever more precise predictions as time goes by. In other words, data and algorithms are tightly coupled. Despite being extremely powerful, ML techniques are usually applicable only to structured or semi-structured data sources, i.e., data organized schematically within a file or record, such as the localized stock market values in the previous example. Dealing with unstructured data (images, text corpora), however, requires a higher level of abstraction of the underlying information. This is where Deep Learning (DL), i.e., the highest level of the AI hierarchy, comes into play.
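As a toy illustration of the stock-market scenario above, the following Python sketch (with synthetic, made-up prices) forecasts the next value of a series from its recent history using a simple moving average; it is only a stand-in for the far more sophisticated time-series techniques a real system would employ.

# A minimal sketch with synthetic, made-up prices: forecasting the next value
# of a series from its recent history with a simple moving average.
prices = [101.2, 102.5, 101.8, 103.1, 104.0, 103.6, 105.2, 106.1]  # past closing values

def moving_average_forecast(series, window=3):
    # The estimate is computed from the most recent observations, i.e. the
    # system's "past experience"; as new data arrives, the forecast updates.
    recent = series[-window:]
    return sum(recent) / len(recent)

print(round(moving_average_forecast(prices), 2))  # 104.97

Note that the input here is structured data (an ordered list of numeric values), which is exactly the kind of source ML techniques handle well.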

DL comprises biologically-inspired computational models, called Deep Neural Networks (DNNs), that are capable of operating semi-autonomously, i.e., with limited human intervention. In fact, even if both ML and DL require data preprocessing in order to craft the inputs as needed by a specific predictive model, with DL feature selection is no longer required. This relieves the user from tedious, and sometimes rather complex, data engineering tasks, like discarding non-relevant columns in a database (feature ranking), or projecting the original data into a different, and usually smaller, dimensional space (feature extraction). Therefore, DNNs mark a substantial paradigm shift, since there is no need to manually pinpoint where the focus of the predictive model should be. In other words, a DNN autonomously discovers hidden patterns, extracting knowledge from a cluster of data. Thanks to these unique capabilities, which enable seamless computation on heterogeneous data, DNNs have rapidly emerged in the last decade as the de facto standard approach for solving a variety of otherwise hard-to-solve problems, including image and speech recognition, image segmentation, human language understanding, and many more.
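For readers curious about what “no manual feature engineering” looks like in practice, here is a minimal sketch, assuming the TensorFlow/Keras library is installed, of a small neural network trained directly on raw image pixels (the publicly available MNIST handwritten-digit dataset); apart from scaling the pixel values, there is no feature ranking or extraction step.

# A minimal sketch, assuming TensorFlow/Keras is installed: a small neural
# network trained directly on raw image pixels (MNIST handwritten digits).
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # simple scaling, not feature engineering

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),    # raw 28x28 pixels go in as-is
    tf.keras.layers.Dense(128, activation="relu"),    # hidden patterns are learned here
    tf.keras.layers.Dense(10, activation="softmax"),  # one output per digit class
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=3, validation_data=(x_test, y_test))

A production DNN would be deeper and far more carefully tuned; the point here is only that the raw data goes in as-is and the relevant patterns are learned by the network itself.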

After the recent breakthroughs of DNNs in different fields of science and engineering (e.g., AlphaFold or Tesla’s Autopilot, just to mention a few), mainstream information sources have often conveyed an inaccurate message by using machine learning, neural networks, and AI interchangeably. As we have seen in this brief explanation, there are several substantial differences between these techniques, just as distinct and not necessarily interchangeable use cases apply to each one. Even though such differences may seem slight and subtle, having a clear view of AI and its subfields allows you to form a more informed idea of what your business needs, how to evolve it, and, more importantly, to recognize what works in your specific case and what to ask when you interface with an AI/ML/DL practitioner.

Initially conceived as a means to replicate human thinking through machines, AI, and DL in particular, is rapidly changing our everyday life and how we interact with others and with the environment. Despite being introduced around 60 years ago, neural networks have gained unprecedented interest only in the last few years. In the next episode of this series, we will give a brief overview of the origin of DNNs in the twentieth century, and we will see what made them the new holy grail of business, science, and technology.