The Modern-Day AI Executive: Defining and Demystifying AI

Data Strategist Corner
Authors: Robert Han, Jonathan Ericksen
Date Published: November 2, 2021
This is part one in a series aiming to demystify artificial intelligence (AI) for today’s executives, identify its strengths and weaknesses, and walk readers through the ingredients needed to embrace, employ, and extract its benefits.

Demystifying Artificial Intelligence

The age of artificial intelligence (AI) has brought with it a flood of fanfare, technical jargon, and new technologies. Advances in AI sound impressive but remain shrouded in mystery to the layman. Some say machine learning – the enabling mechanics behind AI – is little more than math, statistics, and data; add in computing resources and… presto, artificial intelligence!

Figure 1
Source: A Survey on Machine Learning-Based Performance Improvement of Wireless Networks: PHY, MAC and Network Layer

To understand the positive and negative impacts of AI, we must first define terms. AI, and the related fields of data science (DS) and machine learning (ML), are used to extract information from past events to make valuable inferences about the future. Or more simply, AI “learns” from the past to predict the future.

It is helpful to understand the relationship of AI-related fields that are often used interchangeably. Figure 1 is a reasonable illustration of how these subdomains relate (though we believe Machine Learning is far closer to Data Science than to AI).

The most important difference between AI and Data Science is that…

AI was traditionally keen to mimic human methods of problem-solving, and relied strongly on top-down deductive reasoning, logic, and rule-based systems.

Data Science has mostly ignored theories about humans and instead employed bottom-up inductive reasoning, equations, and models to best relate inputs (observations) to outputs (decisions).

Interestingly, all the great recent advances in “AI” – from self-driving cars (good progress) to playing poker (great progress) to mastering the ancient game of Go (phenomenal progress) – have been made possible by these two historical rivals – AI and Data Science – working closely together.

Humans form conscious and subconscious ideas about the future based on experience and stories (historical information) as a regular part of our day-to-day life. If a shopper, for instance, identifies a rhythm in the days or weeks their favorite items go on sale at the local grocer, he or she will plan future trips to the store when the next sale is predicted. Or, if a youth has encountered the neighborhood bully in a particular alley late in the evenings, he or she will likely avoid that alley, and perhaps all dark alleys, for years.

Human minds “build” such rudimentary models (mostly unconsciously) every day, which points to how adaptable and resilient (intelligent) we humans are — learning from the past to better thrive or survive in the future. By automating and scaling a related process, AI can add great value (though it currently cannot replace most human judgment).

The foundational assumption of Data Science (and these days, AI) is that the past has much to tell us about the future. Making predictions is at its heart. The authors of the book Prediction Machines: The Simple Economics of Artificial Intelligence argue:

“The current wave of advances in artificial intelligence doesn’t actually bring us intelligence but instead a critical component of intelligence: prediction.”

An AI model isn’t something that “thinks” but is a prediction-making factory that can powerfully augment human decisions.

Strengths and Weaknesses of Artificial Intelligence

Machine learning is a powerful way to gain a useful form of ‘intelligence’. It mines datasets of past events to construct mathematical models that can generalize to future, never-before-seen events. Dozens of useful methods differ in how they accomplish the ‘learning’, and each has its strengths and weaknesses, which we won’t go into here. We will instead examine the key contrasts between human and machine.
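To make the ‘learning’ concrete, here is a minimal sketch in Python using scikit-learn. The data is synthetic and the features purely illustrative; the point is only the shape of the process: fit a model on past events, then score it on events it has never seen.

```python
# Minimal sketch: "learn" from past events, then predict never-before-seen ones.
# All data here is synthetic and the feature names are purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=42)
n = 1_000
income = rng.normal(60, 15, n)      # annual income ($k) -- past observations
debt = rng.normal(20, 8, n)         # outstanding debt ($k)
repaid = (income - 1.5 * debt + rng.normal(0, 10, n)) > 20   # past outcomes

X = np.column_stack([income, debt])
X_train, X_test, y_train, y_test = train_test_split(
    X, repaid, test_size=0.2, random_state=0)

model = LogisticRegression().fit(X_train, y_train)   # the "learning" step
print(f"Accuracy on unseen events: {model.score(X_test, y_test):.2f}")
```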

AI Strengths // Human Weaknesses

The Impartiality of AI

By its nature a machine is impartial. It doesn’t get hungry or offended, hold a grudge, or seek revenge. It doesn’t need companionship or exhibit loyalty, courage, or bravery, and it doesn’t experience fear or anxiety. Our emotions as humans are unique and powerful, but they bring obstacles of their own.

Figure 2: Proportion of rulings in favor of the prisoners by ordinal position.
Circled points indicate the first decision in each of the three decision sessions; tick marks on x-axis denote every third case; the dotted line denotes food break. Because unequal session lengths resulted in a low number of cases for some of the later ordinal positions, the graph is based on the first 95% of the data from each session.

Machine prediction (when trained well) is consistent, and this is usually a powerful strength. Consider the example in Figure 2, which aggregates parole rulings by eight Israeli judges across each day’s three decision sessions. Overall, the judges approve about a third of the petitions, but the approvals are far from uniform throughout the day! They drop off sharply during three distinct periods. What is happening? It turns out the dotted lines mark meal breaks (i.e., lunch, tea, or dinner). Evidently, favorable decisions depend strongly on the state of the judge’s stomach. Though we humans hold firm convictions about our ability to render fair and objective decisions, the evidence is not favorable.

Assuming the data used to train an AI’s prediction engine is free of biases, its output will avoid the negative biological, psychological, and emotional influences humans are subject to. Recency bias, survivorship bias, and selection bias, as well as racial, class, and gender bias, are among the many flaws humans are prone to. Usually these are subconscious and not obvious when they emerge. We tend to inflate our sense of objectivity and believe we are comprehensive in our approach to decision-making. By contrast, if the underlying dataset used to train a model is not infected by human biases, its output comes closer to pure judgment.

This topic has gained notoriety in recent years as evidence of bias has emerged in some AI implementations. This is not an inherent flaw of AI itself but a result of poor training data, which “teaches” from cases negatively influenced by humans. Careful inspection and proper AI development are necessary to keep an AI system from accidentally propagating human bias where it exists.
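One simple form of such inspection is to compare a model’s outcomes across groups before deployment. The spot-check below is a hypothetical sketch; the data and column names are illustrative only:

```python
# Hypothetical fairness spot-check: compare a model's approval rates across
# a sensitive attribute before deployment. Data and columns are illustrative.
import pandas as pd

predictions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   1,   0,   1,   0,   0,   0],
})

approval_rates = predictions.groupby("group")["approved"].mean()
print(approval_rates)   # A: 0.75, B: 0.25
# A large gap between groups is a signal to inspect the training data for
# inherited human bias before trusting the model.
```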

AI at Scale

Another primary advantage of AI is the pace, scale, and endurance of its prediction making. AI is mechanical in its approach to prediction and tireless in its focus.

For instance, Elder Research developed Blackmarker, an AI product that uses machine intelligence to detect, at scale, personally identifiable information (PII) in legal documents. Once PII is detected, Blackmarker redacts it to avoid privacy, compliance, and disclosure concerns.
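Blackmarker’s internals are not public, but the detect-and-redact idea can be sketched with a toy example. A production system would rely on trained entity-recognition models; the regular expressions below are simple stand-ins to show the mechanics:

```python
# Toy sketch of detect-and-redact (not Blackmarker's actual method, which is
# not public). Real systems use trained entity-recognition models, not regexes.
import re

PII_PATTERNS = {
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text: str) -> str:
    """Replace each detected PII span with a labeled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text

print(redact("Reach J. Doe at jdoe@example.com or 555-867-5309; SSN 123-45-6789."))
# -> Reach J. Doe at [REDACTED EMAIL] or [REDACTED PHONE]; SSN [REDACTED SSN]
```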

A highly trained legal aide can detect and redact PII with reasonable accuracy, but not 24 hours a day, seven days a week! The quality of a person’s work deteriorates with fatigue. Little creativity is needed to redact text, and the never-ending repetitive task does not engender a sense of purpose, pride, or excitement. Sustained, repetitive data crunching is where an artificially intelligent machine can outperform even the most capable legal aides – even in accuracy.

Further, once an AI model is trained, optimized, and implemented, the marginal cost of employing more computing resources to handle increasing volumes of legal documentation can be much lower than that of onboarding, training, and managing additional legal aides. This ability to scale with the flick of a switch is a powerful benefit of AI.

Nimble Responses

A further advantage of successful AI implementations is the centralization of prediction making. Humans require expensive management structures to ensure proper behavior and quality of output, which can lead to bureaucratic, management-heavy organizations. AI, by contrast, centralizes control over decision-making, which lends itself to quality control, cost reduction, and predictability. Figure 3 illustrates this idea:

Figure 3: Contrasted Operating Models

Consider a use case where an AI solution is implemented by banks or credit lenders to determine whether loan applicants should be approved. A decision depends on several factors, including personal information, capital requirements, and broader economic trends, as well as the financial institution’s existing portfolio of loans. For many institutions, AI is already in use to approve or deny loan requests.

Suppose a shift in policy dictates a higher creditworthiness threshold for applicant approval. With human workers approving loans, organizations would need to communicate this change in policy and follow it with oversight ensuring decisions align with policy adjustments. With AI, this shift in policy can be managed instantly, helping financial organizations respond nimbly to changing market conditions.
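A hypothetical sketch shows why: when predictions flow through one central service, the policy is a single parameter. The class and numbers below are illustrative, not any institution’s actual system:

```python
# Hypothetical sketch: with predictions centralized behind one service, a
# policy shift is a single parameter update rather than an org-wide memo.
from dataclasses import dataclass

@dataclass
class LoanPolicy:
    approval_threshold: float   # minimum acceptable creditworthiness score

def decide(score: float, policy: LoanPolicy) -> str:
    """Apply the current policy to a model's creditworthiness score."""
    return "approve" if score >= policy.approval_threshold else "deny"

policy = LoanPolicy(approval_threshold=0.70)
print(decide(0.72, policy))   # approve

# Market conditions shift: tighten the threshold once, everywhere, instantly.
policy.approval_threshold = 0.80
print(decide(0.72, policy))   # deny
```

Because the decision logic lives in one place, updating it instantly updates every decision that follows.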

Human Strengths // AI Weaknesses

Creativity

Humans excel at creativity – an activity today’s AI is far from mastering. For example, a model can be trained on historical candle-making data to design ever more efficient candles, but AI cannot use such data to invent the incandescent lightbulb. That invention was “out of the box”: a multidisciplinary effort spanning physics, chemistry, materials science, and electrical engineering, not expertise in the old way of illuminating.

Creativity is vital to all aspects of our lives. It goes far beyond painting an oil canvas or sculpting a statue. Connecting disparate concepts with novel ideas to solve problems is where humans thrive.

Cause and Effect

A critical human ability is our core understanding of cause and effect. We understand early on that a ball rolling across the floor is the result of an outside force that set it in motion – perhaps the ball was pushed by a child, or a gust of air forced it to move. This relationship, obvious to us, is puzzling to current AI systems: they do not understand the connection between observing a moving ball and the reason for its movement.

A customer support chatbot might experience an increase in customer inquiries and react properly to keep pace. But it cannot understand the cause for the increases in volume and thus it cannot attempt to solve the root of the problem. Humans, by contrast, will investigate the reason for the increasing volume and work to implement solutions for it.

Common Sense

Common sense is nonexistent in today’s AI. Most of us learn intuitively that certain events are unrelated, no matter how similar or coincidental they appear. To illustrate, Figure 4, drawn from tylervigen.com, graphs the number of swimming pool drownings per year against the number of Nicolas Cage films released [1]. Unquestionably, no true relationship exists between these variables, but it sure looks intriguing! The folly here is obvious.

Figure 4: Data sources: Centers for Disease Control & Prevention and Internet Movie Database


We at Elder Research call this the vast search effect [2]. If you search far and wide, you can usually find data that appears related – even strongly “statistically significant,” if you use equations that don’t take the search process into account. But common sense, human judgment, and statistical validation techniques should always step in to assess whether something that merely seems related is actually meaningful.
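The effect is easy to reproduce. The sketch below (Python, with purely random data) searches 1,000 unrelated series for ones that “correlate” with a random target:

```python
# Sketch of the vast search effect: among many purely random series, some
# will look "significantly" correlated with a target by chance alone.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(seed=0)
target = rng.normal(size=20)               # e.g., yearly drownings (random here)
candidates = rng.normal(size=(1_000, 20))  # 1,000 unrelated random series

hits = sum(pearsonr(target, series)[1] < 0.05 for series in candidates)
print(f"{hits} of 1,000 random series look 'significant' at p < 0.05")
```

With 1,000 candidates and a 5% significance threshold, roughly 50 spurious “findings” are expected by chance alone – exactly the trap that validation tests such as Target Shuffling are designed to expose.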

AI Explainability (or lack thereof)

The final weakness we will examine is AI’s inability to explain why certain predictions are made. Neural networks, particularly deep neural networks, are very difficult to deconstruct to understand why their predictions are what they are. In general, humans can explain what led to a decision, why it was made, and in what context; a machine cannot. And deep-learning networks are the most opaque of today’s popular modeling techniques.

Lack of model explainability can have regulatory and compliance implications, so organizations implementing modern machine learning techniques must carefully consider the risks involved. In addition, lack of explainability can erode trust in AI, reducing the chances of a successful implementation. Most non-experts find it hard to trust a prediction without understanding the logical path to that decision.

Techniques for improving the explainability of complex machine learning models have seen some success. As this topic grows in significance, we expect these techniques to mature, and with maturity will come greater trust in AI. These trends should be followed closely as governments ratchet up regulation of AI.
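One such technique, shown below as a minimal sketch, is permutation importance – a model-agnostic way to ask which inputs a model leans on. The data is synthetic and the feature names are placeholders:

```python
# A minimal sketch of one common explainability technique: permutation
# importance (model-agnostic). The data is synthetic; the feature names
# are placeholders, not a real application.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle one feature at a time; the resulting drop in accuracy measures
# how much the model relies on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: {importance:+.3f}")
```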

Conclusion

In a recent Harvard Business Review article titled “Collaborative Intelligence: Humans and AI Are Joining Forces,” authors H. James Wilson and Paul R. Daugherty state:

“Through such collaborative intelligence, humans and AI actively enhance each other’s strengths: the leadership, teamwork, creativity, and social skills of the former, and the speed, scalability, and quantitative capabilities of the latter.”

Augmenting humans with artificial intelligence holds significant promise. Leaders who embrace AI and adapt their organizations to enable human-AI collaboration will see its potential unleashed. Effectively implementing AI is a challenging strategic exercise that, for many organizations, requires a reconfiguration of their operating models – a topic we will discuss in future blog posts.

In this blog, we looked at the ‘What’ of AI. We highlighted some of its inherent strengths and weaknesses, which leaders should carefully consider. In subsequent blog posts we will dive deeper into the ‘Why’ and ‘How’.

Footnotes:

  1. How often are such “spurious correlations” picked up and highlighted by automated search systems? Even human review of findings can be fooled unless rigorous tests (such as Target Shuffling) are used to weed out findings with a strong chance of being due to the “vast search” possible with today’s automated systems.
  2. Our colleague, Dr. John Elder, once illustrated this concept dramatically when teaching a course at a client site. Those bank analysts used “data cubes” where each customer was represented by a data point. The point’s values defined which cell it belonged to, and the analysts searched for cells having unusually high or low response rates to products and offers. Dr. Elder tried several ways to convince them that even with randomly distributed data, the large number of cells would naturally yield many that were “unusual” entirely by chance, and that new data would not reflect the same ratios of results. (There are many hazards with proportions of small samples.) The floor of the room was a beautiful two-color marble “chess board” (defining nice 2-dimensional cells). A bowl of individually wrapped snacks was handy, so he suddenly threw the snacks all over the floor! “Look, customers in the Midwest,” he said, waving his arms along one row, “who are older” (waving at an intersecting column), “really like M&Ms!” It became viscerally clear that the randomly generated data did have several example cells with proportions of items very different from the global proportions. (It is not known if the analysts fully appreciated the lesson, as for some reason he wasn’t invited back!)