
Explainable AI explained

Marcin Lewek, Marketing Manager, edrone

It is said that even Facebook's creators do not know why their algorithm makes this or that decision. Well, it's true. But not because they lost control over it and doomsday is nigh. They simply never had that control… and that's fine.

The engine makes the car run, and that is clear. Anyone can, in principle, understand how an internal combustion engine works. Likewise, a neural network makes decisions, and that's also clear. However, understanding what a neural network has "under the hood" is practically impossible, even for the simple ones.

XAI is a sort of solution for this problem.

Explainable Artificial Intelligence

XAI, Explainable A.I. (or Interpretable A.I.), is an exciting field of Artificial Intelligence research, since its efforts aim to build a bridge between mysterious algorithms and ordinary human beings. People need to trust A.I. solutions, as their presence will become more evident every month. And we, as people, don't count on, and are afraid of, things we don't know or don't understand.

Why do we need to explain artificial intelligence at all? Let me show you with an example. Let's pick computer vision.

Does the image show a cat? Why is a cat a cat? Of course, we all know how to recognize this animal, but let's focus on its traits. So, what makes a cat a cat?

  • Triangular ears.
  • Whiskers.
  • Triangular face.
  • Slightly slanting eyes.
  • Tail.

Now, let's think about a lion. It looks basically similar; however, it is bigger and has a mane.

Round 1

Assume that we design a computer vision system that learns how to distinguish these animals. In a picture, it's hard to assess the actual size of objects, so the fact that the lion is 7 times larger doesn't matter that much. Yet, the mane will definitely be the deciding clue.

In fact, it will be. Image recognition systems learn patterns that point towards one decision or another. Simplifying: convolutional networks identify patterns, edges, shapes, silhouettes, etc., on each subsequent layer. For example, a pattern recognized as a mane appears in every image classified as a lion.
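To make the idea of "subsequent layers" a bit more concrete, here is a minimal sketch of such a convolutional classifier in Keras. It is not taken from any real system; the image size, layer counts, and the cat-vs-lion labels are illustrative assumptions.

```python
# Toy convolutional classifier in the spirit described above: each Conv2D
# layer picks up progressively more abstract patterns.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(128, 128, 3)),        # RGB photos of animals
    layers.Conv2D(16, 3, activation="relu"),  # early layers: edges
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),  # middle layers: textures, fur
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),  # later layers: shapes such as a mane
    layers.GlobalAveragePooling2D(),
    layers.Dense(2, activation="softmax"),    # two classes: cat, lion
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

Once trained on labelled photos, the only thing we directly observe is the final probability for each class; what each intermediate layer actually responds to stays hidden, which is exactly the problem XAI tries to address.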

Let's take another example. We want our network to tell a cat from a wolf!

Round 2

Everything goes fine on the training and evaluation examples; however, when it comes to testing our network, it turns out that in one of the pictures it recognizes the cat as a wolf.

After a short investigation, we discover the problem. The cat that was wrongly classified as a wolf was sitting in the snow in the unlucky picture, just like most of the wolves we used to train our network.

Using simple reasoning, nobody would classify a cat as a wolf just because it's sitting in the snow; however, you need to remember that A.I. is a tool. It acts smartly, but it is still just a tool.
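This is, incidentally, the kind of mistake that XAI tooling is built to surface. Below is a rough sketch of how a library such as LIME could be pointed at the misclassified photo. The stand-in classifier and the random placeholder image are assumptions made purely for illustration; the `lime` calls themselves come from the publicly available `lime` package.

```python
import numpy as np
from lime import lime_image
from skimage.segmentation import mark_boundaries

def predict_fn(images):
    # Stand-in for the trained cat-vs-wolf network: here, lots of bright,
    # snow-like pixels push the score towards "wolf".
    brightness = images.mean(axis=(1, 2, 3))               # one value per image
    prob_wolf = np.clip(brightness, 0.0, 1.0)
    return np.column_stack([1.0 - prob_wolf, prob_wolf])   # [P(cat), P(wolf)]

img = np.random.rand(128, 128, 3)  # placeholder for the cat-in-the-snow photo

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    img, predict_fn, top_labels=2, num_samples=200)

# Highlight the image regions that pushed the prediction towards "wolf".
temp, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5)
highlighted = mark_boundaries(temp, mask)
# If the highlighted regions are mostly snowy background, the network
# has learned the background, not the animal.
```

LIME perturbs small regions of the image and watches how the prediction changes, so the highlighted areas are the ones the model actually relies on; with the real cat-in-the-snow photo, they would land on the snow rather than on the cat.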

Too much to handle

In the example shown, we could easily point out the network's "reasoning mode", yet it's not always that easy. The power of A.I. lies in its complexity. Algorithms can make smart decisions based on dozens of data points, mapping complex relations between them in a way that is impossible for humans to grasp.

Technically, this mode of operation is called a black box. A black box has an input and an output, but what happens inside is unknown or beyond our comprehension (even though we know which nodes are activated, we don't understand precisely why).

Unlike machines, our understanding of multidimensional relations is limited. To understand the network's decision, we would need to process tens of thousands of events simultaneously.

The glass box

OK, machines struggle to distinguish human faces or understand the language we speak. These tasks are, in fact, among the most complex engineering challenges. Yet, at the same time, 3-year-old children do them with ease. This is because those skills are deeply buried in our subconscious. Maths and cause-and-effect analysis, on the other hand, are relatively "new" skills. Here computers outperform us, and that's fine. This is why we use them.

XAI techniques aim to turn the black box into a glass box, like the engine we mentioned in the introduction.

At this point, it's completely natural to ask yourself: why do we create networks that we don't understand? Why not incorporate XAI in every Deep Learning project, or simply use only the networks we can grasp?

The answer is brutal yet straightforward: it's an inefficient approach. Simple networks lack performance on complex tasks. At the same time, putting an XAI overlay on each one would consume a lot of computational power, power we constantly and desperately need to train and use ever more efficient A.I. tools. XAI techniques are simplifications, and simplicity is not what you are searching for when you reach for A.I.

So what is the reason for using XAI at all?

There are a couple of reasons. From society's perspective, the most important is that some decisions A.I. helps with need to be justified; loan decisions are one example. The second reason was already mentioned: we want people to understand A.I. better, which is critical for social acceptance.
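As a small illustration of what "justifying" a loan-style decision could look like, the sketch below trains a model on synthetic, made-up data and uses permutation importance to show which inputs actually drive its decisions. The feature names, thresholds, and data are invented for the example, not a real credit-scoring setup.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([
    rng.normal(5000, 1500, n),   # monthly income (synthetic)
    rng.uniform(0, 1, n),        # debt-to-income ratio (synthetic)
    rng.integers(18, 70, n),     # age (synthetic)
])
# Toy ground truth: approval depends on income and debt ratio, not age.
y = ((X[:, 0] > 4500) & (X[:, 1] < 0.5)).astype(int)

model = GradientBoostingClassifier().fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, importance in zip(["income", "debt_ratio", "age"], result.importances_mean):
    print(f"{name:>10}: {importance:.3f}")
# A large drop in accuracy when a feature is shuffled means the model's
# decision leans on it - a first step towards a human-readable justification.
```

Techniques like this don't open the black box completely, but they let us say, in plain language, which inputs mattered for a given decision.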

From a technical point of view, XAI can also be used as a diagnostic tool (remember the network we used to tell wolves from cats?). We could keep listing reasons for a long time; I think everyone can come up with their own, depending on their purpose. What's mine?

Understand to understand

Beyond its usefulness, exploring XAI is also a matter of being human. We subconsciously want to know how things work, and it is incredibly frustrating when we can't, since neural networks are "our" own invention.

It can sometimes be frustrating that our tool doesn't work the way we want it to, yet it's a fact. A.I. is a mirror reflecting reality.

If you can't explain something simply, you don't understand it well enough. If you cannot put your problem or task in simple terms to a smart yet, in the end, dumb machine, you don't genuinely understand it, or you don't understand the nature of the phenomena you explore. And that's fine. "I know that I know nothing," as Socrates said. Accepting this fact and constantly pursuing knowledge is a sign of wisdom.

If you want to dive deeper into XAI, you are welcome to read the article written by one of the AVA team members – Grzegorz Knor.

Marcin Lewek

Marketing Manager

edrone

Digital marketer and copywriter experienced in and specializing in AI, design, and digital marketing itself. Science and holistic-approach enthusiast, after-hours musician, and occasional actor. LinkedIn
