The Turing test is known chiefly thanks to sci-fi movies. However, it and similar human-computer interaction tests are closer to your everyday life than you think. And the Turing test is not the only one!
When it comes to applications of A.I. that heavily mimic genuinely human behaviour, sooner or later we will face philosophy. That's actually good news – more proof that this sometimes-ridiculed field of study adds real value to humanity's heritage.
One example is the application of the Trolley problem to reality.
The Trolley problem is a thought experiment that asks you to imagine the following situation. A trolley has gone out of control and is running on rails down a hill. At the same time, a mad philosopher has tied five people to the rails so that the trolley will kill them.
There is a chance to save the victims. You are standing next to the lever of a railway switch. However, the mad philosopher has tied one additional victim to the side track. If you pull the lever, only one person will die. If you do nothing, the death toll will be five. What do you do?
Initially, the problem was purely academic, belonging to ethics (the field of Utilitarianism, precisely). However, with the appearance of autonomous cars, it is getting real. Brutally real. Imagine these quite probable situations.
- An autonomous car is unavoidably about to have an accident. It can hit a group of people running across the street, or, if it tries to dodge them, it will hit one pedestrian on the sidewalk. What should the car do?
- An autonomous car is unavoidably about to have an accident, and one person will be hurt: a kid or an old lady. What should the car do?
- An accident is imminent. The car has to hit one of two cyclists (otherwise, both will be hurt). One is wearing a helmet; the other is not. Which one should it 'choose'?
People decide based on intuition, and in every case I presented, your judgement would be subconscious and would not (directly) be put on trial. However, when an A.I. makes such a decision, it is. Designers, programmers, and engineers have to make these decisions before the accident happens, which makes the case even creepier.
Disclaimer: The last example is especially tricky because, in addition to ethics, we face a lawmaking dilemma. The intuitive answer is: hit the one with a helmet – they have a higher chance of surviving intact. Yet, do we want to 'punish' cyclists for wearing helmets?
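To make the engineering side of the dilemma concrete, here is a minimal, purely illustrative sketch of a utilitarian collision policy. All names, numbers, and weights are hypothetical assumptions, not a real autonomous-driving system – and that is exactly the point: someone has to pick these weights before the accident happens.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    maneuver: str
    people_hit: int
    survival_chance: float  # assumed chance each person survives intact

def expected_harm(o: Outcome) -> float:
    # Expected number of people seriously harmed by this maneuver.
    return o.people_hit * (1.0 - o.survival_chance)

def choose_maneuver(options: list) -> Outcome:
    # A purely utilitarian policy: minimize expected harm.
    # Encoding any OTHER ethic (age, helmets, liability) means
    # changing this one line - explicitly and in advance.
    return min(options, key=expected_harm)

options = [
    Outcome("stay on course", people_hit=5, survival_chance=0.1),
    Outcome("switch track", people_hit=1, survival_chance=0.1),
]
print(choose_maneuver(options).maneuver)  # -> switch track
```

Notice that the helmet dilemma shows up immediately: raising `survival_chance` for the helmeted cyclist makes the policy steer towards them, which is the 'punishment' the disclaimer warns about.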
The person making such a decision will, in a sense, play the role of the so-called wizard. We'll come back to the wizard shortly.
Testing the Voice Chatbot
Let's come closer to edrone's field. As I mentioned, no matter how hard we try, we cannot avoid philosophy when dealing with A.I.
What is consciousness?
Surprisingly, you have to ask yourself this question – at the end of the day – when attempting to create a working voice commerce assistant.
What do you require from voice commerce?
If you are asking yourself what voice commerce even is, let me help with an explanation.
Voice commerce is one of the upcoming (indeed, already running) trends in eCommerce. It comes down to the application of the so-called zero interface – natural spoken language.
That’s the shortest definition possible, but let’s extend it.
Voice commerce is hard to define since it embeds itself into the fuzzy concept of omnichannel. It assumes that online shopping should be as easy and natural as possible:
- You can quickly buy essential products simply by saying you want to order them.
- You can sit cosily on the couch in front of your laptop and talk with the eCommerce site, telling it what you are searching for and exploring the possibilities the shop offers – just like you would in a brick-and-mortar store with a customer assistant (edrone's case).
- And everything in between.
Conversation & Exploratory Search
As you have hopefully noticed, our approach is almost entirely about conversation – a conversation about the products available (or not) in a given eStore.
We can use the Wizard of Oz experiment in various situations where we want to test a prototype quickly – or, more precisely, check whether our current design is worth spending additional time developing into an actual working prototype.
All you need to conduct such an experiment is a prototype or proxy of the U.I. In the core form of the Wizard of Oz experiment, the test subject doesn't know that they are interacting with a human tester rather than an actual working prototype. Thus, in addition, you need a 'space' where the researcher can 'hide' – behind the proxy interface, for example.
WoO works excellently if you want to test A.I. There are at least two reasons for that:
- A.I.-based solutions require a lot of work to be prototyped.
- A human tester – a 'wizard' – is a living being, and as A.I.-based solutions imitate human behaviour, the wizard is an ideal 'substitute.' Unlike with non-intelligent applications, there is barely any risk that the tester will go far beyond the app's capabilities.
The one being tested is the human, not the machine, as Artificial Intelligence isn't actually involved. The human subject is asked to interact with the 'A.I.' and assess its mode of work and user experience.
What's more, when testing NLP, the method works even better – a chat interface is easy to develop and pleasant to use for both the wizard and the subject.
Anonymity is not necessarily mandatory – in some cases, the researcher conducts tests openly. You can run WoO using maquettes, sticky notes, etc.; however, in that case, the test looks more like a regular UX test, so we are actually testing the U.I. and main assumptions rather than the user. In the 'proper' version, the user is tested more than the system, to reveal their thinking, expectations, and intuitive reactions. Yet, the difference is subtle.
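The mechanics of such a chat-based Wizard of Oz setup can be sketched in a few lines. This is a minimal in-process illustration with hypothetical names, not our actual test harness: the subject believes they are chatting with a bot, while every reply is typed by a hidden human wizard and merely relabelled.

```python
import queue

# Two one-way channels between the subject's chat window
# and the wizard's hidden console.
to_wizard = queue.Queue()
to_subject = queue.Queue()

def subject_sends(message):
    # The subject's message is silently routed to the wizard.
    to_wizard.put(message)

def wizard_replies(reply):
    # The wizard's answer is relabelled so it appears to come
    # from the 'bot' the subject thinks they are talking to.
    to_subject.put(f"AVA: {reply}")

subject_sends("Do you have waterproof hiking boots?")
print(to_wizard.get())   # the wizard sees the raw question
wizard_replies("Yes! Any size and colour preferences?")
print(to_subject.get())  # the subject sees a 'bot' answer
```

Because the wizard is a real person, the 'prototype' handles any utterance the subject throws at it – which is exactly why WoO is so cheap for testing A.I. concepts.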
While testing AVA's assumptions, we used a chat interface, as we wanted to check how clients would do an exploratory search via our AVA (Autonomous Voice Assistant) concept. We wanted to test how an assistant who knows everything about the products should talk about them and help the client find, pick, and purchase them – an Autonomous Voice Assistant trapped in the Chinese Room.
The so-called Chinese Room Argument states that machines – understood as programmes – cannot have a mind and cannot 'understand' what they do. The argument aims primarily at the concept of strong A.I., stating that a programme cannot develop consciousness in the course of running.
At the same time, the human brain works basically like a machine, so the Chinese Room Argument would seem to apply to humans as well. So what is consciousness, really? 😉 That's a question for another story.
To seek is to speak
Another cool thing we discovered during AVA's R&D also carries philosophical meaning. A neat property of natural language processing is that data can give you insights that are extremely useful, yet almost poetic.
Those who ask do not wander. Those who ask, find.
Those who ask and those who seek are doing the same thing. If you are developing an NLP assistant for eCommerce, you are simultaneously building a search engine. Have an excellent read!
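The idea that asking and searching coincide can be shown with a toy sketch. Everything below – the stopword list, the catalogue, the scoring – is an illustrative assumption, not AVA's actual pipeline; it only demonstrates that a conversational question already contains a keyword query.

```python
import re

# Tiny illustrative stopword list and product catalogue.
STOPWORDS = {"do", "you", "have", "any", "i", "am", "looking", "for", "a", "the"}
CATALOGUE = [
    "red running shoes",
    "waterproof hiking boots",
    "leather office shoes",
]

def utterance_to_terms(utterance):
    # Asking a question is, in effect, issuing a keyword search:
    # strip the conversational filler and keep the content words.
    words = re.findall(r"[a-z]+", utterance.lower())
    return {w for w in words if w not in STOPWORDS}

def search(utterance):
    terms = utterance_to_terms(utterance)
    # Rank products by how many query terms they contain.
    scored = [(sum(t in p for t in terms), p) for p in CATALOGUE]
    return [p for score, p in sorted(scored, reverse=True) if score > 0]

print(search("Do you have any waterproof boots?"))
# -> ['waterproof hiking boots']
```

The same utterance drives both the dialogue and the retrieval – which is why building the assistant and building the search engine turn out to be one job.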
Digital marketer and copywriter, experienced and specialized in AI, design, and digital marketing. Science and holistic-approach enthusiast, after-hours musician, and sometimes actor. LinkedIn