Check out this Meetup with Data Science Melbourne here.
Everything you ever wanted to know about AI but were too afraid to ask.
A subject my colleagues in higher ed might be keen on. https://www.eventbrite.com.au/e/computer-marking-of-essays-tickets-7359565659?aff=eand
This was the question posed at Deloitte Australia’s ‘Conversation – Designing the Interface of the Future’ event, hosted in conjunction with ‘Disruptors in Tech’, a Melbourne based MeetUp group.
I have to admit, quite what a brand sounds like isn’t something I’ve ever really thought about before.
- NAB is about to launch a chat bot. It’s an interesting design journey.
- Conversation is the ‘next big thing’ augmenting user design and experience.
- Perton’s law was coined.
- Always disclose that the user is interacting with a bot. (It’s a trust thing.)
- Bots that aim to emulate humans reach a tipping point where they become flat-out creepy… As in, close but not exact, or (to use the literal right word for this reaction) uncanny.
- For those of you wanting to career hack and benefit from a free introduction to both chat bots and voice command (or conversational user interface, a.k.a CUI) Academy Xi has you covered here.
Close your eyes for a moment.
Imagine a world in which every product in the shop is blaring about itself in natural language. Possibly even holding a conversation with you: calling you by name, fielding your questions, and answering them based on preferences it gleaned from the data you verbally offered up; from what your smart devices silently told it (your GPS, purchasing and browsing history); and from your loyalty accounts, your biometrics, and even the embedded chip in an item you're currently wearing, interfacing via Near Field Communication. All of it serving as context.
It’s not that far-fetched, but it’s hardly everybody’s idea of Utopia.
I instantly assume that every spokesperson, every brand ambassador and every voiceover talent who ever carved out a career is facing becoming automated, and that an audio apocalypse of the kind ‘Minority Report’ foreshadows is on the cards.
On reflection, I fervently hope that the option to switch to classic mode, by which I mean 'return to mute', isn't overlooked, or else that I can default everything to Patrick Stewart or everyone's favourite meerkat, Aleksandr Orlov.
(I’d be pretty keen for it NOT to sound like Microsoft Catherine.)
Currently, I'm midway through a two-part webinar with Microsoft Worldwide online, learning how to structure and design text-based chat bots, so this long-ago lecture (which pre-dates the blog, and is one of its three inspirations) is starting to resonate with me, but in a way that makes the chat bot idea seem a little old-fashioned.
In my humble opinion, text bots take up far too much on-screen real estate, especially on mobile. And I say that as someone who loves writing and reading (but not chatting or instant messaging). Since I'm on a roll: for the love of all that is good and worthwhile, if I've just agreed that your site may install a cookie, my dear UX designers, please don't follow that up a split second later with a request to take a survey about my experience of the site.
You know that I literally don't have any experience with the site yet, don't you? I mean, you just installed a cookie a second ago, so it invites the question, complete with raised eyebrow, "what experience, prithee?"
For the UX and CX designers confusing metrics with success; high pressure tactics with what people want; bells and whistles with colour and interest; and making account closure more than a two-step process, peppered with mildly threatening, condescending “warnings” that imply I don’t know what it is I’m really doing, I do not need yet another reason to switch to a low footprint lifestyle.
Getting back to the subject at hand….
I think it’s self evident what a Rolex watch would sound like (Roger Federer, obvs.) But what does an Australian shiraz sound like? How about the bus stop timetable? And will the bot have a name, or will it be openly robotic? Devoid of personality?
This event is one of my favourites.
It’s so good, that it’s taken a few weeks to process and decide how to best present it.
The NAB Voice Bot story.
This edition of Disruptors In Tech, held at National Australia Bank's 700 Bourke Street outpost (not to be confused with its dockside 800 Bourke Street headquarters, or its two or three other Bourke Street properties which, although equally imposing, are the utterly wrong address for this particular event),* showcased the bank's thought process and design considerations as it prepares to launch a chat bot.
A bank bot?
I’m unfamiliar with any scenario where I might be so caught up that I have to make an urgent bank transaction and my hands won’t be available, but OK. People do strange things when they’re in transit.
Lesson 1: Designing a conversational user interface (CUI) is more fraught than you think.
Lesson 2: Personality is hard to do.
As it happens, creating a bot with a flat personality, or no personality (and no name) is just as complex as its alternative. In having no personality, the bot still has a personality. Just not a very sassy, cool or chatty one.
Compound the problem with an assistant that has to flatly, blandly and consistently cover multiple divisions and myriad product lines of a newly agile, complex business (with the added bonus that the bank is currently investing its time in realising that being complex is not an excuse for being a disreputable corporate citizen), and this makes for an interesting case study.
Have they got it right?
Lesson 3: Spoken word is a different animal to text.
Human conversation is less formal, more abbreviated and less complete than written dialogue. As a result, a conversational bot's speech shouldn't aim to replicate, or in any way be substantially based on, text.
(The sound of a low flying sunk cost whooshes by, while the trumpet heralding a serious new overhead plays.
Choose your interface wisely.)
I have to say, this is kind of a bummer for enterprises that jumped on the wrong bandwagon and are being leapfrogged, but it's a boonoonoonus for those who write and script for a living. Because if you installed a chat bot (the kind that pops up in a window asking "can I help you?", a bit like Clippy used to do), under no circumstances should you use it as the basis for your new conversational interface.
If someone is calling about their home loan, you can’t just text-to-voice your website or ‘play to end,’ as that’s not how people converse.
This is because even the punchiest of chat bot text comes across as stuffy and long-winded.
NAB's bots triage and escalate non-self-service problems to actual humans, on the fly, whether the channel is text or voice, ensuring that you don't have to repeat yourself once you and your compound issue are connected.
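The triage-and-handoff pattern described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not NAB's actual implementation: the intent names, the `Conversation` class and the escalation message are all assumptions made for the example.

```python
# Minimal sketch of bot triage with context-preserving human handoff.
# All names here are hypothetical, not any real bank's system.

from dataclasses import dataclass, field

# Requests the bot can fully self-serve (hypothetical intents).
SELF_SERVICE = {"check balance", "reset pin", "list transactions"}

@dataclass
class Conversation:
    channel: str                          # "text" or "voice"
    history: list = field(default_factory=list)

def handle(conv: Conversation, request: str) -> str:
    """Answer self-service requests; escalate anything else,
    passing the full conversation history to the human agent so
    the customer doesn't have to repeat themselves."""
    conv.history.append(request)
    if request in SELF_SERVICE:
        return f"bot handles: {request}"
    return f"escalated to human with context: {conv.history}"
```

The key design point is that the history travels with the escalation, regardless of channel, which is what prevents the "please state your issue again" experience.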
Lesson 4: Testers will be jesters
The bank found that, in testing the feature, some customers could not help but probe the limits of the bot with non-banking questions such as "how tall is a horse?"
The flat answer the bot provides to such fodder is that it cannot assist with the question, though in principle it could one day be programmed to answer any question put to it, once the basics are covered and running beautifully.
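That fallback behaviour amounts to a simple rule: match a known banking intent, or return a flat refusal. A hedged sketch, with made-up intents and canned answers (none of this is NAB's real code):

```python
# Hypothetical intent matcher with a flat fallback for off-topic
# questions ("how tall is a horse?"). Intents and replies are invented.

BANKING_INTENTS = {
    "balance": "Your balance is ...",
    "transfer": "Sure, who would you like to pay?",
}

def reply(utterance: str) -> str:
    """Return the canned answer for the first matching banking
    keyword, or a flat 'cannot assist' answer otherwise."""
    text = utterance.lower()
    for keyword, answer in BANKING_INTENTS.items():
        if keyword in text:
            return answer
    return "Sorry, I can't assist with that question."
```

Extending the bot to answer general-knowledge questions later would just mean adding handlers ahead of (or behind) the fallback, which is why "one day, once the basics are up and running" is a plausible roadmap.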
I made the mistake this week of “upping my game” and attending two tech events that the experts hosting them described as “entry level.”
Unless post graduate, PhD, or decade-plus experience counts as the new “entry level,” they definitely were not “entry level.”
At one event, I hoped to understand the state of play with machine learning, specifically, its approach to causality. Without going into a yuge amount of detail, causality has huge implications for AI, decision support, risk management and crisis response management and the planning thereof, assuming it actually works. So, does it?
I was informed that this would be “the Wikipedia article version” of said subject, and I probably grasped about half of what was said.
I'm going on the record with my disappointment that data analysis still rests on statistical assumptions that I don't want it to rest on.
I want real data. Thanks.
If machines are programmed to work things out the way we already do, given our existing limitations such as our assumptions, then they're only going to make the same mistakes that we already do, on a larger scale.
At the second event (which was the first, chronologically speaking), it was difficult to know who the audience was.
I appreciate that information technology operations are highly detailed and constantly changing, but I'm not the only one here sitting in total dumbfoundedness at the monotonous monologue issuing from the presenter's mouth (never an effective move). We sit through whatever this is without the examples that were promised, which makes it tedious and effortful. There seems to be no reason these particular words have been chosen, and if Azure has been configured correctly then a lot of this noise is unnecessary, as it won't let me deviate or depart from the process flow anyway.
I grasp the idea of “virtual machines” and also containers, and the difference between them, and why you might choose one and not the other, by googling the answer.
I grasp that Azure is Microsoft's cloud computing platform, several times over (but not much else), and the chairs we're seated in, at this brand new HQ, are so wildly uncomfortable that my backside goes numb in record time, forcing me to sit to one side and then the other. (As a rule, I wouldn't normally admit to any bodily discomfort at a public function, least of all in a public forum, but this was remarkable.)
Get rid of the chairs! Their backs flex.
At least one of the challenges facing technologists (and confronting me as a communicator, on a regular basis) is the unfortunate habit of re-purposing words that already have a popular meaning to mean their opposite (for example 'hack'), or something they just do not mean, for example 'policy'.
(Sidenote. Dear IT industry. Is there an app for coming up with new words, perhaps using Greek, Latin, or Norse, or some non-US English modern language roots, that you might be able to deploy to make new meaning-filled, technically accurate words?
For inspiration, please refer to 'The Surgeon of Crowthorne' and the method etymologists use to unpack and come up with word forms. Thx)
More than once my purpose on a project has been to explain in lay terms how the information system either is or isn’t going to work the way that management thought it might, whilst co-designing the human, manual, prerequisite inputs, interim and subsequent steps and workarounds that make up a process workflow.
In that role, in an Azure environment, I would be at pains to explain that what I.T. means when it uses the word 'policy' does not meet the test of what a policy is, and that what they're describing as a "policy" is at most "a business rule."
I work in government environments. As you might imagine, they already struggle with ‘big p’ and ‘little p’ policy, by which they mean
- public policy: “we shall have a transport system funded by taxes and administered by departments” and
- corporate policy: "Employees are responsible for securely holding their ID pass, reporting its loss and not allowing its misuse".
A policy, big or little, is a statement of principle.
As examples go: ‘Thou shalt not kill’ is a pretty good one. (Also, fairly universal. It doesn’t need to be a law for people to go, hey, yeah, I can remember not to do that.)
‘Be a good person’ is another one.
'Don't misuse corporate resources' is another.
Public or corporate, policies are like the Ten Commandments, both in the sense of portent and serious consequence they convey, and how few of them one requires.
I am afraid, my dear Microsoft, that a list of permitted websites and a second list of prohibited ones, are not in any meaningful sense of the word a ‘policy,’ let alone two separate policies.
What they are is a set of dot-point specifics made pursuant to IT security rules, and to the higher-level principle, or policy, that we don't allow staff to access inappropriate content, whether that content is illegal, obscene or malicious, or only prospectively so.
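The distinction is easiest to see in miniature. In the sketch below, the policy is one high-level principle stated in plain language, while the allow/deny lists are business rules made pursuant to it; every name and site here is hypothetical, invented purely for illustration.

```python
# Illustrative only: one policy (a principle), two business rules
# (allow/deny lists) that implement it. Sites are made up.

POLICY = "Staff must not access inappropriate or malicious content."

# Business rules made pursuant to the policy, not policies themselves.
ALLOWED_SITES = {"intranet.example.com", "docs.example.com"}
BLOCKED_SITES = {"malware.example.net"}

def may_access(site: str) -> bool:
    """Business rule: an explicit block always wins; otherwise
    access is granted only to explicitly allowed sites."""
    if site in BLOCKED_SITES:
        return False
    return site in ALLOWED_SITES
```

Notice that you could swap out every entry in both lists without touching the policy itself, which is exactly why the lists are rules, not policy.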
If you would like help in defining your business rules and mapping these to policy, as part of your corporate governance and its technical manifestations, I am available for hire.