This is “Towards a Smarter World,” and I'm your host Cruce Saunders. Very pleased to be joined today by Elizabeth McGuane, who is the content strategy lead at Intercom
where she is part of the product design team and owns the language of the core product, including its messenger app. Elizabeth has been working in UX for 10 years, and before that she was a journalist. I'm really glad she could be with us today. She recently wrote an amazing article on TechCrunch that's worth checking out, called “On Bots, Language, and Making Technology Disappear.”
Elizabeth, with that article in mind, would you summarize some of the thinking behind how you arrived at the conclusion that naming a bot is not necessarily the best strategy?
Thanks for having me. We did it through research, but I think where we started was with a really careful and considered approach to testing the language. This was one of the first projects I worked on at Intercom when I joined just over a year ago, and we were looking at introducing a bot-like interaction, a very simple bot, into our messenger. We make a B2B messenger, so not to get too complicated in terms of the UX of our product, but we always have to think of our users in terms of two layers: we have our customers, and then our customers' customers. We were really creating a bot that businesses would use to communicate concepts to their customers, or to get data from them.
I knew that we needed to be really careful about how we expressed things so that we would marry with the business's tone of voice, and so that we wouldn't be overstepping the bounds of what we could say on their behalf. I had a feeling, and this was really just my gut instinct, that having a very chatty personality would not necessarily marry with the tone of voice of every single business that wanted to use our messenger. It was a very practical consideration on that front. When we went into testing, we tested with a name and without a name. We also tested different tones of voice, because going into this the design leads were interested to see whether a more friendly tone of voice or a more functional tone of voice would work. The initial consideration was, “let's just try different kinds of copy, and see what works.”
I felt that I wanted to take a more structured approach and try names versus no names, functional versus more friendly; then we also tried with a pronoun and without a pronoun. Once we realized that names didn't work, we also tried removing the first person “I,” and removing the introduction, so that the bot didn't say, "Hi, I'm so-and-so's digital assistant," or what have you, to see what impact that had. That's really where it started: with an actual structured approach to research. If we hadn't taken that approach, I don't know that we would have uncovered this in as surprising a way as we did.
Interesting -- so the idea that bots with names are off-putting came out of research. Since you've launched the product, have you found any opportunities to validate that with user feedback?
Yes, absolutely – we get constant feedback. One of the benefits of Intercom is that it's a messaging app, and all of our customers, the businesses that use the messenger, communicate with us through our own messenger, so we have a constant stream of user research that's readily available to us. We can do what we call customer voice reports, and we can look at things like product confusion and confusion over terminology. Anything that happens in the messenger is always the best place to look for user feedback. If there was one product strategy takeaway over the past year, it's that the slightest terminology or language change in the messenger generates ten times the feedback you would expect it to. Any language change you make in the core app itself, the admin app as we call it, has very little impact in terms of the immediate impression it gives people.
I think the messenger is so much a part of how businesses communicate with their customers that it's just super important to them. Once we arrived at the point of realizing that end users were happier without a name, the more interesting feedback we found was that it didn't prevent them from understanding they were communicating with something automated. We went into it thinking, "You should be transparent about what you're actually presenting to people. They should know that they're communicating with a bot, and therefore the bot should say, 'Hi, I'm a bot.'"
What we found is that people are a little bit smarter, and more nuanced than we thought. They made up their own terminology to describe what this thing was in a way that showed us they were completely comfortable with it, understood what it was, even when it didn't introduce itself, call itself by a name, etc.
What are the cues that the users had to understand that they were interacting with a non-human in a chat interface? Which cues helped them to understand the context for the conversation?
There are a couple of different things happening. There are visual cues, like the avatar: the avatar of any human communicating through Intercom is always the actual person, the actual person's photo. The non-human interface does not use one; it uses a standard pictogram avatar, which is not a human. In that way it's identified as not being a person, in contrast with everyone else you're communicating with. The more subtle difference is that it usually comes with a form interaction. It's asking for things like, "Hey, we need your email address so we can send this reply to you," or, "Hey, what did you think of that conversation, so we can actually get some feedback from you."
It's usually asking for a single-shot interaction from the end user. I suppose it's more of a mechanical piece of content than a normal, chatty response from a real person, which would be something like, "Hey, how can I help?" In that way it's deliberately not trying to do the human things that humans should do, which a lot of other bots are trying to do with natural language responses to answer questions. That's not to say we won't continue to explore that, but we found that the benefit of Intercom is that it enables real humans to have conversations with each other. We wanted to use bot-like interactions to speed things up and make life easier for the teammates who use Intercom to communicate with their customers, but not to take over their job.
I think the only other thing that deliberately differentiates it is that the bot is not presented as a member of the team it's speaking for. For example, if you were sending this message from Intercom, it would refer to the team that was maybe not around right now to answer your question as "they." It wouldn't say, "We're not around right now." It would try to remove itself from the situation, so it feels more like a butler, a sort of Downton Abbey situation where it's telling you what's going on with the people you want to talk to, but not pretending to be a member of that group, if that makes sense.
Yeah, that makes sense. It reminds me, there was an article out there, “Chatbots are the New Skeuomorphism,” about the Facebook director of brand, Paul Adams, and some of his research about users getting frustrated and feeling tricked if they get drawn into a conversation with something they feel should be acting like a human, because it's been presented as a replacement human. That creates user frustration.
It does. Unsurprisingly, actually, Paul is now our VP of product at Intercom. He used to be at Facebook, so it's actually the same research that I'm talking about. Those are basically speaking to a similar experience, or the same experience.
Dovetails perfectly. It's really been interesting because obviously Paul is our VP of product and my boss, and he's fantastic. He's one of the primary reasons I took this job, and he has really great ideas on this. What's been interesting, being the only content member of a design team, is seeing how the more design-focused members of the team and I arrived at the same conclusions through somewhat separate paths. He's looking at it as skeuomorphism at a product level. I started off by thinking about it as a communication problem, thinking about the linguistics of it at a really granular level. We joined in the middle, which is great. It just shows you how much content is a design challenge at heart.
Looking at the big picture, what do you think it means for technology to disappear as a tool for humans? What does it mean for technology to no longer be the object, but a vehicle for experiences?
I always like to, and I think at Intercom we like to, look to history to see what examples of technology we can prepare ourselves with. We're trying to recreate a very real-life conversational experience in the digital world, and one of the things with tools that we use all the time, and I reference this in the article, is that you sometimes give things names, but only when they have personal meaning to you. The rest of the time an object or a tool is really just part of your life. It doesn't call attention to itself.
I think in the digital world the reason that feels surprising to us is that marketing and design have really merged into this amorphous thing. We feel that everything that is designed should also be something that is marketed, that shouts about itself a little bit more and calls attention to itself. UX is really all about creating an interface or an experience that is focused on the user, not on the experience itself. I think sometimes we've lost our way with that, and we focus on the experience and not on what the user needs. Nine times out of ten, what the user needs is to get their job done, and they need things that facilitate that process.
People are inherently selfish. You see that in any user test, especially with bots: people are annoyed by things that don't immediately serve their needs. They're annoyed by things that don't react in the way they think they should react, or return the affordance they think they should return. I think technology should disappear as much as users want it to disappear, if that makes sense. We need to get better at understanding when we're building technology for use and when we're building technology for fun. There is a difference: the most amazing game is something you want to absorb and think about; it should surround you and immerse you in an experience where you're aware of the fact that you're having an out-of-body, AI-type experience. But tools that we need to use in our daily lives and our daily work should not get in the way of our daily lives and our daily work.
I've always thought this, maybe from my geeky childhood love of the idea of technology that was around you but didn't call attention to itself. Kind of like the Star Trek ship's computer that you can just talk to, and it answers your questions, but it never really had a name. It had a gender, which is maybe a bit iffy because it was a voice, but it wasn't a personality. Another content strategist I really respect, Amy Thibodeau, who's written some great stuff in the past year about bots and chatty interfaces, compared it to loving R2D2 when she was a kid, because R2D2 didn't try to pretend to be a human being. That's another level.
It's like maybe there's more adventure and interest to be found in letting technology disappear, but then even when we do want to personify technology not limiting it to being a pretend human. It could be so much more than that if we took the shackles off a little bit.
To me this dynamic move towards bots is hitting across the spectrum of UX and the experiences we're creating for customers. At [A] we've had conversations about chatbots with associations, nonprofits, and others we wouldn't necessarily expect to be interested in this. This move to bots is playing out more rapidly than I think many of us who have been in the industry a long time could have imagined. How do you see that move playing out over time? Is it primarily B2C applications, or do you see B2B applications, bots within traditional publishers, or bots across the spectrum of various customer experiences?
Yeah, I see it playing out by becoming more diversified. I definitely see it for B2B, because we are B2B, and we definitely see scope for doing it in different ways. What it looks like will perhaps be very different from a B2C bot.
To speak to what I was saying about the smart system, there's also the reality that we think of a bot as a very specific thing right now: a text bot that you interact with in a messenger, or Alexa, or Google Home, something you interact with by voice, which probably has much more room to grow into something more robust. I think the pipeline between an interface, whether a messenger interface, a web interface, or a product interface, and a bot is going to blur; the lines are going to blur. Interface language itself is becoming more chatty, and we have this interface that is literally chat. We're a little bit obsessed with that dichotomy, but I think the two things are going to merge and become another way to interface with a customer.
I think people are already seeing that there are good and bad use cases for bots. Sometimes a bot takes many more steps to get something done than simply looking at a UI and picking what you want. But there are also use cases, times, and places where bots are incredibly helpful. I don't think they're going away anytime soon. I think they're going to evolve and change, and evolve across formats as well. The differences between voice and text are really pronounced, and they're only going to become more interesting.
Have you spent any time looking at the markup standards being built around bots, like botML or SMIL, the different under-the-hood technologies being developed for engineering the content that bots use? I wonder if that's been on your radar.
I'm aware of them, but I don't know that much about them. I'm a technical content strategist in the sense that I'm always thinking about taxonomy and structure, but I would never consider myself technical to the point of knowing every markup standard going and being conversant in every language. When I read about bots and natural language processing and machine learning, I think it's great that we're trying to find standards, but I'm more interested in the science of how we're going to better understand users, how we're going to better parse meaning from what they say or type. That's where we're making great strides, but we're still super far away from being able to understand the user's intent, and the meaning, the tone, and the nuance of what they say. That's where my interest has been going.
I believe in natural language processing, but I believe ultimately that structure counts as well. I think we'll find common ground. I believe that over time, in order to give canonical answers, bots will require schema and metadata to present the best responses, so there's a move to structure that will continue to march forward even as natural language processing gets better and better at deriving meaning and relationships between semantic concepts.
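As a minimal sketch of that idea, consider a canonical answer stored as schema.org-style FAQ markup, which a bot could look up and serve as a structured response. The question, answer text, and helper function here are all hypothetical illustrations, not anything from Intercom or the interview:

```python
from typing import Optional

# Hypothetical schema.org-style markup for one canonical Q&A pair.
faq_markup = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "How do I reset my password?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "Use the 'Forgot password' link on the sign-in screen.",
            },
        }
    ],
}

def canonical_answer(markup: dict, question: str) -> Optional[str]:
    """Return the structured answer for an exact question match, if any."""
    for entity in markup.get("mainEntity", []):
        if entity.get("name", "").lower() == question.lower():
            return entity["acceptedAnswer"]["text"]
    return None

print(canonical_answer(faq_markup, "how do i reset my password?"))
```

Real systems would of course layer natural language matching on top of this exact-match lookup, but the structured markup is what makes a single, canonical answer retrievable at all.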
I absolutely agree. I was more saying that, with neither of those things being part of my 9-to-5 work, what I read about is more the magical, science part of it. But I absolutely believe that a structured approach will always be required. When you look at the science of natural language processing, there is structure inherent in every decision being made, in how we're mapping topics together and understanding topic associations; all of that is just structure to a different degree. Any kind of content delivery has to be structured and appropriately tagged. I shouldn't say that's a given, because it certainly isn't the reality everywhere, but to me it's gospel I would never argue with.
How technical should content strategists get? We see the technical side of content strategy as its own practice, so we call that content engineering, and we've built a lot of disciplines around it. There are content strategists who go deep into microdata, markup, schema, and taxonomy. Ann Rockley has referred to this role as the back-end content strategist. What's your take on the level of technical focus that a content strategist in general should have?
Yeah, when I became familiar with [A], I realized I hadn't heard the term content engineering as a defined skill before. It really spoke to me, because I've done talks where I've said engineers are content strategists' best friends. They have similar outlooks, in that they're considering things at a very granular level, whether it's linguistics or the actual engineering. As you say, a lot of content strategists are really concerned with schemas, technology, taxonomy, microdata, and so on. I've done my fair share of content modeling and topic modeling, more at a definition level. I'm never quite sure whether back-end content strategists, for whom that's their entire engagement and their full concern, go into things deeper than I have in the past. I certainly don't feel I'm a full expert at that level.
That being said, I have often found that I'm the most technically interested or technically minded content strategist on a given team that I might have been on simply because it's such a big industry in terms of its interests. I would be working with people who had a background in business strategy, or a background in editorial, and who were very, very far removed from the more technical side of things.
I was a journalist first, but when I started in UX I was at quite a small agency, and I was mentored by a fantastic information architect. I've always approached things from that point of view. I really think of it not as technical, because it's really library science; it's really organizing information. I think that's the level I'm at; I feel like I'm somewhere in the middle. I do think there's sometimes an inclination to feel that front-end content strategy, compared to back-end content strategy, is somehow ephemeral, or creative without being systematic and thoughtful, or basically just copywriting.
I think sometimes content strategists slander each other with that a little bit as well. People think that content strategists who are just editorial, or who deal with content strategy for marketing products, are somehow not as serious as content strategists who do the more technical work. But approached in the right way, and definitely approached in a way that can be rigorously tested and learned from, the front-end side of things is just as technical.
I think the bot research we did is a perfect example of that. You had to think about it at a completely technical level. It's also true that when we changed words, we iterated on things, and the product manager I was working with told me he was shocked at how a tiny string of text could hold so much power; it was really an entire product job in one string. The nuance of changing pronouns or changing verb tenses: the whole team, the engineers, the product manager, me, the designer, were all looking at these five words and thinking about how much we could tweak and finesse this string to communicate the right thing.
That to me is just joyful, because it's getting people to think about what we're communicating, and how it's being understood, at a really technical and deep level.
Interesting. It's an alchemy of many different disciplines, and of right- and left-brain thinking. You've put together quite a few experiences and backgrounds, as a journalist, an IA, and a designer, to come up with your specific approach to content strategy. It's amazing to see that knowledge coming together and forming something greater than the sum of its parts. I wonder about the role of the content strategist in general, because there are so many different parts to what content strategists can do. How is that role evolving? It used to be that content strategists were brought in almost like designers to digital projects, like a message architect. They did the message architecture, sometimes some of the core copy, and then directed the writers. It's become much more diverse than that. Can you describe the shift you've seen in the content strategy role and its very many functions?
Yeah. I sort of see content strategy as a diaspora of people, like a family that's been spread around the world. Actually, rather than it shifting in any one person's perception, whatever they originally thought content strategy was has been through many shifts as they learn about the different aspects that can be covered, the different concerns that people have under that same umbrella, that same name of content strategy, or content marketing, or content production, or whatever they choose to call it.
I think what I've actually noticed is that people had multiple starting points. There's a content strategist called Richard Ingram, based in the UK, who back in 2011 came up with a beautiful wheel diagram. He interviewed all of the content strategists he knew at that point, which was probably everyone calling themselves a content strategist at the time; it was a very small group. The diagram was called Journeys to Content Strategy, and it showed all the different paths, disciplines, and backgrounds that people had come from.
It was really beautiful to me because it showed this wheel of color, all these different ways of thinking, which amounted to people coming into areas of design, business, marketing, and technology where there was an absence, where there was a void that needed to be filled. I think that absence was always something to do with words, something to do with language. Perhaps people hadn't thought about how long it takes to produce words, and so there was content production planning, and Kristina Halvorson's original thinking that content strategy was really the effective planning, production, and execution of content.
That's a very limited summary of her first book, but that was the production planning side of things. It was a journey to get people to understand that it takes time to produce good content, and that you should know that what you're trying to produce is the right stuff.
Then there was also a gap in people's understanding of how content should be governed, so web governance came in. Then there was a lack of understanding of how content should be structured, and the people who were giving content any structure at the time maybe weren't people with language expertise, so people came in and started doing content modeling and so on. Even with things like brand messaging, the more front-end or brand-y side of things, there was also a gap: not of failing to think about words, but of not thinking about words as things that need to persist, things that need to be core to how the business works, things that aren't ephemeral.
The advertising model, for better or for worse, was all about creating things that were, to a certain extent, ephemeral, and was focused on the visual, on look and feel, and not on meaning and expression. That's really how I see it: there were all of these different starting points that all existed at the same time. They all existed because there was a gap in various places, and these different people who all had language skills came in and filled those gaps. Then once they got close enough to each other they realized, "Oh, these are kind of the same thing," or, "These are all under the same umbrella."
What content strategy shares with things like UX in a broader sense is that we're still in that phase of our life span as a discipline, or as an industry, where we're having those regular discussions about what we should call ourselves, trying to explain to people what we do and how it's different from what our cousin over here does. I'm on a design team. I've been on strategy and planning teams. I've been a consultant, so I've done everything under the sun in terms of content strategy. I hang onto that name more out of a sense of tribalism, maybe: I'm part of a tribe of content strategists, and I keep it for that reason.
I was asked this question at the content strategy forum in Paris in 2010, I think, which was the first European content strategy forum, one of the first big conferences, and I think I gave the very same answer. I said, "When people start understanding what content strategy is, and I don't have to explain it to them, I'll think about changing the name. Right now it makes more sense for me to have this sense of community, and to align myself with that, rather than come up with the exact right name to describe the job I'm doing right now." What I do, and how I approach my work, has always been very varied. That's how I think about it, but it's very personal.
The number of people in the industry who come from diverse backgrounds has always astounded me. As I look around, I've seen pretty much every imaginable origin story for finding content strategy superpowers, and it's beautiful to see the tribe that forms out of that: people coming from different backgrounds but sharing a way of understanding this palpable stuff called content, which ultimately is the prima materia that moves minds inside digital space. The art of working with the customer's experience, with the customer's mental representation of what they are experiencing out there on the planet through digital lenses: the content strategist helps to orchestrate that, and the fact that content strategists come from all these different backgrounds lends itself to the many different experiences being facilitated and created.
Yeah, I think so. It's interesting: when I started out in UX in the late 2000s, in 2007, I was about the tenth person to join the company, and my mentors were someone who was a historian and another person who had a psychology degree. The people around me were also coming to UX with different perspectives. It's always felt natural to me, and I think it creates a really interesting working environment, when you have people from different walks of life: people who studied theater, who studied improv, all these different ways of communicating. For me, you are a content strategist if you are primarily trying to close the gap between the interface, whatever that interface is, between the words and the user, and you're trying to more clearly communicate something to that user.
People talk about the difference between content marketing and content strategy, and good marketing is trying to do that as well; good marketing is trying to get a communication to a user. Where I would say someone is not thinking about content strategically is where they're not trying to be more consistent or more clear, or to put the user before the business's needs. It's definitely a balance. I'm not a total purist who thinks we can just discount business needs, but as any UX designer in the broad church of UX design, you should always be putting the user first. That, to me, is the one thing: if you weren't thinking that way, I would say maybe you're not a content strategist.
That's interesting. The "closing the gap" phrase called to me, because where a content strategist focuses on closing the gap between the words and the customer, I think the content engineer closes the gap between the words and the delivery.
Yeah, absolutely. I've always had this fascination with understanding how things work, even though I don't think I'm necessarily the most technically minded person; math was never my best subject. I know my strengths, but I also think that, like I said before, language is inherently a structural and technical thing.
Just today I had a really great session with one of our engineering managers about how we can better define our system model and relate our data model to the way our designers think about the objects we're using to design. If I was ever not allowed to do that because I had the word content in my title, I would be pretty devastated, I think.
No matter how much we separate content, presentation, and delivery systems, ultimately they all have to work together. Those of us who look at the whole, the gestalt of the system we're creating, want to understand at least a little bit about everything. What about the executives, at the CEO and CFO level? How should they view the investment in content? Content investment and content marketing budgets continue to grow, but there's often a sense that it's just something marketing handles as an expense, or that product handles sometimes. So many organizations are forming business models that depend on content assets as a core part of the customer experience. Content has gone from being an also-ran, a documentation expense, or an expense along the way to getting marketing messages out, to being much more center stage.
As a CEO or a CFO evaluating this kind of spend, how should I think about those investments? What kinds of factors should I consider when I'm investing in content programs when I don't fully understand, and it's always been an expense item for me, and somebody else's budget?
Yeah, I think this is where I take off my purist users-come-first hat and put on a slightly more sales-driven hat. There's always a sense that content is something you pour out of one bucket into another bucket; that it's just stuff, and that the containers that carry and deliver that stuff, how those are architected, and how the stuff in the bucket is described to people so they know what's inside, are somehow not content. If you think about content organizations like Netflix, where the whole business model is content, you still have the apparatus that needs to deliver it, and the user experience of that.
The UI, and the language in the UI, is so much a part of that. What I would be trying to get a CEO or CFO to understand is probably not, "Hey, everything is content, and you need to care about everything," because that's unpalatable and probably not useful to tell them. It's more about breaking it down into its individual parts: thinking about product design, which they can understand because they might have a product team or a UX team, and thinking about engineering, which they know they need. Then append what we might call content engineering, or content design (another term some people use for content within product, or UX writing), and say this needs to be integral to every team. It's a distributed model, where language disciplines need to be a factor in every team that is delivering the business's assets.
I think that's presently a more useful way to think about it simply because the word content is almost too broad to have meaning at that level. There's a really great Slack channel called Content + UX
that a lot of content strategists are on, and your listeners should definitely join if they're interested in the topic. There was a content strategist from Spotify who was saying we can't call the language in the UI "content," because our content is music. When we say content, we mean music. They have a very specific definition.
Purely at the terminology level, in how you describe the value of it, I think you need to break it down into the individual jobs they're trying to do. They're trying to deliver a better experience. They're trying to deliver their content in a better way, almost Trojan horse-ing the content engineering, or content design, part of it into that. That's probably how I would approach it at the moment.
That very much speaks to where I am in my team. I'm the only content strategist at Intercom, and I'm on the design team. I'm a node within the design team that works on all of the projects. My job is to basically make sure that language expertise becomes just core to the design discipline as opposed to maybe creating a separate content strategy discipline. Not to say that either of those is the right way to do it, but it's the way that I feel is the most effective for what Intercom needs to do, and how good our product needs to be.
On the Inside Intercom blog, you wrote an interesting article that I think speaks to this ROI conversation, about closing the gap between data and product development. It's also a UX dimension, and a customer experience dimension, but it talked about using clear event descriptions to give product owners really clean visibility into how users are actually using applications and products. If we build a product but disconnect its use, and the analytics of that use, from the product itself, we can't do a good job of shaping the user experience, or the content and language that facilitate it.
Do we have a gap in understanding between the analytics and true understanding of how our content is being received and used to make decisions? How do you think we can close that gap?
I think we might. I think it's probably just a given that we do have a gap there, simply because we built and architected our own product, and in order to instrument and understand our own product we still needed a better way to structure our own data. Even when we had complete control over what we were building and doing ourselves, we still needed to actually stop and take stock, and think about what behavior we were trying to understand. We'd just been creating these events, and we had all of these data points that only our cleverest analysts could understand, and nobody on the product team could really understand without a little bit of help.
It's just not efficient.
This was, again, one of the great joys of working at a startup: the product analytics team were like, "Oh, you should talk to Elizabeth, so she can help you come up with a nice structure and a taxonomy for your events." Instead of anticipating in advance every single behavior or moment we would need to track, we wanted to create a structure where we could understand the core actions across the whole app or the messenger, the objects we want to understand, and the people actually acting on those objects. Really understanding it in this very structural, three-dimensional way, breaking those down into parts, and then allowing the analysts to dive down into that data and understand more about the use of our messenger, and of our app, in different ways.
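The actor-action-object structure described here can be sketched in code. This is a minimal illustration, not Intercom's actual system; all actor, action, and object names below are hypothetical examples of how a team might constrain event names to a shared taxonomy instead of letting them grow organically.

```python
# A minimal sketch of an actor-action-object event taxonomy for
# product analytics. All names here are hypothetical examples.
from dataclasses import dataclass

ACTORS = {"teammate", "end_user", "admin"}        # who performed the action
ACTIONS = {"sent", "opened", "closed", "tagged"}  # core verbs across the app
OBJECTS = {"message", "conversation", "article"}  # things acted upon

@dataclass(frozen=True)
class Event:
    actor: str
    action: str
    object: str

    def __post_init__(self):
        # Reject events outside the agreed taxonomy, so the data
        # stays structured and queryable by any analyst.
        if (self.actor not in ACTORS
                or self.action not in ACTIONS
                or self.object not in OBJECTS):
            raise ValueError(f"event outside taxonomy: {self}")

    @property
    def name(self) -> str:
        # A consistent "actor.action.object" name analysts can slice on.
        return f"{self.actor}.{self.action}.{self.object}"

event = Event("end_user", "opened", "conversation")
print(event.name)  # end_user.opened.conversation
```

Because every event name is composed from the same three controlled vocabularies, an analyst can aggregate along any one dimension (all actions on conversations, everything end users do) without guessing at ad hoc event names.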
Rather than having what we had, which was quite a flat structure that had just grown organically. I would say that anyone who's using an analytics tool on a content marketing site is probably using something that's relatively templated, and isn't necessarily aligned with the questions they really need to ask and the jobs they have to do. It may by accident be answering 80% of them, but by no means has it been designed for them to understand their data in the best way possible. I think it's just like words. You can't not have a better result when you have experts sitting down and thinking really carefully and deeply about what they want to learn from the data, as opposed to just saying, "Well, this is data, and these are our metrics and our KPIs, and if we hit this number of views, or this number of visits, or whatever, we're doing well."
That might be true, but you're not really understanding the health of your content, or maybe the popularity of your content. You're not necessarily understanding how people feel about it. You're not understanding the interaction between your content and the experience, and the kind of scaffolding of the experience itself. I think there's always room for a more nuanced exploration of data. Big Data was the buzzword for many, many years, and then in the last few years it got a lot of bashing. Data's incredibly powerful, but like anything else, like bots, it needs human support.
No matter what the interface is, we're still people talking to other people. We're still trying to understand what other humans are doing on the other side of the screen. You can't help but have a better result when you pay careful attention to that.
That's really beautiful. I sometimes have a challenge getting lost in the weeds on the data side, because it's very easy to lose the forest for the trees when you're dealing with 150 potential transactions along a customer journey, and making them all explicit and analyzable. It all starts to look like data; at the end of the day it's somebody trying to make something happen. It's a very, very real human activity that's at stake. In the case of health care applications it's life and death. In the case of other applications it's the difference between making a major life decision, a career change, or something else based on the experience they're having with that content. There's a map to the data that's powering it, but it's a human impact. That's a great reminder.
How does a content strategist approach structuring content for voice access? There's no screen. The user's talking to their device or their car or their refrigerator or their watch, and it's talking back. How does the content strategist need to adapt for these content experiences that are emerging, which leap entirely past the screen to voice dialogue?
When you take away everything but the words, what are you left with? I haven't worked on voice experiences yet. I would love to, but I can't really speak to it with real authority. What you're getting is just my impassioned fan opinion. I feel like right now they are slightly different worlds. It depends on how familiar we actually become with voice interaction.
Obviously Alexa and Google are doing really well, and they're getting a lot of coverage, sometimes not for great stories, but they're definitely getting a lot of interest. As we become more familiar with those experiences, we'll see how much things move entirely. I don't think it ever will move entirely, entirely, but things move maybe with more force in that direction.
No, this definitely speaks to what I was saying about the invisible interface, that technology would be at once omnipresent and invisible. I do think that text will persist. I think it's been surprising to many that text does persist, even though people love video. Everyone's saying, "Oh, text doesn't matter. Don't write articles anymore. Everyone just loves video." But video is still largely a passive kind of thing, a passive experience. I think there's a difference in intent and a difference in experience, and no one format is going to rule them all. There's always going to be variety there, and that's great.
What I do think about, and I reference the Star Trek ship's computer here, exposing my geeky youth, is how we can interact with voice in a way that isn't necessarily overtly personified. I talked a little in the TechCrunch article about why it's important, or seems to be important right now, for a voice-activated system to have a handle, a name. In Star Trek they said, "Computer." With Alexa you say, "Alexa." With Google you say, "Hey, Google," or, "Okay, Google." Those are the triggers that activate it. Even at that level, the interface is coaching the user in how to communicate with it. It's saying, "You need to meet me halfway," or maybe a little more than halfway. I've read a couple of articles where people have said the number of times they have to say, "Okay, Google," in a day is kind of driving them nuts. It makes them very aware that they're interfacing with something. That they're doing something that still feels quite unnatural to them.
To me that isn't true invisible technology. I compared it to when we search Google itself in a UI. I think the spontaneous movement from "I want to find an answer to something" to opening a tab and searching for it on Google is almost subconscious at this stage, because we're so familiar with it. We don't even necessarily think of it as interfacing with something. Probably to our detriment, because I think we also spend way too much time on our computers. We'll naturally just take out our phones or pick up a laptop to answer our question, where in the past we would have just been happy not knowing, or we'd have figured it out later, or asked somebody.
What voice needs to do is become as subconscious as interfacing with a laptop or a phone feels at the moment. I think the way a content strategist needs to adapt for that is to pay attention to the nuances of how close we feel to that experience. One thing that makes an experience feel out of touch with a real user's expectations is when it isn't examining their intent, and their up-to-the-minute expectation of what they're communicating with. We read articles about interfaces, and we read articles about design, and we think, "Oh, okay, I know how to write for this, or I know what the expectation is. Our bot should have a name because everybody says that bots should have a name." Until you test it with real people, for the specific use case you're designing for, you just don't know how people are going to naturally respond. Humans are very surprising.
I think the same thing is true for voice. We know there's a place for research. I think we'll need to adapt better to how people naturally communicate, and not make them learn how to communicate with the computer. It's like the old days before UX was really a thing, when people had to know how to search in specific ways, whereas Google has now created a world where you can type in any old thing and it will give you some kind of useful result. It basically realized it needed to do 90% of the work to get closer to the user. Writing for voice will need that too, both from a delivery point of view, in terms of structurally understanding how the language works and how it needs to be packaged, and from a design and concept point of view. We'll need to really understand how comfortable people grow to feel with voice, and how we can bring them closer to it without them feeling like they're doing too much work.
Have you used any voice applications yourself? What's that like for you?
I don't have any in my home, so I haven't used one to any consistent degree, day to day, to get that experience. There were a couple of articles I've read where people said, "Oh, this feels unnatural because I'm having to say this name," but I also have a colleague who sits beside me, who has one, uses it all the time, and loves it. There's definitely a spectrum of experience, but I haven't. Have you used it yourself, or do you know of it?
I'm in the space every day, and I have not. I have Alexa, but not as a device, just an app to experiment with, but I haven't integrated it into my life yet. I know people that are starting to depend on them. It's interesting to see that adoption curve starting to happen. The colleague that uses it next to you, do you hear them talking to Alexa?
No, he doesn't use it at work, but he uses it at home. He says he uses it for a very specific use case. This is what's interesting: any product will try to sell itself as the solution to all of your life's needs, but usually people choose a product for a specific job first, and then maybe that expands into other jobs. He uses it for cooking, so when he's cooking he doesn't want to have a laptop open beside him. Actually, I think he uses Google Home, not Alexa. He was the one telling me about the difference between "Okay, Google" and "Hey, Google," where now you can say, "Hey." He said he likes it so much better, which is really hilarious.
He uses it for cooking because he can just say, "Hey, Google, read me the recipe for," whatever, and it reads it out. He also has a history of all the questions he's asked it. We went and looked at his history together, and I said, "You know, Google knows what you've had for dinner every day for however long you've been doing this." He stopped and went, "Yeah, that's kind of disturbing." It's interesting. It's tracking our behavior in the same way that a search would, but I wonder how likely people are to clear their Google Home activity compared to how likely they are to clear their search history on a regular basis. There are interesting security things going on there as well.
Privacy is one of those things where there seems to be a curve: it's very, very important at the beginning of an adoption cycle, but then starts to decline in importance as the utility exceeds the privacy concern. We saw that as we moved from typed Google searches to Gmail and other all-encompassing kinds of self-exposure, and it's only increased across Facebook, Google, and Amazon. There's certainly a good compendium of most of our lives out there.
Absolutely, as much as I talk about the value of technology existing to support people, and not to force them to work for the technology, when technology becomes ubiquitous we tend to stop thinking about it as something that has our data, and just start thinking of it as part of our life. A darker side of that is we stop paying attention to things like our own privacy. There might be spikes of attention or spikes of concern about it, but it tends to diminish over time. We really rely on these organizations to take care of our data.
My sister actually works in communications very similar to the work I do, but in security, so we have really interesting conversations about the two sides of that, how open technology, and technology that should become ubiquitous versus having more control over your own data, and more ownership of it.
I think data hygiene will become one of those things that, as consumers, we ultimately end up subscribing to as another service. Being able to be more selective about how our digital selves are represented to the bots, and to the wider world.
I wonder if we'll have to come up with a better name for it, something like data wellness. Something that's more palatable or more interesting for people.
Right, not dental hygiene, but data wellness.
Indeed. The bots are going to start talking to each other too. I see so many of these services that are isolated in little silos. Ultimately, as the internet is wont to do, it will become service oriented and start connecting. I hope there's ultimately utility between Alexa and Google and the smart home and Apple devices and what we do at work, so that between Siri, Cortana, Watson, Google, and Alexa, there can be federated experiences and content that exist between platforms. Otherwise, we end up in these little isolated neighborhoods, or ghettos, associated with just one particular platform.
Absolutely, integrations are incredibly powerful now. For any platform, even Intercom, to become meaningful for a business, let alone an individual, you need to be able to integrate with everything else they have. I think maybe B2B products are a little bit ahead of the game in just accepting that reality, and trying to build with that in mind, and not create walled gardens and bubbles that people can't get out of.
I think there'll be a tipping point where it'll just be obvious that there's a richness of experience you can get when a user has their own data, existing in their own data bubble that is then shared out through multiple platforms; maybe that's the way it could be structured. That will generate a rich experience across the board, and create more loyalty across the board, so that people don't tend to churn from one product to another. Maybe that will be the tipping point, and more B2C products will start thinking that way as well.
In all events, it's an interesting and compelling and beautiful world we're stepping into. I think I'm more optimistic than pessimistic about our relationship with bots. I think the article you wrote is a good step in that direction: that technology needs to disappear, and bots need to serve their human masters in a more humane way, behind the scenes, making themselves known without imposing artificial expectations on the user. I think the landscape we're encountering now has an ever-increasing need for what we do as content strategists and content engineers.
I wonder for you in the big picture, how do you think the work that we do as content strategists, content engineers, makes the world a smarter place?
Yeah, it's interesting when you say it back to me, the outcome of my article, which is that bots should serve their human masters. I've spent 15, 20 years of my life watching lots of movies that are all about the rise of robots, and how there's a power struggle between humans and robots. It's an interesting thing to be like, "Oh, no, they definitely should serve us." I think what will really be interesting, and this certainly speaks to what I've learned from doing research into language around bots, and language in interfaces generally, is that there's always a gap between what you say and intend to communicate in an interface, and how it's interpreted.
Bots are a really interesting way to try to bring those two things closer together. To investigate and understand how people interpret a machine, we have to write what that machine will say, and then understand how people interpret that. If machines can be the intermediaries between a business and its customer, between one human being and another, then in a way the machines are helping us understand each other better. That's the perfect-world way of looking at it. Yeah, I'm really hopeful that that's how we continue to explore it, and that we understand that words, and any kind of communication, are not just flat packages or deliverables. They're really living things, and any kind of human-to-human communication is complex, nuanced, and multi-layered.
It's a wonderful home and industry to have found myself in, and I'm really excited for what the future holds.
We will definitely look forward to learning more about your work. I would invite everybody to take a look at your blog, which is available at blog.Intercom.com
. Anywhere else folks should look to keep track of what you're doing in the world?
Yeah, I'm very much a firmly lapsed blogger other than on the Intercom blog, where I do blog quite regularly. I'm relatively active on Twitter at emcguane
, and I'm actually “emcguane” on most channels. If people are interested in hearing my thoughts about language and pronouns, they can certainly follow me there.
Thanks very much, Elizabeth. Have a great evening, and we'll look forward to connecting again in the future.
Thanks so much.