Engineering Intelligence: An Interview with Cruce Saunders

Orchestrating and choreographing the design, acquisition, management, translation, and delivery of content seamlessly across networks, platforms, and devices is the turf of content engineers. In this Meet the Change Agents interview, Scott Abel talks with Cruce Saunders about content engineering, content intelligence, and the role of technical communication professionals in ensuring that content achieves its business goals.

Scott:  Cruce, thanks for taking time to speak with me about the emerging discipline of content engineering and its importance to technical communication project success. Before we dive into questions about content engineering, tell our readers a little about yourself, your company, and what you do for the customers you serve.

Cruce:  Thanks, Scott. I am a Content Engineer and the founder and principal of [A]. All of us involved with technical and marketing communications know that enterprise publishers are buried in a soup of existing content systems, structures, and standards that don’t connect and communicate, especially when confronted with more and more publishing modes, variants, and channels. [A] was founded to address enterprise content scale issues using an architectural and engineering patterns approach. So, we’ve created [A] as what we call the ‘Content Intelligence Service.’ Our goal is to help smart, content-rich organizations get smarter about content production and delivery. We are privileged to work with some of the largest publishers and most complex content sets on the planet, under the hood with the people, processes, and architecture that power modern content experiences.

Scott:  Self-described content philosopher Joe Gollner defines content engineering in The Language of Content Strategy (2014, XML Press) as: “The application of engineering discipline to the design, acquisition, management, delivery, and use of content and the technologies deployed to support the full content lifecycle.” 

For many technical communication professionals, writing is at the core of the work they perform. Why is it important that a technical communicator understand the role of the content engineer?

"Tech comm should embrace its leadership position and early adoption of structural norms to facilitate the transit of content."


Cruce: Technical communicators are structuralists, and have been for decades. That’s huge! Who else really, deeply understands content structure? Who else really gets the relationship between words, visual assets, and technology? Some in IT understand structure, but from a data modeling perspective. Some in marketing, but from a strategic and UX perspective.

Lots of silos have part of the puzzle. But where else in the enterprise do there exist trained professionals who understand subject matter, content structure, metadata, and topic relationships? Very, very few groups outside of tech comm have the right DNA to influence content structure, annotation, and enrichment. Other groups CAN get there, but it takes extensive new training. Tech comm is ahead of the game.

Technology and content together are the keys to omnichannel publishing.

Tech comm should embrace its leadership position and early adoption of structural norms to facilitate the transit of content. And we need to think bigger. It’s too easy to get trapped in a deliverables mindset.

I am a huge admirer of Joe Gollner. He has been leading toward structural and semantic content normalization since way before engineering content became so clearly essential to the future. His prescience benefits the whole industry. It’s a great honor to get to work with him in collaboration at [A].

Enterprise content warriors and innovators such as Ann Rockley, Sarah O’Keefe, Rob Hanna, and others have been leading successful content transformation and enrichment projects for years, and teaching others. As an industry, we are indebted to those leaders for shaping messy enterprise content ecosystems toward order.

Tech comm people from all walks, especially writers, can and should give themselves the charter and permission to start leading more of the conversation around content portability, reuse, and reshaping for use across multiple customer experiences. That doesn’t mean just DITA and internal XML standards, although that’s a good place to start.

Scott: You’ve been a proponent of intelligent content and have extolled its business value in articles and conference presentations. As you know, this isn’t a new idea. Many large multinational companies have implemented changes in their content creation, management, translation, and delivery processes to enable capabilities they could not enjoy without making such changes. Despite the maturity of the content strategy and engineering disciplines, why do you think more organizations have yet to adopt intelligent content?

Cruce: Content publishing is driven by large, distributed teams of contributors hidden in every nook and cranny of the enterprise: every regional geography, every product team, every internal- and external-facing group produces and consumes content. So it’s just seemingly overwhelming. And who actually even owns content? Everyone, and no one.

Unfortunately, today, the challenge of managing content strategically is falling to underfunded marketing and tech comm groups, and sometimes to knowledge management or training departments, who lack the authority and budget to enact change across the bigger picture of customer experience. Rarely does anyone have the budget to actually improve systems. They have the budget to make donuts in a given quarter or year. There’s budget to ship content, but not to pay attention to the ship itself.

But as everyone who has been fighting for content’s rightful place knows, it comes down to education, budget allocations, and organizational structures to support intelligent content enrichment programs.

Mid-level and senior functional leaders simply do the best they can with the resources they have to improve publishing maturity. Real change won’t happen until content intelligence initiatives are seen as strategically valuable by the c-suite. But there’s anecdotal evidence that’s starting to happen. In the last year, [A] has been approached by several individuals in the office of, or reporting directly to, a CEO or CMO. Strangely, not a CIO yet, although people in their organizations are almost always involved. Executives are starting to understand that massive strategic market outcomes depend on how we treat this most-valuable and most-misunderstood asset: content.

Scott:  If a technical communication team decides to consider creating intelligent content, what should their first step be?

Cruce:  Moving toward content intelligence requires an iterative, experimental process. First comes education and awareness. We need to agree on the problem among a cross-functional group of content stakeholders. And agree on the opportunities. Where are things breaking now? How can we start to quantify our technical debt around content? Who is spending what amount of effort on copy-and-paste and other forms of content transformation? What publishing scenarios are not possible with the status quo? What would be possible if we did not have so much inherent friction in moving content sets around within a subject domain like marketing, support, or training? What customer experiences and marketing scenarios become possible if we can move content between domains?

After creating a shared sense of the problem, pilot projects (even paper ones) are the necessary first step. Intelligence is like an organism that grows through cell division: we need to form and spread the DNA from a focus point and seed content coherence within one part of the ecosystem at a time.

Scott:  To make intelligent content work to its full potential, we need to orchestrate a host of tools and technologies and choreograph the movement of our content across multiple domains, platforms, systems, and delivery channels. What role does content engineering play in the creation of successful intelligent content projects?


Cruce: First and foremost, we need to know that content intelligence is a team effort and an ongoing process. It’s not a project. It’s not a committee. It’s not just a governance function.


True cross-domain, cross-silo content portability requires a set of ongoing practices, functioning on a persistent basis. And content engineering is one of those practices.

It’s important for enterprise leaders to approach content strategy and content engineering together, as a part of an integrated approach toward improving customer experiences across channels. [A] sees the practices of content strategy and content engineering, along with content operations, as the pivotal functions within a new enterprise-wide content operating model.

When addressed holistically, content assets produce ongoing economic returns in terms of customer acquisition, share of wallet, and loyalty. Content strategy by itself is not enough. Content engineering by itself is not enough. We need both the content and the distribution. We need the message and the method. We need the essential knowledge and the multimodal manifested states of that knowledge. We need strategy and engineering. So, the practice of content engineering gets included within an organization’s overall planning around customer experience, content marketing, and customer support.

[A] has built a Content Intelligence Framework that describes the contributors, the shared artifacts that drive structural and semantic norms, and the architectural approach to systems alignment. It’s true: there are dozens (sometimes hundreds) of systems that handle content in an enterprise. And usually lots of internal IT and external vendor developers maintaining those systems. We have to create a superset of content orchestration and management that enables content flows, so that they share structure (schema) and agree on semantics (taxonomy, vocabularies) and relationships (graph) from system to system.
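As a minimal sketch of what "sharing structure and agreeing on semantics" can look like in practice (the field names and vocabulary terms below are hypothetical illustrations, not part of [A]'s actual framework), each system can validate content items against a shared schema and a controlled taxonomy before passing them along:

```python
# Toy sketch: validate a content item against a shared structural schema
# and a controlled taxonomy, so every system agrees on shape and semantics.
# All field names and vocabulary terms here are hypothetical examples.

REQUIRED_FIELDS = {"id", "title", "body", "topics"}      # shared structure (schema)
TAXONOMY = {"onboarding", "billing", "troubleshooting"}  # shared semantics (vocabulary)

def validate(item: dict) -> list[str]:
    """Return a list of problems; an empty list means the item conforms."""
    problems = []
    missing = REQUIRED_FIELDS - item.keys()
    if missing:
        problems.append(f"missing fields: {sorted(missing)}")
    for topic in item.get("topics", []):
        if topic not in TAXONOMY:
            problems.append(f"unknown taxonomy term: {topic}")
    return problems

item = {"id": "kb-101", "title": "Resetting your password",
        "body": "...", "topics": ["troubleshooting", "acct"]}
print(validate(item))  # flags the off-vocabulary term "acct"
```

The same check, run at every hand-off between systems, is one simple way content flows can stay coherent from authoring through delivery.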

"Content strategy by itself is not enough. Content engineering by itself is not enough. We need both the content and the distribution. We need the message and the method."


Scott:  For the past few years, I’ve been focused on helping technical communicators better understand tangentially related tools and techniques that may — if we implement them thoughtfully — provide improved customer experiences. At our 2017 Information Development World conference in Menlo Park, you spoke about chatbots and structured content. What role should chatbots play in the delivery of technical support content today, and what might we expect their role to be in the future?

Cruce:  Technical support, as with all post-sales customer experiences, provides us with lots of opportunities to have our content intelligently interact with our customers to create brand value. We need to see chatbots as another entry point, or mode, of interaction with our existing topic-based content sets.

Start with “If we knew X about our customer, what would make their experience as low-effort as possible?” Or ask, “How could a chatbot help our customers avoid a call, form, or email?”

Think of chatbots as another way to facilitate interactions that would normally involve more customer effort:

  • Search
  • Content suggestions
  • Form fills
  • FAQs
  • Front-line service

Ultimately, chatbots help us reduce customer effort and increase interactions with our brands. Being a customer is hard. Chatbots should make the experience a little bit easier. Low-effort experiences increase repurchases, wallet share, and word of mouth.

Scott:  Over the past year, we have seen widespread adoption of voice-enabled systems like the Apple HomePod, Google Home, and various incarnations of the Amazon Echo. Each of these systems requires us to prepare our content in ways that support discovery by these newfangled interfaces. What role can technical communicators play in helping their organizations make their content available and accessible to these voice-enabled chatbots?

Cruce: As you know, [A] advocates for aligning chatbots with the rest of the content ecosystem, so answers involving related content items can be brought directly back to a customer in response to a related intent. Baking conversational content strings into the CMS, and having the chatbot access those as part of a fulfillment process, is a good way to accomplish that alignment. There’s a lot more to that story, and interested folks can look up the webinar we did together last summer to see more of those details, or read more on our site.

Often, the strings we need to add to our CMS content types to get started are simply questions and answers related to a given topic, which is intuitive for writers. When posed in Q&A pairs, content items are easier for chatbots and voice interfaces to match to intents, with responses that make sense when spoken, so getting started doesn’t have to be too hard. Voice assistants can consume from the same content-as-a-service architectures that support headless content consumers, such as mobile applications, kiosks, AR/VR applications, wearables, and third-party content licensees.

But before the architecture and technology, or even the content structure planning, the very first step in preparing content for chatbots comes on the strategy side — knowing and writing down what our customers’ intents are, so we can begin to address those intents through interactive content.
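A minimal sketch of that pattern, under stated assumptions (the topic names, fields, and naive word-overlap matcher below are illustrative stand-ins, not a specific CMS API or a real intent-detection engine): Q&A strings live alongside each topic in the content store, and the bot's fulfillment step matches an incoming utterance against them.

```python
# Toy sketch of Q&A strings stored alongside CMS topics, with a naive
# word-overlap matcher standing in for real intent detection.
# Topic names and fields are hypothetical examples.

CMS_TOPICS = [
    {"topic": "password-reset",
     "qa": [("How do I reset my password?",
             "Open Settings, choose Security, and select Reset Password.")]},
    {"topic": "invoice-download",
     "qa": [("Where can I download my invoice?",
             "Invoices are under Billing > History in your account.")]},
]

def answer(utterance: str) -> str:
    """Return the stored answer whose question overlaps most with the utterance."""
    words = set(utterance.lower().split())
    best, best_score = "Sorry, I don't have an answer for that yet.", 0
    for topic in CMS_TOPICS:
        for question, response in topic["qa"]:
            score = len(words & set(question.lower().split()))
            if score > best_score:
                best, best_score = response, score
    return best

print(answer("how can I reset my password"))
```

Because the Q&A strings are ordinary CMS fields, the same topic can feed the website, the chatbot, and a voice assistant without duplicating the underlying knowledge.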


Scott: In an interview you did recently with tcworld, you stated that “chatbots are not the future of technical communication.” What did you mean by this? And what role, if any, do you believe chatbots should play in the communication of technical and product-specific instructional content?

Cruce: Ah, yes. I’ll quote from that interview and then elaborate:

 “The future is omnichannel. Actually, one could make the argument that the present is already omnichannel, as evidenced by the proliferation of channel and function-specific authoring groups. Chatbots are just a channel. People still will want PDFs. And Web interfaces. And search results. Conversational interaction with our content already happens every day, whether we structure for a clean experience or not. We already ask Google for answers, and Google crawls our sites to find the answer. Every atomic fragment of content is a customer’s potential first interaction with us. We need to structure content for conversational variants within our CMS, along with the traditional article form content. If we care about meeting customers at the channels, we need to accommodate for channel interactions at the level of content structure. We design the model, then the modes. Structure precedes presentation.”

So, basically, we should not put all of our content interaction eggs in any one channel basket, including chatbots. We should put our energy into coherent, topic-based content sets working against a single Master Content Model™. And that model should be represented in our systems of record and customer interactions, including chatbots.


Scott:  Far too many times, organizations quickly ditch one content technique for a shiny new one without regard for the amount of time and money invested. When new technologies make themselves known, some teams gravitate away from techniques that provided substantial value in the past in hopes of making things easier, faster, better. This type of situation can often lead to some organizations throwing out all of the work they did previously in an attempt to gain value from the new approach.

When thinking about chatbots and voice-enabled devices, can we benefit from leveraging our semantically-rich, consistently structured, modular, intelligent content? Or, do we need to start all over again and adopt a totally new way of creating support content?

In other words, can we repurpose the way we create, manage, and deliver content and leverage those processes and approaches to feed answers to questions submitted by our customers through a chatbot or voice interface?


Cruce: Oh, yes! It’s a sin to leak knowledge. Or bottle it up. Or recreate it lots of times. Let alone scrap it. We should honor and respect knowledge and the effort it takes to form it into content artifacts and experiences. If the knowledge exists, and it’s still valid and can serve customers, let’s use it. Take the time to enrich semantics and structure and reuse it. It seems like a lot of effort, but losing knowledge, underutilizing it, or recreating it is far more expensive long-term than creating the pipelines and connectors to reuse and repurpose it.


Scott: There’s a little bit of confusion in the technical communication space today caused by these two terms: markup and markdown. Can you help us understand the value of markup, and how it is different from markdown?

Cruce: Markup is what makes machines able to understand and present content. Almost everything that gets sent from a server for a browser to parse and present contains markup. The term comes originally from the print world: an editor would receive raw text, then go through it adding annotations about what was to be italic, what was bold, what needed to be a title, a subhead, or a drop cap. The printers would use these “markups” when producing a final version, and the practice became known as “marking up” text. Of course, markup now contains all forms of metadata, microdata, and other annotations for all manner of machine interpretation. XML is a form of markup, too. Markup can incorporate semantic metadata and references, giving our content defined relationships. So, instead of human editors prepping for one print publish event, machines now parse the markup for presentation and transformation across hundreds of device types, and extract the text for use in countless ways by search engines and intelligent assistants. Markup is how content moves.

Markdown is a shortcut that some authoring groups use to simplify the indication of presentation markup: a limited set of plain-text conventions that later get converted into structurally valid HTML. Markdown can be considered a lightweight subset or precursor of markup, made easier for authors who want to indicate formatting without dealing with the full, hairy specifications of various markup standards.
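To make the relationship concrete, here is a deliberately tiny toy converter (nothing like a full Markdown implementation, which handles dozens more constructs) showing two common markdown shortcuts becoming the HTML markup that machines actually parse:

```python
import re

def markdown_to_html(text: str) -> str:
    """Toy converter: handles only '#'-style headings and **bold** spans,
    to show markdown as shorthand for underlying HTML markup."""
    html_lines = []
    for line in text.splitlines():
        # '## Title' is shorthand for an <h2> element
        heading = re.match(r"^(#{1,6})\s+(.*)$", line)
        if heading:
            level = len(heading.group(1))
            line = f"<h{level}>{heading.group(2)}</h{level}>"
        else:
            # **text** is shorthand for <strong>text</strong>
            line = re.sub(r"\*\*(.+?)\*\*", r"<strong>\1</strong>", line)
            line = f"<p>{line}</p>"
        html_lines.append(line)
    return "\n".join(html_lines)

print(markdown_to_html("## Getting Started\nThis step is **required**."))
```

The markdown source is easier for authors to type; the HTML output is what browsers, crawlers, and assistants consume. That asymmetry is the whole point of the shortcut.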

When a client has a markdown authoring lifecycle in place, we work with ingesting and transforming markdown to align it with richer annotation cycles later in the content lifecycle.

Scott:  What content-related technologies are you most excited about and why?

"We have too many content authoring, management, and publishing systems trying to orchestrate a single customer experience with content that doesn’t transit between systems."


Cruce:  There are lots of great emerging semantic services platforms, smarter Digital Asset Management applications, Customer Experience Management platforms that get smarter by the minute, and lots of approaches to delivering content APIs … all of that is great. But I am not as interested in specific tech.

We have too many content authoring, management, and publishing systems trying to orchestrate a single customer experience with content that doesn’t transit between systems, is redundant, or is otherwise overwhelmingly inefficient. Technical communicators are painfully aware of this, and the answer lies within the practice of content engineering.

To create this unified customer experience, we have to engineer not just the content authoring systems, but all the systems that the content touches in its lifecycle from authoring to delivery across channels. Our content must inhabit many, many representational states. The states to which our content must conform are no longer just independent of design and presentation; they are increasingly context-dependent and personalized.

So at [A], we advocate for the Master Content Model as a unifying model that gives structure to this process. That kind of abstraction layer to content, independent of tech, is what interests us the most. We can work with innovative technology once we have an orchestration layer. Once we have the sheet music and agreement on the basics of music theory, then we can make the orchestra play coherently.

Scott:  Let’s shift gears a bit. You run a popular podcast during which you interview thought leaders about topics related to customer experience, digital transformation, and intelligent content (among other things). How can our readers subscribe to your show?

Cruce:  Listeners can find us on popular podcast services, such as iTunes or Soundcloud, under our [A] Podcast Series name: Towards a Smarter World. We also host the podcast on our website, along with a transcript and the audio file for direct download. We interview industry thought leaders and innovators in content intelligence, omnichannel publishing, machine learning, and enterprise content operations.

Scott:  Many people are super excited about the opportunities to improve content creation, management, translation, and delivery with the help of machine learning systems and other forms of artificial intelligence (AI). What possibilities does artificial intelligence provide that you are most excited about and why?

Cruce:  Goodness, there’s a lot here, and all of it bleeding edge. We spent two days exploring this topic area in workshops recently for a client. Way too much to get into here. Let’s just focus on a couple of primary areas: machine-based annotation and enrichment of content, and Recurrent Neural Network-authored content summaries. It’s my current opinion that various Natural Language Processing (NLP) technologies are not truly ready to provide consistently high-quality entity identification, except on the largest, most normalized, and most predictable sets of content. However, NLP can do a pretty good job identifying 80 percent of the most relevant entities in a given set of text—albeit with a variable error rate. Therefore, we can use NLP to reduce the time it takes to define semantic relationships in our content sets—to provide entities and keywords—as long as humans are in the loop and are the final editors.

So NLP can be seen as a sort of “intern” providing some pattern recognition, but professionals have to vet its output before the content sets are updated on the live production servers. The same is true for Recurrent Neural Networks (RNNs)—often referred to under the broader heading of Natural Language Generation, or NLG—that are being trained to author content summaries. With a large enough corpus of consistent content, an RNN can start to generate valid summaries or abstracts of long-form material, but human involvement is necessary for quality assurance. So think of AI today as a power-up and potential time saver, but not a magic pill.
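That "intern with a human editor" loop can be sketched as follows. To keep it self-contained, the capitalized-word heuristic below is a crude hypothetical stand-in for a real NLP entity extractor (a production pipeline would use a trained model); the point is the review workflow, not the extraction quality.

```python
import re

def extract_candidate_entities(text: str) -> list[str]:
    """Crude stand-in for NLP entity extraction: runs of capitalized words.
    Deliberately imperfect, like the variable error rate described above."""
    found = re.findall(r"\b(?:[A-Z][a-z]+)(?:\s+[A-Z][a-z]+)+\b", text)
    return sorted(set(found))

def review_queue(text: str, approved_by_editor: set[str]) -> list[str]:
    """Only entities a human editor approved reach the live content set."""
    return [e for e in extract_candidate_entities(text) if e in approved_by_editor]

doc = "Pair the Acme Widget with the Acme Hub using Home Network mode."
print(extract_candidate_entities(doc))                  # machine suggestions
print(review_queue(doc, {"Acme Widget", "Acme Hub"}))   # human-vetted subset
```

The machine proposes more candidates than the editor accepts ("Home Network" is a false positive here), which is exactly why the human stays in the loop as final editor.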

Scott:  If you could leave our readers with just one thing of value, one piece of advice, as they consider what new skills to add to their toolkit to advance their careers, what advice would you give them?

"Awareness creates conversations. Conversations create connections. Connections create shared understanding." 


Cruce:  Study content structure and semantics. Build relationships outside of tech comm. Walk around, browse around. Look for patterns beyond tech support, customer service, and knowledge management. Everything relates. Study the nature of content by getting to know how it expresses itself in authoring, management, and delivery in the many settings in which it interacts. Awareness creates conversations. Conversations create connections. Connections create shared understanding. We’ll move the industry forward with that shared understanding translated into actions.

Scott:  Well, it appears we’re out of time. I thank you for sharing your knowledge and expertise with our readers. I really appreciate all the great work you and your team do to educate members of our professional community. It’s much appreciated.

[A] Editor's Note:  This article was originally published in Intercom as "Meet The Change Agents: An Interview with Cruce Saunders".